Online Learning in the Second Half
Education
EP 18 - Dr. Brandeis Marshall talks about AI in the classroom, making assignments un-AIable, data science, and the new digital AI divide.
In this episode, John and Jason talk with Dr. Brandeis Marshall about making online assignments Un-AIable, understanding data science, concerns & opportunities of using AI in the classroom, and the new digital AI divide. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - Online Learning Podcast
Dr. Brandeis Marshall Links and Resources
Dr. Brandeis Marshall Bio:
Brandeis Marshall is Founder and CEO of DataedX Group, a data ethics learning and development agency for educators, scholars and practitioners to counteract automated oppression efforts with culturally-responsive instruction and strategies. Trained as a computer scientist and a former college professor, Brandeis teaches, speaks and writes about the racial, gender, socioeconomic and socio-technical impact of data operations on technology and society. She wrote Data Conscience: Algorithmic Siege on our Humanity (Wiley, 2022) as a counter-argument reference for tech's "move fast and break things" philosophy. She pinpoints, guides and recommends paths to moving slower and building more responsible human-centered AI approaches.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
Intro
[00:00:00] Jason: Some banter on the front end.
[00:00:02] Brandeis: Oh, I'm great at banter.
[00:00:03] Jason: Oh, good.
[00:00:04] Brandeis: I've been teaching for 23 years, so you have to have that conversation with the students before classes begin.
[00:00:13] Jason: If you like banter, then you've come to the right place, because this podcast is mostly banter.
[00:00:18] John: I'm John Nash here with Jason Johnston.
[00:00:21] Jason: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:27] John: Yeah, and we are doing this podcast to let you all in on a conversation we've been having for the last two and a half years about online education. Look, online learning's had its chance to be great, and some of it is, but a lot of it still isn't. So how are we going to get to the next stage, Jason?
[00:00:43] Jason: That is a great question. How about we do a podcast and talk about it?
[00:00:47] John: I love that idea. What do you want to talk about today?
[00:00:50] Jason: Um, to talk with you, like usual,
[00:00:53] John: That's overrated, but
[00:00:54] Jason: but I would also love to talk to a very special guest we have with us, Dr. Brandeis Marshall. Welcome.
[00:01:01] Brandeis: Thank you both for having me.
[00:01:03] Jason: And is it okay if we call you Brandeis?
[00:01:05] Brandeis: Yes, feel free to
[00:01:06] Jason: Okay. Thank you. It's so great to have you here. And Brandeis, I'd love for you to introduce yourself, but just in general: she's the founder and CEO of DataedX Group, a data ethics learning and development agency for educators, scholars, and practitioners to counteract automated oppression efforts with culturally responsive instruction and strategies. Not only that, but she has a background in education, and we'd love to talk to you a little bit about that. What would you like to say about yourself here today?
[00:01:40] Brandeis: Listen. I am an educator, a data person. Like I think everyone is at this point, in this age of AI, and what it is and what it isn't. And yeah, I also lead Black Women in Data, which is really focused on increasing the number of Black women in the data industry.
So that's all I want to say about myself. I have books and I write things and I talk to people, but those are the main things about me.
[00:02:07] Jason: You're humble. She writes books. She talks about things. She has excellent posts. She continues to be an educator for us, in ways we've connected with through some of her writings that we'll talk about. But yeah, thanks so much for being with us.
Un-AIable Assignments
[00:02:23] John: I'm just going to get it out of the way. I'm gushing a little bit, but I'm very excited to get to talk to you today, Dr. Marshall. There, I just got it out of the way. But yeah, mostly because the number one piece of reading that I've been telling everybody I know, particularly those who are in education circles and worrying about AI, is your Medium piece called "What's Un-AIable."
[00:02:47] Brandeis: Yes. I keep telling people to just calm down, and I'm now seeing commercials that are like, we're going to be using AI basically as an assistant. I'm thinking, I've been saying that since March, but yes. Thank you for sharing the piece, and hopefully people get something good out of it. It seems as though it has been very well received, and people are like, yeah, that's right. AI can't do context. AI cannot. AI cannot do conflict resolution. It cannot. What happens? AI will literally get to a place where it has a fork in the road.
And then what does it do? Abort. It just aborts. You can't abort as a human. You gotta decide what you're gonna do. Doing nothing is still a decision.
But AI will be like, it'll just end the program. And you'll be like, what happened? It's like, what is that, Cyber Monday? That happens right after the holidays: at startup, everything's just frozen. That's what it does. It just aborts.
[00:04:00] John: I'm in the business of preparing P-12 school principals and superintendents. My students are adult teachers who are going to be leaders of schools.
And so that puts me in circles of people who are asking: what are students going to do in my classrooms now, what are they going to create, and what am I going to do to be able to thwart this? And my response was, perhaps don't try to thwart this, but look at things that students can do that are un-AIable.
And then I share your piece on that. And you cover three big things, which is that AI does not have contextual awareness, it cannot do conflict resolution, and it cannot do critical thinking. And when I mention that, the teachers just lean back a little and their shoulders relax and they go, yeah, you're right.
It can't. And we can still teach that. And we can ask students to demonstrate that to us. Talk a little bit about what drove you to write that piece and why we should always be thinking carefully about what's un-AIable.
[00:05:05] Brandeis: Yeah. So I wrote the piece because I was having similar conversations because I do teach adults as well. And some of them are instructors, right? Some of them are new instructors. Some of them have been in the education industry for a while at all levels. And I just one day in this conversation just sat back and was like, there are things that this AI cannot do.
And I was in a room with people who were just so enamored with all of the generative AI tools that had just come out. 'Cause this was like April, May, and everything had hit the scene. And I was just like, y'all are excited for no reason. And so I sat down and I just thought about, what can't AI do?
And as with many people who write pieces, you get your best ideas when you're not trying to get the idea. So I think I was in the shower or something, and I started to list these things in my head, and these were the three things that bubbled up. And I thought, this needs to be front and center for a lot of instructors, and for a lot of people in general, because everybody is trying to adopt AI without understanding its limitations.
And so I wrote the piece as a way to provide a grounding and a practicality on what you cannot make AI do, nor do we want to. That's the other part of the piece: we don't want AI to do any of this stuff.
Understanding Data Science
[00:06:34] Jason: Tell me, what does it really mean? I'm not a data scientist. I have a sense of what that means because, as an educator, of course, we work with data, but I realize that I'm not a data scientist. What does that really mean to be a data scientist?
[00:06:49] Brandeis: Data scientist as a profession has changed over the last five years or so. Originally, data scientist was just a big umbrella for anyone who worked in data: if you were working in an Excel spreadsheet, you were coding in a particular language, or you were a journalist talking about different types of visualizations and figures and charts and statistics. A data scientist now has elevated to be someone who is more of a data architect, really dealing with the strategy behind how data is modeled and organized, and then how it is interpreted by a team to help decision makers, both human and automated. Most times they tend to be managers, and they also can be more on the technical, statistical, and computer science side, where they're actually doing some coding and managing projects.
So it really does depend on the organization of what a data scientist is. I call myself a data person because I think it's important to think about the full ramifications of data and how unifying and divisive it can be, right? Because data is everywhere. Every company is essentially a data company now, because they're trying to get your data.
They're trying to understand it in ways that let them market better to you. Capitalistic society. So I call myself a data person, but my real niche is in data engineering, which is all the data modeling side. So I'm that database person that everyone says, oh, that's boring stuff. But I'm here for it.
Give me a SQL query and a table, and I'm happy as a plant. But I do dabble in some of the other areas of data, which is analysis, so a little bit of statistical background, as well as visualization. And everyone knows the pros and cons of data visualization, right?
So long answer for a very easy question. So sorry, you are talking to a fellow educator.
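A minimal sketch of the kind of "SQL query and a table" work Brandeis describes here, written in Python with the standard sqlite3 module. The schema and data are invented purely for illustration, not anything from DataedX:

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("""
    CREATE TABLE enrollments (
        student_id INTEGER,
        course     TEXT,
        grade      REAL
    )
""")
conn.executemany(
    "INSERT INTO enrollments VALUES (?, ?, ?)",
    [(1, "Data Ethics", 92.0), (2, "Data Ethics", 85.5), (1, "Databases", 88.0)],
)

# The data-engineering bread and butter: model the table, then query it.
for course, avg_grade, n in conn.execute(
    "SELECT course, AVG(grade), COUNT(*) FROM enrollments GROUP BY course"
):
    print(f"{course}: average {avg_grade:.1f} across {n} enrollments")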
[00:08:56] Jason: No, that's a great answer. And I think it's interesting for those of us who are not in it to understand the shifts as well. This kind of shift in role: I'm assuming that's partly just from the massive amounts of data that we have nowadays, as opposed to even 10 years ago.
[00:09:12] Brandeis: Yeah, it's because data science came on the scene, what, let's say about 2008 or 2009, pretty heavily. It got more into the public eye about 2014, 2015. That's when I started to really see myself in the data science realm. But then companies had to figure out what a data team consisted of inside their organization.
And they started to figure out that a data team is different than a software development team, because they were merging the two. And a lot of companies merged the two. They think, oh, if you're a coder, you also know data. No, you don't. Because there's statistics, there's implications of that data, there's where the data is being sourced.
There's where the data is being output, and how it is being interpreted and communicated to the public or to stakeholders within an organization. So there's a whole ecosystem of data that then started to really tease out about 2015, 2016. So then you had these different roles really populate. You had data engineering on that back end.
Then you had more of the data analysis piece: all of the data analysts really dealing with statistical models, and machine learning engineers and scientists. Then you had data visualizers dealing with Tableau and all other types of data tools. And then you also had people who were just doing the communication component; that was their job.
They were like the marketing and PR people, but just for all of the data work that had been done. So then it all got separated, and then you needed someone to be in charge of all these different roles, and that's where you have a data scientist or a data project manager. So more roles just came out, because everyone started to notice: this is not about software development.
This is literally about how do you deal with data.
[00:11:09] John: So do you think about approaching the question of AI in the classroom differently now than you would have a decade ago?
[00:11:18] Brandeis: Most definitely.
Because AI was extremely dumb. It's still dumb, right? Let's be clear. Let's level set. AI is dumb. It's just a parrot, right? Whatever input is given, it churns out a variation of it to you as a human. But 10 years ago, it was very much in its infancy.
10 years ago, we were just being able to go onto the interwebs, type in a phrase, and have it autocomplete. And that autocomplete wasn't done well. Now we can type a phrase into pretty much any search box, and it has a pretty good likelihood of being what we want, or at least providing us several options, right? At this point. But 10 years ago, I would have said, okay, AI isn't really a factor in my life as much. It's not a day-to-day interaction. There was a little bit less surveillance, a little bit less of grabbing my data and then using it for nefarious purposes, because there are data breaches happening every single day.
Now we have generative AI, which is automated generation of content. 10 years ago, that wasn't a thing. It was in process, but it wasn't public. Now it's public. You can create images, you can create words, you can create essays, you can create tests, you can create paragraphs, right? And that is a different set of AI algorithms than it was 10 years ago.
So it's scary. It's also good, because you can now tell what is AI and what isn't, if you really know how to attune to that stuff. But yeah, it's just very different. Ten years ago, I could easily change my questions around and the students wouldn't be able to find the answer.
Now they can dump the question into an automated system of generative AI and produce a response that may or may not be accurate, and I might not be able to tell.
How LLMs are Trained
[00:13:45] John: I would love to talk about the data that these LLMs are trained on. And that great article recently in Rolling Stone about Timnit Gebru.
[00:13:54] Brandeis: Yeah, there's a lot of lawsuits in process, because a lot of the data has just been web scraped and no one's given consent. There's even a lawsuit against GitHub for their Copilot software, because GitHub Copilot is generating computer code.
They used all the GitHub repository code to create Copilot. And then those who had created code using GitHub said, hey, this is supposed to be open source, and if you use my data to create an automated tool that creates software, you're infringing upon my copyright.
So there's a lawsuit there. And then of course Microsoft slash GitHub is making a subscription model for Copilot. Where are the royalties for those software developers and other coders? Yeah, so that Rolling Stone article is just the tip of the iceberg of all the different issues when it comes to how these LLMs were trained. And of course the main part of the article was all about the discrimination and the bias that's built in. So it's very skewed toward certain demographics, and anyone outside of that demographic isn't represented at all.
Or it's very oppressive in the way that it returns results, right? To Black and brown people in particular, and also to women. So yeah, that's a whole 20-minute conversation for me, because there's a lot there.
[00:15:39] John: Yeah. Were they shocked when they scraped Twitter and Reddit and then ended up with white supremacist, misogynistic responses?
[00:15:47] Brandeis: Right? In my book, Data Conscience, I have a whole chapter that talks about the discrimination piece of it. There's a whole chapter on algorithmic influencers; that's basically the name of the chapter. And I just go down the list of all the content moderation failures on the human side and the automated side.
There's just a lot of breakdown in how content moderation does not moderate the content, right? The people who are trying to say, hey, this content is bad: if they do it in the wrong way, a way the system doesn't understand, then they become the harasser instead of the harassed.
[00:16:29] John: Remarkable.
[00:16:31] Brandeis: Yep. So yeah, do not put a comment under a post that you believe is harassing, because you become the harasser. The way most of these platforms work is that the originator is treated as the victim in the situation. So if you want to say, "hey, this is bad content," you want to repost with your own thoughts, so you become the originator and not the commenter or the replier.
[00:17:08] John: Wow. I had no idea
[00:17:09] Brandeis: Fascinating and terrible all at the same time.
[00:17:12] John: Yeah.
[00:17:14] Jason: It's context again, right? They just take this basic hierarchical kind of approach, and they can completely miss it.
[00:17:21] Brandeis: Yeah, I
[00:17:22] Jason: In terms of understanding what the power dynamic is or what the context is or anything, they completely miss it.
[00:17:28] Brandeis: Exactly. And so my example that I put in this chapter was from one very well-known tweeter, Shana White. Dr. White was commenting on a video of, I believe, a congressperson who was talking about Indigenous people in an oppressive way. And she made a comment and said, this is not right, Indigenous people are Americans and you need to stop doing this, and why don't you, I don't know, walk out into traffic, or something. That comment got her banned from the platform, because she commented. Based upon the rules of that platform at the time, which was Twitter, she was a commenter, so what she said was considered bullying and harassment and inciting someone to kill themselves, and therefore she was banned from Twitter, without looking at the fact that the original post was content that was inaccurate and derogatory. Yeah, it's terrible. But this is the way content moderators work, because it's looking for a pattern. It's an algorithm; it's trying to find a pattern. And a lot of patterns work well for very niche problems. But when it comes to general purpose, they fall apart. To your earlier point, Jason.
There's a certain point where just having everything in a general arena doesn't work anymore, because you completely lose the context.
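A minimal, hypothetical sketch in Python of the kind of context-free, pattern-matching rule Brandeis describes here. The data model, handles, and pattern list are invented for illustration; this is not any real platform's moderation logic:

from __future__ import annotations
from dataclasses import dataclass

ABUSIVE_PATTERNS = ["walk out into traffic"]

@dataclass
class Post:
    author: str
    text: str
    reply_to: Post | None = None  # None means this is an original post

def moderate(post: Post) -> str | None:
    """Return the handle to sanction, or None. The rule matches patterns
    only; it never looks at what the post being replied to actually said."""
    for pattern in ABUSIVE_PATTERNS:
        if pattern in post.text.lower():
            # Whoever wrote the matching text is flagged as the harasser;
            # the author they replied to is implicitly treated as the victim.
            return post.author
    return None

# The original post is derogatory but matches no pattern, so it passes.
original = Post("@official", "Indigenous people are not real Americans.")
# The reply calls it out but matches a pattern, so the replier is flagged.
reply = Post("@commenter",
             "This is not right. Why don't you walk out into traffic?",
             reply_to=original)

print(moderate(original))  # -> None
print(moderate(reply))     # -> @commenter

As the conversation notes, the pattern match does exactly what it was built to do; it is the missing context that produces the unjust outcome.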
[00:19:02] John: And it's a set of data that is ripped from sources that are already very white, very male. To your point earlier, we don't have a corpus of data from Sub-Saharan Africa that could be brought in to think that all through. And so we have to start in Silicon Valley, unfortunately.
[00:19:25] Brandeis: Unfortunately. And then some people believe that there should be people who get data from Sub-Saharan Africa and put it into our digital infrastructure. And I go, why would you want to do that? Would the people of Sub-Saharan Africa want that to happen? Would they want their stuff digitized?
Like, do they understand what the ramifications would be?
[00:19:46] John: Yeah. To your agency point.
[00:19:47] Brandeis: Yeah, to the agency point, exactly. So there might be cases where you might not want your data in there. Every time you go to purchase something, do you always register? Or do you sometimes continue as guest?
[00:20:02] John: Right.
[00:20:02] Brandeis: And sometimes we just want to continue as guest. We don't want to have a profile ID and a password with our name and our email and our address and a credit card on file and
[00:20:13] John: Yeah.
[00:20:14] Jason: You're a data scientist; you have a good understanding of how this all works, how it's growing. Do you envision a world where there's enough data to draw upon that these systems can understand things like context?
[00:20:31] Brandeis: No. I can't even imagine that. And here's why. When we think about AI learning our patterns, we tend to forget as humans that our patterns change at the same time, or even faster than AI, right? As AI evolves, we evolve, so we're going to be evolving faster than the AI, because how can we create any AI tool without us evolving ahead of it? So I imagine a world where we would have these AI tools that could be a complement and help support us. I think taking some of these AI tools, and even some of the canned questions that you give in a classroom, putting the questions inside these AI tools, having the AI tool provide a response, and really having the students critique it:
I think that would be a very interesting application, right? How do you know that this literature synopsis is terrible, or that these sources are not accurate? I think that is a great learning option for instructors. But people tend to think that human evolution is going to stop somehow, and then AI is going to just go beyond it. No, this is not Minority Report.
[00:21:57] Jason: Thank you.
[00:21:59] Brandeis: We live in the real world, where every single time we are evolving faster. And then, given the fact that our world is so global, there's many different cultures that are trying to be represented by this general AI. And it's very hard to capture the nuances of different cultures. You can't capture the culture that might be in Nigeria, versus that in Cameroon, versus that in the Bronx, New York, versus that in Akron, Ohio, where I'm from. You can't capture those pockets of culture within an AI system, because the AI is built for general, right?
Because it's trying to find a pattern, and that's what everyone needs to understand about algorithms. They're looking for a pattern. They can't find a pattern in all of these different cultures in order to be accurate enough to circumvent your own human understanding of how that culture interacts.
Like as soon as someone says pop, I know that they're from the Midwest.
[00:23:01] John: Yeah.
[00:23:02] Brandeis: how they say it, I can have an idea if they're from Ohio like me, or from Michigan, so there's a difference. And even though it's the same word, it has a different tone, there's a different way you say it. I had to break myself out of saying pop when I got to college in New York, because everyone made fun of me. And now I some, I don't slip up, but then when other people say, they're like, Oh, where are you from?
So you from Chicago? You're from Ohio? Like, where are you from? And then there's a different conversation.
[00:23:28] Jason: , I think that is a great point about imagining an AI when sometimes when people imagine an AI that say out humans us. Okay, I'm just gonna put that as a general sense.
They're thinking about a very static kind of At this point in time that they are somehow able to achieve the singularity of everything that we know at this moment But It's does not seem possible because we continue to move forward as human beings and so on And it seems like Tell me this from a data standpoint It seems like data, when generalized, may always get it wrong.
[00:24:10] Brandeis: Yes, because there's no context around the data, right?
And there's also another issue when it comes to data: people think that data exists forever. And right now it does. You can pretty much store data on some cloud somewhere and pay for that storage. But there will come a point where environmental justice is going to become more well known.
And people are going to start to say, why am I storing junk? If I were an astronomer, I'd be talking about the cosmos and the galaxy and how much debris from all of the satellites exists around Earth. We're essentially doing the same thing with data. We just have a whole bunch of junk that we're holding on to, and "we" meaning companies, people, etc.
And we're going to need to start being more deliberate and intentional about what data we collect, what data we store, and actually deleting data.
Environmental Concerns of Data
[00:25:19] Jason: You mean my 2010 YouTube cat video isn't going to live forever?
[00:25:23] Brandeis: Oh God, I hope not. I think we need to think about how we are going to remove data, because the computing power to store and maintain the data in the digital infrastructure is harming our world. And we're not talking about that. The heat that we've experienced this summer: there are data centers that are just combating, how do we cool? It's so hot outside that their AC units are being taxed, right? So there's a lot of residual effects of having all of this data. Yes, data needs to have a limit. Yes, we need to think more broadly about the ramifications of data, and how that infrastructure works and doesn't work for us as humans.
[00:26:22] Jason: I saw a report the other day, and I had not even thought about this direction. I obviously had thought about the amount of cooling that it takes for data centers and so on, but they talked about the amount of water that it actually takes to maintain a data center. As a data person, and I assume throughout your life you will continue to use large amounts of data and be an expert in that direction, do you feel some conflict with that? Moving forward, my assumption for you anyway is that you'll just find yourself managing large sets of data for all time,
[00:27:07] Brandeis: Eons.
[00:27:08] Jason: but, and I don't think of it as a boring thing, per your previous comment; I think it's a very creative thing to think about how we might wield this data in ways that can improve humanity and so on.
However, do you feel some conflict with the more environmental issues around that?
[00:27:29] Brandeis: I do. But what I try to do is hold on to devices for much longer than I need to. Like my computer that we're on right now: it's from 2017. I don't need the newest version, right? I do have external drives instead of storing everything on my laptop. I make sure to shut down my laptop as often as I can.
So it tends to be five to six days a week. And I try to unplug even the charger, right? I do my best to be conscious of those little things. Even my mobile device is not the newest; I can't remember what iPhone I'm on. It's old. And so I do have conflict, but I do my best to make the individual effort to take the machinery to its limit, rather than being the person who needs every new gadget that comes out, or a new machine every 18 months. I used to be like that, and then I stopped. I was like, I really don't.
I really haven't used all the gigs. And of the gigs that I'm using, I could just delete some of this stuff. Mind you, I still have my dissertation in digital form from 2007. But a lot of the other stuff is gone.
But it's hard. Yeah, but there is conflict.
[00:28:57] John: So I hear you saying, in sum, that data uses a massive amount of electricity, and this is a problem. And as to Artificial General Intelligence, AGI, the next terrible thing that's supposedly coming, when the robots rise: it's probably not on the horizon in our lifetimes, or at all.
[00:29:15] Brandeis: I would say so. I think people are excited about this general purpose AI, and I roll my eyes. I'm like, really? You really want general purpose AI? Hasn't AI been really trashy thus far? You want that on steroids? Really? I don't. I'm not a fan.
And realize that a lot of our core systems don't subscribe to all of the AI hype. I'm talking about banking and healthcare. Every time you want to get access to their system, you have to do a separate login. You're not doing a login using your Gmail credentials or some other single sign-on; you have to create a completely separate sign-on, because it's a different security level.
[00:30:12] John: Yeah,
[00:30:14] Brandeis: If the core systems aren't subscribing to it, then that should give you cause for pause.
[00:30:23] John: In fact, the core systems are probably still pretty dumb, actually, I think. When I log into my health care system or my bank, and again, I'm not a data person, aren't I probably just querying a SQL database or something like that? And it's just spitting stuff back and forth to me.
[00:30:42] Brandeis: Most likely. On their back end, they probably do have a pretty sophisticated data repo, probably hosted on some type of cloud, but it's under such severe security that we don't see it at the commercial level, because it has to maintain privacy. So a lot of the data is compartmentalized.
It's not just put in a general pool that can therefore be used to train
[00:31:09] John: Yeah.
[00:31:10] Brandeis: some type of generative AI system, which is a good thing.
[00:31:13] John: I wonder if we could take that tangent in a couple of ways, and that is the underlying data on which these new generative AI systems, the LLMs, are trained. You have a newsletter called the Rebel Tech Newsletter, and in a recent issue called "How Not to Use AI," you wrote about Washington Post contributor Gillian Brockell's interview, and I'm using air quotes with "interview," with Harriet Tubman, which appeared in the Washington Post.
She used Khan Academy's Khanmigo to have this interview, and you noted three key issues: that it was a little bit of hubris on her part to assume the right for AI to generate an interview with Harriet Tubman without consent; that the learning goals for the AI interview exercise were pretty vague and not measurable; and that there was a lack of authentic Tubman source material to train the AI system, which led to some pretty superficial outputs. I smiled a little along with the notions of hubris, because Brockell said she was so relieved to find that the Tubman simulation used modern conversational language.
In that piece, you have some suggestions for a more methodical approach to testing AI interview generation, like getting consent and customizing the model. It made me think, and I wondered if you could say something about AI-generated content and how teachers and instructors in general should be thinking about this carefully as they present it to learners.
[00:32:53] Brandeis: Yeah, so I think when it comes to online educators, they need to think about where this AI tool goes wrong, and how to provide some clarity, like guardrails, for their students to understand where it goes wrong and why. What I mean by that, and I wrote about this in a Medium piece as well, is: if you're going to use these generative AI tools in the classroom, I suggest, if you're allowed, if it's not banned by your institution or the organization, presenting the solution from the AI tool and then having that be commentary. What do the students, what do your learners think? And this is for adult learners or even K-12. Is this insightful? Is it basic? Is it bringing up other things for you? What is it?
I think that's a good conversation piece. And then, especially if there are sources, there's another level of vetting of research. This is getting to that critical thinking arena. How do you know if this source is really good or not? What makes a source good in your discipline?
Like in my discipline, if you use Wikipedia, people are like, "Really? You're using Wikipedia?" Or if you're using a seminal work, but the seminal work doesn't have the quotation that is being cited by the generative AI system, how are you going to catch that? So I think that is one way that we, as educators, can really leverage these AI tools in ways that spark conversation.
So then students have a better understanding and grounding of what the limitations are, and then how to handle the output. But here is what I find, and I taught college for a number of years, with, let's say, the college student: they don't want to spend that much time thinking about it.
But as an educational tool: when they get into their first job, they might be asked to leverage these generative AI tools, and therefore they're going to need to have that skill set. So that's how you can try to position it. And with adult learners, they are using it in their day-to-day interactions at their full-time job.
So they are going to apply it right away, because they're going to go, yes, I was wondering, how am I supposed to use this? Is this supposed to be helping me brainstorm? Is this helping me figure out what my next steps are? Do I skip some of this AI-generated stuff, or do I embed some of it?
And that's, again, another conversation. I think there's opportunity, in this phase of generative AI that is really in its infancy, when people don't know how to deal with it, to spark the conversation and then start identifying your own criteria for what makes sense.
When to use it, and what the criteria say about when not to use it, right? You can't use it for a project, and here is the limitation on the project side, so you can't use it on the project. If you try to use it in a writing assignment, here's the issue you're going to fall into: you're going to spend more time vetting the responses than you would have spent writing it. And if you're caught plagiarizing from the AI system, then there are going to be repercussions.
[00:36:40] John: Yeah.
[00:36:41] Brandeis: That's a whole nother conversation about how do you know whether or not it's AI generated content and things like that, but just want to put that out there.
[00:36:50] John: I think we're kindred spirits here. Your first comment was about whether the instructor, the teacher, can actually have students almost "cheat on purpose," air quotes again with "cheat on purpose," and have them then vet and analyze critically what the system gave them, and then talk about that.
Then that sort of takes the pressure off of wondering what the system is going to do with your assignments.
[00:37:17] Brandeis: And if I were teaching like in a college environment, I might actually provide assignments that would go, "okay, use the AI tool to give the result, give me the result, and then tell me how you vetted it for its accuracy and you tell me what grade you would give it."
[00:37:34] John: Nice.
[00:37:34] Brandeis: And so then they will see it's about 30 percent accurate or 40 percent accurate.
Okay. If it's at that 30 to 40 percent, that would be your grade. Is that what you want? And of course students are going to say "no." But I think that's how I would take an assignment practically and go, yes, cheat on purpose, quote unquote, as you said, and then tell me how you vetted it.
Take me through those logical steps. So that makes the students have to go through contextual awareness, right? It makes them have to understand conflict, because if there's something in the response that they cannot validate or verify, then what is that? An AI hallucination? Yes or no. And then lastly, it makes them think critically. And I think I would have done my job as an educator if I make them un-AIable.
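A minimal sketch of the grading idea Brandeis floats here, where the student's grade is the accuracy rate they establish by vetting the AI's output. The function and parameter names are invented for illustration, in the same Python used above:

def vetting_grade(claims_verified: int, claims_total: int) -> float:
    """Grade equals the share of AI-generated claims the student could
    verify against real sources, expressed as a percentage."""
    if claims_total <= 0:
        raise ValueError("the submission must vet at least one claim")
    return 100.0 * claims_verified / claims_total

# A student who can verify only 4 of the 10 claims in the AI's answer
# earns a 40: the "is that what you want?" moment in the assignment.
print(vetting_grade(4, 10))  # -> 40.0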
[00:38:34] Jason: That's good. I love that approach. And I think what I also love about it is that it puts you on a better side of the equation, so to speak, in terms of where AI is and where your students are, rather than being on the side of banning it and then trying to detect it among your students.
You're pulling AI over onto your side, showing ways to use it, ways to flip it in a sense to make it push students higher. And I would hope creating more of a transparent relationship with your students and with AI as you're working through these assignments.
[00:39:15] Brandeis: Yes, I would think so as well. And more importantly, and I think all of that, is that the classroom is a culture. Every classroom that we teach, every group of students that we teach, it's a certain culture that we are fostering for whatever time period we have them. And we have to build that trust.
And so by banning AI in the classroom, especially these generative AI tools, what we're doing is almost criminalizing students because if we think it's, Genitive AI or it's automated in some way, then there it's plagiarism. Now there is some plagiarism that will always be, but as an instructor, we are the role model to ensure that the environment is healthy. And so I think it's important that we keep that in mind as instructors, that we are trying to do our best to cultivate the trust, so that they understand why we're doing what we're doing. And not just saying Oh, it's AI generated. You're cheated. You're now have to, go off to whatever the Dean's office or whatever that next step is that we insist that they are trustworthy people from the beginning. And I think the second thing that is important with bringing AI more to our side is that when AI changes. So we can capture that as well in the classroom because, for instance, BARD is being updated more regularly than, let's say, CHAT GPT. It's the way that these two LLMs are structured. And so if one, Set of students would use chat GPT.
Another students would use BARD. Would the responses be the same or different? And depending on when the students might have done their assignment, would the responses be different? And then that's another conversation piece.
sTudents understand that AI is evolving. And so you provide an answer through, and just rely on AI, here's some of the fallacies, right?
Because maybe there's an update that's made that actually is more accurate. So one student would say, this is about 40 percent right out of this BARD, response. And then another student would say my BARD response, I thought it was about 50 percent right. But their responses would be different because they did them at different times.
[00:41:50] Jason: That's a great point. And while you were saying that, I was just thinking about what could be another digital divide. Like, John and I have at least enough means that we can buy, uh, GPT-4, right?
Which is significantly superior to 3.5 and more consistent than the free version of Bard and the free version that you can find on Bing, right? I like that from a transparency side of things, to be able to talk and compare with students as well, so that you don't have people coming to the classroom better equipped than other people, simply by the fact that they have means to spend 20 bucks a month on whatever.
[00:42:37] Brandeis: Exactly. And especially given the way these AI tools are being updated so regularly, and how more and more is behind a paywall, there's a possibility. And I think it was happening from the beginning, and it was really COVID that provided a break point where people understood the digital divide a whole lot better. But for GPT-4, and whatever the next versions of ChatGPT are going to be, there will be a divide. If you can afford access to the most up-to-date version of the generative AI tool, you're going to be a step ahead. But that is something we've been battling in different variations for decades, when it came to just who had the internet and who didn't.
There are still rural parts of the United States that don't have access to the internet, which just seems preposterous to me in this year of 2023. Like, why is this such an issue? So it's just going to be exacerbated, to your point. And I think it's happening. I just think we haven't had the conversation.
There haven't been articles written about it, but it's happening.
[00:43:49] John: I'm noticing already that the web apps that have a wrap around the GPT-4 engines are very expensive, because OpenAI charges an arm and a leg for its use to those vendors. And I think you're right. I think the divide is going to get larger as more tools start to have niche products with the chat engines.
And then, to have that value add, they're going to have to charge a lot more to the user.
[00:44:18] Brandeis: Exactly. And it's interesting: it's called OpenAI, but it's not open. It started out open, but once they figured out their business model, all of a sudden it became closed, and all of a sudden there became tiers in what you could pay for and how much access you could get, again following very much the trajectory of how internet access works, right?
With internet service providers, it used to be relatively open, and then it became, oh, are you residential or are you business?
[00:44:51] Jason: And
[00:44:51] Brandeis: Do you want residential plus? Do you want business plus? Or do you want enterprise? And then there was a whole net neutrality conversation that happened about four or five years ago, where we were teetering on not having net neutrality.
Yeah, I think OpenAI is not quite doing what it said its mission was.
[00:45:15] Jason: Yeah. And we were just talking with Dr. Kristen DiCerbo, who is the chief learning officer at Khan Academy; that will probably be the episode before your episode. So this will be a really nice little back and forth here. Because one of the things that came up, and we didn't even really get into this, is this idea that it's driven by GPT-4. She made a little side note about how it costs money.
[00:45:44] Brandeis: Mmmhmmm
[00:45:44] Jason: Up to this point, Khan Academy has been pretty open-handed with its tool, and their mission is to help schools and to provide learning for everybody. But there's a little side note: it does cost money to access GPT-4, and that's what it's trained on, and that's who they're partnering with. So what happens moving forward for those schools, or even whole school districts, that maybe can't afford it, even if it is 10 dollars a year per student?
It could amount...
[00:46:18] Brandeis: It becomes a lot when there's a lot of students.
[00:46:20] Jason: Exactly. And students that have come to depend on it. And then do we have a new digital divide if, say, these chat tools actually do help accelerate learning and give people a sense of competency with this one-on-one tutoring, and help people achieve better results or whatever?
Now we're starting to see this access thing happen again and again, right?
[00:46:45] Brandeis: Yeah, exactly. And I've been saying this since the beginning of the pandemic: there is an attack on education, and the attack is that the business model of education isn't tenable. It never was. It always was based on free access to the content and almost privatizing the tutoring. And now that we're full-on into the AI realm, that divide, as you are both mentioning, is becoming even bigger, because it's less about the content and more about the tutoring, right? You can go on YouTube and get a lot of the content, but the context and the understanding of the content happens in tutoring. That's where the digital leader, the online educator, the in-person educator, the traditional tutor you might have, actually comes into play. And so I think education professionals as a whole, as an industry, should really think about how we fund this to be more equitable.
Do we think about banding together these different units? Whether it's like Canada, which has it so that everyone gets access to Jupyter notebooks. The notebooks are hosted on the cloud, with a lot of computer programming resources to help all the students.
As soon as you are part of a Canadian university, you have access to Jupyter notebooks. That is something done at the government level. But is there something we could do similarly in the United States to make sure that the cost of access to these platforms, any type of AI-assisted platform, becomes part of the standard and not an add-on? That would take us banding together, having conversations with institutions and K-12, really making it equitable, so that there is access for all people to have, at the very least, the opportunity to use the same tools, no matter which district you're in or what part of the district you're in.
[00:49:31] Jason: Leave it to Canada to give universal access to important things.
[00:49:34] Brandeis: I think the reason Canada has been a forerunner in a lot of the development of some of the data tools is the fact that everyone in Canada with a dot-edu now has access to Jupyter notebooks. They can learn how to code, they can create their own projects, they can write their comments and notes, they can share their Jupyter notebooks with each other.
It's not this thing like in the United States, where you have to have the right Google account with the right amount of storage to be able to share it, and the other person has to have the right storage to see it, and all these other types of roadblocks to being able to work collaboratively.
[00:50:14] Jason: I think this has been a great conversation so far. Very challenging. I'm going to admit, and I'm going to speak for John maybe a little bit here as well, that we're both a little bit AI fanboys. We love digging in, we think it's pretty cool, and we go into it wide-eyed, always texting each other saying, hey, check this out, kind of thing, right? So we do think it's cool. But I think your voice today has helped challenge us to rethink it, because you're critical in a good way; we probably don't come at it critically enough. And I'm just wondering, do you have other big concerns? We've talked about the quality of the data, we've talked about transparency, we've talked about proper use of it in the classroom. Do you have other big concerns when it comes to AI in education?
[00:51:04] Brandeis: I think the biggest conversation we haven't touched on is: how do you vet what a student actually knows? Even if you do all the things we've talked about, and you talk to them and are critical about AI and share, once you get their paper, sorry, their digital submission, what do you do as an instructor? How do you then examine it and assess it? I think that's a whole separate conversation, but it's front of mind for me because it's hard, especially if these students are new to you. If you've had them a couple of times and you know how they respond in a class, you have an idea, and the class isn't too big, right? But if you're in a large class, a hundred or more students, and you're not even the one grading it, you have a team of undergraduate and graduate students grading it. How do you then help the graders assess whether or not the learner actually has learned? That's the biggest concern.
What does that look like in the digital learning space? That's what I'm gnawing on right now, because it's a hard problem.
[00:52:24] John: It is a hard problem. I've been gnawing on that too. I was in a room recently where the question was, how do we scale un-AIable learning outcomes? How do we scale public demonstrations of learning, for instance, in a large lecture? And it's a tough nut to crack.
[00:52:43] Brandeis: It's a very tough nut. I did it in my classrooms by doing projects. And that does mean that you're going to have to divide up the students into smaller teams for them to work on a project, and then provide those milestones, those tasks and milestones, to see if they reach them.
But yeah, in a very large classroom, I don't foresee it being scalable. I think you're still going to need that human in the loop to really handle that context and that conflict resolution piece. But as I said, I'm just gnawing on that, because it's difficult: as an instructor, you trust your learners.
But there will be people who are going to push the envelope, and you don't want them to get a pass. So what do we do? Again, I think that's a whole separate conversation. You'll have to have me back, so we can just...
[00:53:36] Jason: Sounds like a good problem for another podcast. But yeah, I'm gnawing on that in a similar way, alongside the large classrooms, thinking about fully asynchronous courses, because a lot of my work is in helping programs and teachers develop fully asynchronous online learning, which I think presents another layer of challenge.
Partly because, not that people can't learn that way, but when we're talking about authentic assessments and really trying to figure out if the students have learned it, it does provide another layer of possibility for the use of AI in this asynchronous space. And a lot of the solutions that I've seen people throw out there are things like one-on-one interviews, or synchronous group projects, or recording videos, which you can do some of, or using blue books while you watch them, so
a lot of those things aren't very
[00:54:38] Brandeis: practical.
[00:54:40] Jason: They aren't very practical, exactly. And I always feel it because I'm a horrible handwriter; I always did terribly in those high-pressure blue book situations, because if I got nervous and was trying to rush it, it was like reading doctor's...
[00:54:56] Brandeis: Yeah, it's chicken scratch at that point.
[00:54:58] Jason: it's horrible.
And yeah, so I don't know what the answer is, but I think you pose a really great question, more of a question here at the end, and give us something more to...
[00:55:07] Brandeis: More to think about, yeah. A lot of the solutions I thought of are very similar, just trying to figure it out. Even the grading scale is another way I thought of: you grade the assignments that are very much like tutorials and demonstrations much lower than exams and projects. But then again, you still have to deal with how you grade those exams and projects, and create them so that they help suss out anyone who is trying to be a bad actor, right?
Yeah, as I said, I've just got questions. I don't claim to have a lot of solutions, but I do have questions.
[00:55:48] Jason: I did want to ask, just as we're closing off here: some of our big focus obviously is on online learning, on humanizing online learning. The answer doesn't have to be AI or anything to do with that; we can step away a little bit from that. But what thoughts do you have in that direction, for us and for our listeners, as we're thinking about reimagining what online learning might look like in the second half as we move forward? What are some of your thoughts?
[00:56:22] Brandeis: I think when it comes to learning, we have to get into this notion that we're always going to be learning. Having a practice of newsletters that we read on a regular basis; Medium is a good place for that, and there are also weekly or monthly newsletters that are good. I think also being intentional about having conversations with real humans. Let's go back to the coffee shop, where you sit down and you are literally just talking about whatever is the current news within your discipline. Those forms of learning, I think we need to enhance, because that's where we get the richness that helps us deal with all the other stuff in the education industry. And I don't know that we are taking our time to do those things, right? We're so into the AI-ness of it that we're not going back to some of the regular, traditional, old-school ways.
[00:57:38] Jason: , those collegial conversations may be more important than ever. The more siloed we get and the more automated we get. Yeah. Yeah.
[00:57:47] Brandeis: yeah. And I think that's going to be something that we'll, I think =
we'll get to as educators that will come back and have these like conversation pieces. But I want us to accelerate that. Can we get back to that? Because there's times which when, especially when I was new in the industry.
I was like talking to people and I'd be like, you did what? Like I had this great line in my syllabus, which was after one week, the greatest final. So then students wouldn't come back and try to challenge questions that, you know how students are at the end of the semester. And that was because I was literally on campus.
I was at Purdue at the time. I walked down the hall two doors down and I'm talking to my colleague and he tells me, I put this in my syllabus because someone else told me and I was like, aha. And since then, when I've shared my syllabus with other people, they're like, I love that line. And I'm like, I got that from someone, 10 years ago, 15 years ago, and so I think that is what we need more of.
[00:58:51] Jason: Yeah. That's one of the reasons why we do this podcast, frankly: to continue our conversation and to have these amazing conversations with people we might not get a chance to otherwise, right? This is just a good excuse to get together and to talk about these important things.
So thank you. For listeners out there as well: we try to continue the conversation on LinkedIn, so you'll find this podcast posted there, and if you have any comments, suggestions, corrections, or challenges, please put them in there and let us know what you think as well.
[00:59:26] John: Yeah,
[00:59:26] Brandeis: Yes, please do.
[00:59:27] John: I think it's great. We're talking about online learning, and we were talking about your article on Medium and how I thought it was so useful for so many other educators. And then, lo and behold, you'll be a keynote speaker at the Online Learning Consortium meeting in Washington, D.C., where we'll really get to bring some of these points home, because I don't think this has really hit in the online learning space as much as it has in the classroom space. So it's going to be a great conversation. I'm really excited to see you out there too.
[01:00:03] Brandeis: Yes, I'm excited to meet both of you in person. This is my first time as a speaker at OLC, and so hopefully I bring these nuggets out, and some other things as I noodle around some other ideas about how you teach with generative AI, what that means for navigating this space as an instructor, and then how you assess the learner's knowledge.
[01:00:32] Jason: Great. We'll be doing a session, actually two sessions, the day after you speak. I think you're on the Wednesday and we're on the Thursday. So it'll be great, because we'll do a lot of talkback in both of those sessions, a lot of conversation. And so I'm sure your session will come up in the points we discuss there.
So I'm excited about the dynamic, excited about being there and learning and yeah,
[01:00:57] Brandeis: Awesome. Yes. Very excited.
[01:01:00] Jason: Yeah, I think that's about it. For those listening: at OnlineLearningPodcast.com you can find all of our episodes, including this one, as well as show notes, where we'll put in links for Brandeis, to our LinkedIn, and to the articles we've referenced. And then you can join us, of course, on LinkedIn as well.
Thank you so much, Brandeis, for joining us. This has been a great conversation.
We've learned a lot from you.
[01:01:28] Brandeis: Thank you, Jason. Thank you, John. This has been fun and hopefully we can do it again because there's much more to talk about.
[01:01:35] John: Oh, a lot. Thank you so much. Great.
[01:01:38] Brandeis: All right.
[01:01:40] Jason: Thank you.