Each episode we discuss the latest news regarding how to reduce the emissions of software and how the industry is dealing with its own environmental impact. Brought to you by The Green Software Foundation.

Episode List

A Greener Internet that Sleeps More

Jul 25th, 2024 7:00 AM

Host Chris Adams and guest Romain Jacob delve into the often-overlooked energy demands of networking infrastructure to discover A Greener Internet that Sleeps More. While AI and data centers usually dominate the conversation, networking still consumes significant power, comparable to the energy usage of entire countries. They discuss innovative practices to make the internet greener, such as putting networks to sleep during low usage periods and extending the life of hardware. Romain talks about his recent Hypnos paper, which won Best Paper at HotCarbon 2024. He shares his team's award-winning research on how the energy demand of the networking kit powering the internet can be reduced simply by powering down links when not in use.

Learn more about our people:
- Chris Adams: LinkedIn | GitHub | Website
- Romain Jacob: LinkedIn | Website

Find out more about the GSF:
- The Green Software Foundation Website
- Sign up to the Green Software Foundation Newsletter

Resources:
- SCION Architecture [11:30]
- Environmental Impacts of Internet Technology (eimpact) [17:15]
- Why we should be intentional about the mental models we use for thinking when we think about digital sustainability | Chris Adams [18:30]
- A Sleep Study for ISP Networks: Evaluating Link Sleeping on Real World Data | Romain Jacob, Lukas Röllin and Laurent Vanbever [18:59]
- Network energy use not directly proportional to data volume: The power model approach for more reliable network energy consumption calculations | David Mytton [38:55]
- Co2.js - The Issue | The Green Web Foundation [42:57]
- Rethinking Allocation in High-Baseload Systems: A Demand-Proportional Network Electricity Intensity Metric — University of Bristol | Daniel Schien [43:53]
- Introducing Web Sustainability Guidelines | 2023 | Blog | W3C [49:31]
- Greening of Streaming [52:16]
- Network Power Zoo | ETH Zurich [54:46]

Other source material:
- A Primer on Optimistic UI | Imhoff
- Response Time Limits: Article by Jakob Nielsen | NN Group
- Optimistic UI Patterns for Improved Perceived Performance | Simon Hearne
- Reducing the Energy Footprint of Cellular Networks with Delay-Tolerant Users | IEEE Journals & Magazine

If you enjoyed this episode then please either:
- Follow, rate, and review on Apple Podcasts
- Follow and rate on Spotify
- Watch our videos on The Green Software Foundation YouTube Channel!
- Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Romain Jacob: We used to consider that energy is cheap. Energy is there. We don't need to worry too much about it. So it's just simpler to plug the thing in, assume energy is there. You can draw power as much as you want, whenever you want, for as much as you want. And it's time to get away from that.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.

Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. Back in episode 10 of this podcast in September 2022, we did a deep dive into the subject of green networking, because while a lot of the time people talk about the energy demands of AI and data centers, in 2024, in absolute terms, the amount of power consumed by networking was still larger. Back then in 2022, the best figures, when we looked at this, came from the IEA, which put the energy usage of data networks at around 250 terawatt hours per year. So that's about the same as all of Spain's energy usage in 2023, so that's not nothing.
Now, it's a few years later, 2024, and the best figures from the same agency, the IEA, now give us a range of between 260 and 360 terawatt hours, which could be a jump of up to 50 percent in three years. Now, because much of this power comes from fossil fuels, this is a real problem, climate-wise. So what can we do about this? With me to explore this once again is my friend Romain Jacob, who helped guide us through the subject in 2022, along with Dr. Eve Schooler, then at Intel.

His team's recent research won the Best Paper Award at HotCarbon, the conference that has fast become a fixture on the green IT and digital sustainability circuit. So he seemed a good person to ask about this. Romain, thank you so much for joining me for this podcast. Can I give you the floor to introduce yourself before we revisit the world of green networks?

Romain Jacob: Thanks, Chris. I'm very happy to be back on the podcast to talk a little bit more about this. Hello, I'm Romain, I'm a postdoctoral researcher at ETH Zurich in Switzerland. I've been working in sustainability for two to three years now, more or less full time, as much as full time research happens in academia. And yeah, I had the pleasure to present some of our technical work at HotCarbon, and I'm sure we're going to deep dive into it a bit more in the podcast.

Chris Adams: Okay. Thank you, Romain. And for people who are new to this podcast, my name is Chris Adams. I am the executive director of the Green Web Foundation. We're a Dutch nonprofit focused on reaching a fossil free internet by 2030. I also work as one of the policy chairs inside the larger Green Software Foundation, which is why I'm on this podcast.

Alright, if you're also new to this podcast, every single project and paper that we mention today, we'll be posting a link to in the show notes.
So for people who are on a quest to learn more about reducing the environmental impact of software engineering, you can use these for your own practices and your own research. Okay, Romain, are you sitting comfortably?

Romain Jacob: I am.

Chris Adams: Okay, great. Then I'll begin. Okay, Romain, when we last spoke, we covered a range of approaches that people are using right now to rein in the environmental impact of networking. But before we spend too much time, I wanted to see if you could help set the scene, to help folks develop a mental model for thinking about, say, networking versus data centers, because I touched on this a little bit, but it might not be obvious to most people. So maybe if you could just provide the high level, then we can touch on some of the differences between the two and why you might care, or how you might think about these differently. So yeah, let's go from there.

Romain Jacob: Yes, sounds good. It's true, you mentioned in your introduction that we sometimes get numbers that oppose or compare the carbon footprint or energy usage of data centers and those of networks. But now, what does network really mean? It's not really clear what we mean by that, because there is networking in data centers and, you know, the rest of the networks are not completely detached from it. But at a high level, you have a set of networks that are meant to provide internet access to individuals and to other networks. What this means is that you have companies that are specialized in just making your laptop, your phone, or other appliances you have able to talk through the internet.
And typically, when we refer to networks without further details, this is the type of network we're talking about. Data centers, on the other hand, are something that is, on the timescale of IT, fairly recent, where we have this idea that if we centralize in one physical location a lot of powerful machines that have a lot of compute and a lot of storage available, then we can use that as a remote computer, and just offload tasks to those data centers and only get the results back. In today's ecosystem, data centers are a very core element of the internet. The internet today would not really work without data centers, or at least a lot of the applications we use over the internet only work thanks to data centers.

So from a networking perspective, in a data center you also need to exchange information, bits and packets, between the different machines that live inside the data center, but the way the network looks is very different from the cellular network that is providing mobile connectivity to your smartphone.

Chris Adams: Ah, I see.

Romain Jacob: So those are very different types of networks, and they differ in the way they are designed, the way they are operated, and how they are used. Typically, in a data center, you tend to have quite high usage of the data center network, because you have a lot of exchange and interaction between the different machines that live inside the data center. Whereas the networks that provide internet access, so the networks that are managed by entities we call ISPs, for Internet Service Providers,

Chris Adams: Mm hmm.

Romain Jacob: tend to be much less utilized. There is a lot more capacity in those networks than what is really demanded by the end users.

Chris Adams: Okay.

Romain Jacob: That's a fundamental difference between the two networks, and something that we try to leverage in our research.

Chris Adams: Okay, so if I just check that I understand this: you might have networks, say, the ISPs, which individually don't draw that much power themselves, but because there are so many of them and because they're so diffuse, in aggregate this can work out to be a very large figure. And that also speaks a little bit to, I guess, how you might power some of these. So, when we think about a large hyperscale data center, that's something in the region of, if you're looking at a large one, maybe the high tens to low hundreds of megawatts; that's the power of thousands of homes. That's a lot of power in one place, whereas with a network, you don't have quite so much in one place, but because it's distributed, you might have to take different approaches to managing it. So, for example, you might have to take different strategies to either decarbonize that or deal with some of that load that you actually have. Okay, and I guess one of the questions I have to ask you then is, when you have this split, maybe it's just worth talking a little bit about the different kinds of networks that you have here. So for example, as I understand it, there's maybe an ISP I connect to, my ISP, but then they need to connect to some other cable.
And if I'm connecting to a server across the Atlantic, then I'm going through some backbone or something like that. Maybe you could talk a little bit about the different layers there, and whether there are any differences in how those need to be powered, for example, or how they're used?

Romain Jacob: Yeah, totally. So the name internet stands for an interconnection of networks. The internet, by name and by design, is a network of networks, right? So when you connect to the internet, what it means is that you connect to another machine somewhere on the planet that also has access to the same global network. But the connection between those two endpoints has to go through, most of the time, several different networks. And so, typically, if we look at the internet infrastructure, there are several ways of representing this, but one division that we usually use is: you have the core ISPs, the ones that sit more in the middle and provide transit for many, many different interconnections. Then you have networks that belong more to the metro area. This is where it's getting closer to the user, but it's not yet the network that provides direct connectivity to, let's say, your phone or your laptop. And then you have the edge network. And the edge network is really there to provide what is called the last mile connectivity to the end user. And those categories exist, and were proposed, because each tier of the network has different characteristics. The core tends to look a bit more like the data center. It's a denser mesh, so there are more interconnections between the different points in that network.
And the utilization tends to be higher and kind of constant, because it's a global network. Whereas the closer you go to the end user, the more fluctuation you're going to see, because, for example, users are awake during certain hours of the day, and this is when they tend to use their machines. You will see peaks of usage during, you know, TV show primetimes in the evening, but much less at 5 a.m., when most people are asleep and not using their phones or their laptops. And so those networks look different in terms of what they are used for and how they are built and designed, because we try to adapt the design to the particular use case.

Chris Adams: Okay, thank you for that. So if you squint, it's a little bit like how you might have motorways, and then A roads and B roads, and maybe smaller roads, for example.

Romain Jacob: Very much like that. It's very much like that.

Chris Adams: All right. Okay. So that's actually quite helpful. And when I think about other kinds of systems, I think a little bit about, say, electricity networks, which have, you know, big fat transmission lines which send lots of power, but then you have the smaller distribution networks, so it's somewhat comparable. Okay.

Romain Jacob: It's very similar in principle.

Chris Adams: Okay, so this is helpful for developing a mental model about some of this. Alright, so last episode when we spoke about the different techniques, we spoke about things like carbon-aware networking, and different protocol designs like, I think, SCION, which was one of the projects proposed. And you kind of coined this phrase that an internet of the future needs to grow old and sleep more. I found this quite entertaining and it stuck with me. But for people who are new to this, maybe you could just unpack what you meant by that, because not everyone has read the paper or seen the talk.
And I think it's quite helpful for thinking about this subject in general, actually.

Romain Jacob: Yeah, sure. So two years ago, in the first edition of the HotCarbon workshop, we outlined this vision of what could be relevant to work on in the sustainable networking area. And the two ideas that emerged were essentially captured by this growing old and sleeping more aspect. So what do we mean by that? Growing old is essentially the idea that we tend not to use the hardware we buy for long enough. If we take an end user perspective, we tend to change phones every couple of years. Numbers are changing about this, but we can debate whether this is a good thing or a bad thing. In the networking area, so for the hardware that operators buy to make up the network, devices that we call routers and switches, it tends to be a bit of the same thing. The standard used to be to change devices every three years. So in three years, the entire infrastructure would be renewed; you would buy new hardware to get higher speed or better energy efficiency and so on. And there are various reasons for doing that; we can detail them if you're interested afterwards. But it has a very significant cost: a financial cost, but also a carbon cost. Because one thing people need to understand is that every time you manufacture a product, not just for networking but any product, there is a carbon footprint associated with it. This is what we typically refer to as the embodied carbon footprint. And this embodied carbon is a one-time payment, but if you buy more often, well, you pay this price more often. Now, there's a bit of a tricky thing, which is that you can argue that if I buy a new device, a new phone or a new router that is 10 times more energy efficient, then over time I will save enough to compensate for the embodied cost.
The problem is that it's very hard to estimate how much you would save and how much the embodied footprint is. There is somewhere a break-even point where doing this upgrade, buying this new hardware, starts paying off from a carbon perspective, but it's not necessarily clear ahead of time when that happens. Generally speaking, though, what was pretty clear to us is that we could, and we probably should, be using the hardware longer. And this is what we meant by the grow old idea.

Chris Adams: Ah, okay. Thanks for that.

Romain Jacob: Now, the sleep more part is kind of simpler. This is an idea that is very common in other fields, like what we know as embedded systems or the Internet of Things. Think about devices that are more or less small but run on battery, to make things simple. And because they need to run on battery, for decades engineers have been trying to optimize the energy efficiency of those devices. And the most efficient way of doing this is essentially turning off everything you don't need when you don't need it. So if you think about your phone, the screen of your phone is off, I don't know, maybe 90 percent of the time. And this is to save the power drawn by your screen, which is by far the most power hungry element in a smartphone. We do this in order to save on average power, and so on energy at the end of the day. And we are arguing that in the networking world, this is not done too much, and it should probably be done more. Now, I need to be quite precise here: when I talk about the networking world, I'm talking about the wired networking domain. In the mobile domain, so in the cellular communication that connects to your phone, and also in Wi-Fi and so on, the idea of sleeping is already used quite a lot. But in the wired domain, it did not transfer too much. And the reason it did not transfer is because we used to consider that energy is cheap. Energy is there.
We don't need to worry too much about it. So it's just simpler to plug the thing in, assume energy is there. You can draw power as much as you want, whenever you want, for as much as you want. And it's time to get away from that.

Chris Adams: Okay, alright, so if I just play that back to you to make sure I understand it correctly. The growing old part is essentially a reference to the embodied energy that goes into making various kinds of hardware. So, when we're looking at a laptop, around 80 percent of the carbon footprint comes from the manufacturing, compared to the running of it. And if I keep that laptop for a short period of time, that's a greater proportion of the lifetime emissions, for example. And we see the same thing in data centers as well. So, for example, some companies and hyperscalers, Facebook and company, might have had this three year period that you spoke about before, but in the 2010s we saw figures, anecdotally but never published, sometimes go down to as little as 18 months for some servers, because they wanted to get the maximum amount of compute for the power they were using. So they had incentives to churn hardware like that. So that's what that part is a reference to. And I think on the eImpact mailing list, where I've seen a lot of the discussion, there's this really cool 3D chart showing where the break-even points you mentioned are; I will share a link to it in the show notes.
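The break-even reasoning Chris and Romain describe can be sketched numerically. The following is a hypothetical illustration with made-up numbers (embodied footprint, power draws, grid intensity), not figures from the episode or from any real router:

```python
# Hypothetical illustration of the embodied-carbon break-even point:
# upgrading to more efficient hardware only pays off once the operational
# savings exceed the new device's embodied footprint. All numbers are
# invented assumptions for illustration, not measurements.

def break_even_years(embodied_kgco2: float,
                     old_power_w: float,
                     new_power_w: float,
                     grid_intensity_kg_per_kwh: float) -> float:
    """Years of 24/7 operation until the upgrade's embodied carbon is
    repaid by its lower operational emissions."""
    saved_kw = (old_power_w - new_power_w) / 1000.0
    if saved_kw <= 0:
        return float("inf")  # no operational saving: never breaks even
    hours_per_year = 24 * 365
    saved_kgco2_per_year = saved_kw * hours_per_year * grid_intensity_kg_per_kwh
    return embodied_kgco2 / saved_kgco2_per_year

# Example: a router with 1000 kg CO2e embodied, saving 100 W of draw,
# on a 0.3 kg CO2e/kWh grid, breaks even after roughly 3.8 years.
years = break_even_years(1000, 500, 400, 0.3)
```

Note how sensitive the result is to the grid intensity: on a cleaner grid, the operational savings shrink and the break-even point moves further out, which is exactly why Romain says it is hard to know ahead of time whether an upgrade pays off.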
And the sleeping part seems to be a reference to the fact that, in many ways, networks are often designed for the maximum amount of usage, not necessarily what the average usage might be. Similar to how, say, the electricity grid in America is currently designed for everyone to be using aircon at the same time, when normally it's maybe at 40 percent utilization. So there's all this headroom which has to be accounted for. And it's a bit like servers: as software engineers, we're generally taught to size for the maximum load, because the loss of business is supposed to be worse than the cost of having that extra capacity. But in 2024, there are new approaches that can be taken, and we do things like serverless and scaling things down. And these ideas have been slower to be adopted in the networking field, essentially, right?

Romain Jacob: Yeah, that's kind of the idea.

Chris Adams: Brilliant! Okay, that is good. We'll share a link to the paper, because it's quite a fun read and it stuck with me ever since I saw you speaking about it. Okay, so we've set some of the scene so far. We've got some nice mental models for thinking about this. We referred to the energy and the embodied part, and I guess the thing we didn't mention too much was that the growing old thing is going to be, you know, more of an issue over time, because while we're getting better at decarbonising the electricity of the internet, we're not doing such a good job of decarbonising the extremely energy intensive process of making electronics right now.
So this is only going to become more acute over time. So, maybe I can allow you to talk a little bit about this Hypnos paper, because as I understand it, it was an extension of some of this vision going forward. I know you weren't presenting it yourself, it was your team who were presenting it at HotCarbon, so maybe you can talk a little bit about that, and maybe say who was presenting, because, yeah, I enjoyed reading it; it was quite fun.

Romain Jacob: So Hypnos is a recent proposal that we've made to essentially try to quantify this sleeping principle. Two and a half years ago, we said, okay, we could look at the embodied aspect, we could look at the operational aspect, and how to improve them. And we thought back then that one way was to just try to apply those principles of sleeping to wired networks. And so, together with some master's students from ETH, we started looking into this and said, okay, in theory, we know how to do this. Let's try for real: let's take some hardware, let's design a prototype protocol that would just put some things to sleep and see what happens. What was surprising to us was that the theory of how you would do the sleeping in a wired network, and how much you would expect to save by doing that, was old. The first papers go back to 2008 or so, so it's been a while. And back then, people were saying, okay, assuming we have hardware that allows us to do everything we want, then we could implement sleeping in this way and then we would save so much. So they knew that the proposals they were making back then were not readily applicable. And so we felt like, 15 years later, it's kind of interesting to see: where are we today? How can we do things? And the key element there was how quickly you can turn something back on.
You can always turn something off; it takes some time, and you save some power, okay? But then, eventually, if you need it back on, you want it to react quickly. You know, I talked about the screen of your phone before. It's always off, and it's fine, because as soon as you press the button, or you touch it with your finger, the screen lights up, right? It feels instant. So it needs to happen quickly to be usable. Except that in networks, it's not like that, at least not today. So if you think about a link that connects two routers, this was the first thing that we started considering. Okay, let's put that to sleep. It's essentially the smallest unit in a network that you could put to sleep.

Chris Adams: A bit like a lane in a multi-lane road.

Romain Jacob: Yeah. I would think about the road network like this: turning off a port or turning off a link in a network would be like cutting one road in your road network. You know, like here in the city, you have many different ways to get to different endpoints, and you would just say, okay, this street is closed, so you can't use it.

Chris Adams: Ah, I see. Okay.

Romain Jacob: That's kind of the simplest thing one can do from a networking perspective, except that to turn the thing back on, to reopen the street, would take multiple seconds.

Chris Adams: Mm.

Romain Jacob: And it doesn't sound like much, but in the networking area, multiple seconds is a lot of time, because a lot of traffic can be sent during this time. And to keep things short and not too technical, it's way too long.

Chris Adams: So you want milliseconds, which are thousandths of a second, and if something takes two or three seconds, it's two or three thousand times slower than you'd like it to be, basically.

Romain Jacob: Exactly.
So, without getting too nerdy and too technical, the problem is, we can't do what was suggested in the literature, because we cannot sleep at the short timescales that were planned. And so we were like, okay, is it over, or can we still do something? And what we were thinking is, maybe we cannot sleep at millisecond timescales, but we can still leverage the fact that some networks are a lot more used during the day than during the night. So we have a lot of patterns that are daily or hourly that we can leverage to say, okay, well, we have a predictable variation in the average use of the network. And so when we reach the value to declare nighttime, then maybe there are some things we can turn off. And so we tried to implement a protocol that does just that. We said, okay, let's do the simplest thing possible and see how well it works. And Hypnos is essentially the outcome of that. So in essence, it's a very simple protocol that looks at all of the roads, so all of the links in the network, and how much they are used. And then we start turning off the unused ones, until we reach some kind of stopping condition where we say, okay, now it's enough; the rest we really need to keep. At a high level, this is what we do. So one challenge was to get actual data to test it.

Chris Adams: Mm-hmm.

Romain Jacob: Because if you simulate a network and you simulate the utilization of your network, you can make things look as pretty or as ugly as you want, you know, depending on how you look at things. And so what was really missing from the literature was a precise case study that says, okay, here is the data from a given ISP, here's what the network looks like, and here's what the utilization looks like. In this network, what can we do? So there has been a very long effort to actually get this data. And then the Hypnos paper essentially says, okay, we have the protocol, we have the theory, now we have the data.
Let's match the two things together and see where that takes us. And we looked at two internet service providers that belong to the access part. So those are networks that are very close to the end user, where you would expect more of a day/night fluctuation. And we do see that. What we were a bit surprised to confirm is that those networks are effectively underutilized. Do you want to dare a guess at the average utilization in those networks?

Chris Adams: I literally couldn't, I have no idea what the number might actually be, to be honest.

Romain Jacob: Guess!

Chris Adams: Okay, so I said the national grid was about 40%. Is it like 40, 50%?

Romain Jacob: Four, four zero?

Chris Adams: Yeah, four zero is what the national electricity grid is. So maybe it's something like that? That's my guess.

Romain Jacob: No, you're an order of magnitude too high. We are talking a couple of percent.

Chris Adams: Oh, wow! Okay, and the whole point about the internet is that if you don't have one route, you can still route other ways. So you've got all these links powered on that almost no one is ever using, basically, at like 2%. Okay, alright.

Romain Jacob: I need to qualify this, right? This is for the couple of networks whose data we managed to get access to. So I'm not claiming this is the general number. I would love to know; if you have data, please let us know.

Chris Adams: Mm.

Romain Jacob: But for the networks we could get access to, this is the type of number you would see: an average utilization of a couple of percent.
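The greedy idea Romain sketches (look at how much each link is used, turn off the least-used ones, stop when a condition is hit) can be illustrated with a toy version in Python. The topology, the 5 percent utilization threshold, and the keep-the-network-connected stopping condition below are all invented for illustration; the real Hypnos protocol described in the paper is more involved than this sketch:

```python
# Toy sketch of greedy link sleeping: rank links by measured utilization
# and power down the least-used ones, refusing any shutdown that would
# partition the network. All values below are invented for illustration.

def connected(nodes, links):
    """Check that the undirected graph (nodes, links) is connected."""
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, frontier = {start}, [start]
    while frontier:
        n = frontier.pop()
        for a, b in links:
            # follow any link touching n to its other endpoint
            for nxt in ((b,) if a == n else (a,) if b == n else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen == set(nodes)

def sleep_links(nodes, utilization, threshold=0.05):
    """Greedily put low-utilization links to sleep, keeping connectivity."""
    active = set(utilization)
    asleep = []
    for link in sorted(utilization, key=utilization.get):  # least-used first
        if utilization[link] >= threshold:
            break  # stopping condition: remaining links are busy enough
        trial = active - {link}
        if connected(nodes, trial):  # never partition the network
            active = trial
            asleep.append(link)
    return active, asleep

# Triangle topology with two nearly idle links and one busy one:
nodes = {"A", "B", "C"}
util = {("A", "B"): 0.01, ("B", "C"): 0.02, ("A", "C"): 0.30}
active, asleep = sleep_links(nodes, util)
# Only ("A", "B") can sleep; sleeping ("B", "C") too would cut off B.
```

The connectivity check stands in for the "the rest we really need to keep" condition: even a link at 1 percent utilization stays on if removing it would strand part of the network.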
And again, going back to what we were saying before, in a data center, things would be different.

Chris Adams: Mm hmm.

Romain Jacob: I actually don't know, because I work very little in data center networking, but I would expect things to be more in the 40 to 50 percent range, kind of like what you were mentioning before. But in an internet service provider network, the underutilization is extreme. There are various reasons for that, but it tends to be the case.

Chris Adams: That's really interesting, because when you look at data centers... so, I can tell you about the service that I run, or that our organization runs, the Green Web Foundation. We run a checking service that gets between 5 and 10 million checks every day, right? So that's maybe in the order of 400 million per month, something like that. It's a relatively high number, and we started working out the environmental impact of our own systems recently; that's with utilization around 50 percent for our systems. And in the cloud, typically you'll see cloud providers saying, oh, we're really good, we're at 30 or 40 percent. The highest I've seen is Facebook's most recent stuff about XFaaS, where they say, oh yeah, we can achieve utilization as high as 60-odd percent. But lots of data centers, the kind of old data centers, would have been in the low single digits. And you've had this whole wave of people saying, well, let's move to the cloud, making much better use of a smaller number of machines. So it sounds like the same kind of idea: massive underutilization, and therefore huge amounts of essentially idle hardware. It seems like it's somewhat similar in the networking field as well, and there's maybe scope for reductions in that field too. Okay.

Romain Jacob: Exactly. And so, I want to make it clear, it's not happening this way because operators are idiots, right?
There are a number of reasons why you have such underutilization. One, what I would say is probably the main one from a design point of view, is that you want to provide high performance for a number of connections in your network. So to reach from point A to point B, you typically want to have the lowest delay, and that requires having a direct line. Then you have other concerns, like the fact that things do fail in networks. Link failures happen, and they can be quite drastic. You operate a physical infrastructure in a country where people live and work; you have incidents, fibers get cut, and those are things that take a long time to fix and so on. So you want to have some resiliency in your network, so that if some part of the network goes down, you can still reroute the traffic the other way around and still have enough capacity to serve that traffic. So you have some links that are intentionally not used by default. That's how you get to a couple of percent average. But you don't want to be at 50 percent, because if you are at 50 percent and something really goes down, then you may run into a situation where you don't have enough capacity left to run your business. So the point is, we have such a high underutilization, and something that I don't think we explained so far is that in network equipment, so routers and switches, you have very little power proportionality. What I mean by that is that the amount of power drawn by a router, at a high level (it's not exactly true, but at a high level),

Chris Adams: Hmm.

Romain Jacob: is almost independent of traffic: it varies very little whether you send no traffic at all or send at 100%.

Chris Adams: Okay.

Romain Jacob: So what it means is that if you have a router that you use at 1 percent of its total capacity, you pay almost 100 percent of the power.

Chris Adams: It's almost like one person in that plane going back and forwards, for example.
Like, if I'm going to fly somewhere and I'm the only person on board, it's going to be the same footprint as if that plane was entirely full, for example.

Romain Jacob: Kind of, yes. It's the same kind of idea. And this is why, for us, investigating this sleeping was kind of interesting: we know we have such massive underutilization, although I probably would not have guessed it was that low, and it wastes a lot. Further, we are essentially operating most of those links at the worst efficiency point possible. And so we try to remedy this, and it goes one step in that direction.

Chris Adams: I see. And I think one thing that you mentioned before was this idea that you can power these things down. Because there are alternative routes through the network at any time, it may be the case that even if you do have these things powered off, you can respond to upticks in demand. Just like with, say, national grids, where people might switch on batteries, feed power into the grid from a battery, or possibly fire up a peaking gas plant, you still have the option of switching these links back on when there is a big peak in demand, for example.

Romain Jacob: Exactly. And it's the same with the sleeping protocol we proposed, right? So, it still makes sense to have those redundant links deployed, you know, those fibers laid out. And then you may say, yeah, but we've paid all this effort to actually install this, and it's there, so why should I not turn it on? Well, because it consumes energy whether you use it or not, for one. And second, it's good to have it in case you need it, but you can turn it off so that you save energy, if you can turn it up quickly, right? Then it goes back to what I was saying before.
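The trade-off Romain describes, daily energy savings from sleeping links against the rare cost of slower failure recovery, can be written as a back-of-envelope expected-value check. Every quantity here is a made-up assumption for the sketch:

```python
# Back-of-envelope for the trade-off discussed above: daily energy saved
# by sleeping links versus the rare cost of slower failure recovery.
# All quantities are made-up assumptions for the sketch.
def sleeping_is_worth_it(daily_kwh_saved: float,
                         failure_prob_per_day: float,
                         recovery_penalty_kwh_equiv: float) -> bool:
    """Compare the expected daily benefit against the expected daily risk cost."""
    expected_risk = failure_prob_per_day * recovery_penalty_kwh_equiv
    return daily_kwh_saved > expected_risk

# A "doomsday" event once every hundred years vs. savings every single day:
print(sleeping_is_worth_it(
    daily_kwh_saved=120.0,
    failure_prob_per_day=1 / (100 * 365),
    recovery_penalty_kwh_equiv=50_000.0,
))  # True: 120 kWh/day saved vs ~1.4 kWh/day of expected risk
```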
The turning on quickly part is still problematic today.

Chris Adams: Hmm.

Romain Jacob: So, in orders of magnitude, the time it takes to actually do this would be roughly ten seconds, a few seconds, let's say, up to a minute or so. So then, what do you have to weigh? The benefit you gain by turning links off, in terms of energy, versus the time you may have to wait until you get back to a good state in your network in case you have some failures. And of course, you need to multiply that risk by the likelihood of getting such link failures. So, let's say you have a doomsday event that, you know, will just kill the network entirely, but that happens once every hundred years. Maybe you can be fine having a day-to-day management policy that says, okay, to manage this doomsday event, we will need an hour longer, but all the rest of the time, it will be fine and we'll save energy every single day. You know, you have weighed the pros and cons of a strategy in terms of performance and in terms of energy usage.

Chris Adams: And presumably, because you mentioned that you often have these regular, kind of, predictable cycles, like, fewer people use the internet when they're asleep than when they're awake, for example. It sounds really silly, but, yeah, you're going to see these predictable patterns. Some of the work with the Hypnos paper was essentially taking some of these things into account. So you can say, well, you need to have this buffer, but the buffer doesn't have to be massive. You don't need every single car in the world with its engine on, idling, just in case you need to use it; you can turn off some of these car engines. So that was the kind of idea behind this. Okay, neat. So I've used this car model a few times, but it suggests something about how we think about energy usage and how it scales with how we use the internet.
It might not be the correct mental model. And I just want to run this by you, because this is one thing I've been thinking about recently: a lot of us instinctively reach for a kind of car, driving, and burning-fuel model, because that's how a lot of us experience the costs of energy a lot of the time, right? But it feels like a different model might be, I don't know, bike lanes, where there's an amount of infrastructure you need to build. You might need to make sure a bike lane is well lit, for example, but the number of people using the bike lane isn't the big driver of emissions. Maybe something like that.

Romain Jacob: Yeah, I was about to say, I think the analogy is not wrong per se, it's just a matter of the trade-off between the infrastructure cost and the driving cost, let's say. So let's assume you're using a means of transport, whatever that may be, that has a cost X per kilometer, but then you need light, and you need, I don't know, cooling. Or if you're using something that works in, like, I don't know, a superconductive environment, then you need extreme cooling, and so the cost for that environment gets very high. I think the superconducting thing is actually a much closer analogy to the way networks work. You need to spend a lot of power, or draw a lot of power, just to get the infrastructure on. But once the infrastructure is on, once you have your superconductive environment, then traversing this environment is very cheap.

Chris Adams: Ah, okay.

Romain Jacob: And networks are a bit like that today, right?
So turning the wires on costs a lot, but when it's on, sending the bits through the wire is pretty cheap.

Chris Adams: Ah, I see. And if I understand it, when I've spoken to other people who know more about networking than me, they've basically told me that at some levels, even when you're not sending any data, there is a signal being sent that basically says, I'm not sending data, I'm not sending data, I'm not sending data, just so that you've got that connection, so that when you do send some data, there's a fast response time. So, just because we aren't perceiving something doesn't mean there isn't energy use taking place, for example. So there's maybe some leakiness in the models that we might instinctively or intuitively try applying when we're trying to figure out, okay, how do I make something more sustainable, for example?

Romain Jacob: Yeah, this is very true. And it gets a bit more detailed than that: it also depends on the type of physical layer you use for sending your information. In networks today, I guess we could differentiate between three main types of physical layer. One is electrical communication, so you send an electrical signal through a power rail. You have optical communication, so essentially using light that you modulate in some way. And then there's everything that is wireless, radio wave communication. I'll leave the wireless part out, because I know less about it and it's a very complicated beast. But if you compare electrical to optical, things work kind of differently. In the electrical environment, you have essentially a physical connection between the two points that try to talk to each other.
And so, when the physical connection is there, you may send messages as you were saying before, like, I'm not sending, I'm not sending, I'm not sending, but you can do this, for example, once every, I don't know, 30 seconds or so. That will be enough to keep the connection alive. Whereas if you use optical, it's different. Because if you use optical communication, the line does not exist; the line between the two ends exists only because you have a laser that is sending some photons from one end to the other. So, where it's different is that a laser is an active component. You need to send energy to create this link between the two ends.

Chris Adams: Mm.

Romain Jacob: And so, now, it's not that every 30 seconds you need to say, I'm not sending anything. It's that, all the time, you need to have this laser on so that the two endpoints know they are connected to each other.

Chris Adams: Ah, okay.

Romain Jacob: And it's actually one of the reasons why the early ideas about sleeping are not so much in use today: they don't work nicely with optical communication. And optical communications are the de facto standard in networks today, in the core of the internet and in data centers as well. For reasons that we don't have time to detail, optical is the primary means of communication. And by design, the laser needs to be on for the communication to exist, whether you send data or not.

Chris Adams: Alright, okay, thank you for elucidating this part here.
So, when you're working with digital sustainability, it's very common to look at a kind of figure per gigabyte sent, for example, and in some cases that's better than having nothing, but there is a lot of extra nuance here. And we have seen some new papers; I think there was one by David Mytton that we'll share a link to. He shared one recently about the fact that there are other approaches you might take. It'd be really nice to just touch on some of that, if we could, and to add some extra nuance, because it's not that there is no proportionality at all. Maybe we could talk a little bit about some of the things David Mytton has been proposing as an alternative way to figure out a number here. I'm mainly sharing this for developers who get access to these numbers and want to make a number go up or down, and it's useful to understand what goes into these models so that you are incentivizing the correct interventions, essentially.

Romain Jacob: Yeah, of course. So that's actually a very important point, I think. You will often find, if you look on the web or anywhere, figures in energy per bit, or energy per X, or energy per web search, or energy per email, that sort of thing. Whatever we think about whether computing such numbers makes sense, what is very important to understand is that those numbers were derived in an attributional way. That means you take the total power cost of a system, and then you divide by the number of bits that were transmitted. If you take a network, you take the sum of the energy consumption of all the routers and all the links and all the cooling and all of everything.
And then you look at the total amount of traffic you've sent over your reporting interval, like a year. You take one, you divide by the other, and ta-da, you get energy per bit. That is interesting: it gives you an idea of how much energy you spend for the useful work you've done in that network. But it should not be interpreted as the cost of a single bit, because that would be a different type of reasoning, which we call consequential reasoning. If you do this, you would then draw the wrong conclusion that if I have a network that uses, I don't know, a hundred kilowatt hours, and I send a hundred gigabits more, I will use ten kilowatt hours more of-

Chris Adams: It increases the entire system by that, rather than my share of it, for example.

Romain Jacob: Exactly. Except that it's not true. It's not true because the total number in energy per bit encapsulates all the infrastructure costs. And those infrastructure costs are constant; they are independent of the amount of traffic. So this summary statistic is useful for tracking how much your network is used over time, but it's not good for predicting the effect of sending more or less traffic. And it's a subtle thing: if you overlook this, you can make very wrong statements and bad decisions. This is what the papers you referred to try to highlight and explain, saying that you need a finer view on the energy per unit that you're interested in. It's a bit more subtle than that. People should read the paper; it's a great paper. It's very accessible, not too technical, I think, and it's great for people interested in this area to get a good primer on the challenge of computing the energy efficiency of a network. I think it's really a great piece.

Chris Adams: Okay, cool.
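The attributional pitfall Romain warns about can be made concrete with a toy calculation. The network totals and the marginal figure below are hypothetical placeholders, not data from any real network:

```python
# Sketch of the attributional pitfall discussed above, with made-up numbers.
TOTAL_ANNUAL_KWH = 1_000_000.0   # whole-network energy over a year
TOTAL_ANNUAL_GB = 500_000_000.0  # traffic carried in that year

# Attributional intensity: fine for tracking efficiency, wrong for prediction.
kwh_per_gb = TOTAL_ANNUAL_KWH / TOTAL_ANNUAL_GB  # 0.002 kWh/GB

def naive_consequential_estimate(extra_gb: float) -> float:
    """The WRONG reading: treats the average as if it were a marginal cost."""
    return extra_gb * kwh_per_gb

def marginal_estimate(extra_gb: float, marginal_kwh_per_gb: float = 0.0001) -> float:
    """Closer to reality: only a small traffic-dependent share actually scales.
    The marginal figure is a hypothetical placeholder."""
    return extra_gb * marginal_kwh_per_gb

print(naive_consequential_estimate(100.0))  # 0.2 kWh attributed to the extra traffic
print(marginal_estimate(100.0))             # 0.01 kWh actually added, in this sketch
```

The gap between the two numbers is the infrastructure floor: it is in the average, but it does not move when you send more bits.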
So basically, I think one of the implications of that is that, let's say I'm designing a website: if I make the website half the size, it doesn't necessarily mean I halve its carbon footprint, because of how some of the models we use work. They're popular, and they are an improvement on having nothing, but there's extra nuance to them. I say this as someone who works in an organization where we have a software library called CO2.js. We have a transfer-based model, because that is one of the most common ones, just one of the defaults. We also have an issue open specifically about this paper, because when you're starting out, you will often reach for some of these things. And while there's benefit and value in having some of these models to help you, it's also worth understanding that there is extra nuance, and they can end up creating slightly different incentives. This is something we'll talk about with carbon awareness as well, because, again, different ways of measuring the carbon intensity of electricity can create different incentives too. So this is one thing we'll be developing over time. But, if I may, I'm just going to touch on this other thing before we move on to wrap up. This can give the impression that there is no proportionality between using digital tools and the rollout of extra infrastructure, and if you said there's no link, that would be an oversimplification as well. One of the previous guests, Daniel Schien, came on and spoke about some of this, and maybe you might paraphrase some of it, because I think his perspective is also very helpful and kind of illustrates why we need to be coming up with better models to represent this stuff.

Romain Jacob: Yes, exactly. Thanks for bringing that up.
That is also extremely important. So, I've said before, power is kind of constant; it doesn't depend so much on how much you send. Two things to keep in mind. First, there is some correlation. So if you do send more traffic, there will be an increase in power, and so you will consume more energy. That is true. And the work that I'm doing will make that even more so in the future. What we are trying to do is essentially find ways of reducing the power draw when you're under low utilization. And if we are successful in doing that, by sleeping and by other methods, then it will create a stronger correlation between traffic and power, right? And this is actually good: for energy efficiency, the closer you are to proportionality, the better. So if we are successful, then the correlation will increase, and then sending more bits, or having a smaller website, will have a stronger impact on energy consumption and carbon footprint. So that's one aspect.

Chris Adams: Mhmm.

Romain Jacob: The second aspect is also extremely important. You mentioned already the work of Daniel Schien, which is great about this: thinking about the internet on a different timescale. If you look at one point in time, right now, the network is the static element. There are so many nodes in the network, there are so many links, and therefore today, if I send more traffic, it will be low impact. However, if you take a longer timescale and you look at a one-year, six-month, or ten-year horizon, what happens is that when people send more traffic, you see the utilization of the links going up, and that will have the future consequence of incentivizing people to deploy new links, to increase the capacity of the network.

Chris Adams: Hmm.

Romain Jacob: Which means that over a year, so over time, as you send more traffic, you create more demand. As you create more demand, you will create more supply.
That means scaling up your network, and every time you scale up, almost always, you will increase the energy consumption of the network, right? So, you will further increase the infrastructure costs. So, as you send more traffic, as you watch more Netflix today, it does not consume much more energy now, but it will incentivize the network to be scaled up, and that will consume more energy. So, there is a good reason to advocate for what is known as digital sobriety: to try to use less of the network, or to make a more sensible use of the network, because if we use it more, it will incentivize future increases in the size of the digital network, and therefore future increases in energy consumption.

Chris Adams: I see. Okay. So basically, if you set these norms of all this extra use, even though you're not making these changes in the meantime, on the kind of multi-year timescale that people make infrastructure investments on, they would then respond to make sure that they've got that kind of headroom available over time. And I think that's actually some of the work that Daniel has been doing to model, because there is no way that we can deploy new infrastructure with a zero carbon footprint, even if everything is green. So there's an impact there that we need to be mindful of. That's some of the work that he's referring to there. We'll share a link to that paper as well. I know that we can't model that in CO2.js, for example, in our library, but this is literally the cutting-edge work that I think he's been doing.
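The longer-timescale dynamic discussed here, where sustained traffic growth triggers capacity upgrades that raise the mostly constant power floor, might be sketched as a toy multi-year simulation. The headroom threshold, growth rate, and power factors are all illustrative assumptions:

```python
# Toy multi-year model of the dynamic described above: extra traffic today
# barely moves power draw, but sustained growth triggers capacity upgrades,
# and each upgrade raises the mostly constant infrastructure power floor.
# The headroom threshold, growth rate, and power factors are assumptions.
def simulate_network(years: int, traffic: float, growth: float,
                     capacity: float, base_power_kw: float) -> float:
    """Return the infrastructure power floor after `years` of traffic growth."""
    for _ in range(years):
        traffic *= 1 + growth
        while traffic > 0.5 * capacity:  # operators keep headroom for failures
            capacity *= 2                # deploy more links and routers...
            base_power_kw *= 1.8         # ...and pay a higher power floor
    return base_power_kw

# 30% annual traffic growth for ten years, starting well under capacity:
print(simulate_network(10, traffic=10.0, growth=0.3,
                       capacity=100.0, base_power_kw=50.0))
```

In a run like this, no single day's traffic changes the power draw, yet the upgrades it provokes more than triple the floor over the decade.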
And the last time he was on, he was hiring for researchers to find out, okay, how do you represent this stuff? Because when we think about large organizations on the scale of Amazon or Microsoft, which are spending literally tens of billions of dollars each year, then you do need to think about these kinds of multi-year, decade-scale infrastructure investments. Okay. All right. So, we've gone really into the details, and we've spoken at the kind of macroeconomic level. Now I wonder if I can bring this back to the kind of frame for developers who are like, oh, this sounds really cool, how do I use some of this? What would I do? Like, if you wanted to have an internet that was able to sleep more and could grow old, are there any ideas you might use? How might it change how you build, for example? Are there any sensibilities you might take into account? Because, as I understand it, some of the things with Hypnos were primarily designed so that this could be done without forcing people to make too many changes at the end-user level. But there may be things that you, as a practitioner, might do to make things more conducive to this, or something like that, for example.

Romain Jacob: Yes, definitely. So I think it goes back to the question we had just before about Daniel's work, about looking at the longer-timescale perspective. I think as an end user, as a software designer, thinking about sobriety is something that everybody should be doing. It's not just for sustainability. One very recent Environment Variables podcast was about the alignment between sustainable practices and financial operations, and in many cases, those two things align. In a similar mindset, if you think about web design, if you look at the W3C Web Sustainability Guidelines, they align pretty much perfectly with the accessibility guidelines and with the performance optimization guidelines. Why?
Because a smaller website will also load faster and, you know, get people faster to what they want, which is the content they really want to consume. So, there's general value in being as modest in your demands on the system, or on the network, as possible, and for compute as well. It's the same thing. Today, it does not yet translate into net benefits at the network level, but it might in the future. And, you know, somebody has to start, and you need the efforts of all sides in order to, you know, make that work. In the networking domain, I know there have been some people studying this from a theoretical point of view, where the end user would be able to say, I want to place a phone call, but I'm willing to wait for, I don't know, 20 or 30 seconds before my call is placed. And if you have an ecosystem of users of that network where a sufficiently large share of users are so-called delay tolerant, then you can optimize your network to save resources, saving energy and ultimately reducing your carbon footprint. In the more traditional networking domain, one could envision something like that; there is no work in this area, as far as I know. One way you can think about this: the incentive could be pricing. You could say, okay, I'm willing, not so much to wait, but most likely to cap the bandwidth I can get out of the network. If you were to say, I'm willing to get at most, I don't know, 100 megabits per second at high-utilization times, then you would get a discount on your internet deal. I think that's feasible, that's possible; one thing that could work quickly. But that would be more of a global effect, between the user and the internet service provider. If you're a software developer, if you think about how your application could be built in such a way, I honestly don't know.
I think it's extremely easy today, technically speaking, to have some sort of flagging where your application can say, I'm delay tolerant, I can wait, I will not use more than a given amount. One can do that, that's easy. Your network can get that information. The tricky bit is, how would the network then use that information to route traffic in a way that would save energy? That is much trickier.

Chris Adams: Ah, okay, so that sounds like a possible route that people might choose to go. Because I know, for example, there is some work in the world of streaming, where there was a notion of, I think it was the gold button, that was put together by the Greening of Streaming group. They were basically saying, look, most of the time, if I'm looking at television from across the room, I can't really tell if it's 8K or 4K. So, allow me to have a default which lets me reduce the resolution or the quality, so that when there are lots of people trying to use something, we can see the amount of data reduce somewhat. And that reduces the amount of extra peak capacity people might need, for example. These are some ways to make use of the existing capacity that lives inside the entire network, to smooth off that peak, as it were. That's some of the stuff that we might be looking at. So, in some ways, it might be about providing hints when you send things over the network. And I think there's actually some work that we've seen from some existing tools; I know that Facebook's serverless platform does precisely this. And there is also some work at Intel, we'll share some links to this, where, when you have a computing workload, you can basically say, well, I'm not worried about when this gets delivered, or, as long as it happens before this time, it's okay.
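The delay-tolerant flagging idea might look something like the following sketch. The job shape, peak window, and scheduling rule are all hypothetical, not taken from any real scheduler or from the Hypnos work:

```python
# Sketch of the "delay tolerant" flag idea: jobs that declare a deadline can
# be shifted past the peak so existing capacity absorbs them. The job shape,
# peak window, and scheduling rule are hypothetical, not from any real system.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    delay_tolerant: bool
    deadline_hour: int  # latest acceptable hour of day, 0-23

PEAK_HOURS = range(18, 23)  # assumed evening peak, 18:00-22:59

def schedule_hour(job: Job, now_hour: int) -> int:
    """Run urgent jobs now; push tolerant ones past the peak if the deadline allows."""
    if not job.delay_tolerant or now_hour not in PEAK_HOURS:
        return now_hour
    first_off_peak = max(PEAK_HOURS) + 1
    return first_off_peak if first_off_peak <= job.deadline_hour else now_hour

print(schedule_hour(Job("backup", True, 23), now_hour=19))      # 23: deferred
print(schedule_hour(Job("video-call", False, 19), now_hour=19)) # 19: runs now
```

As Romain notes, the flag itself is the easy half; turning deferred demand into links that can actually sleep is the open research question.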
And this does provide the information for people running these systems to essentially move things around to avoid having to increase the total capacity. So you make better use of the existing capacity you have before you have to buy new capacity or deploy new wires or anything like that. That seems to be what you're suggesting.

Romain Jacob: In the cloud computing world, that does exist for real, yeah, for sure.

Chris Adams: Okay, so there's a possible path for future research. And maybe this is the thing we can wrap up on. We've done a dive into sleeping and getting old, right? But that's not the only tool available to us. Are there any papers or projects or things that you would direct people's attention to, that you think are really exciting but may not necessarily be in your field? Something you're excited about, for example? Because you spend a lot more time thinking about networks than I do, and I'm pretty sure there's something you might mention that other people listening here might enjoy following, for example.

Romain Jacob: Yeah. Yeah. So I think two things come to mind.

The Week in Green Software: Tackling the Energy Challenges of AI

Jul 18th, 2024 8:00 AM

Producer Chris Skipper is joined by guests Marjolein Pordon of ladylowcode.com fame and Andri Johnston from Cambridge University Press & Assessment to discuss the sustainability challenges associated with AI's increasing energy demands and the role of data centers in addressing these challenges. Marjolein emphasizes the need for sustainable infrastructure and the potential synergy between low-code platforms and AI. Andri shares insights from CUP&A's efforts to understand and mitigate digital carbon emissions, highlighting the importance of transparency and accurate reporting from cloud service providers like AWS. Learn more about our people:Andri Johnston: LinkedInMarjolein Pordon: LinkedIn | WebsiteChris Skipper: LinkedIn Find out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:AI energy demand is ruining Google's environmental goals [06:06]How AWS helps reduce carbon footprint of AI workloads [12:58]Roundtable: Data Center Sustainability Plays for the AI Era [23:33]AI is starving for more power. Can quantum computing help? – Computerworld [34:28] Events:Metatalent.ai on LinkedIn: "Thought Leadership Webinar For AI: Your Replacement, or Your Advantage?" [40:10]Climate-Conscious Websites for a More Sustainable Net [40:38]Resources:AI water footprint suggests that large language models are thirsty [24:15]Underwater data center voyage hits the doldrums - Verdict [26:16]How data centers at public pools can keep swimmers warm - The Verge [26:36]If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn! Hosted on Acast. See acast.com/privacy for more information.

Preview: CXO Bytes

Jul 11th, 2024 7:00 AM

In this episode we preview the first episode of CXO Bytes. Join host Sanjay Podder as he talks to leaders in technology, sustainability, and AI in their pursuit of a sustainable future through green software. Joined by Dr. Ong Chen Hui, Assistant CEO of Singapore's Infocomm Media Development Authority (IMDA), the discussion focuses on Singapore's comprehensive approach to digital sustainability. Dr. Ong highlights IMDA's efforts to drive green software adoption across various sectors, emphasizing the importance of efficiency in data centers and the broader ICT ecosystem. Listen to the full episode via the links below. Listen to CXO Bytes: Spotify | Apple | YouTube. Find out more about the GSF: The Green Software Foundation Website | Sign up to the Green Software Foundation Newsletter. If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts; Follow and rate on Spotify; Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Chris Skipper: Hello, and welcome to Environment Variables. This is Chris Skipper, the podcast producer, bringing you a very special episode of Environment Variables. Today, I'm excited to share with you a preview from our new podcast series, CXO Bytes, hosted by Sanjay Podder, Chairperson of the Green Software Foundation. CXO Bytes dives deep into the intersection of innovation and sustainability within the tech industry. In each episode, Sanjay will be joined by industry leaders from the C-suite to explore practical strategies for greening software while driving enterprise growth. So without further ado, let's listen in on a sneak peek of the first episode.

Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software.
We will uncover the strategies and big green moves that help drive results for business and for the planet. I am your host, Sanjay Podder.

Hello everyone. Welcome to CXO Bytes. This is our inaugural podcast on how to use green software for building a sustainable future. This is a new podcast series, and the whole idea behind it is, you know, that embracing a culture of green software needs to come from the top. And we therefore want to talk with decision makers, with business leaders, with leaders who are running nation states like Singapore, for example, at the C level. You know, how are they driving this culture change when it comes to digital sustainability and green software?

Today I am super excited to invite Dr. Ong. She is the Assistant CEO of IMDA, which is the Infocomm Media Development Authority of Singapore. And we are going to chat about how IMDA is championing digital sustainability as well as green software. Welcome, Dr. Ong.

Dr. Ong Chen Hui: Thank you for having me on your inaugural podcast on green software.

Sanjay Podder: And you know, I had my own selfish reason for inviting you, because while the Green Software Foundation has been interacting with many, many large businesses across the world, IMDA and Singapore GovTech are two members of the Green Software Foundation who represent government. And we all know the very important role that government will play in sustainability in general. So I wanted to understand from you, you know, how you are looking at this space. We will talk a lot about that. The other aspect, probably to begin with for our audience, is a perspective on what IMDA is. You know, what is your specific remit, what are you trying to do in Singapore? If you can give us a few insights into that.

Dr.
Ong Chen Hui: Okay, so here in Singapore, climate change is actually a bit of an existential thing for us, us being a small nation state. We are also an island, so to us, climate change and the associated rising sea levels are a matter of concern. So, as a result, we have put in place a Green Plan that states our sustainability goals by the time we reach 2050. And this is a whole-of-government effort. I don't think it is a case where one ministry or one agency is responsible for it all; it is about the whole of government working together to make sure that we meet the goals of our Green Plan.

Now, what are some of the things that we are doing? Many things. For example, the National Environment Agency is rolling out some of the regulations; we have things like e-waste management, for example. Just now you mentioned GovTech, which is our sister agency. GovTech is also rolling out green procurement when they are procuring software solutions. Within IMDA, we are responsible for some of the industry development. We are also what we call the sectoral lead of the ICT sector. So, our own green strategy comprises, broadly, three different thrusts. The first is about greening ourselves as an organization. The second is really about greening the sector that we are responsible for, that we are leading; in that case, there will be things like the telecommunications sector and the media sector. And the third thing we want to do is to enable our ICT solution providers to provide green solutions to the broader economy, so that we can scale adoption and ease the friction out there in the ecosystem. So essentially, that's greening ourselves, greening the sector as the lead, and providing solutions through the ecosystem so that the wider community can benefit.

Sanjay Podder: Now this is really a full 360-degree kind of approach, and it is phenomenal.
And, I was wondering, you know, and you mentioned briefly Singapore being an island state. I was thinking, why digital sustainability? What will happen if Singapore decides not to do it, for example, right? Do you have a point of view? Say, because, you know, there are many different levers of sustainability; I can understand the larger sustainability, but what is the importance of digital sustainability? Do you think it's an important enough lever, or maybe you can look at nature, biodiversity, or something else, right? So specifically for digital sustainability, what is it that triggers IMDA that this is an important initiative? And I'm seeing, this is my second year in Asia Tech, that, you know, this is something you give a lot of importance to. Bringing in leaders from various organizations. Doing deep deliberation. I also remember last year, you brought out your new data center standards, I think increasing the temperature by one degree, that has an implication. If you could throw a little bit more light on digital sustainability in particular,
Dr. Ong Chen Hui: Mm hmm.
Sanjay Podder: why do you feel that's a very important lever for a country like Singapore, and maybe for many other countries around the world?
Dr. Ong Chen Hui: Yeah. Well, I think you're actually exactly right that when we are trying to drive sustainability, actually there are many different strokes. Some of it includes looking at energy sources and all that, which actually is also very important for Singapore, because we are small. We do have to look at different kinds of energy sources and how we can potentially actually import some of them, right? Now, when it comes to digital sustainability, actually our journey, I would say, started many years ago.
Maybe more than a decade ago, when we started looking at some of the research work within the research community about making sure that our data centers can operate more efficiently in the tropical climate. Now, data centers comprise almost a fifth of the ICT carbon emissions. And because they are such a huge component of the carbon emissions, of course, their efficiency has always been top of mind. Now, in a tropical climate like ours, a large part of the energy sometimes is attributed to the cooling systems, right? The air conditioning that's actually needed to bring the temperatures down. So as you rightly pointed out, what we found out is that actually, if you were to increase the temperature by one degree, that can lead to a savings of between two to five percent of carbon emissions. And as a result, we have been investing in research within our academia, funding some of the innovation projects with our ICT players, in order to look at what actually works and what doesn't. Because I think in Singapore, regulations always need to be balanced with innovation. So that has kind of led to what happened last year, which was that we released the first standards for tropical data centers. But we wanted to go a lot further, right, because some of those standards around cooling and all that, that's kind of like looking at how efficient the radiators are in a car. But we also need to look at how efficient the engines are. And the reality is that, if you look at the trends of ICT usage, of software applications, I mean, so much of our lives, whether it is watching videos, watching TikTok, right, our education, all of that, most of this has moved to be enabled by digital technologies. And when we look at the consumption of data centers and the kind of workload in it, it is increasing year by year. Now, with the explosion of AI, we know that the trend is probably that there will be more consumption of digital technologies.
And those are the engines that sit within the data centers. And we need to make them efficient. And as a result of that, we have decided that we need to also get onto this journey of greening the software stack. And greening the software stack means a few things. The first is, of course, I think this is still a fairly nascent area: how do we make software more measurable, so that there's a basis of comparison, so that we can identify hotspots? That, I think, is important. The second part that I think is important is also, given all the trends today, GPUs, CPUs all needing to work together, how do you make them work efficiently? How do you process data efficiently? How do you make sure that the networks and the interconnects within the data centers are efficient? I think all of these are worthy problems to look at. Some of it will rightfully stay still in the research stage. So we'll be funding research programs, called the Green Computing Funding Initiative, around it. But at the same time, we also think that there are some practices that may be a bit more mature already, and we should encourage companies to actually innovate on top of it. So we're also conducting green software trials.
Chris Skipper: Hey, everyone. I hope you enjoyed that preview from CXO Bytes. If you want to listen to the rest of the episode, please go over to the CXO Bytes page wherever you find your podcasts. Just search for CXO Bytes and enjoy the rest of this insightful conversation between Sanjay and Dr. Ong Chen Hui of the IMDA. And to listen to more episodes of Environment Variables, please visit podcast.greensoftware.foundation. Bye for now!
Hosted on Acast. See acast.com/privacy for more information.

The Week in Green Software: AI’s Power Problem

Jul 4th, 2024 7:00 AM

In this episode of Environment Variables, host Chris Adams is joined by Asim Hussain to dive into the complexities of AI's growing energy demands and its environmental impact. They discuss innovative approaches to sustainability, such as using fungi to manage building waste in data centers and the potential for greener materials and practices. The conversation also covers software optimizations to reduce AI's carbon footprint, emphasizing that energy inefficiency cannot be outsourced. They highlight the importance of integrated sustainable practices in tech development, particularly in the face of increasing AI power consumption projections.Learn more about our people:Chris Adams: LinkedIn | GitHub | WebsiteAsim Hussain: LinkedIn | WebsiteFind out more about the GSF:The Green Software Foundation Website Sign up to the Green Software Foundation NewsletterNews:Mushrooms eat building waste at Meta's Gallatin data center - DCD [03:35]Building with Mushrooms to Reduce Drywall Waste — or Cooking Up a New Future for Data Center Construction - Meta Sustainability [06:59]Solutions for AI's Energy Inefficiency Can't Be Outsourced - Bloomberg [15:42]BitNet: Scaling 1-bit Transformers for Large Language Models - Microsoft Research [16:34]Balance effects of AI with profits tax and green levy, says IMF | International Monetary Fund (IMF) | The Guardian [31:09]Fiscal Policy Can Help Broaden the Gains of AI to Humanity [31:24] [2406.09645] Carbon accounting in the Cloud: a methodology for allocating emissions across data center users [37:52]Events:HotCarbon - Workshop on Sustainable Computer Systems - July 9, 2024 [48:33]IEEE CLOUD 2024 International Conference on Cloud Computing - July 7 to 13 [48:53] Masterclass: Become a Sustainable UX Designer - July 8 [49:44]The Software Measurement Landscape - Workshop 2 - July 9 [50:12]Resources:Software Carbon Intensity (SCI) Specification [10:58]Episode 56 - Mushroom recycling with Joanne Rodriguez, Mycocycle - DCD Mushrooms eat building 
waste at Meta's Gallatin data center - DCD Options to make software greener without changing the code, and how to remember them | Chris Adams [30:17]If you enjoyed this episode then please either:Follow, rate, and review on Apple PodcastsFollow and rate on SpotifyWatch our videos on The Green Software Foundation YouTube Channel!Connect with us on Twitter, Github and LinkedIn!
TRANSCRIPT BELOW:
Asim Hussain: And now there's a big hoorah in our space, because, like, AI's now gone and our automation has now gotten to the point where even our jobs, we're all really, like, nervous and upset. But, you know, this has been the pressure that we've been applying to the rest of the world, with all the industry, for decades and decades, and it's just now coming to us and affecting us. So, you know, we don't really have a leg to stand on, I'd say.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.
Hello, and welcome to Environment Variables: The Week in Green Software, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. Today, in our news roundup, we're diving into some of the pressing issues at the intersection of AI and sustainability. So, with AI rapidly advancing, the energy demands of training and running these models are also seen to be skyrocketing, posing significant challenges for the environment.
We'll also be touching on some legislation for promoting sustainable business practices amongst AI companies, and the potential for a green levy to drive investment in greener, eco-friendly technologies. We'll also be talking about some of the latest papers that have been published for people trying to understand and get to grips with cloud carbon emissions. And finally, we'll touch on some of the exciting events in the green software community, including conferences, workshops, and masterclasses aimed at fostering sustainable development practices. Joining me for today's news roundup is my longtime friend, Asim Hussain of the Green Software Foundation. Asim, for people who've never listened to the podcast before, can I give you the floor to introduce yourself and some of your background?
Asim Hussain: Yeah, sure. So my name's Asim Hussain. I am the executive director of the Green Software Foundation. And yeah, I've been at the intersection of sustainability and software, I've been very lucky to be thinking about the same question about the intersection of sustainability and software, for quite a few years now. And yeah, I've been well, mate. I've just come back from a vacation, which has been a long time coming. I do have a little bit of a cold, so that would explain the slightly annoying nasally sound the audience members are going to have to experience for this podcast.
Chris Adams: So this is with your hot beverage and Nurofen chaser.
Asim Hussain: Hot beverage, yeah. That's what I like to do. That's how I like to start every podcast episode, with a coffee and a Nurofen. Every conversation with Chris Adams has to have both a coffee and a Nurofen.
Chris Adams: Wow, that's one thing to take away with me. All right, folks, I should just briefly introduce myself before we dive in. I am the executive director of the Green Web Foundation.
It's a small Dutch nonprofit based, well, in the Netherlands, where we are working towards an entirely fossil-free internet by 2030. I also am one of the hosts of this podcast here, as well as one of the organizers of ClimateAction.tech, which is an online community where actually me and Asim first met online, before he basically headed off to set up the GSF in its current state. Alright, Asim, are you sitting comfortably?
Asim Hussain: I'm standing uncomfortably.
Chris Adams: Standing uncomfortably at your swanky standing desk. That's good enough for me. Should we start with the news then?
Asim Hussain: Yeah, let's go for it.
Chris Adams: Alright, okay, so Asim, I was thinking of you when I saw this story. The first one is a story about mushrooms eating building waste in data centers. So this is a link to the Data Center Dynamics website talking specifically about the use of, essentially, building-waste-eating mushrooms at the Meta data center and other ones. And the general spiel of this is that there are now a number of companies which are essentially deploying fungi, various kinds of fungi, to deal with all the building waste that ends up being created when you're essentially demolishing a building or creating a new one. And it essentially takes all this waste, and the fungi are able to, essentially, deal with the toxins, and then create something like, kind of, fungi-style bricks that can then be used as a kind of circular building material going forward. And Asim, given that you're our kind of resident mushroom fan, I wanted to just, like, see what you thought about this, or if you had any particular immediate, like, hot takes or things when you saw this one?
Asim Hussain: No, it's a great application. In fact, it's not an uncommon application of, you know, what people are applying fungi in this technology for.
It's actually one of the very exciting kind of broader sustainability solutions in this space that there is. I mean, there's a couple of different types of fungi. You're going to have to pause me at some point. There's a couple of different types of fungi, but there's one particular type that's kind of saprophytic, which is effectively, the purpose fungi have in this life that we lead is, it's basically the thing that destroys things that have died. And if it didn't exist, then we'd be basically living on top of this massive mountain of logs that aren't decomposing. So that's one of the things that they're really, really good at. And there's been, in many different spaces, a lot of active research, a lot of startups, a lot of organizations exploring how to use fungi to decompose things that they don't normally decompose. And it's actually quite an interesting technique. Because what's so fascinating about the fungi space is it's driven in large part by citizen scientists, which is one of the things I love about it. And there's a lot of citizen scientists out there who are doing things like trying to find a strain of fungi which can decompose certain types of plastics. And you would literally do this: you would literally grab a selection of these plastics, you put them in a, you know, you can go online, it's as simple as this. You put it in a blender, you blend up a plastic in your home blender. And then you just have, like, as you can probably see behind me, I have lots of jars out there which have, like, different strains of fungi, and you would just put that with other material in the jars. And then you'd collect lots of different strains. Like, every single kind of mycelium is, like, a different strain of the same one. So you can have, like, millions and millions of different types of strains.
You go into the forest and you see a type of mushroom you've never seen before. And you're like, ah, maybe that will absorb this plastic. And so there's a lot of interest in this, trying to, like, find those strains of fungi that can kind of absorb and transform, you know, different materials. Obviously, certain fungi in the forest only work on certain types of trees. They have, like, a relationship with them, but you can actually find strains of fungi which do different things. And kind of the interesting thing about it is turning them into bricks like that as well. There's actually organizations out there trying to replace packing material for boxes. But what you do is you basically, you create, like, so it takes some time. It's not like a foam that you stick in and, like, 30 seconds later there's a thing. You basically have to have the thing you want to pack in a box. Inside the box you put, like, a substrate, which can be the thing that mycelium grows on. You then, like, almost impress inside this substrate the shape of
Chris Adams: like the
Asim Hussain: book or something, the mold, and then you inject it with the mycelium and you put it in basically an oven for, like, a month. And it comes back out, and then you basically spray it off, and, like, the actual mycelium has grown into the shape of the thing. And then you've got something which you can put in the packing crate. And then at the end of the day, it's mycelium, you just break it up and put it into your garden, and it decomposes. There's lots of, like, wonderful stuff. There's a great guy called Paul Stamets, who's quite a character. But he's done some great TED talks on the use of, like, fungi and mycelium in kind of getting rid of waste, even getting rid of oil, and there's a lot of very active stuff in this space.
Chris Adams: Can I stop you one second there, mate? Cause, cause there's, so you just said Stamets, right?
So in real life, there's someone called Stamets, and so when they had this whole Star Trek Discovery thing, where there was this guy called Stamets who was using, like, the mycelial internet space thing, that's a direct reference to this dude, basically.
Asim Hussain: That's a direct reference to Stamets, which is kind of, which joins two of my biggest nerd bubbles together in the most beautiful way. But yeah, there's the engineer in Star Trek Discovery, how they, instantly, there's a new type of drive, and they instantly can move from one part of the universe to another, and it's called a spore drive, and you need to kind of enter this kind of psychedelic realm to connect, so yeah.
Chris Adams: My word, Asim, I was not expecting us to dive down that myco rabbit hole, but that was a lot of fun. Thank you very much. So you basically said, by doing this, so in this case of packaging, this basically removes the need for, like, say, fossil-based expanded polystyrene in packaging, and in the case of materials here for buildings, you would use that instead of having to get a bunch of virgin materials, for example. This would be, like, a circular, that's the approach that they'd be using here, right?
Asim Hussain: So what, so the specific approach they're using in this particular article is, I believe, more about, it looks like they're basically trying to get rid of the drywall that they have inside the data centers. So I don't know if it's particularly from a decommissioning-a-data-center perspective or a renovating-a-data-center perspective, but they're ending up with a lot of material which typically you would just dump in a landfill, but now they've basically got a form of mycelium which can eat drywall and generate something that's decomposable, maybe edible. Yeah.
Chris Adams: All right, cool. And so this is referencing the Meta data center in, I think, Tennessee.
But we've also seen Microsoft, as far as I'm aware, Microsoft has also been dipping their toes into this field as well. And one of the reasons why you might care about this is that, well, last year, Microsoft's reported emissions, when they released their sustainability report, it was, like, up 30 percent, and a significant chunk of that came from the buildout of data centers. So we are now starting to think a lot more about the embodied energy in the facilities that are created so that we can actually have data centers, so we can actually use compute, a lot of the kind of compute power available to us, or even some of the AI power, or the kind of sources of AI and stuff like that, because they need to be in a building somewhere to get this stuff built. And, like, this kind of made me wonder, actually, Asim, surely I imagine some of this might show up in an SCI score, a Software Carbon Intensity score, if you're purchasing cloud from a certain place. If you've seen a massive buildup of data centers, surely that might have an impact on the embodied carbon for the compute you might be using, right?
Asim Hussain: It could do. It depends on what your, 'cause in the SCI, there's kind of two components. One is the software boundary, which is "what are you going to include and not include in that score?" I remember us having quite a few conversations in the early days of the SCI, which, "should you include the, like, the concrete that was used to pour the floor of the data center?" And I'm not too sure, I don't think anything particularly made its way into the specification, but it has to be, if you're measuring something, it has to be something which drives a choice or a behavior.
So, you know, I suppose where I'm going with this is, if there was a data center which was particularly built, like, zero carbon, maybe built with mycelium or something. I don't know, but, like, if there was a data center that was particularly built with that choice, then maybe it is something you want to include in the score, because then that can drive an action of choosing one option over another. But if every single data center is effectively built exactly the same way, the discussions we were having was, well, that's just the overhead of adding, effectively, a coefficient which wouldn't really drive a decision-making factor. So, I suppose where I'm going with this is, excitingly, if there are data centers being built which are going to have vastly different embodied carbon profiles, and then if that was included in an SCI score... and I think, as we move forward with the SCI, because one of the things that's happening... we launched kind of version one and now 1.1 of the SCI, and it's very bare, very basic. It's designed to be built upon. And so now what the teams are having conversations around is, like, if you were to apply the SCI to AI specifically, what does that include? And one of those questions they want to answer is what,
Chris Adams: building,
Asim Hussain: yeah, like, what you include. So, so when someone reports an AI score, we can actually start getting to, like, some apples-to-apples comparisons. So that's a conversation that, you know, it's interesting. Maybe we should bring it up again: do you include the embodied of the data center? But then you also get into the headache of "my God, it's hard enough trying to figure out what the embodied of a chip is."
Chris Adams: Yeah.
Asim Hussain: And now we're going to be asked to figure out, like, what the embodied, so there has to be some practicality aspect to this as well. You know, we have to, there should at least be some models.
I don't know.
Chris Adams: So there are models, as I understand it, there are models for working out, essentially, the carbon emissions for a kilo of concrete, for example, or stuff like that. Some of these exist. And,
Asim Hussain: exists from an LCA,
Chris Adams: and there are companies, and we know there's, like, in Europe at least, I know there's a company called LeafCloud. They're reusing heat, but it's also a very specific kind of data center, which isn't, like, a very large out-of-town thing. They have, essentially, shipping containers put into places like, say, greenhouses, where the heat is being reused, for example, and where they're not having to build a whole bunch of new buildings. There's also, I think in Switzerland, there's one company, because we maintain, where I work, we maintain a directory of green data centers. And one of them basically reused an old factory building with a waterwheel that used to be kind of like a clothing factory, and now it's a data center. So they've essentially reused the whole building shell. They haven't built a load of stuff as a result. So this is one place where this might show up, but in order to do this, you need to have access to the numbers for this. And that's still a bit of a challenge, because, yeah, we don't have easy access to these numbers, and, like you say, it's a challenge just thinking about chips, let alone expanding the boundary to the actual buildings instead.
Asim Hussain: Yeah. I mean, at least you have some information when you're running software, like, what, you know, you can, now that we've done a lot of that working out, so you can figure out, you know, perhaps it's this chip. But I think, given the secrecy around data centers, I don't know, I think there's going to be, I don't know.
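For readers unfamiliar with the SCI score being discussed, the specification defines it as SCI = ((E × I) + M) per R: operational energy E times grid intensity I, plus embodied emissions M, per functional unit R. The boundary question the hosts raise is exactly what goes into M. A minimal sketch, using entirely made-up figures, of how amortizing a share of the building itself into M would shift a score:

```python
# Sketch of the SCI formula, SCI = ((E * I) + M) per R, from the GSF
# Software Carbon Intensity spec. All numbers below are hypothetical,
# purely to illustrate the boundary question discussed in the episode.

def sci(energy_kwh, grid_intensity_gco2_per_kwh, embodied_gco2, functional_units):
    """Software Carbon Intensity: operational plus embodied carbon per functional unit."""
    operational = energy_kwh * grid_intensity_gco2_per_kwh  # E * I
    return (operational + embodied_gco2) / functional_units  # (E*I + M) per R

# Embodied carbon of hardware only, versus hardware plus an amortized
# share of the data center building (both figures invented):
hardware_m = 500.0        # gCO2e share of servers attributed to this workload
building_share_m = 120.0  # extra gCO2e if a slice of the building is included

per_request_hw = sci(2.0, 400.0, hardware_m, 10_000)
per_request_all = sci(2.0, 400.0, hardware_m + building_share_m, 10_000)
print(per_request_hw, per_request_all)  # 0.13 0.142 (gCO2e per request)
```

Whether that 0.012 gCO2e difference is worth tracking is the "practicality aspect" Asim mentions: a coefficient identical across all data centers drives no decision, so it only earns a place in the boundary once embodied profiles genuinely diverge.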
Chris Adams: It's gonna be a challenge, because we have this practice of, essentially, water usage and electricity use, so many things, being under NDA; it'll be very hard to come up with some numbers without using, like, a basic number. Okay, alright. We have totally gone past talking about mushrooms and data centers into all these other things, but I guess this is part of the whole thing about sustainability and technology. It's all interconnected.
Asim Hussain: It's all interconnected.
Chris Adams: Shall we go to the next story?
Asim Hussain: Yeah. Let's go for it.
Chris Adams: So this is a piece from Bloomberg, actually. The topic of this is Solutions for AI's Energy Inefficiency Can't Be Outsourced, and this is an opinion column from Bloomberg talking about this projected demand, some of the figures of which are pretty, pretty impressive. They basically say, in the US at least, there's a projection saying that AI, the growing demands of energy, is projected to make up around 8% of the US's electricity consumption, up from 3% in 2022. Now, these numbers seem a little bit high, and they are citing a kind of arms race of different kinds of organizations essentially building out these massive data centers, but also buying loads and loads of chips. But it does talk about some of the approaches that we're seeing now to kind of rein in some of this growth. So one of the things was this idea of one-bit architecture, which is, essentially, I'm not going to pretend to understand it. And I'm not sure if you are similarly informed in this one, but,
Asim Hussain: I'm going to definitely pretend to understand it.
Chris Adams: In that case, I'll hand over for you to confidently bluff it, just like a ChatGPT would, actually. Asim, the floor is yours.
Asim Hussain: I'm going to, I'm asking GPT.
No, I'm guessing, and this is, I haven't really, I've seen it, but because it's one bit, and a bit can only be one or zero, I'm guessing what this is: that, you know, instead of, like, pumping in a number between one and 256 as one of the inputs to a node in a model, maybe you could just try one or zero, and then output one or zero, and then see if that actually still gives you some pretty reasonable results. And from what it looks like, it might do. And, you know, there are extreme efficiencies you can get when you're working at the bit level, in terms of programming and computation and instructions on the chip and things like that, because it's so much lower level than the architecture. I presume that's what it is, which actually to me is really exciting, because this is a software architectural solution, which is effectively, I think, what we've been advocating for a large part of the time. I hate to use the word code, because I don't want people to dive down the, you know, building more. Well, AI is one of the few areas I would say where actual code efficiency is extremely important. But yeah, it's interesting. Now that there has been a pressure applied to optimize, the optimization has happened.
Chris Adams: Yeah, that's, so basically, Asim, I think you're about right. Now, I remember when I skimmed over this paper before, one of the key ideas of the one-bit approach was, essentially, you would use this to encode the difference between different parts of, like, a dataset, rather than storing absolute numbers. And one of the things that this allowed you to do was to just use addition rather than multiplication in some cases.
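The addition-instead-of-multiplication idea the hosts are gesturing at can be illustrated in a few lines. This is a toy sketch, not the actual BitNet implementation: if every weight is constrained to -1, 0, or +1, a dot product collapses into additions and subtractions, with zero weights skipped entirely.

```python
# Toy illustration of the "1-bit" trick discussed above: with weights
# constrained to {-1, 0, +1}, a dot product needs no multiplications.
# (Hypothetical sketch for intuition, not the BitNet paper's code.)

def ternary_dot(inputs, ternary_weights):
    """Dot product where every weight is -1, 0, or +1."""
    total = 0.0
    for x, w in zip(inputs, ternary_weights):
        if w == 1:
            total += x      # add instead of multiply
        elif w == -1:
            total -= x      # subtract instead of multiply
        # w == 0 contributes nothing, so the term is skipped entirely
    return total

# The result matches an ordinary dot product with the same weights:
inputs = [0.5, -2.0, 3.0, 1.5]
weights = [1, 0, -1, 1]
print(ternary_dot(inputs, weights))  # 0.5 - 3.0 + 1.5 = -1.0
```

Since multiplier circuits are far more expensive in silicon and energy than adders, doing this across billions of weights is where the claimed efficiency gain would come from.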
Now, I'm not an AI specialist, and I'm not a hardware specialist, but the general idea was, by representing things in a somewhat simpler fashion here, you avoided having to make some of the expensive calculations that you would otherwise need to do. And this basically reduced the energy that you might need to run some of these calculations. So this was one example. I was quite impressed to see this inside Bloomberg, because it was quite new research, but also really technical and actually quite promising. So, yeah.
Asim Hussain: The title is interesting, though, isn't it? Cause it's not, like, software making, it's kind of talking about energy inefficiency can't be outsourced.
Chris Adams: Yeah,
Asim Hussain: I just thought it was interesting. It's like no one really knows or cares or thinks about the software side of this whole equation. To me, this is just, like, a software optimization. So you would just say, like, software can be optimized to reduce an AI's energy footprint. It's not expressed in that way. It's interesting how it's kind of expressed.
Chris Adams: There's a couple of things that I think are also really interesting about this piece, in my view. It talks about the kind of economics around some of this, and basically the idea of outsourcing this is essentially how we have a bit of a tendency in the technology industry to say, "well, we realize that data centers use loads and loads of power, so what we're going to have to do is just somehow get loads more power." And so you basically have people talking about, oh, obviously the solution is to deploy loads and loads of nuclear, for example, right? Never mind that these take a minimum of 10 years to get built, right? So, what are we going to do in the meantime? A lot of the time it's likely to be coming from things like gas, if you're going to be using something like that. So that's an issue there.
But it's also worth thinking a little bit about these figures that were mentioned in this story. We've seen numbers like 8 percent of the USA's energy consumption by 2030. It's worth bearing in mind that these numbers are often coming from the utility providers in various states, all right? So, like, say in Virginia, I forget the name of the actual monopoly provider, but there's only one provider over there. And basically, I think that's Pacific General. I think that's actually on the other coast, basically.
Asim Hussain: We've played this game before, where you can name all
Chris Adams: It was CAISO last time, which is the Independent System Operator, which is not the energy company. The energy companies are somewhat different, because they have very specific, if you're in a state where you've got a single provider, the only reason you have a single provider is that you have basically had that state agree to what's referred to as a natural monopoly. So basically, the agreement is, we will give you a guaranteed 10 percent net profit for your organization, alright? As long as you agree to share your plans for the new infrastructure you're going to build over the next few years, but also you need to justify this in each of these cases. And when you think about this, if you're going to get a 10 percent net profit for any of the energy infrastructure you build, now, if you want to increase your profits, what you need to do is say, "well, I have loads more demand coming. I need to, like, double my expected demand to double my profits."
Asim Hussain: Oh, I see.
Chris Adams: This is one of the things, because a lot of the existing providers, they're used to saying, "well, we've got all this extra demand. What we need to do is build a bunch of new gas-fired power stations.
"And because we know we're going to make a 10 percent guaranteed profit on all the infrastructure we build, we are incentivized to say demand is going to be really high," basically. So it's really worth looking at the work of Jonathan Koomey, who has spoken a lot about this, because 20 years ago we had a similar thing, when people in the coal industry were saying, "well, coal is what powers the internet, so you need more coal-fired power plants if you want more internet." We have a very similar thing happening in this case as well. So it's worth bearing in mind that yes, we do see these kind of apocalyptic forecasts for energy, but you also see that when there are constraints, because it's so difficult to build, we end up with a renewed interest in energy efficiency. And even at the energy level, there are different ways you can meet demand. You can meet demand by adding new supply, but you can also meet demand by investing in energy efficiency. And this is very much what it looks like. So a lot of the ideas you might see in the energy sector are at least relevant to what we talk about with cloud, because essentially you're looking at a kind of commodity that you pay for on an hourly basis, or something like that.

Asim Hussain: Well, that's kind of one of the... The way I see it, there's a fixed amount of investment and focus that organizations can put into something. If you present them with an option, either put in all this engineering effort to make something more efficient, which costs 10, or buy renewable energy, which costs 5, then, well, they'll choose the 5. So I think that's why investment goes one way or the other.
Whereas I suppose what's happening now is that we're reaching the point where energy, and I'm just throwing out numbers here, now costs 12, while the development still costs 10. So, well, maybe we'll put some money into development.

Chris Adams: Yeah.

Asim Hussain: And that's interesting; that's why we were talking about levies and money last time. That's why you want to shift that balance a little bit. And that's also a significant part of why, in the SCI specification, the decision was made not to include any energy offsets or anything like that. Because if you gave somebody the option of spending 1 instead of spending 10, they would spend 1. And we want people to spend the 10 to actually make things more efficient.

Chris Adams: Yeah, to address the consumption issue, rather than just think about the intensity.

Asim Hussain: Exactly, yeah. The other thing I was thinking about as you were talking, I was just remembering my time at Intel. I think this maybe links a little to Nvidia's statement as well. I always love telling people this story, because it's such a cool term and it tickles my sci-fi bone so much. There's a phrase they used a lot, which was "dark silicon." Have I told you this?

Chris Adams: No, you haven't.

Asim Hussain: So, dark silicon. I was like, "Oh, that sounds good. What's dark silicon?" Dark silicon is what you see when they look at a chip and put load on it. The key thing with a chip is how much heat it can expel and still function at that level. So they're looking at heat on a chip. When they're running certain software on a chip, they'll put a heat-detecting camera on it, and you know how those look.
It's very red on the bits that are hot.

Chris Adams: Ah, okay, yeah, like a thermal imaging camera.

Asim Hussain: Thermal imaging, yeah. And even though the black parts might not be ice cold, they might even be quite hot, relatively speaking they're cold. So the thing they would really be thinking through is: how come, with this software, half the chip is black? Why aren't you using the rest of the chip? You've maxed out the chip, but half the chip is black. And what it comes back down to, I think I called it the silicon gap, is the gap between what engineers are building and what silicon manufacturers are enabling on their chips. There's this disconnect: "why aren't you using these more advanced instruction sets that are more efficient? Why are you only using this part of the chip?" So I think that's a gap we need to tighten, to use this infrastructure more efficiently. Over the years, from a developer's perspective, it's always been about time to market. How do we beat our competition? It's never been about how we use this chip more efficiently. And I think that 1-bit architecture sounds like an example of the alternative: we want to leverage the instruction set on this chip to be as efficient as possible, so we need to fundamentally change how we're architecting, and even thinking algorithmically about, this code to take advantage of it. And that's this whole other area which we're just ignoring. There is this dark silicon, and honestly, the silicon manufacturers are like, "why are developers not using it? We put so much energy and time into building this..."

Chris Adams: And we're only using a percentage of it, right?
Asim Hussain: A percentage of it. And, I'm going to ramble on for a second, but just one more point I thought was really interesting. One of our member organizations, NTT Data, did a really great report, about two years ago now, I think. I don't think we circulated it that well. They looked at Java; there are still a lot of very antiquated Java applications running out there in the world. And they asked: what is the energy difference if we upgraded, not the code, but the underlying JVM? That's all they did. I can't remember exactly which versions; I'll find a link to the article and the paper. It was several steps up. Most apps are still running on whatever JVM they were built with ten years ago, and it was a 60 or 70 percent energy efficiency improvement. It was unbelievable, just from upgrading the JVM.

Chris Adams: Ah, okay.

Asim Hussain: And if you think about what that means: the chips evolve to have different instruction sets, and only the modern JVMs are built to the new ones. So if you're running on the old ones, it's just using the old instructions, and you're not really leveraging the infrastructure the same way. Which is why recompiling software with the latest version of the compiler, against the latest version of the chip, is really important. Again, when I was at Intel, that was their big push. They were like, "use the latest bits, use stuff that's compiled now, using the latest optimizations." Because they saw a lot of people were still just compiling once, leaving that binary, and letting it run for four or five years. And then, that's it.
And yeah, I'm going to stop ranting now.

Chris Adams: No, that's useful. I was somewhat aware of things like the JVM; there's HotSpot and other flavors of the Java Virtual Machine to run this code. And it's somewhat similar to PHP land: when a new version of PHP came out, because it made much better use of the underlying hardware, you saw a massive increase in performance. You see something a bit like that with Python as well, with the whole global interpreter lock. I can have a piece of Python running, and it won't be able to use all the other cores in my machine. So rather than lighting up the rest of the silicon, most of my computer is dark, in that same kind of way. Cool. So that's one of the approaches we have, one thing you could plausibly do. I've shared a link to a blog post where I was trying to find a way to explain that you can reduce the emissions associated with code without actually changing the code, by thinking about what options you have: like you said, changing the VM, or changing when you run it, or anything like this. I've framed it in terms of three things you can change. You can change the time you run something, which speaks to carbon awareness. You can change the amount of computation you use, or the number of cores. Or you can change the place, where in the world you choose to run it, for the carbon intensity of the underlying energy. I'll add that to the show notes, because it might be another helpful addition to this. All right, that was fun.
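Editor's note: the three levers Chris describes (time, amount of compute, and place) can be sketched as a simple carbon-aware heuristic. This is an illustrative sketch only; the region names, the gCO2e/kWh figures, and the function names are invented for the example, not taken from any real scheduler or from the blog post mentioned.

```python
# Illustrative sketch of the three carbon-reduction levers discussed above:
# change *when* a job runs, *where* it runs, and *how much* compute it uses.
# All region names and intensity figures below are made up for the example.

def pick_region(intensity_by_region):
    """Lever 'place': run in the grid region with the cleanest power."""
    return min(intensity_by_region, key=intensity_by_region.get)

def pick_start_hour(hourly_forecast):
    """Lever 'time': delay a flexible batch job to the cleanest hour."""
    return min(range(len(hourly_forecast)), key=lambda h: hourly_forecast[h])

def estimated_emissions(kwh, intensity):
    """Lever 'amount': fewer kWh of compute means fewer grams of CO2e."""
    return kwh * intensity

regions = {"eu-north": 45.0, "us-east": 390.0, "asia-se": 510.0}  # gCO2e/kWh
forecast = [300, 280, 120, 90, 110, 250]  # hourly gCO2e/kWh, one region

best_region = pick_region(regions)
best_hour = pick_start_hour(forecast)
print(best_region, best_hour, estimated_emissions(2.0, regions[best_region]))
# → eu-north 3 90.0
```

The point of the sketch is that none of these levers touch the application code itself; they only change the circumstances under which it runs.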
Shall we look at the next story?

Asim Hussain: Go on. Yeah.

Chris Adams: Okay, the next story is actually from The Guardian, and it's about balancing the incentives around the profits projected to come from deploying AI with something like a green levy on those profits. And this actually came from that famously left-leaning organization, the International Monetary Fund.

Asim Hussain: Oh, wonderful. Great to see them in this space.

Chris Adams: The argument from the IMF is basically: you've got all these very profitable AI firms, and we know there's both a social and an environmental impact taking place here. So what you should have is some kind of green AI windfall tax, essentially, that would be used to fund sustainability initiatives. And to be honest, I have a lot of sympathy for this, because what we've seen from the largest providers in the last year is that, given the choice between investing in efficiency and investing in more capacity and building loads more data centers, all the big providers have gone for building more capacity, and emphasized profits rather than the environmental impact. So it looks like we're not the only people thinking about this. The IMF is thinking about it as well, and they're saying you need something like a levy. Do you have any particular thoughts on this, Asim? Because I was really surprised to see this come from the IMF of all organizations.

Asim Hussain: Yes, I thought it was a very important point. I love the use of the word levy; I remember us having the conversation last podcast about using the term levy rather than tax instead of explaining it, but then the article actually uses the word tax all over the place. Yeah, I think it's really important.
One of the things I'd say is: why is everybody so excited about AI in the first place? In my most cynical moments as a software engineer, I would say our purpose in life is either to find solutions that help people waste more of their time, or to get rid of jobs through automation. If you think about why we have been one of the most highly paid sectors for quite a significant amount of time, it's because building automations, yes, you could argue, helps you deliver projects faster, but it also helps you do more with fewer employees. It decreases the earnings potential of a lot of other people, from that perspective. And now there's a big hoorah in our space, because AI and our automation have got to the point where even our own jobs are at risk, and we're all really nervous and upset. But this is the pressure we've been applying to the rest of the world, across every industry, for decades and decades. It's just now coming to us and affecting us. So we don't really have a leg to stand on, I'd say. The only other thing I'd say is there's nothing we can do; it's happening, and we have to accept it and move forward. But I really liked the idea of the levy. Let me put it another way. There's this huge AI tech bro showdown. I think Sam Altman posted something a year ago now which deeply disturbed me. He said he wouldn't be surprised, within the next couple of years, if there's a one-person unicorn startup: a single billion-dollar organization run by one human being. And I was thinking to myself, that might actually be true. There is a chance it happens in the future. But how do I feel about that? What is the human impact of that? What is the green impact of that?
So I'm now going beyond green, because I think AI is going to give a few people and organizations immense amounts of power and wealth. How do we have ways to redistribute all of that, and add a level of fairness for the rest of society? From a green level, absolutely, a levy makes sense, but I'd also argue for it at a societal level. When we spoke about the green transition, it was impossible to have a proper conversation about it without also talking about how we're going to transition the people employed in the fossil fuel industry over to other areas. I don't see us having that conversation here. It's just ignored. So I think if you want the opportunity to gain this much power, money, and wealth, it should come with a certain amount of social responsibility. A green levy on the green side, but I think it should actually be broader than that.

Chris Adams: So, to address some of the inevitable costs that might be incurred upon society, and to ease that transition. Okay. I was not expecting you to go there, Asim.

Asim Hussain: Well, I was feeling it.

Chris Adams: That's usually me jumping up and down, actually.

Asim Hussain: How do you feel about it?

Chris Adams: So you said you feel uncomfortable about a billion-dollar startup with a single person. And, okay, we're not that far from it, I think. How many people worked for WhatsApp?

Asim Hussain: WhatsApp!
I think it was 11.

Chris Adams: WhatsApp was purchased for $12.6 billion by Facebook a few years back, right? So that's not that far away on a per-person basis, but it's not a single person. And you've got to realize that was probably a lot better for the people who owned shares in it than for the people who were working there. We have seen multiple cases where a company has had to choose between keeping staff on to work on something and getting rid of them, then spending multiples of those staff's wages buying back their own shares to increase the share price. We've seen the decisions people have been taking. I think this is a thing that needs to be addressed. Because if we're going to accept that digital is going to be this thing which is just as important to our lives as access to water or energy, then you probably want to have a discussion about how the dividends are shared, how the upside is shared in an equitable fashion, so that we don't end up with people rushing outside with guillotines. At the very least, it's not good for social cohesion. So that's the view I might actually have on some of this.

Asim Hussain: That's really... I think that's something people... I don't know, should I say this?

Chris Adams: I'm going to stop you, because it's coming up to 40 minutes and we've got one more story left. So we can move from societal accounting to carbon accounting: this new paper, which I think is really interesting, and which we were both nerding out about before this call. This is a new paper from Google, "Carbon Accounting in the Cloud: a methodology for allocating emissions across data center users." So we can totally talk about the societal aspects of cloud computing here.
But this one is really interesting, because it looks like one of the most interesting papers on how you apportion responsibility for your use of cloud services. For the longest time we've had a real struggle, because we haven't had access to any of these numbers. This paper lays out a bunch of really interesting ideas, with lots of really helpful diagrams, and it dives into how, inside Google, people allocate carbon emissions both for internal use and for cloud customers. It's a really fun paper, but it's also quite a significant piece of work to read. Asim and I were quite excited about it, but we realized this is almost a book-club kind of paper to work through.

Asim Hussain: Yeah. Yeah.

Chris Adams: So, Asim, I'll hand over to you, because there are a couple of things I'd like to draw attention to, but I suspect some things caught your eye as well, or at least you might have some context about why the two of us are so excited about this.

Asim Hussain: I just think it's really exciting. I get really excited when any organization does such a deep, thorough analysis of their emissions. The thing I keep thinking as I go through it is: part of me wants to try to represent some of this stuff as an Impact Framework manifest file, because I can read a manifest file, compare it, look at it, and know what's going on. One of the first things the paper outlines is the approach Google developed to quantify the location-based emissions of its individual products. And now I need to dive into the paper to check my definition of location-based against theirs.
However, I'm also seeing references to CFE, which to me doesn't obviously factor in as...

Chris Adams: CFE being carbon-free energy.

Asim Hussain: Carbon-free energy. So there's a lot of nuance to this stuff. As an experiment, to understand the paper, I might try to represent some of it as a manifest file, because for me that's a useful way of learning something, in a way that lets me compare and contrast it with other methods and methodologies. But it looks very interesting, very exciting. When Google first worked on its carbon dashboard, they were the only organization to have done it a slightly different way, which is bottom-up: from products and services up towards the top. And now, to give IBM some credit, IBM has actually done it as well; they've got a great white paper. If you haven't read it, I'll send it over to you, Chris. Their approach is also bottom-up. So they're the only two organizations that have gone bottom-up; the others have gone top-down. And I'm always excited when people go bottom-up, because then you get the data at the granularity of the products and services people actually use, which is what you need to drive emissions reductions. So that's it. I haven't read almost any of this yet, so I'm interested to hear your hot takes, Chris, if you've got time.

Chris Adams: So, the thing that really caught my eye is that it talks about some of the things Google has been doing that other organizations aren't so open about. For example, you have a given amount of capacity that might be available.
Now, what Google has done previously is say: we know that a certain amount of our energy comes from green sources, and we want a percentage of our energy to always be, say, 100 percent carbon-free. Google's approach is to count something as carbon-free if it's matched at the time of use and, I think, on the same grid. So it's not literally a set of solar panels on the data center; it might be a wind farm from which you could plausibly deliver the power to that place. And they use this to represent the amount of clean capacity as something they call a virtual capacity curve, because it changes over the time of day. In this paper they talk about production loads that always have to run and always have to respond very quickly, and loads where they've got a bit of freedom in how they move them around. I think this is quite interesting, because they talk about where they have some flexibility and how they account for it. It's the first time I've seen a paper talk about this, but also about the fact that there's a set amount of idle power, and then an amount of power that ramps up and down based on the usage you're introducing. They cover a bunch of really interesting things, and there are a couple of figures I find really quite fascinating. If you are at all interested in Sankey diagrams, they've got this really cool Sankey diagram saying: this is all the power that goes into running machines and running the overhead, and this is how it gets apportioned across all the different services.
And this is how these end up being allocated both to our internal use and to cloud customers, and so on. It's a really fun read, and I'm probably going to spend an afternoon, or maybe the weekend, making more notes on it, because there's a bunch of stuff that's beyond my ken; some of the equations I don't have the ability to make sense of. I'm looking forward to reading it nonetheless, because it's really nice to see something like this, not least because putting it into the public domain raises the bar for the other providers to be more transparent. Because if you're looking at, say, Amazon's calculator, you don't have scope 3 emissions, so that could be up to 99 percent of your emissions not accounted for in the numbers. If they look suspiciously good on the Amazon dashboard, maybe they are suspiciously good. But also look at the resolution. Google is providing information at both a location and a product-line level. So if I'm using Cloud Run, or one particular kind of storage, I can see it at that resolution, and at that location. With some other providers, you might see "Europe" and then "compute." There is nowhere near that kind of resolution. So people saying "this is how we do it, this is how it's possible" sets out what you should be expecting from other providers. I think it's really good. And they also mention that they're using high time resolution.
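Editor's note: the idle-plus-proportional power model Chris describes (a fixed amount of idle power, plus power that ramps with usage) implies an allocation question: who gets charged for the idle part? The toy sketch below illustrates one plausible split, with idle energy allocated by reserved-capacity share and dynamic energy by actual utilization. The split rule, the tenant names, and the numbers are a simplification for illustration, not the methodology from Google's paper.

```python
def allocate_energy(idle_kwh, dynamic_kwh, tenants):
    """Split a machine's energy among tenants.

    Idle energy is allocated by reserved-capacity share (you pay for what
    you reserve even when not using it); dynamic energy is allocated by
    actual utilization share. A toy model inspired by, not taken from,
    the allocation discussion in the paper.
    """
    total_reserved = sum(t["reserved"] for t in tenants)
    total_used = sum(t["used"] for t in tenants)
    out = {}
    for t in tenants:
        idle_share = idle_kwh * t["reserved"] / total_reserved
        dyn_share = dynamic_kwh * t["used"] / total_used if total_used else 0.0
        out[t["name"]] = idle_share + dyn_share
    return out

tenants = [
    {"name": "batch", "reserved": 40, "used": 10},
    {"name": "serving", "reserved": 60, "used": 40},
]
# 100 kWh of idle energy plus 50 kWh of load-dependent energy:
print(allocate_energy(100.0, 50.0, tenants))
# → {'batch': 50.0, 'serving': 100.0}
```

Note that the allocations always sum to the machine's total energy, which is the property any such scheme needs before you can roll numbers up to a per-product dashboard.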
So, they say, "We're using data from Electricity Maps to help us work out these hourly curves, so that we know at what times of day what the carbon intensity of the power might be, so that we know we've got this much green compute we can plausibly use," and, in a defensible and transparent way, say, "yes, this really is running on renewable energy, according to the way we talk about it." And you don't need to agree with the approach they use in order to at least understand where they're coming from, because there are plus points and minus points with this approach versus something totally location-based, like we've spoken about before. So, yeah, those are my initial thoughts on this.

Asim Hussain: We need to get this to the point where it's easily absorbable and understandable by folks, in such a way that they can actually use it. Because otherwise it's just, "oh right, there's a great paper." You need to understand the nuance of a lot of this stuff.

Chris Adams: Well, there are two things. There's the Real Time Cloud working group that Adrian Cockcroft is part of.

Asim Hussain: Oh yeah. Yeah.

Chris Adams: He's been really pushing on a bunch of this stuff. This feels absolutely worth sharing with that group, because they've been doing a really good job of collating this data so it can be used. And something like that was what fed into the Impact Framework stuff, right? So there is a path for this.
And there's a job to decode some of this and make it easier for lay people, because, Asim, we've been talking about this for literally five or six years to get to the point where we understand why it's exciting. You do need that work to make this easier for people new to the field, because there are lots of developers coming into sustainable software.

Asim Hussain: We should also... and now we're talking about GSF work, and I know I've got to drop very soon, but this is interesting, because as I asked you that question, an answer came to my head as well: standardization. Even as the Real Time Cloud project is evolving, one of the things it's trying to figure out is, like you mentioned, CFE and your definition of CFE. And I can tell you right now, I've heard different definitions of CFE from other organizations, which don't require the generation to be on the same grid; it could be anywhere. So it might be interesting to have a conversation with Google about ways to standardize some of the terminology, the methodology, the equations behind this. As soon as you can create a standard, maybe something we can push into ISO, that also, in a way, simplifies everything for everybody. People can say, "well, I don't really understand the methodology, but I can see it's got wide adoption, it's a standard, competitors have got together and agreed on it, and there seems to be a wide body of people who support it. I don't need to work through this equation myself." I'm staring at one of these equations now. Woo!

Chris Adams: It's quite a lot, man.

Asim Hussain: It's quite a lot, yeah. Selecting it, it has like 43 components in the selection. But...
Chris Adams: The thing we can maybe talk about is that there are standards already; we don't need to do all this work ourselves. EnergyTag is one standard that is essentially written into European law, and into American law around hydrogen production now. So we wouldn't be starting from zero. We could be using some of that stuff. But you're right.

Asim Hussain: We should think about how we push it.

Chris Adams: Yeah. I also need to rush us through the last few bits, because there are a few things that, as host, I need to do, and I sadly can't talk about the marginal carbon intensity of baked bread, because we had a really lovely follow-up from Dr. Daniel Schien, who responded about the low-carbon bread and high-carbon bread thing. Maybe we should commit right now to doing an article about how green the energy is, and how you talk about that, because that's what we just spoke about. Dr. Schien raised a number of really good points and shared some really helpful resources with us. Okay.

Asim Hussain: We should get a proper conversation together, if we can, with maybe EM and some others.

Chris Adams: Yeah, we have some people inside the organization who've been doing that. All right, let's look at events, stuff that's coming up. HotCarbon is a workshop on sustainable computer systems, happening on July the 9th. It's free to attend virtually, and you can turn up in person if you're in California. It's really good; they have videos online and really fascinating papers, absolute cutting-edge stuff. There's also the IEEE cloud conference, running from the 7th through the 13th in Shenzhen, China. And for the first time I've seen, there appears to be something like a sustainable AI, or sustainable computing, track. You're going to be there?

Asim Hussain: I'm going to be there. Yeah. I'm going to be there.
Yeah, I'm going to be there. I was invited by... well, anyway. I'm going to be over there, talking about sustainable AI in the cloud. There's going to be a whole track: several panels, discussion topics.

Chris Adams: Oh, wow! Cool!

Asim Hussain: You know, I'm a big believer that this is a global challenge and a global issue, and most of our conversations happen in the Western world. So one of the things I'm personally trying to do is to try and...

Chris Adams: Bring the other 1 point somet

The Week in Green Software: FinOps, GreenOps and the Cloud

Jun 20th, 2024 7:00 AM

On this episode of TWiGS, host Anne Currie is joined by Navveen Balani of Accenture, a fellow GSF member. This conversation navigates the landscapes of, and intersections between, GreenOps, DevOps, and FinOps, as well as the vital role of Infrastructure as Code in marrying financial and ecological efficiencies in cloud operations. Lastly, they tackle the intersection of cybersecurity and AI development, emphasizing the need for green software principles to fortify AI systems while minimizing energy use. Learn more about our people: Anne Currie: LinkedIn | GitHub | Website Navveen Balani: LinkedIn | Website Find out more about the GSF: The Green Software Foundation Website Sign up to the Green Software Foundation Newsletter News: Green coding - CloudBolt: Cloud efficiency... beyond dollars, pounds & pennies [03:17] Why you should switch to green coding for a net-zero future [16:08] The role of cybersecurity in AI system development [31:28] Mexico elects Claudia Sheinbaum as its first female president [40:00] Resources: Cloud Native Computing Foundation [14:36] If you enjoyed this episode then please either: Follow, rate, and review on Apple Podcasts Follow and rate on Spotify Watch our videos on The Green Software Foundation YouTube Channel! Connect with us on Twitter, Github and LinkedIn!

TRANSCRIPT BELOW:

Navveen Balani: Definitely, I would say there is some synergy between security and green software, and certain features of green software principles can also be applied to the security domain, right, to make it more energy-efficient.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software.
On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Anne Currie: Welcome to another edition of the Week in Green Software, where we bring you the latest news and updates from the world of sustainable software development. Today, I'm your host, Anne Currie. So you're not quite hearing the usual dulcet tones of Chris Adams; you will have to make do with me instead. But as usual, we'll be talking about the world of green software and what's going on at the moment. And today we're going to talk a little bit about how being green matches with FinOps, which I think is very true, and I think that's a really important part of the story. We'll also be talking a little bit about code and code efficiency, which you have to be very careful about: what's the context when we talk about code efficiency? And finally, we'll talk about the intersection of cybersecurity and AI development, and we'll be talking a little bit about GreenOps. And there is a new Green Software Foundation GreenOps project, which is aimed at looking at how we can embrace AI and use AI without totally throwing all our green principles out of the window. And I think that is absolutely doable, but we need to think about it. We need to go in in a very knowing way. So as I said, I am your host today, Anne Currie. But first I'm going to introduce our guest, Navveen. Navveen, do you want to introduce yourself? Navveen Balani: Thank you Anne. Hi everyone. I'm Navveen Balani. I'm the Managing Director and Chief Technologist for the Technology and Sustainability Innovation Group at Accenture, working at the intersection of technology and sustainability.
I'm also the co-chair of the Standards Working Group and the Impact Engine Framework at the Green Software Foundation. I'm a Google Cloud Certified Fellow, a LinkedIn Top Voice, and the author of several books. Very glad to be part of this podcast. Anne Currie: Thank you very much, Navveen. It's very good to have you. So just a bit of context for me: my name is Anne Currie. I am one of the co-chairs of the community group of the Green Software Foundation, and I am also one of the authors of the new O'Reilly book, Building Green Software. So that fills you in a little bit on my background. Before we dive into the articles this week, just a reminder that everything we talk about will be linked in the show notes at the bottom of the episode, so you can read the articles we're talking about; you don't just have to rely on us telling you what was in them. So let's move to the first article for today, which was in Computer Weekly, and it was called Green Coding. It was basically a puff piece by a company called CloudBolt, who take a look at code efficiency and cloud efficiency, taking it beyond FinOps, beyond dollars, pounds, and pennies. But actually it was a very good article, I thought. I was very pleased to see it in Computer Weekly. Fundamentally, it was about how GreenOps and FinOps are very aligned, very combined, and I'm in complete agreement on that. It's a good article.
It doesn't tell you anything that you probably won't know already: that FinOps and GreenOps are quite aligned, and they're aligned through the fact that, in the end, a lot of being green, not all of it, is about cutting down on how many machines and how much electricity you are using to run your systems, which generally speaking cuts down on the cost. So cost is somewhat of a proxy measure. It's not a perfect proxy measure, but it's somewhat of a proxy measure. So the question that Chris Skipper, our excellent editor, has left me and Navveen to discuss is about considering the role of infrastructure as code in enhancing cloud efficiency. How can developers ensure that their infrastructure as code implementations are aligned with sustainability practices to reduce both costs and environmental impact? So Navveen, what are your thoughts on that subject? Navveen Balani: I think that's a great question. Yeah, agreed. Developers need to embed sustainability as part of the infrastructure as code implementation. And the framework that I suggest developers apply is based on the Software Carbon Intensity specification from our Green Software Foundation, which also recently received ISO standard recognition. So, for those who do not know what SCI is, SCI is a specification to measure the carbon emissions of any software application, and it promotes three key levers: writing energy-efficient code, using less hardware for the same amount of work, and making applications carbon-aware. And if you apply this strategy to infrastructure as code, first you start with writing energy-efficient code. So developers can focus on optimizing resource utilization by right-sizing resources and implementing auto scaling. This means allocating only what's necessary for each workload, and adjusting dynamically based on demand.
The second strategy is around using less hardware for the same amount of work. This basically involves automating resource management, like automating the shutdown of non-essential resources during off hours and starting them during peak times, to conserve energy and cut cost. Also, tagging and monitoring resource usage helps identify optimization opportunities and eliminate waste. You can also go with serverless architectures in your IaC code; that's particularly effective, as they scale with demand and eliminate, let's say, any over-provisioning. And finally, the third strategy is how you make applications more carbon-aware. And that's where, as part of your infrastructure code, you can say that you want to deploy a particular workload in a clean energy region, so you can apply strategies like region shifting and time shifting, selecting cloud regions which are running on renewables, and also maybe deploying workloads or scheduling jobs when the carbon intensity is low. So all of these strategies can definitely be applied and designed as part of your infrastructure code. Anne Currie: That's a very thorough answer and there's loads to unpick in there, lots of different things. Some things I think at the moment are very aligned with FinOps and cutting your costs, and some things are not aligned yet, but are almost certainly going to become aligned in the future. So for example, you talked about operational efficiency and automation, which is interesting. With operational efficiency, if you use fewer machines and less electricity, your bill goes down. So that's all good. In that respect, your FinOps and GreenOps are really well aligned. You know, fewer machines, less stuff, less carbon goes into the atmosphere, and it's all fantastically good.
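The third lever Navveen describes, carbon-aware time shifting, boils down to a simple search: given a forecast of grid carbon intensity, pick the cleanest window for a deferrable batch job. A minimal sketch, assuming invented hourly intensity figures (gCO2eq/kWh) rather than real grid data, which in practice would come from a data provider:

```python
# Carbon-aware time shifting: choose the lowest-intensity contiguous
# window for a deferrable job. Illustrative only -- the intensity
# numbers below are made up, not real grid data.

def best_window(hourly_intensity, job_hours):
    """Return (start_hour, avg_intensity) of the cleanest window."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(hourly_intensity) - job_hours + 1):
        avg = sum(hourly_intensity[start:start + job_hours]) / job_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# An illustrative day: intensity dips overnight and around a midday
# solar peak, and rises in the evening.
intensity = [300, 280, 250, 240, 260, 310, 380, 420, 400, 350,
             220, 180, 160, 170, 210, 300, 410, 450, 430, 390,
             360, 340, 320, 310]

start, avg = best_window(intensity, job_hours=3)
print(f"Run the 3-hour batch job starting at {start:02d}:00 "
      f"(avg {avg:.0f} gCO2eq/kWh)")
```

The same window-picking logic would apply to the cost dimension under dynamic electricity pricing: swap the intensity series for a price series, or a weighted blend of the two.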
And in that respect, I would say that, you know, your hosting bill is a really good proxy metric for your carbon emissions. But of course, it's almost stupid to say it, it's so obvious, but you can't just use your cloud bill totally blindly as a guide. When I used to do startups in my youth, quite often Azure or AWS would give you loads of free credits, but that doesn't mean that it's carbon free. So there are times when you just need to use your head, don't you? Sometimes, obviously, you've been given a discount, but it's not green. It's just a discount. Navveen Balani: Yeah, yeah, I think that's a good point. Because if you look at cloud, right? Cloud has infinite resources, right? So you have to use it responsibly, right? You can bake in energy efficiency and sustainability, right? So, definitely, you have to look at how you can lower the carbon emissions. And there are also dashboards available from cloud vendors, right, which at least give you some approximation, right, of what the carbon footprint of your application is. Anne Currie: Yeah, yeah, yeah. I mean, yes, they do provide really good tools, and it's not that cost is really the best possible metric you could potentially use, but it's where you've got the tools, it's where you've got the data. So, you know, it's quite good from that perspective. Sometimes you just have to take what's good enough. But something else you mentioned is automation, and obviously really good operations is all automated these days, it's auto scaling, you know, not just in the cloud, but on prem as well. But actually, you can just do a lot of stuff manually. You don't have to leap straight to automation if it's too scary and too much of a leap. Just going through and turning off machines at the weekends, even manually, and identifying machines that are over-provisioned can actually, bizarrely, I suspect, and in fact I've seen it, be the biggest carbon reduction you ever do. It's the simplest thing and the least techie thing. So what do you think about that, even without automation? Navveen Balani: Yeah, totally agree. I would say even just turning off machines manually, right, would definitely save the cost as well as the carbon emissions, right? Especially if you've got GPUs turned on, that would matter. So, yeah. But actually, I think if you look at the infrastructure, right, if you break it down into two parts, right, production and non-production environments, you can definitely have a lot of savings on the non-production environment, right, because it doesn't need to be on always. Production definitely needs to be on 24 by 7, but there are definitely a lot of improvements you can make on your non-production environments, dev environments, right, and I've seen customers having more non-production environments, right, than production ones. Anne Currie: Indeed. And it's so ungreen.
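The "turn off non-production out of hours" idea above can be captured as a tiny scheduling policy before it is ever wired into automation. A hedged sketch: the environment records and tier names here are hypothetical, not any real cloud API, but the same rule is easy to express in whatever IaC or automation tool you use.

```python
# Off-hours shutdown policy: production stays up 24/7, everything
# else runs only in working hours on weekdays. The environment
# records below are invented for illustration.

NON_PROD_HOURS = range(8, 19)   # non-prod runs 08:00-18:59

def should_run(env, weekday, hour):
    """Return True if this environment should be up right now.
    weekday: 0=Monday .. 6=Sunday."""
    if env["tier"] == "production":
        return True
    return weekday < 5 and hour in NON_PROD_HOURS

envs = [
    {"name": "checkout-prod", "tier": "production"},
    {"name": "checkout-staging", "tier": "staging"},
    {"name": "ml-dev-gpu", "tier": "dev"},  # GPUs: doubly worth stopping
]

# Saturday (weekday=5) at 03:00: only production should be up.
for env in envs:
    action = "keep" if should_run(env, weekday=5, hour=3) else "stop"
    print(f"{action:4s}  {env['name']}")
```

Even run by hand once a week, a check like this surfaces exactly the forgotten, over-provisioned machines Anne mentions.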
There was another interesting thing that you talked about when you were talking about the SCI, which is carbon-awareness. Code efficiency and operational efficiency are all good for mitigating the harms in the short term, but to actually take full advantage of the soaring production of cheap energy from renewables, we need to demand shift to when the sun is shining or the wind's blowing. But the interesting thing about that is that although that's by far the most interesting thing in being green, I would say, it's the bit that at the moment doesn't really save you any money, because most countries don't yet have dynamic pricing. So what are your thoughts on dynamic pricing and when it's coming? Dynamic pricing is basically when the price of electricity changes through the day, depending on how expensive it was to produce, which usually means, you know, at times when the sun's shining and the wind's blowing, the power is cheaper than at others. And that's now very common in certain countries like Spain, but very uncommon in other countries. Navveen Balani: I think definitely that's a good concept that will promote more sustainability. Typically, if you look at cloud providers, to give you an example of Google Cloud, it at least tells you now, if you're deploying something on a region, that it's a low carbon region, and hopefully in future you will have the dynamic pricing also, right? It will also give you the time when you should run the workloads. And there are a lot of workloads that don't need to run 24 by 7, like batch jobs. We get a lot of emails, right? All those millions of emails, right, that you'll be sending, right, for promotions.
All those can be run at a time where there is least intensity. And definitely, if the cloud provider gives you a cost signal, that this is a good time window and this is less costly, then all the activity which is not critical can definitely take advantage of the dynamic pricing. So I assume in future, I mean, we can see the trend, right? It's all about data. If you have the data from the grids available to the providers in future, then we can definitely tap into it. Anne Currie: Yeah, I agree. It's a bit of a shame. It's where there's a hole in the alignment of GreenOps and FinOps at the moment: demand shifting is incredibly green, but you don't necessarily get money off for doing it. You know, moving to that green region is incredibly green, but it doesn't necessarily save you money. It will do once dynamic pricing comes in, though. Navveen Balani: I also feel regulations will help, right, with regulations around carbon emissions reporting. Especially, I think, the EU AI Act just talked about reporting the carbon emissions, though it's not about mitigation. But at some point, I would say, when you have a reporting mechanism and everybody has to comply with it, I see a lot of these trends coming, right, new innovative ways, right, to lower the carbon emissions. So I think regulations at some point will also enable, right, a lot of these, I'd say, innovations, right, to come up. Anne Currie: Yeah, I agree. Oh, there's something I meant to mention that's aligned with what you were saying earlier about automation. There's the CNCF, the Cloud Native Computing Foundation, another of the Linux Foundation's foundations out there.
They describe GreenOps as: GreenOps equals FinOps plus GitOps. So basically they're saying GreenOps is automated FinOps, which is an interesting one, because it feels to me like they're really saying there that GreenOps is good FinOps. And oddly enough, FinOps folk often say, well, FinOps is just good ops; GreenOps is just good ops. Which is interesting, and which I think people often don't really appreciate. No, sorry, I'm taking the final word there, and I will leave the final word to you, Navveen, on FinOps and GreenOps. Navveen Balani: I'd like to end with what you said, right? So we started with DevOps, where you decide to automate something, then FinOps came, right, because cloud resources were getting expensive, right? And now we have GreenOps, right? So we have to look at it holistically, right, across DevOps, FinOps, GreenOps, right, and ensure you take care of both the cost and carbon, right, and keep it under control. Anne Currie: Yes, totally agree. Totally agree. Right. So we're going to move on to the next one. The first article we talked about there is, I would say, extremely uncontroversial; operational efficiency is a total win all around. The next article, which is about why you should switch to green coding for a net-zero future, a LinkedIn article from the CEO of CSM Technologies, I think is vastly more controversial. Not because it's wrong that you should write more efficient code, but because I think that there's a lot of context around it.
So he's written a lovely article, links, as I say, in the show notes, saying we should all be coding more efficiently, which is nice. But it ends with the line that we should save the world one line of code at a time, which I find massively controversial, because I think that when it comes to code efficiency, it's just not the right thing for a lot of businesses to do. It's too expensive. What they should be doing is putting pressure on suppliers to write their code efficiently, you know, getting code efficiency at scale. I think people can really waste time going down that rabbit hole, and their bosses would be very right to say, "I'm not going to do it," because it would put you out of business if you rewrote all your systems in Rust or C. So I think it's an article that's true, but only true in certain contexts and not in others. So Navveen, what's your thinking on it? Navveen Balani: So I would say we need to look at this holistically, particularly around green software. I would say there are three dimensions we have to take. First is developer training and implementation of green software. Second, we need management buy-in. And third, I would say, is the culture shift that needs to happen. And if you look at green software, right, when we started with the foundation three years back, green software was a relatively new area. So we need to provide training and certification in this area so that developers are aware of, right, how they can embed sustainability in their day to day work. Apart from, I would say, the training, a developer needs to have accessible tools, right.
Now we have the SCI specification, the Impact Framework, the Carbon Aware SDK, right, and there are a lot of other open source tools also available now, which can make it more actionable, and developers can actually embed them as part of their DevOps process. And once the measurement is done, apart from code, right, there's also the optimization piece we talked about earlier with infrastructure as code: how you take it all together, right, and try to optimize not just the code, right, but also the resources running the hardware, the resources which are powering those applications. And as green software practices gain traction, I would say securing management buy-in is also essential, right, for widespread adoption. For instance, highlighting the business benefits is crucial: implementing green coding, right, can also lead to significant cost savings by, let's say, reducing energy consumption and optimizing utilization. And as you mentioned, right, it's not just our footprint, but the footprint of our suppliers, to ensure they also follow the same standard methodology. And that's where I think it comes to the culture change, right, that we all need to go through, particularly for green software. We need to look at how we can embed green software, right, going forward in all our work, right, similar to what we do for security, right. When we had security 10 years back, right, now we have security by default, right. We don't talk about whether an application needs to be secure; we assume the application is secure by default. Similarly, if we embed green software, not just in code, but across all the layers, then we can ensure maybe over the next four, five years that all the new applications we build, right, have green software principles baked in.
So I would say, I mean, it's basically a holistic approach that would be required, right, from enabling the development community, the tech community, where a foundation like the Green Software Foundation plays a critical role. The management needs to buy in, and also the culture change needs to happen, right? And the culture change also needs to happen, I would say, at the universities and schools, right, where they can start educating green software early on, right, similar to the way we have learned object oriented programming, right? That's by default; we have learned it over the last, I would say, few decades, right? If we have green software taught the same as object oriented programming concepts, then I think whatever application we build in future, right, will have green software baked in. Anne Currie: Yeah, I agree up to a point. But I'm a big believer in separating out two types of developers. Obviously there are loads of different types of developers in the world, but two types of backend developers; front end developers are almost a separate thing. For backend developers, you've got people who are just working in an enterprise, and the code that they're producing is never going to be run by billions of people in their own data centers. And then you've got people who are writing platforms, and the whole purpose of the platform is to try and get billions of people, or at least millions of people, to run this code. And those people absolutely need to write efficient code. Then I think everybody should get used to getting out their performance profiler and just making sure there are no egregious performance problems with their code.
Because with performance problems, your code's slower, you're burning a load of carbon, and it's a total waste. So again, all very aligned with the business. You want your systems to run fast. Your customers want your systems to run fast. So having a decently performant system is good. But beyond that, you probably don't want to be writing massively efficient code yourself, because that takes a long time; you do want the platforms you're running on to have made that investment. So there's a lot of context here, isn't there? Are you writing code for mass use, or are you writing code that is not really for mass use? Which is interesting. I think that's a subtlety that, lovely article though this was, it did not point out. Navveen Balani: That's a good point. So, yeah, especially packaged applications which will be used, let's say, by millions of developers or users worldwide definitely need to make that investment, right. For instance, the large language models are a good example, right: all generative applications will be used by millions of applications, millions of developers. So how can you make the AI more efficient, right, both on the user side, where whoever is creating, let's say, the prompts, right, creates them in an efficient way so that the round trip is reduced, and secondly on the backend side, right? How do you have a low-cost, energy-efficient model? That's why you're also seeing a lot of LLM vendors now talking about small language models, more energy-efficient, more compact, right? There is a trend, I would say, where organizations are now looking at energy efficiency also, right, as part of the applications or whatever work they have been doing. Anne Currie: Yeah. And of course that's all driven by cost, taking us back to our previous point that cost and green are very aligned.
Well, the bad thing about AI is that it's very costly. The good thing is it's driving quite a lot of efficiency improvements. I talk about this every time I'm on here: Python has got a lot more efficient because of AI; they've rewritten a lot of the core libraries in Rust. And that's a perfect example: they are the kind of people you want to be writing super efficient code. They can save the world one line of code at a time, because so many people run Python. But you want to be getting that out of your platform and not having to do it yourself as a Python user. You don't want to have to change to Rust yourself; you want to be able to get the value of Rust whilst still using Python. So that is all very interesting stuff, but very nuanced, at every degree of it. To my mind, that's what makes green interesting: it's not simple, it's not trivial. You have to step back and ask, "Where am I? What am I doing? How do I fit into this? Where is my effort best applied?" I mean, you're obviously part of the SCI, which covers all of the things, you know, operational efficiency and code efficiency and demand shifting and shaping. What's your interest? What do you like the most out of those things? Navveen Balani: I would say, from a developer standpoint, SCI is more inclusive in terms of roles, right, whether you are a developer, architect, or data scientist, right. All have various parts to play, right, to reduce carbon emissions and make applications more energy-efficient. So, depending on your role, for instance, if you are a developer writing code, right, then you can really focus on energy efficiency.
And it's not, as you mentioned, right, just moving towards C or C++, right, which are more efficient; you have to look at the context of the work you are doing and try to optimize it, right. So you have to make that trade off as a developer: which libraries to use to make it more efficient. Second, I would say, is the whole hardware optimization, which is where all the DevOps and cloud architects come in. There are various custom chipsets from various vendors, right? How can you best utilize them from an infrastructure point of view? And third, I would say, is more strategic in nature, right, in terms of how you bake in the whole carbon-aware computing concept, because that's new. You need to have data providers, you need to tie up with various licenses, which are actually costlier, right, if you look at getting the real time data, right, from various providers. So how you bake that into the application is more strategic work and thinking. So in that way, I would say, depending on your role and context, right, from developers to architects to data scientists, right, each can definitely find value in SCI and then try to reduce emissions in their scope of work. Anne Currie: Yeah. It's interesting. When I first heard about the SCI, I was a bit dubious about its value, but I have completely changed my mind on that as time goes by, especially because I teach people green software. And one of the things that often comes up is people wanting to be able to do like for like measurements. I originally thought the SCI was about a standard that you could compare between applications, and that was where the value would lie, and I was a bit dubious that we could realistically do it.
And now I've realized that I like that the SCI has stayed fairly woolly and loose. It's more conceptual than it is a specific implementation. And I like that, because really it means that companies can choose how to define their SCI score for their applications and choose what's appropriate, what the denominator is going to be. So it's per user, or per transaction, or per something that's specific to them, and then it's essentially like for like. So you can say, well, last year it was this, and the next year it's this, and you can average over time so that you're not comparing a sunny day with a non sunny day, or all those kinds of things. What I like about the SCI is its very conceptual, high-level nature that forces people to think, "Well, actually, how do these things apply to my system?" You've got to use your head. You can't follow it blindly, because it doesn't make any sense if you do that. You have to say, how does this apply? It forces you to think, which I like. Navveen Balani: Yeah, I think that's a good concept. Particularly if you look at SCI, right, it's for an application, so you have better control, rather than being given the carbon emissions of all your applications, right, which typically is what various cloud providers give you. Given an application, you have better control, as you mentioned, right? You can define your own boundary and architecture and calculate the SCI score, right? And the intent is that, as you deploy new versions, right, you should look at how you can reduce the SCI score. We can't achieve zero, but definitely across releases, right, how can you get a lower SCI score? And the point you made about the comparison also, right?
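The like-for-like, release-over-release comparison being described here is what the SCI formula encodes: SCI = ((E × I) + M) per R, where E is energy consumed, I is the grid carbon intensity, M is embodied emissions, and R is the functional unit you choose (per user, per transaction, and so on). A minimal sketch with invented numbers:

```python
# Software Carbon Intensity per the GSF SCI spec: ((E * I) + M) per R.
# The release figures below are illustrative only.

def sci(energy_kwh, intensity_g_per_kwh, embodied_g, units):
    """gCO2eq per functional unit R."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / units

# Two releases of the same app, same functional unit
# (per 1,000 transactions), same grid intensity.
a = sci(energy_kwh=12.0, intensity_g_per_kwh=400, embodied_g=800, units=1000)
b = sci(energy_kwh=9.5, intensity_g_per_kwh=400, embodied_g=800, units=1000)

print(f"release A: {a:.2f} gCO2eq per 1k transactions")  # 5.60
print(f"release B: {b:.2f} gCO2eq per 1k transactions")  # 4.60
```

Because R is chosen by the team, the score is only meaningful against your own earlier releases, exactly the point Anne and Navveen make: it is a like-for-like trend line, not a cross-company benchmark.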
So you're comparing your application versus the previous version of your application that you have deployed. It's not about comparing two applications from two different organizations, right? We're not there yet. It's about using this methodology, right, for your own application and finding ways to reduce it. Anne Currie: Yeah, which makes all the sense to me. Many years ago, when I was more youthful, I used to work in retail, and in retail, like for like comparisons are very important. You want to be able to say, well, this year we've made more money than last year. But you can't just say this year we made more money than last year; you can, but it's not all that useful. What they actually want is a per-thing measure. In retail it's often per square foot of retail space, balanced for, well, how expensive was that retail space? So you're not comparing and saying, well, this year we made more money on the same amount of floor space, but, you know, one was in London and the other in the middle of the desert. You've got to come up with your own like for like measure so that you can say, well, is our business improving or is it getting worse? And the SCI is exactly the same. It's the concept of like for like; it's for you to check your own progress, not for you to check against other people. So yeah, I've been completely won over to the SCI. I was highly dubious to start with. Right. Navveen Balani: Good example. Yeah, that's a very good example from retail. I'll also use that. Anne Currie: Right. So now we'll go to the final thing we're going to talk about today, which touches on what we just discussed from a slightly more holistic perspective. This was an article in Silicon Republic, a Q&A with the chief security officer of an AI company.
Her premise, and I totally agree with it, is that cybersecurity is very aligned with being green. It's a bit thin as an article, it doesn't give you an awful lot of information, but I think the idea that we should be discussing is: are security and green aligned? And you, Navveen, have talked a little bit about that, in terms of building security in: we've learned to do that, and we should learn to build green things in in the same way. But separately to that, are there security benefits to being green? What do you think? Navveen Balani: I would say, yeah, definitely there are synergies between security and green principles. So I'd like to again give the example of SCI, right, if you break down the methodology into three parts, right, starting with making applications more energy-efficient. If you look at the security algorithms, right, how can you optimize them to, let's say, use fewer computational resources? Particularly, if you look at the security stack, right, it has also evolved through various encryption and cryptography software, right, and now I think you have various key ciphers available across different dimensions. So they're already following this, I would say, best practice, right, of making encryption and security, right, more performant and easier to adopt. So in that case, I would say it's more aligned towards the algorithms they're using being more efficient, right, as compared to what it might be, let's say, five or ten years down the line for security protocols.
And similarly, green strategies can also be applied to security scanning. For instance, vulnerability scanning is commonly used to identify threats, whether in the cloud or on desktops and other systems. Those scans can take advantage of running at a time when the carbon intensity is low. In that way, security can apply green software principles: run all those scans when the carbon intensity is low and save on the carbon emissions. So I would definitely say there is synergy between security and green software, and certain green software principles can be applied to the security domain to make it more energy-efficient.

Anne Currie: And something you mentioned a little earlier: security is the perfect example of where using the right hardware for the job massively cuts your emissions. Dedicated chips that are designed for encryption are just so much more efficient than using general-purpose CPUs for that. We wouldn't be able to do what we do these days if it wasn't for dedicated chips. And oddly enough, this also maps to some of the stuff we said at the beginning about manual ops and manual FinOps: it's amazing how many systems are running on machines that everyone's kind of forgotten about. They're not keeping them patched, and they don't really do anything useful anymore. Those are your back doors, the ways that people break in, and they're just wasting electricity. Even in Building Green Software, one of my co-authors brought up the fact that attacks are very much an example of a waste of electricity.
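The carbon-aware scan scheduling Navveen describes above can be sketched in a few lines. A minimal sketch, assuming you have an hourly carbon-intensity forecast; the numbers here are invented, and a real system would fetch the forecast from a grid-intensity service such as Electricity Maps or WattTime.

```python
from datetime import datetime, timedelta

def pick_scan_window(forecast: list[tuple[datetime, float]]) -> datetime:
    """Return the start of the hour with the lowest forecast grid
    carbon intensity (gCO2/kWh), to schedule a vulnerability scan."""
    return min(forecast, key=lambda pair: pair[1])[0]

# Hypothetical 4-hour forecast in gCO2/kWh.
now = datetime(2024, 6, 1, 0, 0)
forecast = [(now + timedelta(hours=h), g)
            for h, g in enumerate([420, 380, 190, 310])]

best = pick_scan_window(forecast)  # the cleanest hour in the window
```

Because a vulnerability scan is usually not latency-sensitive, it is a good candidate for this kind of time-shifting: the work still gets done within the window, but at the lowest-carbon hour.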
So something like a denial-of-service attack: the whole purpose is to burn electricity in your systems, so that your systems don't have any time to do the thing they're designed to do, the thing that has value for you. Instead, it's just burning your systems up, burning electricity, running up all your bills to do something which is bad for you. So having a secure system, applying the latest patches so that you're less exposed to denial-of-service attacks, is green, because denial-of-service attacks are very ungreen; they are very dirty. It is quite interesting, isn't it, from that angle?

Navveen Balani: Yeah, and that's why provisioning the right hardware matters, virtual machines especially. Various cloud providers now offer managed services to detect denial-of-service attacks, and I assume the underlying hardware they're using, which would be shared by millions of applications, would definitely be more sustainable, more energy-efficient, and more scalable.

Anne Currie: Yeah. Security is really interesting in that respect. FinOps is pretty much a direct proxy measurement for carbon emissions, except for things like your free Azure credits, and it's likely to become even more so in the future when we get dynamic pricing. But security is not a direct proxy measurement; it's just that a lot of best practice in ops is also best practice in secure ops is also best practice in green ops.
You can't really use the number of attacks you fall foul of as a proxy measure for carbon. Well, maybe you could, but I think that would be a bit complicated. Let's say security is aligned with green rather than a proxy for it. It's interesting. So we've talked about those three things, but have you run across anything else interesting at the moment that you think our listeners should hear about?

Navveen Balani: Yes. At the Green Software Foundation specifically, we are working on green AI, looking at how we can extend the SCI to do measurements for large language models and generative AI models. That's something we are actively working towards from the foundation's perspective. And from an SCI perspective, we want various extensions. For instance, how do you do SCI for web applications, or for backend applications? We want to make it easier to measure different parts of the code and make that easily available to developers, so that developers can measure their part of the overall carbon footprint and we can make it more accessible. So that's one thing from the foundation's perspective: looking at how we can make the SCI more extensible to various other use cases.

Anne Currie: That's very interesting, because obviously AI is the workload on everybody's lips at the moment. I saw some very interesting charts, from The Economist I think, the other week, that showed the enormous amount of power currently being used on AI, but still less than the amount of power being used on Bitcoin. So that's just worth reminding people of.
And of course, Bitcoin is very aligned with our last conversation about security and attackers wanting to run up your energy bills, because quite often what they want to do is mine Bitcoin on machines you're not properly watching. So security sweeps are a pretty good way of identifying machines that are burning power totally unnecessarily. On the political front, I quite like to keep my eye on more than just the AI news, and the really good news last week, from a political rather than a technical perspective, was that the world just got its first climate-science-trained president: the new female president of Mexico is a climate scientist by trade and training. I'd be very interested to see what effect that has on the country. Any other interesting political news that's good news, do you think?

Navveen Balani: No, I think I've yet to catch up on that.

Anne Currie: Well, I'm quite nosy, so I keep my eyes open on all things. And I would say there's actually a lot of good news going on at the moment. Texas is now a massive solar-producing state. The world is changing in a positive way, and I like to keep reminding everybody that we are not doomers at the Green Software Foundation. We are doing this because we believe it will have an effect.

Navveen Balani: Totally, and especially with the foundation, it's been a collective journey. We started three years back, and we didn't have any specifications or tools. Three years down the line, we have our first specification, the Software Carbon Intensity specification, which is now an ISO standard. We have various tools now: the Carbon Aware SDK, the Impact Engine Framework. And I know there are 1230 projects already in the pipeline at the foundation, which will make the world, I would say, a better place, right.
In terms of sustainability, for all the work that we do. So yeah, climate change is basically a shared responsibility, and from our perspective as developers, all we can do is contribute by using the three SCI principles I talked about, which I'll repeat: write more energy-efficient code, use hardware wisely, and make applications carbon-aware.

Anne Currie: And actually, not just write it, because you might not be the one writing it; it's just as important what you use. Python is such a good example at the moment, because of the big efficiency revisions underway. People who upgrade to the latest, much more efficient versions of Python will be saving a lot compared to people who don't upgrade. Those kinds of things are exactly what will be unearthed by running the SCI like-for-like. A really big change in your like-for-like might simply be that you upgraded to more recent, more efficient versions of a particular library or set of tools. The SCI isn't just about what code you write; it's about what code you use. And that is almost certainly where you'll get the biggest return. Anyway, sorry, I'm trying to take the last word again, so I'm going to leave the last word to you, Navveen.

Navveen Balani: So yeah, very happy to be part of this podcast. I enjoyed this conversation covering, I think, three different aspects. And to all the viewers: thank you for listening in, and have a good day.

Anne Currie: Navveen, awesome. Thank you very much for coming on this podcast.
And a final reminder that all the resources are in the show description below, and you can visit podcast.greensoftware.foundation to listen to more episodes of Environment Variables. See you all soon, at some point, if they ever let me back in again as a guest host. Goodbye.

Navveen Balani: Goodbye. Thank you, everyone.

Chris Adams: Hey everyone, thanks for listening! Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing; it helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again and see you in the next episode. Hosted on Acast. See acast.com/privacy for more information.
