On-Demand

Infrastructure as Software, Not Infrastructure as Code

Fireside Chat with Kris Nóva

Air Date: July 15, 2021

Noah Abrahams: Okay, so I think we’ll get started now. Welcome everyone to our Fireside Chat series.

I am one of your co-hosts, Noah Abrahams. I am the Open Source Advocate here at StormForge. With me, as always, is my co-host, Cody. Introduce yourself, please.

Cody Crudgington: Hello. I’m Cody, Senior Consultant here at StormForge.

Noah Abrahams: And with us today is our inimitable guest, Kris Nóva, Senior Principal Software Engineer at Twilio.

We will be using the Q&A throughout this session, so if you have any questions, please drop them in there instead of into the general chat.

And let's get started… So let's start, Kris, why don't you start by telling us a little bit about yourself and tell us a little bit about how you got started in this whole infrastructure direction.

Kris Nóva: About myself and how I got started in this whole infra direction. Hi! I’m Kris Nóva. I work at Twilio for my day job. I’m an engineer there/here. I’m at my house, but I work at Twilio, but here is now work.

Yeah… it's all related to my career and, you know, I think Linux and computers have always been something I was passionate about. I found out at a young age there was money to be made there, and money is the thing that you need to survive in this country of ours… so anyway, I figured if I could survive while also doing something I loved, that would be great.

Anyway, infrastructure was something that I was passionate about very early in my career. My first job was working at, like, kind of a do-it-yourself, roll-up-your-sleeves software engineering job. I wasn't paid very much. We were writing code in the back of a warehouse. We had once been like a tool shop and we turned into an online store.

Needless to say, we had good ideas. We wrote good software, but we didn't have any DevOps. This was before the days of DevOps, before anything else, so our servers were old recycled computers from our, you know, marketing department that we stuck in a server room and tried to build, you know, infrastructure on.

So I think, from an early age, including my personal life, computers were fun. Getting computers to work the way I wanted them to and to be set up in a way that worked well for me to have fun with was always a big pain in the butt. So I found very early that I could do this thing of using computers to make other computers be less horrible for myself, and then, you know, I applied that at my day job. I think you see this really conceptually, but, like, I wrote bash scripts, right. I was able to use computer science to do things to make computers easier for myself or for other people to use.

Anyway, as I grew in my career, that problem never seemed to go away, but my ability to craft computers, and craft systems of computers, dramatically grew and scaled, and technology like Kubernetes has come along, and I'm still solving that same original problem of, like, computers just suck. I'm trying to make them better for myself and for teams of people, except now I'm doing it, you know, in charge of designing fleets of some very, very expensive computers and data centers that do a lot of things, everything from how does the computer get into a rack and how do we make sense of that, all the way up to, you know, do we have an HTTPS API listening that's available to the public Internet, and everything in between. So that's kind of my TLDR.

My resume is littered with a ton of jobs. I worked at Microsoft on Azure, VMware. I was part of Heptio. We were a lot of the original Kubernetes folks there. I'm sure I'm forgetting other things… I was at SolidFire and we went through acquisition into NetApp. So, low-level software engineering pretty much my whole career.

Cody Crudgington: Awesome. 

Kris Nóva: Yeah!

Noah Abrahams: And that’s I think that’s a pretty good segue actually because you’ve spoken a lot and written a lot about Kubernetes and written an amazing book, which anyone attending should probably check out.

Kris Nóva: Yes, you should buy several copies of “Cloud Native Infrastructure” for your friends, immediate families, and their friends and their immediate families today.

Noah Abrahams: So, you've talked a lot in there about Kubernetes, and when we were setting up this talk, you were talking about how Kubernetes approaches problem solving and system management, and how it's interesting, the way it approaches those particular areas.

So, let's talk a little bit. What would you say are kind of the takeaways from Kubernetes, with the amount of time that you've spent in there, on how we're approaching infrastructure and infrastructure design? Like, how do we lead into this?

Kris Nóva: Yeah, totally. Okay, so, like, real quick let's start with Kubernetes. So Kubernetes is successful because it attempts to at least abstract what I've been struggling with my whole career. From an application, or an application developer's, perspective, Kubernetes is the ultimate API.

And it’s a little weird at first, because you’re like, why does it look and feel this way and it’s because it is a good abstraction over infrastructure.

If you look at computer science, right, you know up here on my bookshelf I have a copy of CLRS, which you know goes deep into algorithm design and runtime compute management, but then like you know there’s other books on my shelf that are more about you know networking stack, the OSI reference model, right and computer science can fundamentally be broken down into like you know, two or three or four main pillars, all of which I feel like Kubernetes abstracts away.

Kubernetes did a really good job at abstracting the container runtime, which is like a compute resource, and abstracting storage with CSI, and abstracting networking with CNI.

And bubbled all that up to the user and said okay you’re an application developer. Here’s what you need and how you need it in a way that’s meaningful for you to understand.

Which is great, and we needed that. Every engineer who has, you know, been in the business for the past 3 to 10 plus years has felt the pain of trying to deploy applications and it not being fun, especially at scale. That’s just a hard problem to solve.

Kubernetes took a way of managing things from the top down. Now, let's flip that around and let's take Linux, which Kubernetes was built on. It did the complete opposite. It went from the hardware up.

It said, let's abstract the hardware and point it up towards this place we call user space, and user space is, like, you know, you SSH into a terminal.

And we now have these hardware abstractions where, like, if you, you know, want to explore a block device, there's ways of doing that; if you want to mount a block device and go explore it using the Linux filesystem, you certainly can. It doesn't really matter what hardware you have. Doesn't matter if it's a solid state drive or a spinner, like, whatever, it's all abstracted. Kubernetes did that the other way.

So we're left with this really annoying, neglected user space in the middle between these two abstractions, and that, for me, is, like, that's my job: to make that less annoying. The more I learned about the software engineering side of why Kubernetes did what they did for application developers, the more I appreciate the constructs, and yet again still find myself struggling with this ugly middle area of what is user space and how do you glue together these two abstractions that are meaningful for completely different perspectives.

Cody Crudgington: Right, so there are a lot of complex problems that still need to be solved, right, and it's a little easier on one side and a little bit harder on the other.

There's this, like you're saying, there's still this gray area that we have. For today's talk we're focused on infrastructure as software, not as code. So for our audience, can we just define infrastructure as code now, before we segue into the other things?

Kris Nóva: Yeah, so this is, this is a hard problem, right. I feel like this is a very personal question. It's like asking, you know, what does DevOps mean to you, right. DevOps might mean different things. If you're a DevOps engineer and you find value in that, or you've been burned by that, that's a personal thing, right.

We were talking earlier about how you have the type of people who drive trucks. That's a personal choice, like, you know, you wake up one day and you're like, I'm gonna be a truck person.

And I feel like DevOps is kind of similar. Like, I'm going to be a DevOps person. I feel like infrastructure as code kind of follows the same idea. So to me, there's a difference between what I would call code, or like configuration management or a scripting language, versus what I would call software.

So a really good example of this is the difference between a bash script and a compiled binary. There's some runtime discrepancies there and reasons why you would use one or the other. But I think the paradigm remains the same. Code is a representation of what you hope to do, and software is more of a: given these well-known constraints, we absolutely will do these things under these certain circumstances.
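
To make that distinction concrete, here is a minimal sketch, with an invented path and function name, of the same intent expressed both ways: the script line is a hope that the machine happens to be in a state where it will work, while the compiled function states its constraints and either converges or fails explicitly.

```go
package main

import (
	"fmt"
	"os"
)

// The "code" version of the intent is essentially hope written down:
//
//	mkdir /var/lib/myapp && chown app:app /var/lib/myapp
//
// What actually happens depends on whatever is already true on that machine.

// ensureDataDir is the "software" version of the same intent: given
// well-known constraints (the path exists and is a directory), it either
// converges to that state or fails with an explicit reason.
func ensureDataDir(path string) error {
	info, err := os.Stat(path)
	switch {
	case os.IsNotExist(err):
		return os.MkdirAll(path, 0o755)
	case err != nil:
		return fmt.Errorf("stat %s: %w", path, err)
	case !info.IsDir():
		return fmt.Errorf("%s exists but is not a directory", path)
	}
	return nil // already in the desired state
}

func main() {
	if err := ensureDataDir("/var/lib/myapp"); err != nil {
		fmt.Fprintln(os.Stderr, "cannot converge:", err)
		os.Exit(1)
	}
}
```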

And so, when I look at the state of infrastructure over time, I learned that we’re actually just about a decade behind software.

In other words, 10 years ago, software management meant we were running virtual machines and writing monolithic Java applications in object-oriented programming languages.

Every engineer getting a job walked into that, you know, engineering interview with: I understand the four tenets of object-oriented design, I know what polymorphism is, I know what abstraction and encapsulation are.

I haven't said those words in my career in, you know, five years. A reason for that is because the software industry has changed, and anyway, if you look at our deployment mechanisms, I think infra is kind of hot on the tails of software engineering. So for me, there's a big difference between writing down, let's take Terraform configuration, or Puppet, or Ansible.

Let's write down, hey, you know, I want a virtual machine and I want you to run these 12 bash commands on it, and I want to make sure these eight packages are installed.

And that's a great starting point. That's certainly a lot better than walking into the server room and, you know, plugging the keyboard in and turning the monitor on and typing apt-get install package and hoping that, you know, that computer, which was built two years later than the other computer, will just magically work the same way.

So we've definitely come a long way; however, we noticed in software that that's kind of a one time thing. This concept of one and done. This concept of throwing the ball over the fence, of typing terraform apply or, you know, getting a stack to Chef, right. We've all said this before.

What happens afterwards, right? It's easy to create the infrastructure, but how do we manage it? The management isn't always quite as simple as we think it is, because things happen. Software likes to write things to disk, disks break, networks are changed, there's security incidents that cause other teams to do other things, and when the complexity of your infrastructure grows with the number of people working on it, the size of your company or your team, it's not as simple to reconcile a lot of that management. So infrastructure as software, to me, is a means to an end for how we manage it after it's up and running.

We have software that is running that is looking at it, that is continuing to reconcile it over time.

And the whole like… This isn't new. I didn't… This was not like a new crazy thought, you know. Kubernetes has been doing this for six years, right. This is robotics theory. This is, this is a well known pattern that we see all over computer science. We've all seen a systemd daemon before. And what does the systemd daemon do? It's just a loop that runs and tries to reconcile things over time. I'm just saying, like, that's a really good engineering pattern, and we should probably apply that to our servers and our network and our storage and all these things we as infrastructure struggle with.
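
A minimal sketch of that reconcile-over-time pattern, in the spirit of a systemd unit or a Kubernetes controller: the observe and converge functions below are placeholders for whatever your infrastructure actually talks to, not calls from any real library.

```go
package main

import (
	"log"
	"time"
)

// DesiredState is what was declared: how many machines we want.
type DesiredState struct{ Machines int }

// observe and converge are stand-ins for real cloud or hardware API calls.
func observe() int { return 2 } // how many machines actually exist right now

func converge(want, have int) error {
	log.Printf("reconciling: have %d machines, want %d", have, want)
	return nil // create or tear down machines here
}

// reconcile never finishes. It compares desired with observed, acts, sleeps,
// and does it again. Drift (a dead disk, a manual change, a security
// incident) is just another delta to close on the next pass.
func reconcile(desired DesiredState, interval time.Duration) {
	for {
		if have := observe(); have != desired.Machines {
			if err := converge(desired.Machines, have); err != nil {
				log.Printf("converge failed, will retry: %v", err)
			}
		}
		time.Sleep(interval)
	}
}

func main() {
	reconcile(DesiredState{Machines: 3}, 30*time.Second)
}
```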

So to me, infrastructure as code gets you up and running quickly, which is the problem you want to solve, but it also allows for a ton of drift from that original starting state. Infrastructure as software is the idea that I don't ever mutate anything manually. I write software to make it reconcile from an unknown state to the state that I want it in. So, that is, I mean, that's Kubernetes, right, and all I said was, like, hey, we can learn from the application guys and start managing our infrastructure in the same way.

So to me that idea was very clear, and I've actually found that it's not always that clear and that simple when I'm entering a lot of these infrastructure conversations, right. A lot of folks, I think I've found, myself included, would find great value in just slightly changing the perception of what it means to manage the infrastructure versus what it means to create it.

So that’s my diatribe anyway, I don’t know if they’ll have any thoughts on that, but that’s kind of what it means to me.

Noah Abrahams: I have a thought on that around sort of the people involved in that process. So it feels like you’re saying that there is basically just a lag between the infrastructure folks and the software folks and the theories that are being put in place and the tools and techniques that are in use.

But I know, from a stereotyping perspective, folks on the operations side, the folks that typically control infrastructure, tend to be a bit more, I don't want to say resilient, but more cantankerous. They've got their thing, they know it works, and they don't really want to move on, historically.

Are you seeing, from your point of view, a lot of change of people taking these new principles that were already developed in the software industry and applying them to infrastructure? Are you seeing a change in the mentality of the people that are working with the infrastructure, to be able to adapt that and bring it in, like, with the rapid adoption of Kubernetes?

Kris Nóva: Absolutely. It’s a change in thinking, and with this change in thinking comes a change in the systems we use, we count on, and how we think about our systems.

Right, like, there's a big difference in my mind between saying, I bought a daily driver to get me to and from the office every day, and, I bought a car that was reasonably priced so I could drive it across the country twice.

You might buy the exact same car, you might sit behind the exact same steering wheel, but the mentality of what you do with the car is going to dramatically shift.

So I feel like we’re seeing that level of… It’s the same tools and technology we are all familiar with, but we’re reframing what we’re using it for and how we’re thinking about it.

And that is actually causing some pretty dramatic downstream effects. I think one of the most really fascinating downstream effects is something that we're seeing today at Twilio, with the work that I'm doing there, which is: most infrastructure orgs, most infrastructure, even if it's one person. Let's take a small team of engineers that's got, like, a DevOps guy, right, or an infrastructure lady, right. Let's take that scenario and let's look at the job, and the work, and how it's generated for the infrastructure engineer.

It's typically a reflection of an engineering need. So fundamentally the culture is that engineers are rewarded in the workplace for being creative.

Engineers are rewarded for generating work and generating ideas. They say you have an idea to solve a business problem, we will reward that. That’s good behavior. You know throw you a bone, right. Good dog, right.

And so, infrastructure engineers have a completely different set of rewards and a different definition of success, which is: how do I enable the ideas of, and the work generated out of, the engineering org, right. We're a means to an end. We're a void that exists because of another problem.

And so, that is true, and that's why I got here in the first place: because it's always been a problem, and anytime there's a problem, there's an opportunity, so I just took advantage of this opportunity. And if you look at infrastructure teams today with this, oh, if we're writing software to manage our systems, we are software engineers. This is the first time that architectural design, and the business rewarding it, is coming from the infrastructure org, instead of being handed off, you know, sort of like through a second-degree generation of work. So I think what we're seeing is we're seeing people who have traditionally found value in solving really, really hard complicated problems for engineering orgs, for the first time, imagining what it would feel like to not only identify their own problems, but also offer a software solution on top of that as well. And that, that's a big change in thinking. That is, I mean… That's like something you take home with you every day, and that is something that, like, you know, for me, I talk to my partner about it, like, we talk about this type of stuff often because it's so fundamental to, like, how you make a cup of coffee in the morning, right. It's just this idea that goes with you.

So I do think that we’re seeing a change in thinking and we’re seeing systems reflect that. Going back to the car analogy, it’s ironically still the same systems we’ve been using this entire time. Yeah.

Cody Crudgington: So yeah. Early adoption, right. Again, it also lags behind, you know, whatever, especially, you know, maybe not so much startups and product teams or infrastructure teams there, but, you know, when talking enterprise and big business and things like that, they typically lag behind 10, 15, maybe even 20 years, right.

What's the path to get from this infrastructure as code to infrastructure as software, right? You say it's a change of thinking, and, you know, like Noah was saying, people are… They don't want to give up their toys that they just learned and they've gotten certified on and they're comfortable using.

How do we push this progression? Because it does seem like these frameworks, and these contracts, they have been in place for years, like you were saying. Kubernetes did it, you know. What’s the right path? How do we get there?

Kris Nóva: So, like let’s go back to the psychology of how work is generated. The reason infrastructure folks are typically reluctant to give up patterns that they have found success in is because there’s a good chance they’re getting a lot of like hey I have a problem fix it.

Right, like, in a world where the infrastructure org's entire job is to fill a void that exists because of another org's needs, that whole cultural relationship is based around that: hey, I have a void, fill it.

And so, if you're getting a ton of requests for, like, hey I have a problem, hey I need a server, hey fix my network, hey the storage isn't big enough, hey we need more disks, hey I want to try this new database, hey this new database is too slow.

Right, like every devops engineer is going like this right now, because we’ve heard this a million times and that’s because that’s the culture.

And so I think the reason that we're reluctant to change our toys is nothing more than, like, we're just reluctant to engage in any more chaos, right? So it's basically like saying, I'll say no. I hate to say it, but it's like, you know, private healthcare, right? Default no. We're going to default deny everything out of the door.

Just because we want to see who's going to yell the loudest, and then we'll give those folks the time of day. Unfortunately that's a very reactive, defensive technique that is effective, but, like, even used appropriately, it can be harmful.

So I think, you know, when we're looking at tools that people use every day and how do we get them to change, I think that cultural shift isn't something you can give anyone; it comes from within somebody's approach to how they solve their problems.

There’s a big difference between hey I need a server go create a server for me and saying oh sure what do you want your server to look and feel like? That makes everybody happy that feels good, right? People look at that say you’re doing great.

The engineers say, hey, I got what I need, and everybody says, yeah, hey, we're doing… We're having a party. Then, like, fast forward a year and every infrastructure engineer is going over here going, holy shit, how do I manage this?

I gave everybody what they needed and now I’ve got Ubuntu and Fedora, and you know I’ve got this IP block over here on this subnet, and we’re using a completely different routing protocol over here, and we use switches here, and we’re in Amazon over there and…

There’s just so much chaos and complexity that this is unmanageable and surprise, here we are day 2 infrastructure operations, this is what happened.

And so, like, I think it's this idea that you're not necessarily saying no to the request, but you're just challenging if they want it in the form that they are asking for. It's really easy for somebody to say, hey, I need a server. It's really easy to give someone a server. It's really hard as a person to say, do you need a server or do you need a MySQL database?

Right? That’s the question and, more often than not, I found a small amount of kindness, a small amount of thought, and a small amount of listening can turn those hey I need a server and I need it now conversations into hey I need a database and I’ll take it in six months.

And I think that's the change. That's the change, and that's how you get the software to change, and that's how you get systems to change: by changing the culture.

Systems are a reflection of culture, right? Everybody is mad at Jenkins because Jenkins is where we go to automate things. We're not mad at Jenkins. We're mad at our ability to continue to, like, shove things into Jenkins.

It’s like everybody says they hate city life, but they don’t really hate the city, they hate cars. Cities are crowded because there’s cars.

You can’t walk around the city because of parking. You can’t find parking because there’s other cars. Cities are loud because of cars. Cities are dirty because of cars. People don’t hate cities, people hate cars.

People don’t hate infrastructure management, people hate the things people do with infrastructure management.

And so, like, what I'm trying to say: that's the change. You can use Puppet, you can use Chef, you can use Ansible, you can use Terraform, you can use Kubernetes, these are all fine.

However, like, we need to think about when we use them. That is the change. That's the change that I am really getting at here, and I think thinking of yourself as a feature-driven team, instead of a reactive team, that's the difference between infrastructure as software and infrastructure as code.

Instead of saying, here's the server that you asked for, I'm going to give it to you today.

You say our next release of software will include an API that will give you the capability to be self-sufficient and get a server on your terms.

Please stand by for further updates. That's the change. We're still making servers. We're still creating servers, but we're just changing the language and how we communicate about servers.
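
As a sketch of what "an API that lets you get a server on your terms" might look like, here is a hypothetical HTTP endpoint; the route, request fields, and behavior are invented for illustration and not taken from any real platform.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ServerRequest is a hypothetical self-service request: the consumer says
// what they need, and the infrastructure team's software decides how and when.
type ServerRequest struct {
	Team    string `json:"team"`
	Purpose string `json:"purpose"` // e.g. "mysql", not just "a server"
	CPUs    int    `json:"cpus"`
}

func requestServer(w http.ResponseWriter, r *http.Request) {
	var req ServerRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request: "+err.Error(), http.StatusBadRequest)
		return
	}
	// Instead of hand-building a box today, the request is queued for the
	// reconciler; the caller gets an acknowledgement and a release to wait for.
	log.Printf("queued server request from %s for %q", req.Team, req.Purpose)
	w.WriteHeader(http.StatusAccepted)
	json.NewEncoder(w).Encode(map[string]string{"status": "scheduled for next release"})
}

func main() {
	http.HandleFunc("/v1/servers", requestServer)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```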

I’m Kris Nóva. Thanks for coming to my Ted Talk. 

Noah Abrahams: Okay, so that brings me to an interesting question, because you're talking about how these things are all interacting and how you're using them. I'm leading into another topic that you brought up, about placement of pieces, such as a kernel, and things like that. Are we making a differentiation here between software, this sort of platonic idea of software, and the concept of an operating system? Are we making any differentiation between those two and how we're approaching applying these principles to improve?

 

Kris Nóva: Yes and no. That’s a really, really darn good question because, like one person’s infrastructure is another person’s user space, right? It’s just kind of like that’s how this stuff trickles down, you know.

I might be an application engineer who wants to get my little application into a container, and that's all I know, and that's all I should know.

That container is my world, and then that container sits on top of 12 other worlds that go all the way down to like an actual electron going across a motherboard.

And so the operating system can represent itself many times compounding in that stack.

You might see an Ubuntu operating system and a container that runs on a virtual machine running Fedora that runs on a VMware Hypervisor.

And all three of those different operating systems have concepts of kernel abstractions, and how to integrate with hardware, and, like, if you throw a Java virtual machine or eBPF in the mix, enter yet another virtual machine that gives you full Turing-complete capabilities that are abstracted.

There you go. We're reinventing the wheel in different abstractions over and over again. So software, to me, comes with the paradigm of a software-driven workflow.

Code, to me, comes with the paradigm of GitOps. Of, you know, put it into Git, and that's much better than having a bash script on my computer.

And that is it. That's absolutely better than having a bash script on my computer, but we still aren't writing tests, we still aren't doing feature releases, we still aren't working as, like, a service-oriented engineering team, right? We're still just, you know… We can do a git push whenever we want. We will push it live and things will change, and we solved the problem of today, and at least we're solving the problem of today in a repeatable way, but it's not… We're still not doing, you know, this is 1.0. This is 1.1. This is 1.2.
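
One concrete version of "we still aren't writing tests": if the reconcile logic is factored so the decision is a pure function, separate from the cloud calls, it can be tested like any other code. A minimal sketch, with hypothetical names:

```go
package infra

import "testing"

// machinesToCreate is the kind of pure decision function a reconciler can
// delegate to: given desired and observed counts, how many machines to add
// (negative means tear down). Logic like this is trivially testable.
func machinesToCreate(want, have int) int { return want - have }

func TestMachinesToCreate(t *testing.T) {
	cases := []struct{ want, have, expect int }{
		{want: 3, have: 2, expect: 1},  // scale up by one
		{want: 3, have: 3, expect: 0},  // already converged
		{want: 3, have: 5, expect: -2}, // drift: tear two down
	}
	for _, c := range cases {
		if got := machinesToCreate(c.want, c.have); got != c.expect {
			t.Errorf("machinesToCreate(%d, %d) = %d, expected %d",
				c.want, c.have, got, c.expect)
		}
	}
}
```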

The operating system absolutely goes into that. How we keep our tools upgraded absolutely goes into that. And there’s a ton… So much prior art of all the different ways you can manage different operating system versions, you know. Operating systems, they have their own release cycle, right, just like we should have as an infrastructure team.

Ubuntu comes out every couple of months. Arch Linux does rolling releases, you know. Every morning I can upgrade my kernel and get the latest kernel patches.

Not every OS does that, so there's different patterns. Do you do quarterly releases, do you do daily releases, do you do them when they come, do you take the Arch Linux model? And we can borrow all of them. It's the exact same pattern, except now we're just applying it to your virtual machines, your networking, your storage.

Cody Crudgington: Interesting. So let me ask this. In some of the notes we tossed back and forth, I think in there you mentioned cluster API acting as the kernel. Can you dive deep in that a little bit?

Kris Nóva: Absolutely. My first task in Kubernetes was please get Kubernetes up and running.

In fact, Kelsey, Joe, and Brendan wrote a book called "Kubernetes Up and Running," and it was published, I think, the same year as my book, 2017.

I want to find it here. Yeah, September 2017. So this book's a little older, but the point is, like, we had to go through this. We had to get Kubernetes up and running.

And what we found was that we had that same problem we’ve been facing that every infrastructure devops engineer faces, which is infrastructure is hard. And so when I looked at cluster API, this is a reflection of us managing infrastructure at scale.

Every company I worked at, like, I worked on Azure Kubernetes, right. AKS. I wouldn't say I'm proud of that by any means, but, like, I worked there.

And so that was my day job. That's where ___ came from, and our use case was, how do we, you know, how do we stamp out Kubernetes clusters? How do we mass-produce Kube clusters?

And all of this went into my book, "Cloud Native Infrastructure," and went into the work I did in kops, and it went into the work I did in Kubicorn. It went into this idea that, like, we need to be a… How do you deal with a ton of requests coming your way in open source software? How do we deal with it in Linux? How do we deal with it in the Kubernetes management itself, right, the Kube community?

How do we deal with it within any Open Source project? We have stable releases and we value standardization over customization. If we start responding to every request, we're going to be buried over our heads. And all we did with cluster API is basically say, now, how do we approach that in a Kube-native way?

So, in the same way that Kubernetes is a kernel for your team, Kubernetes can equally be used as a kernel for a software engineering team whose job it is to manage infrastructure.

Right, think about it. If we're a software team, we do feature releases, we need to write code, code needs to manage things, we have to bundle up services and applications, we have to version them over time, we need tests, we need to run it somewhere, and we need a standardized way of running it. And if you were to assess the state of the software engineering ecosystem today, 2021 in the year of our Lord, what is the best way to go run enterprise applications? It's actually the very same problem we're trying to solve, and thus cluster API was born.

In my mind it's nothing. It's effectively meaningless. There's nothing there. There's literally nothing to it. It's just the idea that, like, we're now using Kube to manage Kube. That's not a very profound thought. However, when you actually look at the similarities of what people need to do while managing Kube, there's actually a void in what Kubernetes offers and what we need. That's where a lot of these tools like clusterctl and cluster API itself, you know, the concept of a Machine as it compares to a Node, all of this came into this idea that we should be using Kube to manage other Kube clusters.

And yeah, like, it's your kernel. If you need a web service to help you with infrastructure, Kubernetes has this really cool concept of Services, and if you need to, you know, TLS encrypt it, there's tools like cert-manager, and then all of a sudden you're like, oh, we are just a Kube-native team. But our use case is not to build the business app, it's to build apps to help enable people to build other apps.

So anyway, cluster API is a kernel. It’s nothing more than saying Kubernetes is a kernel, and we’re just a regular software engineering team like everyone else.

Noah Abrahams: In that context, that absolutely explains the cluster API logo of turtles all the way down.

Kris Nóva: Turtles all the way down, baby.

Noah Abrahams: Oh, so if that's the kernel, where do you think, I just want to get your opinion on this, do you think Kubernetes leans more towards the software or the operating system side of the house? I don't know why that was a weird inflection, but it was.

Kris Nóva: Oh, you're fine. Kubernetes does a great job. I'll do the psychology thing here. Kubernetes does a really great job at creating meaningful systems abstractions for application engineers, which is a really fancy way of saying Kubernetes gives people who write code what they want and what they need, on their terms.

Put your container image here. Press go. Shut up and don't worry about anything else, right? That's what a software engineer wants, and as any person who has tried to deploy an application to an enterprise fleet of Linux servers knows, somebody walks up to you and goes, well… So yeah, I noticed you're using this library; unfortunately we run x86 in our data centers, and this is compiled for a different architecture. You're like, I don't care, what does that mean? It means the software you wrote won't run in production.

And it's just kind of like, why do I care about that? That's not my job, right, but it has to be somebody's job. Somebody has to solve the problem.

And so, like it, that is why I think Kubernetes gives you what you want, as a developer, and on your terms.

Anyway, I think that was great for the developers, but I think that came at a pretty handsome cost of neglecting what we traditionally have called the operating system.

And, you know, I did a white paper on what I call the distributed operating system void, which, I think there's a huge void here. I think there's a huge void in how Kube even approaches different tenets of computer science at the operating system level. Like, if you look at the kubelet, the main work engine of every Kubernetes node, it has a concept of storage drivers.

But, like, networking is out of scope. Okay, but so how do we solve networking? Completely different installation, management, and paradigm than storage.

And it's just like, does it really need to be that way? For anybody who actually gets down and looks at the implementation underneath Kube, all a DaemonSet is is just a glorified bash script, right. All of this is just, like, paradigms we've been doing, but it's just wrapped up in so many layers of abstraction that it feels better.

But yeah, we're still, like, if you look at CNI, there's little executable binaries in /opt that, like, spit out JSON, and the JSON just always comes in the same form, and that's what we call container networking.
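
For readers who have not looked inside /opt/cni/bin, here is a heavily simplified sketch of the shape of that contract, not a spec-complete plugin: a standalone binary driven by environment variables, reading the network configuration as JSON on stdin and answering with JSON on stdout.

```go
package main

import (
	"encoding/json"
	"io"
	"os"
)

func main() {
	// The runtime hands the plugin its network configuration as JSON on stdin...
	conf := map[string]interface{}{}
	raw, _ := io.ReadAll(os.Stdin)
	_ = json.Unmarshal(raw, &conf)

	// ...and tells it what to do through environment variables.
	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		// A real plugin would create an interface in the pod's netns and
		// assign an address here; this only echoes a minimal result shape.
		result := map[string]interface{}{
			"cniVersion": conf["cniVersion"],
			"ips":        []map[string]string{{"address": "10.0.0.2/24"}},
		}
		json.NewEncoder(os.Stdout).Encode(result)
	case "DEL":
		// Tear down whatever ADD created; success is simply exiting 0.
	}
}
```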

However, storage works completely differently than that, and so does the container runtime. Like, we call it OCI compliance, but it's just, like, Docker was not built for this, and so we had to create an abstraction for it.

So I look at the operating system side of Kube as, like, that's the Wild West. That's the cluster-f***, for lack of a better term, right. That's where all of this comes in.

Of, like, how do we manage the kernel, how do we not manage the kernel? How do we manage everything, how do we not manage networking? That is very, very much to be determined by you and your team, and I think we're at the point now in Kubernetes where we've neglected the operating system long enough. We should probably start taking more ownership of what that looks like. Ubuntu, Fedora, all of these different Linux operating systems are built on the paradigm of having a single user use them.

Nobody really built Linux for enterprise applications like this, and I think it's time we start looking at what we actually need in the kernel. We don't really need calendars. We don't really need users and groups and permissions so that multiple people can work on the same Unix system anymore. This was all built for a different world.

What we need is we need access to the kernel, and we need good networking and that’s really what we need, and it’s unfortunate that we have to do that through this really noisy convoluted user space driven Linux world that we have today.

Cody Crudgington: That’s very interesting. So we talked earlier and it keeps coming up… We have abstraction after abstraction after abstraction after abstraction…

Is too many levels of abstraction too much?

Kris Nóva: I mean, I think so. Just to answer that question plainly, yeah I think so.

I can talk more about, like… Let's look at eBPF in networking.

Right now, if you deploy a pod in Kubernetes that needs network, it has to be able to talk to other components; even if Kubernetes was running on one node, it still uses localhost to communicate with itself.

There's just different components that need to, like, route over some concept of networking, whether it's a real network or not, and all of that has to go through iptables, and usually route tables. There's usually, you know, just in the networking stack alone, there's several layers of abstraction. So if you look at eBPF, that kind of just bypasses a lot of these layers of abstraction that… It's interesting, because I look at it like, I don't know if anybody's ever wired a car stereo before, but if you've rewired a car stereo enough, you can, like, get a harness that connects to a harness that connects to an adapter that connects to another adapter.

And there’s a good chance that you could actually just get rid of like three or four of those that were necessary over time and just connect the two on either end together.

And that’s what I think of when I think of like the modern day stack. We’ve had to go from this adapter style to that adapter style when this was user space, and then we pulled user space away, and now that’s user space and we keep pushing the user further and further away, so we can start to standardize things and…

Anyway, like at some point, we just need to take a step back and look at this mess of wires and say really we just have five wires coming in and six wires coming out. We can probably get rid of all of this stuff in the middle and just throw it away. Just make this one clean connection. 

In my mind that’s eBPF. That’s going straight from kernel, straight to user space, straight to your application, right. There’s no kernel modules. There’s no user space. There’s no… It’s just straight in straight out. Clean simple, just the way you want it.

Noah Abrahams: I'm reminded of some of those, like, meme-style images that are like a Thunderbolt to a USB to, like, an old-style mouse to an old-style keyboard to, like, old serial ports, and eventually it gets back to USB on the other end. We look at that and think that's hilarious and then forget that we actually just do that as part of our daily life as time goes on.

Kris Nóva: And the beauty of that is like this is my favorite part about humans.

To get to that pattern, you never did anything wrong. You never intrinsically screwed up anywhere along the way. You never intrinsically had a moment of saying that was a bad choice. That overarching bad pattern is a series of correct choices to get to this bad pattern, which is like that’s what I love about it. You can do the right thing.

You can be successful. I mean, I just imagine me drinking a cup of coffee at work like I'm the best infrastructure engineer in the world. I've never had anyone say anything bad about me.

I’ve met every OKR. I’ve gotten all the promotions. I’ve gotten raises.

You know I’m like big. I’m famous. I’m fabulous. And it’s like but I built this pile of crap and it’s just like well that’s because that’s what happens, right. When technology changes, you end up with this adapter of adapters of adapters of adapters. That’s it. Nobody did anything wrong, it’s just now we’re looking at the whole thing and saying, well, we don’t really need these 12 things in the middle anymore.

Cody Crudgington: So we’re going full circle then?

Kris Nóva: Yeah, I think that’s the term, right? You go full circle on USB.

Cody Crudgington: We have a question from the audience… I’m sorry. Sorry, Noah.

Noah Abrahams: Because the question from the audience sort of ties into this human aspect of this is how teams evolve, and this is how engineering orgs will eventually adapt it. I kind of want to focus on that space for a little while about the really human aspect of how this continues to evolve because it’s always the humans that have to implement all this.

But the question from the audience is, have you seen product team engineers be the ones to resist the changes driven from the engineers working on the system operations? That's, I mean, that's like an org question because it's a human question.

Kris Nóva: So the question, like, let me make sure I'm understanding this right. The question is, have we seen product management say no to engineers?

Noah Abrahams: I think so.

Kris Nóva: Okay, I think that's probably the question. I think it is, because that's certainly a question we see at Twilio often.

And this goes back to the you don’t need a server, you need a database.

Which, just getting good at saying that and getting good at detecting that, that's a skill. This is just, like, basic social skills here, which is like, hey, we need 30, you know, servers in Amazon. It's like, well, why do you need 30 servers in Amazon? Well, we thought we needed 15, but it turns out it's actually cheaper if we get this size versus that size, and so anyway, instead of getting 15 we're going to get 30 of these. Okay, well, what do you need those for?

Well, we found out that we need five servers of redundancy, and in order for us to run our etcd cluster it has to be three nodes of quorum, so we now need 15 servers. And it's like, oh, okay, well, that's, you know, that's $1,000 a month for 30 servers in Amazon. Oh, okay, cool, or however much it is, right.

And then it's like, or, that's $10 a month if you go and run the managed etcd service.

So, like, it's unpacking why they need what they think they're asking for and realizing that really what they care about is that they have the etcd API.

Anyway, I'm a firm believer that SREs and infrastructure engineers are the Navy SEALs of software management. We have to do everything.

We have to do everything everybody else does, but we have to do it underwater, with scuba gear on, while people are shooting at us, in the middle of pitch black darkness, right?

But, like, we still have to do the same thing everyone else is doing, and I feel like that's it: we have to be people managers, we have to be big picture thinkers, and we have to have really, really intense engineering insight, to be able to not only engineer our systems, but actually have the insight to detect what somebody might be asking for versus what they actually need for the sake of the ecosystem, or the infrastructure in general. So I would say the concrete answer to the question, have I seen product managers push back on an engineering org? Yeah, all the time.

Whether it's infrastructure or not, like, that's just, you see that all the time. I think a good product manager is just capable of extrapolating the difference between we need 30 servers and we need an etcd service, right. I think a good architect or a good engineer is saying, okay, well, if we need that etcd, like, this goes back to, like, if you give a moose a muffin, right? Well, if we need etcd, we probably need Kubernetes.

And, and you know, if you need Kubernetes, you're probably going to need a bunch of servers, and if you need a bunch of servers, you're going to need to manage them, and then you're going to need clustering. Yeah, there you go. So yeah. I've seen it. I think it's a valuable thing to be able to say no.

But one of my favorite things like, if folks walk away from this talk and they remember one catchphrase to bring with them to the office, it’s there is a difference between not right, and not right now. 

That’s like a huge huge fundamental difference. We don’t have to say no, we can say yes, but in a different form. Or we say yes, we will take that into our, you know, this is product management, we will take that into our next release consideration.

We will prioritize that for our next release. As it turns out, the fact that we're not saying no, we're saying not right now, is actually something, you know, an engineering org can take and work with.

We can take that and buy ourselves much needed time to not get further into technical debt, and we have created a clear paradigm of feature-driven work that's going to allow us to release the feature as needed.

So, like this is why product managers and why software engineering teams work the way they do is because we need to actually get to a point where we’re doing more like SaaS style development, but for our infrastructure. Where we have releases and we can communicate about those effectively and that, yes, we understand that your problem is important. Yes, we’re doing our best to make you as self sufficient as possible. However, you’re gonna have to be patient because that’s coming out in 1.1, which will be out in 30 to 60 business days.

As it turns out, that release-driven way of talking, that we have a release coming out, typically doesn't exist. The ability to even say those words, I've found, typically doesn't even exist in infrastructure organizations. So that's the change. That is the change in thinking and the change in culture.

Noah Abrahams: And there's a thank you from a member of the audience about the difference between yes and absolutely not, saying it took them decades to get to that mindset.

Kris Nóva: Yeah.

Noah Abrahams: Do we have any other questions, we want to get to before we move on to our fun sort of rapid fire questions?

Kris Nóva: I have a question. We have what 10 minutes left? I’m just gonna keep an eye on the clock here.

Noah Abrahams: Yes. Okay, so there's that. There's that. I guess we move on then, unless you have a question?

Kris Nóva: No, that was it. How much time do we have left?

Noah Abrahams: Oh yes, it’s 10. Okay, gotcha.

Cody Crudgington: Noah, why don’t you run this one.

Noah Abrahams: Okay, so we start with the rapid fire questions, which are designed to be fun. Start with: yes or no, pineapple on pizza?

Kris Nóva: Yes, absolutely yes. Hundred percent yes and everybody out there and who knows I’m talking to them, the answer is oh yeah all day, every day. Not only that, upside down pineapple on pizza, I’m all about it.

Noah Abrahams: Upside down?

Cody Crudgington: Upside down pineapple on pizza. I love it.

Noah Abrahams: Favorite climbing spot?

Kris Nóva: Ice. Anywhere there’s large amounts of ice. My favorite route I’ve climbed is the icefall at Mount Rainier probably, although some of the routes in Iceland are just such a different experience. Maybe Scafell? That is another good one.

Noah Abrahams: Awesome. Favorite hobby besides climbing?

Kris Nóva: Favorite hobby besides climbing… Does computer science count?

Noah Abrahams: Sure.

Kris Nóva: Yeah, I mean, because here's the thing, though: I found out that computer science holistically was also a means to an end for my career, to survive capitalism, but it also was something I did for joy.

And now, like, I've just done a good job of compartmentalizing: this activity may or may not be using the same tools and technology I use at work, but I will be doing this activity for fun. I approach that dramatically differently than I would approach, like, a work-related thing.

Being able to do that took a long time to learn that skill and being able to compartmentalize that for myself took a long time to learn that skill.

Anyway, it took me 10 years to get from like I love computers! Everything computers! To I can’t believe this industry hurt me so bad that they took away the one thing in this world that I loved for me and turned it into like this bittersweet paycheck driven like whatever. Now the thing I love is the thing that hurts me.

And how dare you traumatize me like this. I’ve gone from that to like well as long as I compartmentalize it, as it turns out, working on the kernel is actually fun again.

So that might be my second hobby and if I had to pick one other than that, other than climbing, photography. I love photography. I love taking pictures.

Cody Crudgington: Sweet. Didn’t know that.

Noah Abrahams: Today I learned… Favorite instrument.

Kris Nóva: Guitar. My guitar. That's that thing right there.

Noah Abrahams: Okay awesome. Favorite Open Source project.

Kris Nóva: All time?

Noah Abrahams: Yeah, or current you could do, maybe both.

Kris Nóva: Wikipedia. There is so much more to Wikipedia than what people think there is. It is truly the collaboration over competition experiment that this world needs.

Like, it touches the economy. It touches, you know, the political danger of every country. If you voted, if you've been pissed off at politics in the past, like, 10 years, Wikipedia is concretely related to every experience you've had, because it approaches the problem of managing people with the idea that collaboration and kindness are more valuable than competition and self-interest. It fundamentally breaks capitalism from an economic standpoint, and I love it.

All about it, I want more Wikipedia, more open collaboration, burn it down, we need to start being nice to each other, and I think Wikipedia is just the living, breathing example of that working, and I freakin love it.

Noah Abrahams: I love that. Work you’ve done that you’re most proud of.

Kris Nóva: Work I've done that I'm most proud of. NAML. I hate to say it, but, like, this just happened, like, over the past few weeks, and it's been totally… like, it started out as me just getting frustrated, and I didn't even realize this was such a thing that people didn't realize was an option until I actually started writing it down and people started to react to it.

But I’m all about giving people patterns and software to make themselves self sufficient like, right? That’s what we’ve been talking about for the past 30 minutes.

And the moment I realized that there was an opportunity for people to, like, not only learn a life skill in writing Go, but also solve a huge problem in the ecosystem? That's like a good win-win. That's like one of those few times in the world where it's like, not only did we provide power to everybody in the middle of the desert, but we also created drinking water. As it turns out, dams are a really important thing, right. It was just kind of a win-win that solved a bunch of problems that made everybody happier.

And didn’t really have too much of a dramatic impact, other than killing a bunch of wildlife and like ruining our deserts but, like you know, whatever.

Cody Crudgington: That's awesome. For the sake of time, today we're going to be making a contribution to a charity, and this charity in specific, you have something to do with it. Can you explain and maybe talk about your charity a little bit, just so people know?

Kris Nóva: Yeah! OK, for the record, this is my first time ever publicly talking about the Privilege Escalation Foundation, so we have a few minutes here, so I'll just be quick, because I actually… This is like… First off, let me say if anybody out there knows anyone who wants to do, like, a podcast or a video or an interview or whatever on the specifics of the charity in general, like, please reach out to me. I'm great, I'm great at this stuff, if you can't tell.

So I would love to talk more about it, but I have been dramatically hurt by private healthcare in this country and it’s… I say as a transgender person I’ve been hurt.

But it’s not because somebody woke up one day and decided that we hate all trans people and we’re going to like go out of our way to make it hard for them, and we have the secret agenda and ample free time and available emotional resources to go hurt people.

It’s just because people did it… Being transgender has just been misunderstood and with that misunderstanding comes neglect. So like yes, our healthcare system is not structured in favor of trans people, but that’s not because people are out to get us necessarily. I mean sure there’s some of those out there don’t get me wrong, but like it’s mostly just because people don’t understand that this is something that people need to be healthy and to be happy.

Most people view healthcare as, like, I got sick and got strep throat and I need a doctor, not as, I'm going to go kill myself if I can't adjust my endocrine system to stop me from going through puberty. That's just not a thing our society talks about often enough. So anyway, because we're in that situation, I found that a lot of trans people are, you know, hurt, they are ending up dead, and a lot of them are brilliant people.

And so I started a nonprofit that, again, this goes back to Wikipedia and burning down late stage capitalism, we're not for profit. I don't want to make a profit; kindness and collaboration are more important to me than making money. I want to just give money to people. So anyway, as it turns out, other people have enjoyed that, and I have a particular set of skills to enable this, and I decided that solving the world's problems was too big of a thing for me to bite off, but I can certainly solve a small amount of problems for a small group of people, of transgender people in science, technology, education, math.

So the idea is that if a mind is beautiful and the mind is brilliant, it should never be held back by the constraints of its body.

And that’s what the nonprofit aims to do. We just make the problem of private healthcare, as it appears in the eyes of a transgender person, go away. If you need HRT, we will pay for it. We won’t touch insurance. We will pay for it. If you need facial reconstructive surgery, if you need gender reassignment surgery to keep you from killing yourself, to keep you happy, to turn you into the person you need to be to even make the money it would require for you to have this surgery. We will write the check.

Now that's literally all we do. We have a Discord. We hang out. We have a set of policies in place that, like, enable this. I manage the money. I never decide where the money goes. We have volunteers who have taken time out of their day, and they have a rubric they put together that says what we look for in how we give money away, and right now our operating costs are like $100 a month.

Every dime we get just goes straight to trans people. Anyway, it's called the Privilege Escalation Foundation, which, if you know anything about me, you'll find the irony in that name, and it doesn't matter if you can give $10, if you can volunteer, or if you just want to share the name, it's super helpful.

Cody Crudgington: Is there a website people can go to donate and help out with this?

Kris Nóva: Yeah, privilegeescalation.org.

Noah Abrahams: I’m heartwarmed. It’s wonderful.

Kris Nóva: Thanks!

Noah Abrahams: We're coming up on time. We have a couple of questions in the Q&A that we didn't get to because they came in near the end. Kris, would you be willing to maybe hop over to Twitter and we can take care of some things over there?

Kris Nóva: Yeah. I can, I can hop on Twitter. I will say that right now Twitter has my account marked for like click here, if you want to see this because it might contain sensitive content or something.

So, like I’m working on getting that taken care of, but yeah. I’m happy to hang out on Twitter, Slack, or even here. However you guys want to do it, I think I’m free for the next few minutes.

Noah Abrahams: Awesome, so we’ll take the questions and bring it over to the @stormforgeio Twitter, so people can publicly see whatever the answers are, and we’ll take it from there.

Thanks to everyone for coming. All of your attendance benefits the charity we were just talking about.

Kris Nóva: Thanks everyone for coming. I do appreciate it. Thanks for having me. This has been great to talk; we should do more of these, this was fun.

Noah Abrahams: Thanks. We'd be happy to have you back anytime.

Kris Nóva: Cool, count me in.

Noah Abrahams: Stop recording now.