Hello, everyone, and welcome to today's webinar titled, Kubernetes 101, an Introduction to Containers, Kubernetes, and OpenShift from Red Hat Training. This webinar is being presented by John Walter, Red Hat's solutions architect. John is a solutions architect on the Red Hat Training and Certification team, specializing in cloud and DevOps technology. John is a Red Hat Certified Architect with a deep knowledge of Red Hat Enterprise Linux and its layered products.
During the webinar, everyone's phones will be muted. So if you have any questions, please find the Q&A icon at the bottom of your screen and post your questions there whenever they come up. We are recording the session today, and we'll send off a copy to everyone near the end of the week. All right. Without further ado, John, you can take it from here.
Thanks, Michelle, and thanks again, everybody, for joining today. Let's go ahead and jump right into it. Again, I'm John Walter, I am a solutions architect here at Red Hat. I work specifically on the training team. Today we're going to be talking about Kubernetes, really kind of the underlying technology for our product, Red Hat OpenShift, but first, we're going to talk a little bit about containers. We're going to start with really the smallest unit and talk about why people are actually adopting containers. What is a container? What problem is it solving?
From there, we'll move into an overview on Kubernetes, really talk about container orchestration, what is container orchestration? And then we'll move into OpenShift, which is really kind of our enterprise Kubernetes platform. We'll have some time to talk a little bit about some of the training courses and things like that that we have to offer as well. And then we'll definitely have time for Q&A at the end. So make sure you drop your questions into the Q&A.
So let's dive into it. What is a container? First, though, the cheesiest buzzword term within the industry right now, at least in my opinion: digital transformation. That's what everybody is talking about. And with digital transformation, really what you're talking about is how do we become a great software company. There's a lot of different kinds of components, right?
On the left side of this chart, you have sort of the old way of developing applications, right? You have these waterfall siloed approaches, you have data centers that you're deploying into. And what we're moving towards is more of a continuous integration, continuous delivery development model. We're moving into deploying into the cloud and to our data centers, as opposed to just into our data centers.
And really the goals of each of these steps, these transitions, should be a focus on the business outcome, right? We're trying to increase speed, we're trying to increase agility, we're trying to increase control. And what do those things really mean, right? Speed really means how quickly can we deploy new ideas. Agility means, when we look at things like updates, how can these changes be done more simply and allow for greater experimentation among these teams? And on control, it's really looking at the infrastructure side: how do we mitigate risks, security issues, and bugs? How can we approach those issues more quickly?
And this really all goes back to agile integration. This is a term that has been around at least as long as I've been working within tech. And really what we're trying to do is define a new method for integration. We need our applications, and even the way that we're approaching the design of those applications, to be more agile. We need them to be scalable. We need them to exist not just in our private data centers, but in public and private clouds, the hybrid cloud. And ultimately, we need to be able to iterate more quickly and more efficiently.
If we take a look at this chart, on the left side it's a pretty traditional integration, right? You have your enterprise service bus, it's being served from the servers, pushing out these applications. This is how we traditionally had been doing application development and deployment for the last 20, 30 years. Whereas now we're moving more towards modular applications. We're moving towards microservices. We have the Internet of Things. We have AI and machine learning. And all of these things require a new, more distributed integration. They require more rapid iteration in the development cycles. And so that's really where containers come into play.
And so what are containers? It really kind of depends on what side of the house you fall on. If you're on the infrastructure team, there is a very technical meaning to what a container is. It is an application that is simpler and more lightweight than a virtual machine. It's something that is easier to port to multiple environments, so your container is going to run the same way on this RHEL 7 system as it's going to run on a physical system, as it's going to run on a RHEL virtual machine that's running on AWS.
When you talk to the application team, this is a little bit different. It's a way of packaging your applications with all the dependencies. And so now you no longer need to rely on these infrastructure teams to spin up an environment for you; as a developer, you can just have all of those dependencies coupled with the application itself and not really ever have to worry about where it's being deployed to.
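As a small illustration of that packaging idea, here is what a Containerfile (Dockerfile) for a hypothetical Python app might look like. This is a sketch, not anything from the webinar; the base image, file names, and the app itself are illustrative:

```dockerfile
# Base image supplies the OS userspace and Python runtime the app
# depends on (a Red Hat UBI image, as one example).
FROM registry.access.redhat.com/ubi8/python-39

# Copy the (hypothetical) application and its dependency manifest in.
COPY requirements.txt app.py ./

# Install dependencies at build time, so the image carries everything
# the app needs; nobody has to prepare the target environment.
RUN pip install -r requirements.txt

# The command the container runs when started.
CMD ["python", "app.py"]
```

The resulting image is what gets handed from the Dev team to IT Ops, and it runs the same way wherever there is a container host.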
It opens up a lot of interesting possibilities for both the infrastructure and the development teams, but it also adds kind of a new layer of complexity. But we'll jump into that in just a second. Really, what we're talking about on the infrastructure side is moving away from traditionally having our applications hosted on virtual machines, moving more towards having them deployed into containers. And so what really is the difference? What's the benefit of moving from virtual machines towards containers?
Traditionally, organizations are kind of co-locating their applications on a single virtual machine. So on the left side you have, in purple, and hopefully there are no colorblind folks on here, because I do have a couple of green slides in here, you have your virtual machine, and it has multiple applications that are running on it. That virtual machine is providing all the operating system dependencies, it's providing the kernel. And while the virtual machines are isolated from each other, those applications don't have any isolation from each other.
So that could potentially negatively impact the applications, the files or configurations, the behavior of the applications, because there are other applications running on that system. Meanwhile, on the right side, you have what container orchestration looks like, right? Linux containers share a container host and they share the kernel, but at the same time, they're using the same kinds of technologies that the OS has, things like cgroups (control groups), SELinux, kernel namespaces. And all of this provides isolation so that one container can't access or impact the other containers, and can't impact the host itself.
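Those isolation primitives aren't exotic; on any Linux host you can see the namespaces a process lives in. A minimal sketch, assuming a Linux machine with /proc mounted:

```shell
# Every Linux process runs inside a set of kernel namespaces (pid, net,
# mnt, uts, ipc, ...). A container runtime starts the container's process
# with fresh namespaces instead of the host's, which is what keeps one
# container from seeing another's processes, mounts, or network.
ls /proc/self/ns
```

Run the same command inside a container and you would generally see different namespace identifiers than on the host, which is exactly the isolation being described here.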
So this is now adding a layer of isolation, if necessary, right? Sometimes you do want those applications to be communicating, but not if they're completely disparate applications that just happen to be running in the same place. Linux containers are, kind of in a way, a virtualization technology, right? You have virtual machines that are meant to virtualize the hardware, and then you have containers that are really meant for virtualizing the application processes irrespective of where they're running.
Sometimes it is possible to run every app in its own VM, but the operational overhead, the resource costs of the multitude of extra virtual machines that you need, typically is going to invalidate any kind of monetary benefit. And really, VMs have had the advantage of providing that hypervisor-level isolation for the application, but that comes with the cost of having to run a complete operating system, and having to keep up that operating system and provide more memory and more CPU power to these virtual machines, right?
Whereas with VMs, the compute and memory capacity is predefined, right? You don't have a way of bursting whenever an application requires more memory or more compute power. Containers, on the other hand, provide isolation through those kernel technologies I was talking about before, things like control groups or namespaces.
These are technologies that have been battle-tested for decades at this point. It's the same technology that is inside of our Linux distribution, Red Hat Enterprise Linux. It's been battle-tested by Google and the Department of Defense here in the States. And so a lot of work has gone into these technologies to battle-test them, and now we're introducing them into this kind of new paradigm. Well, that's something that any infrastructure person can come in off the street and say, "Hey, I know how to do this part, at least." So it's not really having to relearn everything.
And because containers are using that shared kernel, they're using a container host, it reduces the amount of resources that are required for the container itself, and it's a lot more lightweight than a virtual machine. Containers are providing an agility that's just not really feasible with VMs. For instance, it takes just a few seconds to spin up a container, whereas it may take minutes or sometimes hours to spin up a virtual machine, depending on how large it is.
And with that, it comes with more compute, more memory resources. And what the container orchestration tool, which we'll talk about in a little bit, allows you to do is those burst modes, right? If your application is getting hit hard, maybe it's a web server, maybe you're Amazon, right? And you've got Amazon.com, and Black Friday, or I guess right now it's Prime Day, right? So Prime Day comes, you need those applications to scale as the traffic starts to scale.
And that's something that's a lot more difficult to do with virtual machines. Whereas with containers, you can basically build in the logic so it does it on its own. And really this ties into DevOps, right? DevOps adoption has really skyrocketed in the last few years. And it really coincides with the proliferation of containers. And this is in fact not really a coincidence, right? Containers have really played an important role in enabling organizations to start implementing DevOps practices without having the obstacles that they once faced with virtual machines.
If you talk about VMs as the unit of compute, applications are developed by the Dev teams, and that finished application is handed over in arbitrary packages, which may differ between different teams. Those are then handed over to the IT Ops teams, and they're responsible for building, and deploying, and managing the virtual machines, and also the operating system, and the libraries, and the application dependencies, and the application itself.
And so it kind of creates this conflicting atmosphere between your Dev and Ops teams, and any other team that's involved in delivering the application, since now the IT Ops team is being forced into ownership of this sometimes undocumented, sometimes non-standard layer of the technology stack, right? And so they're typically not going to be as familiar with the application itself as the developers who developed it.
When you look at containers as a unit of compute, there's a really clear boundary between the layers of the technology stack, between the IT Ops teams and the Dev teams and what they own and what they manage. So your Dev teams are deeply familiar with the application: they build it, they package it with the dependencies and any configuration that it might need. And so they get to own that application now, and its configuration, and the packaging, and all of that becomes a container image. And that container image is immutable, right?
So it doesn't really matter now where it's going to be going. IT Ops doesn't have to worry about it, right? They're delivered an image, a container image, that's the application and everything bundled with it. And it's essentially just a standard package, right? And it's the same regardless of the technologies that were used to build the application. It doesn't matter if it's a Java application, or a Python application, or a Golang application, it's the same image, right?
And so now the IT Ops teams just own the infrastructure. They're going to own the container host, so the virtualization layer. And this is now kind of a standard for running those container images. Now you can deploy and monitor and manage those containers across any infrastructure using these standardized procedures. And really, that's what IT Ops teams are looking for.
And so really, with using containers, your Dev and your Ops teams can really work at different speeds, they can upgrade and downgrade servers or applications independently of each other, they can easily move applications between different environments, and really, it's just establishing kind of a clear boundary between the responsibilities of these different teams, which allows them to really work seamlessly together. Right? It sounds kind of contradictory, and I've said that twice now. It almost sounds like you're creating a boundary, so how are they going to work more closely together? Well, now there's a clear delineation in responsibilities and roles. And so now it's really, really easy to say, "Hey, as a developer, we have this application, it's ready to go." And boom, they can hit the ground running on the IT Ops side.
So a couple of other things around application portability talking about virtual machines, why we're moving away. There's been an effort to standardize the VM format for a very long time. It's never really succeeded. VMware had a go at it, AWS tried to do it where they were tied to AWS hypervisors, but there was really never a standard for the virtual machine image.
And so moving an application to other infrastructure typically required you to kind of build from scratch, redoing the verifications done on the previous infrastructure, things like that. And so more and more of the industry is moving towards an immutable infrastructure. And part of that is to simplify deployment and simplify management of software. And so using virtual machines requires you to build a new VM every time there is a change, and you have to ship it to the infrastructure.
We worked with Netflix pretty significantly. We actually had a big session in the 2019 Red Hat Summit, where they talked about all of their needs. They had basically... When they needed to make a change in production, they had to bake that change into a new virtual machine, and that was an AWS AMI. And then they had to ship the VM to production. And since then, they've talked about how, if Linux containers had been as prolific as they are now, that's the technology they would have chosen at that time. And what they have since moved towards is OpenShift.
And so really, we're talking about containers providing application portability. They have this standardized image format that comes from the Open Container Initiative, the OCI standard. And they're really all dependent on Linux, right? It gives teams a lot more control over the infrastructure where they're deploying the applications, and the container host provides a common ground for running containers on any infrastructure, whether it's your laptop. You can actually go and download an OpenShift instance to deploy in VMs on your laptop right now, if you wanted to, with CodeReady Containers; you can look that up.
But portability doesn't guarantee compatibility. Portability is often believed to mean that you can take a container image and run it on any container host built on any Linux distribution. And that's not technically accurate, right? It still needs to be... If it's a 64-bit binary, it needs to run on a 64-bit host, right? You can't run that on a 32-bit host. But there are all these kinds of different inconsistencies. And they can lead to some sporadic failures. They can run into harder-to-debug issues. Or just complete failures where the container won't work at all.
And so that's where there's a need for orchestration, right? You can't, at least at the enterprise level, expect to run containers just at the OS and not have a way of orchestrating all of these different applications that are running. And how do I scale? How do I make sure that these applications are either isolated from each other or exposed to each other?
And so that's really where Linux comes into play, but there has to be something on top of Linux, right? From a container standpoint, the technologies that drive containers are the same technologies that, if you're a Linux administrator, you're already used to, right? Interacting with and managing the container environment is very much like interacting with and managing a Linux operating system.
So, containers in a nutshell, and we'll jump into the Kubernetes piece next. But containers are really transforming the way that we're thinking about application architecture and the speed at which we're delivering our applications on those business requirements. They're promising application portability across hybrid cloud environments. They're allowing developers to focus on building their applications instead of having to worry about the underlying infrastructure or any execution details.
And while containers are removing complexity through minimalism, managing container deployments is a new challenge. Setting myself up for this next slide. Containers are going to be deployed for a lot shorter amount of time than a virtual machine is. There's typically going to be greater utilization of the underlying resources, and with container technology, suffice it to say, you have to manage a lot more objects, and there's a lot more turnover. And so this is what introduces the need for automated, policy-driven management.
And so that's why teams are starting to move towards Kubernetes. It has a rich set of complex features, it allows them to orchestrate and manage containers across their entire life cycle, and it's really emerged as the de facto standard for container orchestration and management. And it's become a critical platform for organizations to understand.
So let's get into it: what is Kubernetes? And this is, by the way, my favorite slide. I love throwing out the Michael Scott "Somehow, I Manage" slide here. But containers are popular because they allow the application to run as an isolated process, right? They're very similar to a lightweight Linux OS, right? It's essentially a zip file that has the application and all of the dependencies within there.
And that bundling eliminates the problem of, "Hey, it worked in my environment. Why isn't it working in yours?" But as these applications are becoming more and more kind of discrete functional parts, each of which can be delivered in a container, it means that for every application, there are more parts to manage. We're going to talk a little bit about microservices later on. That's really where you traditionally had these huge monolithic applications, and we're starting to break them into those discrete functional parts. So now, as opposed to managing one giant application, you're managing maybe five tiny applications.
And so there are a lot of trade-offs here, and that's where we need that automation to kind of fill that gap for us. The complexity of managing applications with more objects and greater churn introduces a lot of challenges, right? Configuration, service discovery, load balancing, resource scaling, discovering and fixing failures. Managing that complexity manually is impossible, right? If you wanted to spin up Docker and run a few containers, that would be no problem, but we're talking about running for the enterprise. And that's typically going to be hundreds or maybe thousands of different nodes that you're having to manage, and doing that manually would be impossible.
And so if you have an OpenShift or a Kubernetes cluster, it's typically going to run thousands of containers at a time. And updating those large clusters isn't really feasible without automation. And so that's where Kubernetes comes in. Kubernetes delivers production-grade container orchestration. It automates a lot of the container configuration. It simplifies scaling. And it manages resource allocation. We talked about that bursting a little bit earlier on. And Kubernetes can run anywhere, right? So if you want your infrastructure to be on site, if you want it running on AWS, if you have maybe OpenStack, a private cloud vendor, or maybe a mix and match of both, or all of the above, right? That hybrid cloud, Kubernetes is able to deliver across each of these environments.
So, Kubernetes in a nutshell. When we're talking about complex applications, they have multiple components, and containers are vastly going to simplify updates; placing each component in a container makes it really simple to make changes without having to worry about those ripple effects. We talked about those monolithic applications that we're now breaking into these discrete functional parts. Well, imagine you're in there and you're responsible for just one piece of this giant application, but some commit that you make has this kind of cascading effect where everything stops working across the entire application.
Well, now with moving towards microservices, or just containers in general, these are applications with a single function. They naturally benefit from containers because containers provide that clean separation between the components and the services. So Kubernetes is this open source container orchestration platform. It helps to manage these distributed containerized applications at a massive scale. You tell Kubernetes where you want your software to run, and the platform takes care of pretty much everything else for you. And it provides you a unified API to deploy web apps, batch jobs, databases. Applications in Kubernetes are packaged in containers to cleanly decouple them from the environment. Kubernetes automates the configuration of the application, and it maintains and tracks resource allocation for you.
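As a concrete sketch of telling Kubernetes what you want and letting it handle the rest, here is a minimal Deployment manifest. The names and the image are hypothetical, not from the webinar:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app              # illustrative name
spec:
  replicas: 3                  # desired state: three copies running
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

You would apply this with `kubectl apply -f deployment.yaml`, and from then on Kubernetes continually reconciles the cluster toward that desired state, restarting or rescheduling containers as needed.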
So really, Kubernetes is container orchestration, right? It means that Kubernetes figures out where and how to run your containers, and more explicitly, it provides these three functions, right? So first, schedulers and scheduling. The scheduler is going to basically compare the needs of a container with the health of your cluster, and then it's going to suggest where it thinks the container might fit. And so think of a scheduler sort of like a thermostat. And I know that for some of you... I'm going to do this in Fahrenheit, I can't do the conversion in my head, but you set your thermostat to 70 degrees, right? And you expect it to maintain that temperature. And so what your thermostat is doing is constantly checking the temperature and then resolving it against the setting, right? It has to be 70 degrees on a regular basis.
And so that's really what Kubernetes is doing, right? It's seeking to maintain your cluster health on your behalf. It is essentially minding the temperature for you so that your developers can just focus on building their applications, or your infrastructure teams can just worry about putting fires out when they need to, as opposed to having to mind the health of each of these clusters, each of these applications.
The second feature: service discovery and load balancing. In any system, service discovery can be a challenge, and Kubernetes is not an exception. The more services that make up your application, the more difficult your application is to track and manage. Thankfully, Kubernetes automatically manages service discovery. So we ask Kubernetes to run a service, maybe a database or a RESTful API, and Kubernetes takes note of the service. It can return a list if we ask about them later, and then it checks the health of the individual services.
So if Kubernetes detects a crash in your service, it's going to automatically attempt to restart it. Beyond these kind of basic checks, it also allows for more subtle health checks. For example, maybe you have a database that hasn't crashed but it's just performing really, really slowly, Kubernetes can track this, it can direct traffic to a backup if it detects slowness. And then it can also incorporate load balancing. So modern services, they're going to scale horizontally by running duplicates of that same service.
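Here is a sketch of how those two features look in manifests. The names, paths, ports, and image are all illustrative, not from the webinar:

```yaml
# A Service gives a set of pods a stable name and load-balances across
# the healthy replicas; other pods reach it at http://hello-app:80.
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app             # traffic goes to pods carrying this label
  ports:
  - port: 80
    targetPort: 8080
---
# Probes implement the "subtle health checks" just described; this
# snippet would normally live inside a Deployment's pod template.
apiVersion: v1
kind: Pod
metadata:
  name: hello-app-probe-example
spec:
  containers:
  - name: hello-app
    image: quay.io/example/hello-app:1.0   # hypothetical image
    livenessProbe:             # restart the container if this fails
      httpGet:
        path: /healthz
        port: 8080
    readinessProbe:            # stop routing traffic until this passes
      httpGet:
        path: /ready
        port: 8080
```

A pod that fails its liveness probe gets restarted; one that fails its readiness probe simply stops receiving Service traffic, which covers the "slow but not crashed" case.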
So the load balancer becomes a really key piece that is going to distribute or coordinate traffic across all of these different duplicates. And really, it makes it easy to kind of incorporate custom load balancing solutions. If you have an HAProxy, or maybe a cloud-provided load balancer from AWS, or Azure, or Google, or maybe even OpenStack, those things can be plugged right into Kubernetes as well.
And then finally, resource management. Every computer has a limited amount of CPU and memory, right? And those limits are becoming bigger and bigger, but they're still going to be capped at some point. An application that parses text maybe is memory-bound, and an application that transcodes video might be CPU-bound. So that means the video application is going to run out of CPU first, whereas the text parsing app is going to run out of memory first. And completely running out of either one of those resources is going to lead to the system becoming, at the very least, unstable, but likely, probably crashing.
So there has to be proper resource management. And the result of that is intelligent scheduling. Right? We talked about scheduling before, and resource management as a part of that, it's tied to that. Kubernetes is able to schedule these applications to appropriately use resources like CPU or memory while also staying cautious about over-utilization that leads to system instability. And if you're deployed to a public or private cloud, you can also build in resource management so that as you start to approach those caps, it can also scale the underlying OS to provide more resources to you as well.
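In Kubernetes, that resource management is expressed per container as requests and limits. A hedged fragment of a pod spec, with hypothetical names and numbers:

```yaml
# Requests tell the scheduler what a container needs so it can be
# placed on a node with capacity; limits cap what it may consume,
# guarding against the CPU- or memory-exhaustion scenarios above.
containers:
- name: video-transcoder       # hypothetical CPU-bound service
  image: quay.io/example/transcoder:1.0
  resources:
    requests:
      cpu: "500m"              # half a core reserved for scheduling
      memory: "256Mi"
    limits:
      cpu: "2"                 # hard cap: two cores
      memory: "512Mi"          # exceeding this gets the container killed
```

The scheduler only places a pod on a node whose remaining capacity covers the requests, which is how over-utilization and the resulting instability get avoided.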
So, the benefits of Kubernetes. Scalability is a big one, right? It's going to automatically scale your cluster based on your needs. It's going to handle the scale-out and the scale-back-down for you, right? So hopefully this will help to save your team and your organization resources and money. I gave this example a little bit earlier, but Amazon is right in the middle of Prime Day. So they're having to scale up huge. They for sure have these enormous clusters that are running probably tens of thousands of different containers.
And so when you have kind of a rough idea of what traffic looked like on Prime Day last year, you can start to scale out, but you can also have the platform take care of some of that for you automatically. There's essentially a cluster autoscaler and a horizontal pod autoscaler that are built in. And those adjustments are just going to happen on the fly.
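A minimal sketch of that built-in horizontal pod autoscaler, with hypothetical names and thresholds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app            # hypothetical Deployment to scale
  minReplicas: 3
  maxReplicas: 50              # room to burst on a Prime Day-style spike
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # add pods when average CPU passes 70%
```

With this in place, the platform adds replicas as load climbs and removes them again as it falls, with no one on call to do it by hand.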
Portability. Kubernetes... I said this earlier, but Kubernetes can run anywhere, right? It runs on site in your data center. It runs in the public or private cloud. It runs in that hybrid cloud, which we're all trying to move towards as well. But no matter where it's running, the same commands and the same behavior to manage Kubernetes exist across all of the above.
As far as consistent deployments go, I know we've talked about this just a little bit earlier on, but Kubernetes deployments are consistent across the infrastructure because of the layer that sits between the actual infrastructure and the container itself. That's what Kubernetes is. And so containers really embody the concept of immutable infrastructure: all the dependencies and the set of instructions that are required to run an application are bundled with the container.
And so if you've heard the pets versus cattle argument before: essentially, in the past, our monolithic applications and our virtual machines were sort of our pets, right? We spent a lot of time creating them and patching them. They were up for a very, very long time. Whereas containers are the cattle, right? They're not really supposed to have a lot of uptime. They're typically going to be spun up and spun right back down in a shorter period of time. So because they're immutable, those configuration changes are going to be prohibited unless we make an actual change to the application. And so there's not really a fear in killing off a container and starting another one in its place, because there's a consistent experience between each of those deployments.
And then finally, separated and automated operations and development. It's really common for operations and development teams to be in contention. I talked about this a little bit earlier. Typically, your Ops team is going to value stability. They're going to be more conservative about change. Development teams typically value innovation. They prize a high change velocity. So Kubernetes resolves this conflict: because there is a lot of intelligence and automation built into Kubernetes, operations teams can feel confident in the stability of the system, whereas for developers, containers are saving them time, right? It's really fostering this CI/CD DevOps loop. So now they can have really rapid iteration cycles.
So, just a couple more slides and then we'll jump into OpenShift, but I wanted to give kind of a brief overview of the basic Kubernetes architecture. So what you see here is essentially your control plane and your worker nodes. In the past, these were maybe called master and worker nodes. But essentially, together they run the applications or the services.
So on the control plane side, it's roughly the equivalent of the old concept of a master node; it acts as the brain of the Kubernetes cluster. So this is where those features we talked about before, scheduling, service discovery, load balancing, resource management, all of that is provided at the control plane. And in this high-level architecture discussion, we won't get into a lot of the functions in here, but essentially, you have your API server; that's the point of contact for any application or service. The server determines whether a request is valid and whether the requester has the right level of access, and then forwards the request.
In here is also etcd. So if the control plane is the brain, then etcd is where the memories are stored, [inaudible 00:29:54]. And so a Kubernetes cluster without etcd is like a brain that can't make memories, right? It's a fault-tolerant, inherently distributed key-value store, and it's really a critical component of Kubernetes. It really just acts as the ultimate source of truth for the cluster, and it stores cluster state, configuration, things like that.
On the worker node side, this is what's actually running the application or the services. So there are a lot of worker nodes in a cluster, and adding new nodes is how you scale Kubernetes. And within these nodes are a few components. One is the kubelet, and that's essentially just a tiny application that lives on every single worker node, and it's what communicates back to the control plane. So think of this sort of like the arm: the control plane sends a command, and the kubelet is what executes the command.
And then you have the container runtime engine. So this is a container runtime that complies with the standards managed by that Open Container Initiative, and it runs the container applications themselves. So it's the conduit between the container and the underlying Linux.
But what's missing here? A lot of stuff. Kubernetes offers that portability, the scalability, the automated policy-driven management, but it's not a complete solution, at least not for the enterprise. It doesn't include the components that are needed to build, run, and actually scale containers. It doesn't have the OS, there's no continuous integration or continuous delivery baked into this, and there's none of the tooling that's necessary for things like storage, right?
There's a lot of work that also needs to be done to set up role-based access control, or multi-tenancy, and secure default settings. Kubernetes has a lot of pluggable interfaces for a lot of those components, which offers a lot of flexibility, but it's not a complete solution. So why is it so hard? Right? These are all the different things that are not included in Kubernetes but are necessary to run at a production level.
And really, that's a lot of words. I think this slide speaks a lot louder. This is a screenshot I took probably about six months ago from the Cloud Native Computing Foundation, which is essentially the governing body that's home to Kubernetes. This is all of the different components that would be needed to run, essentially, an enterprise Kubernetes. And I'm not sure if you can see my cursor, but here on the left side you have Kubernetes highlighted in green. That's the orchestration piece. That's it. Right?
We have databases and container registries, CI/CD tooling, API management, infrastructure automation, all of these different pieces needed to run at a production level, not to mention the actual infrastructure here at the bottom itself. All of this needs to be housed in a platform. Kubernetes done right is hard. Anyone on this call would be able to install Kubernetes and start playing around with it in probably 30 minutes or less. It's really, really easy to get started, but to get to the point where you can start to run business-critical applications within your enterprise takes a lot, lot more. And there are a number of things that need to be pieced together, and tested, and hardened for those operations.
So all of these things that you see on this list are considerations that you would need to essentially plug and play into Kubernetes to have a kind of homegrown enterprise Kubernetes solution. Just for example, identity management, right? With Kubernetes, you're going to be providing access to potentially critical internal systems to your employees, to your partners, to your customers.
And so you have to adhere to some pretty strict security rules that are established within your organization. And to do that, you've got to write an integration between Kubernetes and maybe your company's approved LDAP. And that requires knowledge of Dex, which is an open source identity service. It's going to require knowledge of Kubernetes and LDAP, right? You now need to have knowledge of these three components just to hook your identity management into this platform, right? And that takes days, maybe weeks, of your time where your Ops teams could be working on other things. So that is where OpenShift comes in.
So what OpenShift is, is really Kubernetes with all of these additional services built in, right? No matter what architecture you're choosing for your application, what programming language, OpenShift is providing a single and consistent development model across the entire life cycle of the application. So a couple of things that it provides: self-service infrastructure. It allows developers to provision what they need on demand and skip opening up a ticket with the infrastructure team that might take days, or weeks, or sometimes hours, but usually days.
It also provides that consistent environment, right? It makes sure that the environment provisioned for the developers, and across the lifecycle of the application, is consistent from the OS, to the libraries, to the runtimes, even the application runtime in use, in order to remove the risks that originate from inconsistent environments. It also has automated build and deployment, right?
So OpenShift gives developers a choice to build their containerized applications themselves, or leverage the CI/CD pipeline to have the platform build containers from the actual source code or even the binaries. And the platform then automates the deployment of those applications across the infrastructure based on characteristics the team has defined for the apps, such as how many resources should be allocated, or where on the infrastructure they should get deployed to be compliant with maybe a third-party license or something like that.
And then finally, configuration management. Huge part of this. Configuration and sensitive data management are all built into the platform. And that really ensures consistent and environment-agnostic application configuration. It's provided to the application no matter what technology is used to build the app or what environment it's actually being deployed into.
So, ultimately, OpenShift is enterprise Kubernetes, right? If you take a look, and I'm sorry this isn't highlighted, but here at the bottom, we have essentially our container orchestration as a service, right? And Kubernetes is only a couple of parts of this, right? It doesn't include any of the things in the lifecycle automation or container management piece. It doesn't include any of the application tooling at the middleware layer, things like business automation or integration. None of those things are provided by Kubernetes. So those are all things that need to be added on afterwards, or are provided by OpenShift.
And if you're not familiar with Red Hat, I'll take a quick step back. We're an open source organization, right? And so we work with dozens of different open source projects, and we essentially take pieces from all of these different projects, and that's what we've bundled together into OpenShift. Things like Jenkins, or Tekton, or different performance and monitoring tooling. We've identified open source projects that are compatible with Kubernetes, and that's what we're shipping as OpenShift.
And just within the last year, we've released OpenShift 4, which is really going to be our big foundation as an enterprise Kubernetes platform. We've added a lot of productivity enhancements to deliver a platform that has all the components you need, fully integrated. So now you're able to build, deploy, and manage those applications, and a lot of that is automated for you. And so, like I said, you can be up and running with a Kubernetes cluster in 15 minutes or less, but you're not going to have these cluster services or these application services. There's no service mesh, no developer tools built into the platform, no automated operations on top of Kubernetes. Those are all the things that we're bundling into OpenShift on top of Kubernetes.
And recently with OpenShift 4, we've also released a kind of variant of Red Hat Enterprise Linux called Red Hat Enterprise Linux CoreOS. Essentially, this is a container-native, or container-ready, version of our operating system that is now connected to OpenShift. So now, instead of having to manage, at the infrastructure layer, your infrastructure, and an operating system, and OpenShift on top of that, OpenShift and the OS that underlies it are tied together. Updating one does not mean you have to bring down and update the other; they're now just in sync all the time. So it really relieves the infrastructure teams from a lot of the burden of maintaining the platform and the OS.
And ultimately, this is about creating value for your teams and for your customers. And that really depends on how quickly you're able to deliver these applications. We see DevOps, we see containers, we see Kubernetes as the key ingredients of a modern application platform. And they're going to be the things that enable us to move towards artificial intelligence and machine learning, and IoT, which has been really, really big. These are microservice applications that are distributed all over the place, and we need a way of being able to manage those seamlessly within a centralized UI. And so this is an application platform. It has to provide that consistent way for developers and Ops teams to collaborate across all kinds of deployment footprints, whether it be the edge, air-gapped environments, infrastructure you have on site, or the hybrid cloud.
So a couple of other things that I wanted to mention here from a security standpoint. I talked about this before, but a lot of the tooling that is built into containers, or into the platform itself, is the same tooling that we use within Linux. A lot of the security policies and a lot of the tools that you're used to using to secure your Linux environments are the same tools that you're going to be using to secure the OpenShift environment as well, or those individual containers, or even access to the images for those individual containers.
And so security is definitely a major component of our focus within Red Hat, and is certainly a big focus of the tooling built into OpenShift as well. We've also worked to build out a lot of different ways to automate the operations. We talk about deployment, but that's just one piece, right? That's day one. But what about the actual operations throughout the life cycle of this platform? We've built in a lot of different operations and process automation, and provided the tooling to allow your teams to operate a Kubernetes cluster, an OpenShift cluster, efficiently and economically.
You can see some of the features with [inaudible 00:40:58] multi-tenancy, secure by default, which we just talked about a little bit, and over-the-air updates, the ability to update the platform and the underlying OS with the push of a button. We want it to be as easy to update OpenShift and the underlying OS as it is to update your iPhone.
And one of the things that we've done is work with our community to build out what we call Operators. Essentially, they are automation scripts, similar to our Ansible offering. And really, the Operator community was birthed from the progression of these app workloads on Kubernetes. We're now running complex distributed systems that require concepts above the built-in objects like StatefulSets and Deployments. Things like data rebalancing and doing seamless upgrades of those complex applications are now things that we need to be able to automate.
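The idea behind an Operator can be sketched as a simple control loop: observe the actual state, compare it to the desired state, and take a step to close the gap. This is a toy illustration only, not the real Operator SDK; a real Operator watches the Kubernetes API, whereas here the "cluster" is just a local dict so the loop can run standalone.

```python
# One pass of a reconcile loop: nudge the observed replica count one step
# toward the desired count, the way an Operator converges actual state
# toward the spec it was given.
def reconcile(desired_replicas: int, actual: dict) -> dict:
    running = actual.get("replicas", 0)
    if running < desired_replicas:
        actual["replicas"] = running + 1  # start one more instance
    elif running > desired_replicas:
        actual["replicas"] = running - 1  # scale one down
    return actual

# Drive the loop until the states match, as an Operator would on each event.
state = {"replicas": 0}
while state["replicas"] != 3:
    state = reconcile(3, state)

print(state)  # {'replicas': 3}
```

Data rebalancing or seamless upgrades follow the same pattern: the reconcile step just encodes more application-specific knowledge about how to move from the observed state to the desired one.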
And even the installation needs to be automated, right? On the left side, that's OpenShift 3, and on the right, OpenShift 4. This is what I was talking about before, where it's decoupled: in OpenShift 3, the infrastructure, the OS, and the OpenShift platform are all decoupled, and being able to manage them, or even just installing them, typically required multiple steps. You have to have your infrastructure, you then install RHEL, then you can install OpenShift on top of that. And from there, they're upgraded and managed separately.
Whereas with OpenShift 4, there's now an automated install that has this all bundled together. So essentially, CoreOS is a component of the product, right? And so with OpenShift 4, the install is going to first configure the underlying provider, it's then going to spin up the RHEL CoreOS nodes, and then it will deploy the OpenShift cluster and the services on top of that, all done automatically.
And then here are a bunch of the other operations that we have, right? Fully automated day-one and day-two operations, from the automated install to the ongoing management of the platform, implementing role-based access control, or even integrating third-party solutions. We work with this huge Operator network that we have provided. In fact, right here on this next slide, we have what we call OperatorHub. These are certified Operators from all of our partners. We launched this in conjunction with Amazon, Microsoft, and Google, mid-2020. And essentially, this is a place within the actual OpenShift platform where you can find and consume these curated Operators from these different vendors. So say you just bought a bunch of storage from Rackspace. Well, there's an Operator to essentially automatically configure that storage for you within OpenShift.
From an administrator standpoint, you have full control over who actually has access to these Operators, and you can install them with just the click of a button. So this is actually a look at the OpenShift UI. You can see a few examples of some of these Operators. Some are provided by Red Hat, and there are a few on here from partners, one by AppDynamics, one by Apache. And as an administrator, you can control who has access, who can check them out, who can actually subscribe to them, who can install them. Role-based access control is a huge component of this.
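As a rough sketch of what that gating looks like under the hood, the access rules themselves are just Kubernetes objects. The dict below mirrors the shape of an RBAC Role that would let members of one team view and subscribe to Operators in a single namespace; the names and namespace are illustrative, not necessarily the exact resources OpenShift uses.

```python
import json

# Illustrative RBAC Role: holders may list and subscribe to Operators
# in the "dev-team" namespace, and nothing else. A RoleBinding (not
# shown) would attach this Role to specific users or groups.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "operator-viewer", "namespace": "dev-team"},
    "rules": [
        {
            "apiGroups": ["operators.coreos.com"],
            "resources": ["subscriptions"],
            "verbs": ["get", "list", "create"],
        }
    ],
}

print(json.dumps(role, indent=2))
```

The administrator's "who can subscribe, who can install" controls in the UI ultimately resolve to rules of this shape.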
And I mentioned this before, but self-service for developers, right? Engineers interact with Operators in a self-service manner, and you can now see a developer's view of all the Operator capabilities that are exposed to them, right? There are all these different self-service options for the Operators as they're being provided by the administration team.
That is a quick overview of containers, Kubernetes, and OpenShift. A couple of things that we were going to touch on before we jump into Q&A. I want to talk a little bit about some of the training that we have here at Red Hat. As far as ways to train, in a normal world, we do everything. Obviously, right now, we're not really doing in-person deliveries, but we have been doing virtual, instructor-led deliveries for many years, so we have a lot of experience delivering to remote folks.
We also have a lot of options as far as self-paced training as well, whether it be an individual course or our Red Hat Learning Subscription. Through the ExitCertified website, you can actually get a seven-day free trial to check out the Learning Subscription itself.
From a curriculum standpoint, I did want to highlight this really quickly: we have a lot of OpenShift training. And a lot of what we've covered today is going to be covered here in this top course, Red Hat OpenShift I: Containers and Kubernetes. Now, a few of these courses were renamed just in the last couple of weeks, and I believe the folks at Exit are going to send out a few links for you to check out. So if you see any names that are different from what we've covered here, go by the course codes. But essentially, our DO180 is our intro-level course. It teaches you everything you need to know to build and manage container images, spin up a Kubernetes cluster, and finally, actually deploy applications on OpenShift.
From there, the track really splits. If you're on the IT Ops, infrastructure, administration side, we have a course really dedicated to deploying OpenShift and all of the day-one operations. That would move into our day-two operations course, which is our OpenShift Administration III course. And then on the development side, there are a lot of courses built around containerizing existing applications, implementing a CI/CD pipeline, and working with microservices, which are really going to be the way that application development heads, if it hasn't already.
And then even for the DevSecOps folks, we do have a stand-alone security course. And I talked about this course just a bit, but if you're getting started with containers and you want to work at really the OS level, this is the course for you: DO180, the Red Hat OpenShift I course. You'll learn everything, like I said, around building and managing containers. This is a course geared towards really everyone within the organization that has a technical background, so architects, developers, administrators, and operations teams are going to get a lot out of this course. And there is now a certification as a part of this course as well that just launched in the last couple of weeks.
And then for those of you that are really not technical but maybe are on the development side, we also have a one-day course that's really an introduction to OpenShift applications. This is a course that is fully tied to just using the web UI: how do I deploy and manage an application there.
But from there, I did want to also highlight the Learning Subscription. This is essentially a year of access to our entire course portfolio, which includes all of those OpenShift courses, a lot of RHEL content, and a lot of material around our middleware products and really every product that Red Hat offers. So that's definitely something to check out, and there is a free trial if you want to try it.
And then, just a really quick view of all of the courses that we have, there in the middle is really everything around OpenShift as well as Ansible, our automation platform. On the left side, you'll see all of our RHEL courses and OpenStack. And then on the right side, for your application developers, everything tied to our microservices curriculum or to our middleware. Our JBoss curriculum is going to be there on the right.
And then finally, we do now offer remote certifications. This is something that just launched in the last couple of months. So I did want to make sure, if you are interested in becoming certified, you know that we do have an option for you to become certified remotely. We're currently offering nine certifications, including three different OpenShift certifications, that you can take from the comfort of your own home as opposed to having to go to a testing center.
So with that, I will pause. I know we have some questions in the Q&A. So, Michelle?
Awesome. Thanks so much, John. Just to follow up on the Red Hat training courses and those recommended courses, I posted the links in the chat window for everyone so you can have a clickable link to learn a little bit more about those recommendations and all of the great things that we have to offer around Red Hat training. We have quite a few questions queued up in the Q&A. Would you like me to read them off or do you want to take a look? I think some of them might be paired questions, so maybe open up the Q&A yourself and we can go from there.
Yeah. Let me see if I can get to the Q&A real quick. I'm having a hard time finding the Q&A, so if you don't mind, just read them out.
Absolutely. Okay. So first up: can a container on a server run deep learning AI frameworks on the outcomes or results from other containers on the same server?
Yes. You have a lot of flexibility to do that. None of these systems have to be exposed or anything like that, but AI and ML are two of the biggest use cases for containerized applications. Right now, we're working with so many different organizations, including IBM, who is now our parent company and is doing a lot of work on the machine learning side. These kinds of beefy, monolithic applications that in the past had been running in these giant data centers, it's now actually a lot easier to run them within containers, because we can essentially share the resources across all these different systems. Doing that locally may require a lot of hardware on your end, but most definitely, that's a use case for containerized applications.
Great. In brief, would a user need any enterprise VMware HA licensing?
No, not for OpenShift. Certainly, there's a way to implement VMware's HA solution with OpenShift, or really with all kinds of different Red Hat solutions, but it's not necessary. It really is going to depend on the scalability of the cluster, but you're not tied into a specific vendor or anything like that. Red Hat has our own cluster solution that can be a part of this, and there are a lot of third parties that offer those as well, that would easily integrate into OpenShift.
Can you speak on what the best practices are to prevent storage from being volatile on a container reboot?
Yes. So there are essentially what we call persistent volumes and persistent volume claims. That's part of what we actually cover within the DO180 course, but ephemeral storage versus persistent storage is a big, I don't know if debate is the right word, but there are a lot of people within Red Hat who would prefer persistent volume claims running on dedicated physical storage on-prem, and there are others that say, "Hey, ephemeral storage is fine." I guess to answer the question in brief, persistent volume claims are the way to avoid all of the storage issues through a container reboot.
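For a concrete picture of the persistent side, here's a sketch of a PersistentVolumeClaim and of the volume reference a pod would use to mount it, again just Python dicts mirroring the manifest shape. The names, size, and access mode are made up for illustration.

```python
import json

# A pod requests durable storage through a PersistentVolumeClaim rather
# than relying on the container's own (ephemeral) filesystem.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},  # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# The pod then mounts the claim by name, so a restarted or rebuilt
# container sees the same data as its predecessor.
volume = {"name": "data", "persistentVolumeClaim": {"claimName": "app-data"}}

print(json.dumps(pvc, indent=2))
```

Because the claim lives in the cluster rather than inside the container, the data it points at survives any individual container reboot.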
Awesome. What are the differences between OpenShift master and infra nodes?
So the master node is what is doing most of the orchestration, whereas the worker nodes are what run the actual individual containers. So all of the secrets and role-based access rules and all those things are stored on the master side, whereas anything specific to the application runs on the worker nodes.
As a traditional DBA, which path should someone choose?
So that's actually a tough question, and something that I've talked to a lot of DBAs about in my role. It's really going to depend on what your interaction is like traditionally, right? Are you more on the administration and operations side? I've worked with a lot of DBAs who might as well call themselves developers as well. So partly it's going to depend: if you're going to be responsible for building out these database applications and containerizing them, then probably more so the development side. If you're going to be responsible for consuming the platform and the applications themselves, then more so the administration side.
Can you give some examples of OS dependency? Environment variables, parameter files, and start-stop scripts are some, but is there a comprehensive list that's available for consideration?
Not that I'm aware of. That's something that I can circle back on, and maybe there could be a follow-up or something like that. I've not seen a comprehensive list, but a lot of what we're talking about at the dependency level is, let's say you're building a Python application. So you need Python, maybe 2.6, but now you want to build a version that's on Python 3.4, or 3.5, or something like that. Those are OS-level dependencies where traditionally you would need to spin up a second virtual machine to run multiple versions with different runtimes.
Whereas with containers, you're packaging that Python runtime with the application itself. So it doesn't really matter what version of Python is at the OS level, because that OS-level Python runtime doesn't ever interact with the application itself. That's what we're talking about when I talk about OS dependencies, but I'll have to see if we have anything a little more comprehensive than that.
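A quick sketch of that packaging idea: the container image pins its own runtime, so whatever Python the host OS ships never touches the application. The base image reference and file paths below are hypothetical, just to show the shape of the thing.

```python
# An illustrative Containerfile, built up as a string: the image declares
# the exact runtime it needs, and the app rides along inside it.
containerfile = """\
FROM registry.example.com/python:3.5
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
"""

# A second app needing Python 2.6 would simply start from a different
# base image; the two containers coexist on one host with no conflict,
# and neither cares what Python the host itself has installed.
print(containerfile)
```

That's the whole trick behind resolving OS-level dependencies with containers: each image carries its own copy of the runtime, so version conflicts between applications, or between an application and the host, disappear.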
Great. For someone coming from a traditional Linux admin support background, what would you recommend as a pathway to Kubernetes operations support?
Great question. One of the courses that I highlighted on that big slide was the DO380, and that's essentially our operations course. Everything up to the point where the cluster has actually been deployed, that's what's covered in the DO280. The DO380 covers how you actually operate the cluster, right? So it's more for your Ops team, more for your folks with a Linux background. All of the courses on the administration track are really geared towards Linux administrators, because it's a lot of the same concepts, and even at the command-line level, you're going to be using a Linux terminal.
So I would recommend probably taking a look at the DO280 course first, because there are a lot of concepts established in those first two courses, but the DO380 is really focused on just operating a production cluster. So it's really geared towards automated policy-driven management, gets a lot into our Operators, and how do you create an environment for your developers and administrators to really thrive in.
Can you speak about the portability between OpenShift on IBM Power and that of x86 or x64 systems?
Yes. So the applications themselves are tied to the architectures, but OpenShift itself can run on multiple architectures. So for instance, if you have a data center over here that's running Power and you have a data center over here that's x86 64-bit, the clusters themselves can be a shared cluster, where for you as an administrator, as an operator, somebody who's consuming [inaudible 00:57:16] the web UI or whatever, it's the same.
What's different is the applications themselves. You would need to say, this is a 64-bit application, and that's pretty much done automatically. But that 64-bit application would only run over here on the 64-bit, or x86, data center, whereas any Power applications would be running on the other data center. But other than that, there's really not a difference. There's some installation work for actually getting those clusters up and running, but once that's happened, it's a shared cluster, and it doesn't matter what the underlying architecture is, except to the applications themselves.
When hosting on IBM Power machine, do we need to make Dev aware that they're running on IBM Power?
Yes. So that goes back to exactly what I was saying before. The application still has to run on that architecture. Imagine it's just running on bare metal, right? An application developer would need to know that they're developing for Power versus a traditional 64 or 32-bit application. So that's definitely something the Dev team would need to be aware of.
For a Windows IT admin background, which course would you recommend to start with?
So you may want to start with our Linux courses first, because there are definitely some expectations going into our OpenShift curriculum that you have a Linux background. Now, having said that, we are in the process, and hopefully this will be sooner rather than later, of building out a version of OpenShift to run on Windows. And so that may be something that, in, I would say, the medium to long term, would be a solution. But to get started with OpenShift today, I would recommend probably starting with our foundational Linux course, because a lot of what happens from the administration standpoint is going to take place within a Linux environment at the command line.
That's great. Well, we are at the end of our allotted time for today, but I do see that we have a couple more questions, so I'm hoping that we can get answers to those in our follow-up email. And if there are any additional questions, you can reach out to one of our master instructors, Miles Brown. I've posted his email address in the chat window for everyone there. And John, if you have any closing words, feel free to share.
No, I appreciate everyone's time. As Michelle said, Miles is a genius. I love him. Please [inaudible 00:59:51] questions, reach out to him. We'll take care of it. But thank you again.
Fantastic. Thank you so much. And then just one more mention: we've recorded the session, and we're going to send a copy out to everyone. Do feel free to reach out with any follow-up questions. We'll try to get answers to any missed questions in the Q&A in that follow-up. Hope y'all enjoy the rest of your day. Take care.