Welcome to our webinar. Today, we're diving into Kubernetes: Running with the Application Essentials. ExitCertified and Mirantis are so excited to come together to bring you this presentation, and I have the honor of introducing the content expert, Peter Ghobrial. Peter is the lead curriculum developer and a part-time technical instructor for the cloud native computing courses at Mirantis. In this role, Peter drives and designs the development of technical training for enabling IT professionals to work with cloud native technologies such as Kubernetes.
Over the past 15 years, Peter has worked as a systems administrator, cloud engineer, technical trainer for both conventional and cloud technology stacks, and most recently a learning strategist, enabling global technical teams to support hundreds of customers on migrating traditional applications to the cloud and building cloud native applications. Now, before we get started with the webinar, let's cover the logistics of the session. During the webinar, the audience's microphones will be muted. So if you have any questions, please enter them in the Q&A box at the bottom of your screen.
If you take your mouse and hover inside the platform, you'll see that toolbar at the bottom of your screen where you'll find the Q&A icon. We have dedicated 15 minutes after Peter's presentation to answer all of your questions entered in the Q&A box. If you enjoy the presentation today and if you're interested in learning more about training anywhere with our interactive virtual platform called IMVP, I want to invite you to visit our website and to contact us.
I encourage you to stick around until the end of the session because I am announcing a special, limited time promotion on Mirantis' on-demand training with ExitCertified. Last but not least, today's webinar is being recorded and we will email it out to everyone who registered to be here today. All right, let's get started. Take it away, Peter.
Thanks, Michelle. Good morning, evening, afternoon, folks, wherever you are joining the call from. Welcome. As Michelle said, there's a lot of content that I packed into the webinar this morning, this afternoon, or this hour, I suppose I should say, just so that you can get a sense of what specifically resiliency might mean for your applications, and really wrapping your mind around how you would view resiliency within Kubernetes. There's a lot you can unpack with this particular topic, but the idea was to give you something you could start with or get more comfortable with.
When you're thinking about how to spin up applications in Kubernetes with one or more specific requirements in mind, how do I know what to use, and perhaps what I will need to use? So let's get started. In putting this together, it's really a good place to start with the approach of making sure there's an understanding around what we mean by running applications within Kubernetes. And for me, when I think of Kubernetes, the main thrust, or the main power if you will, of using Kubernetes in your environment is running applications that are put together in a microservice design pattern.
So before we get into some of the components that you might want to consider for running your applications, it's important that we clarify up front what would make the application successful. What are those requirements for that application to be successful on the Kubernetes platform? So we'll start our discussions of this hour with microservices. And the way I like to talk about microservices is really in juxtaposition to its opposite design pattern, which would be monoliths.
So as we understand monoliths or at least in terms of how I like to describe them, it's really this notion of having a singular application that has its entire functionality captured within a single unit of execution. So you can see here in this particular diagram, you have these different shades of colored boxes within this gray square. This is really just to represent the kind of functionality that you would have within your application. Maybe you have an application that manages user information or user data.
Maybe you have an application that processes orders, which then would mean it also has a catalog function, and so on and so forth. The idea behind showing you this particular image is to illustrate what a monolith pattern looks like. A singular unit that encompasses the entire functionality. And by virtue of that, all of these different functions would have one singular access to a persistent data store, where, by color coding, for example, this part of the application would write its data to this particular table within that database, and so on and so forth with the corresponding colors.
Now obviously, the monolith as shown in this particular diagram is fairly well-known, perhaps over the last 20 to 30 years. Many shops have decided to go with the monolithic approach for its various advantages. One being, as an example, that it's very easy to deploy if you have a singular unit or executable application. You can copy it into a directory on the server and then away you go, right? Then the application is up and running.
It's well-known. Most people are very familiar with how to manage an application if it's a singular unit, and it also has another advantage of simplifying testing. If I need to add a feature or change a feature within an application, I can make that change against a singular code base and run it through the same unit testing that I would need to do consistently across any other changes that would happen within the application.
So with that kind of brief summary, there's a lot we can go into with monoliths. With that brief summary of a monolith, how does this compare to a microservice design pattern? Well, the notion is fairly simple if you think of it in terms of taking the same exact functions or processes within this particular application and then breaking them down into their own singular units of execution. That's a very simplistic way of trying to describe what a microservice pattern means.
There's a lot that we could unpack here, but in terms of a visual representation, you can now see, when comparing these two patterns, that you have singular services that are running, and if there is a need for some kind of data persistence, then they can have their own particular data store that they can reference and have access to.
So in this particular case, it allows you to think of your application no longer as a singular unit of execution, but as something that's been decomposed into separate services. And of course, the way that these processes would then make up the entire application functionality would be through some kind of lightweight communication across these services.
So now that we see what this pattern would visually look like, it's important to underscore, or get familiar with, the specific characteristics of a microservice pattern, or an application that's moving towards or will be developed in this type of pattern. It's important because if you do not take into account these different kinds of characteristics, you may end up bringing along with you some of the practices that are inherent to a monolithic practice.
So the question then would be, "Okay, so how do I go about it? How do I ensure that I'm following a pattern for microservices, and am not necessarily bringing in some of this legacy methodology for application development and management?" So we'll take a look at some characteristics to try to help clarify what would be common practice, or perhaps prescribed as best practice, for thinking of an application that's being developed in this type of pattern.
The first point that you want to think about is the decomposition of the application itself. Now, I've written here on the slide functional decomposition. That's obviously a suggested, and by this point, a tried-and-true method of decomposing an application. If you're unfamiliar with this term, essentially what this means is you think of your application, the business requirements and so forth, end to end, for what the user experience would mean or what the overall requirement is of this application, and then you break it down into separate functions.
So if you're familiar with domain-driven design, that's a very common way of breaking down your application, functionally speaking. Of course, there are other ways of breaking down your application, whether it be by the various types of services that the application provides, or perhaps you want to break it down by department, and so on and so forth. But the notion that I want to stress when you're thinking about decomposing your application is not to consider it in terms of breaking it down technologically, as in your presentation tier, your business logic tier, your database tier, but more in terms of the actual function of what that is.
I'll go into more detail to help clarify this point on decomposition, but one of the main benefits of thinking about these types of services as broken down in this way is that you can apply what would be considered a full-stack methodology approach to that application. So what that encompasses, obviously, would be some kind of presentation, maybe some kind of business logic, and then of course some kind of data persistence for each independent service.
So as teams get ramped up with this notion of approaching an application design from the point of decomposition, one of the critical questions that needs to be covered is: how do I ensure that I'm actually capturing a discrete service or function? In other words, the question then becomes, how do I know this is enough, or this is too much? So the suggestion then is you think of this idea of singular responsibility. So the notion is the service, and how it's designed, should have a singular responsibility, and there are various ways of analyzing how you're ensuring you're adhering to this particular principle.
It could be various kinds of testing. For example, if this service were to fail, how much of the application would go down? Or it could be in terms of development: how long would it take to actually develop and deploy this particular service? What does that timeframe look like compared to other types of services? So through the analysis, you're going to come out the other end thinking, "Okay, yes, this is something that's manageable for my team, and yes, it fits the point of having a singular responsibility," or perhaps no. And in that case you go back to thinking, okay, how can I further decompose or potentially decouple this singular service, if you will, into something more discrete and more manageable in this pattern?
A good example to think about singular responsibility to kind of make a finer point on this is think of the various types of utilities you'd find in your operating system. Each particular utility has a singular purpose. And when put together in a script for example, they each independently function on their own, but in the whole, they operate to get whatever it is that script is attempting to accomplish.
So with services that are designed in such a way to fit a singular responsibility, implicit to this particular pattern is something called an exclusively published interface. So in addition to the actual core function of the service, there needs to be an additional point of development around how services consume what this particular service is going to provide. So in this particular case, there has to be some kind of contract, if you will, across services.
So in other words, the consuming service has to have some level of agreement with the providing service over, say, an API or a defined payload, and the actual response to a call request from that service. The idea behind this particular point is that once that interface is in place, then it becomes quite seamless across services as to how to go about ensuring there is consistency in communication. The one point to bear out is that as services grow, adapt, or change, that existing contract has to be maintained throughout the application lifecycle. So that means you either keep the existing interface, or the new interface has to be backwards compatible with whatever that previous contract was all about.
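To make the contract idea concrete, here is a minimal sketch assuming a hypothetical user service whose v2 response adds a `full_name` field while still honoring the old v1 contract. All field names here are invented for illustration.

```python
# Hypothetical sketch: keeping a published interface backwards
# compatible while evolving it. Field names are made up.

def get_user_response(user: dict, api_version: str = "v2") -> dict:
    """Build a response payload; v1 consumers still get the old shape."""
    v2 = {
        "user_id": user["id"],
        "full_name": f"{user['first']} {user['last']}",
    }
    if api_version == "v1":
        # The original contract exposed a single "name" field; keep
        # serving it so existing consumers don't break.
        return {"user_id": v2["user_id"], "name": v2["full_name"]}
    return v2

print(get_user_response({"id": 7, "first": "Ada", "last": "Lovelace"}, "v1"))
```

The point of the sketch is only that the old shape keeps working after the new one ships; in practice this versioning usually lives in the URL path or a header rather than a function argument.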
There are some benefits obviously to deploying an application in this way, and that is that these services can be deployed independently of each other. They can be upgraded, they can be scaled, they can be replaced. So no longer are teams going to be tied to a long deployment push, as in, let's say, a monolithic application, where a change has to go through a rigorous amount of testing, and of course then it needs to ensure that there is no breaking of that application, especially if teams are working in a continuous integration pattern.
So as such, teams are then freed up to maintain each service's lifecycle independent of the other services within the overall application. And to that point as well, if there is that level of freedom within that lifecycle, then there's also the freedom of allowing development teams responsible for these independent services to pick and choose what they want to do in terms of developing and maintaining that particular service.
So for example, if I have a service that requires a relational database, then a team is freed up to use MySQL as its backend data store. If a service requires the storage of documents, then the team is freed up to choose MongoDB as its data store. Of course, that also means development teams will have the freedom to pick and choose whichever language, framework, or what have you, interpreted or compiled, to fit the particular service that they're wanting to go after. Which means they are not necessarily tied down to a particular technology stack over time for that particular service, because they can choose one particular stack this year, and next year, if there is a better way to implement it and make it simpler, that team is free to do that, because of the sheer size, being micro, being able to make that change and not necessarily introduce any kind of complication to the other independent services within the application.
Then of course the communication that happens across these particular services needs to be lightweight, as an example, REST over HTTP. So the point here is that you have options in terms of what that communication might look like, whether it's synchronous or asynchronous, right? Or you could even have an option to do both. So if you're familiar with what messaging provides for your different services, you can certainly insert a message bus between your services to provide that level of asynchronous communication.
And even in that case, by doing that you also reduce the risk of potential bottlenecks with asynchronous communication. But the main takeaway here is that when you're thinking about how your services are interacting, they need to be lightweight and not be completely dependent upon large transactions across services.
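As a rough sketch of that asynchronous style, the toy example below uses an in-process queue as a stand-in for a real message bus such as RabbitMQ or Kafka; the service names and event shape are invented for illustration.

```python
# Toy sketch of asynchronous, lightweight service communication:
# the producer publishes an event and moves on without waiting for
# the consumer. An in-process queue stands in for a message bus.
import queue
import threading

bus: "queue.Queue[dict]" = queue.Queue()
processed = []

def order_service():
    """Producer: publish an event and return immediately."""
    bus.put({"event": "order_created", "order_id": 42})

def catalog_service():
    """Consumer: pick up events whenever it is free to do so."""
    msg = bus.get(timeout=1)
    processed.append(msg)
    bus.task_done()

order_service()  # not blocked on the consumer being ready
t = threading.Thread(target=catalog_service)
t.start()
t.join()
print(processed)
```

Because the producer never waits on the consumer, a slow or briefly unavailable catalog service does not stall order processing, which is the bottleneck-reduction point made above.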
So now that we've gone through some of the characteristics, it's important to understand some of the advantages of the microservice, one of which is that they're easier to develop, understand, test, and maintain. So code in a microservice is going to be restricted to one function of the business, and so it's easier to understand. Development environments can load that small code base and very easily keep developers productive. Local changes can be deployed independent of other services. So any change local to a service can be easily made by the developer without requiring coordination with other teams.
It also improves fault isolation. So a misbehaving service, such as one with a memory leak or an unclosed database connection, won't necessarily affect other services, as opposed to an entire monolithic application bringing the whole thing down. So this is going to improve fault isolation just by virtue of only one piece of it being down or misbehaving. It's easier to start, scale, and replace.
So the scale of a microservice is obviously much smaller than a monolith, so it's very easy to start these things up quite quickly in this type of pattern. You can also scale those services independently from each other. So based off of where you're seeing pressure in the system, or on a particular service, you can scale only that service up, as opposed to the entire application. And of course, as I mentioned a moment ago, there's no long-term commitment to an implementation choice. So development teams are free to pick and choose whatever stack makes the most sense for them to accomplish the task of that function within that pattern.
But it's not all peaches and roses, right? Microservices are hard. Let's be real about that. There is a significant operations overhead that goes along with it. Think about it in terms of like, if I have a team that's mainly responsible for maintaining a singular application, that can be easily done, right? But if you think of it now in terms of separate services, now there's a lot more that you need to consider in terms of what we traditionally would want within a production environment.
As an example, you're no longer having to consider observability for a singular application. Now you have a fleet of services that make up that application. So how do you ensure that you're capturing, let's say, monitoring events or logging for those separate services? How do you know which ones are important to take notice of and take action on, and how much of it is just really noise or useless chatter, just information? So that's just one element of it, right? There are other aspects to it. How do I ensure that these services can communicate with each other across different types of hosts or different types of environments?
So it certainly does make the application, or the operational standpoint of that application, a lot more complex, and therefore requires a lot more forethought in terms of, how do I ensure that I have proper management of the application itself? Also, we should [inaudible 00:22:06] on the amount of skills that are required to not just develop in this type of pattern, but go from end to end of development all the way to production.
So most teams these days are going to be putting together, or have already in place, some kind of pipeline, if you will, for pushing code from development to production. How do you accomplish that when all of these different services are independently being built and maintained over time? What does that pipeline look like, and how do you account for that? What else do you need to be able to sustain the kind of change that a microservice application pattern requires?
In a lot of cases, it requires this development of DevOps skills. So in other words, not just knowing operationally how I support this, but knowing, more importantly, "Okay, do I have the development skills necessary to be able to succeed in this particular space?" It's important to understand that when you're moving towards a more cutting-edge environment like this, you need to be clear on the availability of tools. And in a lot of cases, when it comes to managing microservice applications, it does in fact require a person who traditionally would be an operations person to have some kind of development skills to be able to use the tools necessary to support the application.
Implicit interfaces. I talked about that a moment ago. So that's the notion of making sure that each service that requires communication from other services has an interface that's in place at all times, is made available, and is consistent in terms of how all those other services interact with it. There is duplication of effort. The point here is, let's say there needs to be a new feature or a particular change within a product line; then you're now opened up to, "Okay, so where do I implement that change across these different functions of the application?"
So we need to ensure that we're not necessarily duplicating effort. In other words, this begs the question of some level of governance of the application, or however many applications you have within this type of pattern. Of course, there's distributed system complexity. You're no longer thinking one app per server. It's now multiple services across multiple servers. These days, when we manage an application in this type of pattern, it's more a question of, how do I make sure that these services can find each other across the infrastructure?
So there lies the question of, "Okay, so do I need a registry of all the services for this application? And if so, what does that look like? How do I ensure that each service knows where to go, or what to do, to be able to successfully communicate with the service it needs to communicate with?"
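In the simplest possible terms, a service registry is just a lookup from a service name to a location. Here is a deliberately toy sketch of that idea; real clusters use DNS-based discovery (Kubernetes Services) or systems like Consul or etcd, and the names and addresses below are invented.

```python
# Toy service-registry sketch: services register their locations, and
# consumers look them up by name instead of hard-coding addresses.

registry: dict[str, str] = {}

def register(name: str, address: str) -> None:
    """A service announces where it can be reached."""
    registry[name] = address

def discover(name: str) -> str:
    """A consumer asks where to find a service by name."""
    if name not in registry:
        raise LookupError(f"no instances registered for {name!r}")
    return registry[name]

register("orders", "10.0.0.5:8080")
print(discover("orders"))  # -> 10.0.0.5:8080
```

The hard parts a real system adds on top of this, health-checking stale entries, handling multiple instances per name, and surviving registry failures, are exactly the "what does that look like" questions raised above.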
Asynchronism is difficult. This gets at the point of ensuring that when you're dealing with communication across your fleet of services in this pattern, that communication is in fact being managed accordingly. So it isn't so much, "Okay, do I have all my microservices?" But in the case of asynchronism, do you have the right infrastructure in place to account for processes that might be busy processing a previous request? So then the question is making sure those messages aren't lost over time.
And of course there are testability challenges, which, as you can imagine from just a moment ago, touch specifically upon these independent software lifecycles of the services. You now have to create tests for each of those particular services, and then an overall test to ensure that the application functions the way that it should. With each team then being responsible for a particular service, it becomes a question of, how do I ensure that I'm writing a test that's actually going to give me the result I'm looking for in determining, yes, this is safe to push out, versus no, this is not safe to push out?
The short answer to this would be you need to think about how you are going to deploy these changes over time and reduce the risk of potentially breaking something within the application unintentionally. So what is the point of this and what's the takeaway? Well, in terms of considerations, decomposing applications into services, I mean, there is no silver bullet to this.
It does take a bit of practice. And for some shops, it could look one way and for other shops it could look another way. There are some suggested methods for decomposing applications, but it's incumbent upon that shop to really understand like if we commit to a method, then we must commit our culture, and other kinds of ways of supporting this application in that particular way.
Another point of consideration is that it takes a long time to get ramped up on putting together, or orienting, an organization that's moving into developing applications in a microservice pattern. So for some, they might want to consider, "Well, how long is it going to take to deliver an application in this particular way?" If the infrastructure is not there, it's going to take a long time. And if speed is of the essence, it may be appropriate to pivot towards a monolithic application, because if you're just creating an application and you don't necessarily have the team skills and infrastructure and so forth in place, then that might also be a crucial factor to consider.
Then of course, as I mentioned, there's the support system for those microservices: service discovery, as I mentioned a moment ago, about do I need a registry of services, and if so, what does that look like and what does that mean? In terms of resiliency, how do I ensure that that is also coupled or built into the design and implementation of that microservice? And as I also mentioned a moment ago, service observability. How do I know that the services are not over-consuming what they should be consuming, or how do I know that a service has failed? How do I ensure that the system takes corrective action towards a service if in fact there is something being reported as a particular failure?
So this is a very complex topic, and hopefully I've given you enough of the groundwork to think through some of the points of microservices. But for the next part of the presentation, what I'd like to take a look at is application resiliency within Kubernetes. So I'm going to move quite quickly through some of these next slides. It really is more a point of helping you understand, at a high level, what the different components are within Kubernetes that you should be thinking about to help get you the resiliency that you're looking for within your application.
So one major point to understand about the Kubernetes platform itself is that it's infrastructure agnostic. So whether you're a complete Linux shop or Windows shop, or if you're thinking in terms of the development of code, all of it is fairly agnostic and Kubernetes doesn't quite care. Obviously, there are some caveats I'd state about how you actually run your platform, but for this particular call it doesn't really matter, and for a developer, that's great news, because you don't necessarily need to be worried about whether the environment will have X.
How will I run my application? All that stuff is going to be handled for you, so your concern about that is going to be fairly minimal compared to your concern about, "Hey, am I writing my application the way that I need it to run?" So what I offer here is kind of a breakdown of how you can get or consume Kubernetes in your team. If you need the most customization and you need a lot of flexibility, there's the standalone model, which will give you full control, end to end, from the control plane to the data plane. It lets you make whatever kind of changes you need to.
In fact, if you want to make code changes to the platform itself, you're free to do that. You're free to put whatever else you need on top of your platform, absolutely. But bear in mind, you're going to have to run the whole thing, which means you have the responsibility of making sure that what you put in place in fact is going to work for your organization. So I put in here, for this particular point, that it's highly intensive. For some folks, it might not make sense to go completely all in on Kubernetes in terms of managing and maintaining the platform itself.
So they'll turn to some of the experts that have packaged up Kubernetes in a way where it's very easy to deploy and maintain. Some of these vendors will offer SLAs against that particular package, which means there's a guarantee that the platform will stay up for running your applications and if there is any need to escalate, you do have that option there as well.
Then of course there's the Kubernetes as a service model which is vendor managed. So you have very little control over the operations aspect of the platform. Some shops don't really care to have that kind of responsibility, but it's very opinionated. You can only do X things within the confines of what the vendor provides you, and this is to be considered a kind of turnkey solution.
So the other aspect for resiliency of your application is knowing that Kubernetes in and of itself provides you a clustered environment, whether it's through your master nodes or your worker nodes. What's available to you there is this notion that what is running the cluster underneath the platform doesn't matter, right? You could have one node, two nodes, or however many nodes that you need, up to the prescribed limit within the documentation, to be able to run your applications wherever you need them to run. And when you zoom in on these particular nodes, it's important to understand the actual components that allow you, in our case, to consider resiliency of your application.
I'm not going to go into a lot of detail, but just to point out, the kubelet is going to be responsible for maintaining the pod where these containers are running specifically. And then kube-proxy is going to be responsible for managing the network traffic that gets routed to these individual pods. And this is going to be true on any particular node within your cluster.
So we're not going to go into a lot of detail on the mechanics of the architecture here, but it is important to understand what the underlying components are and what their responsibility is in terms of ensuring resilience within your application. So for us, I just want to make sure you're clear that the kubelet is important to understand, and kube-proxy is another point to understand as well.
So resiliency within the Kubernetes mechanisms, what exactly does that look like? Well, let's take a look specifically at the data plane. So imagine here are your worker nodes. They're clustered up, and so they all have the capability of running your application at any given point in time. And then we have here what's called the declared configuration. In other words, this is part of the control plane. This is where you declare up front what needs to happen in order for your application to run.
So here you would have what's running at any given point in time, and here is what the source of truth actually is: this needs to run in this certain way. And the value then of Kubernetes is that the internal mechanisms have a consistent loop of actions that run all the time. So if ever there was any kind of incident that happens within the cluster where the applications are running, that would be reported back to the control plane, which will check what exactly needs to be running at all times.
And if there in fact is a change, then there will be a proactive change that happens from the control plane to get the cluster back to what's considered the target state. So what does this mean towards resiliency within your application? Well, let's say one of these nodes were to go down in your cluster and part of your application was running there. That would get reported back to the control plane, saying, "Hey, you're actually missing a pod that needs to be running," so that pod would then get spun up on another node, just to ensure that there is consistency between what's observed versus the target state.
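The reconciliation loop just described can be sketched in a few lines: compare the observed state against the declared (target) state and emit corrective actions. This is only an illustration of the idea, not how the Kubernetes controllers are actually implemented; the app names and pod counts are invented.

```python
# Sketch of the control-plane reconciliation idea: diff observed state
# against declared state and produce the corrective actions needed.

def reconcile(declared: dict, observed: dict) -> dict:
    """Return the actions needed to drive observed toward declared."""
    actions = {}
    for app, want in declared.items():
        have = observed.get(app, 0)
        if have < want:
            actions[app] = f"start {want - have} pod(s)"
        elif have > want:
            actions[app] = f"stop {have - want} pod(s)"
    return actions

# A node failure took one web pod down; the loop notices and corrects.
declared = {"web": 3, "api": 2}
observed = {"web": 2, "api": 2}
print(reconcile(declared, observed))  # -> {'web': 'start 1 pod(s)'}
```

Running this comparison continuously, rather than once, is what makes the system self-healing: any drift between the two states, whatever caused it, produces a correction on the next pass.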
So let's take a deeper dive then into those different components, and let's start with what we're looking at in terms of the pod. Now obviously, there's a lot we can unpack with the pod, but for our sake, when it comes to resiliency, some of the points to consider for your pod are these two components here, as I've outlined. At the pod level, you'll want to make sure you have a restart policy set to Always.
So what this means is, should any container within that pod fail, the policy would be, "Hey, Docker or container runtime, recreate that container based off of this policy and get it back up into a running state." If you need more granular control, in terms of ensuring that your containers are not just up and running, but they're alive and they're ready to go, Kubernetes provides you the option of configuring probes that the kubelet will use to ensure they match the state as specified within your probes.
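Before we get to the probes, the restart policy itself is a single line in the pod spec. A minimal sketch, with the pod and image names purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  restartPolicy: Always   # the default; OnFailure and Never are the alternatives
  containers:
  - name: web
    image: nginx:1.25
```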
So for example, say I wanted to have a check to make sure that the containers are alive and that they're responding. I could configure a probe to check that that container is up and running. And there are various methods of doing this. I'll save you the details, but the idea of which is, "Hey, is this container responding to this prescribed probe? And if it is, check that it is in fact alive and running and mark it as success." But you may need more capability than just saying, "Hey, the container is responding to this thing"; you may need to make sure that the application is actually responding.
Assuming it's a web app, that could be a simple HTTP call to ensure that you're getting some kind of status code that you're looking for. So you would then use what's called a readiness probe, in which case the readiness probe will not place that pod online and will not have traffic routed to it. What it will do instead is verify that the application within that container is responding in the way prescribed within your readiness probe, and once successful, the container will be marked ready to go, and of course traffic will then get routed to the pod to receive and process whatever that application is responsible for.
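Putting the two together, a container spec fragment might carry both probes like this; the paths, ports, and timings are illustrative, not prescriptive:

```yaml
containers:
- name: web
  image: nginx:1.25
  livenessProbe:          # "is the container alive?"; failure triggers a restart
    tcpSocket:
      port: 80
    periodSeconds: 10
  readinessProbe:         # "is the app ready?"; traffic is gated on success
    httpGet:
      path: /healthz
      port: 80
    periodSeconds: 5
```

Note the different consequences: a failing liveness probe restarts the container, while a failing readiness probe only removes the pod from service endpoints until it passes again.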
Now, the startup probe is a relatively new probe that went GA within the last couple of releases. It matters because some shops are migrating legacy applications onto a containerized platform, and whatever that legacy application might be, it may take a much longer time to come online.
So the startup probe offers you a configurable window (the failure threshold times the probe period; a common configuration gives you about five minutes) before the container must report as started up. That gives you some flexibility in ensuring the appropriate resources are in place if you're trying to manage startup times across the applications you're bringing up. It's really a way for you to ensure that Kubernetes doesn't go into what would be called a crash loop.
"It's not ready, we need to restart it; it's not ready, we need to restart it," and so on and so forth. Startup probes give you that ample amount of time to make sure the container is up and ready. So this type of resiliency gives you the ability to ensure, at various degrees: is the application alive? If not, restart it. Is it ready to go? If not, don't route traffic to it. And do I need additional time, because there might be some lingering legacy dependencies the application depends on before it can boot up correctly?
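The startup window is just arithmetic on two fields you set yourself; the five-minute figure mentioned above corresponds to one common choice of values, sketched here with assumed numbers:

```python
# Illustrative startup probe; endpoint and numbers are assumptions.
# Liveness and readiness checks are suspended until this probe succeeds,
# which prevents a slow-starting app from being killed in a crash loop.
startup_probe = {
    "httpGet": {"path": "/healthz", "port": 8080},
    "periodSeconds": 10,      # probe every 10 seconds
    "failureThreshold": 30,   # tolerate up to 30 failures
}

# Maximum time the container has to start up before it is restarted:
max_startup_seconds = (
    startup_probe["failureThreshold"] * startup_probe["periodSeconds"]
)
# 30 failures x 10 seconds = 300 seconds, i.e. five minutes
```

Tuning these two numbers, rather than inflating liveness timeouts, is the idiomatic way to accommodate slow-booting legacy apps.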
But this is all well and good when you're looking at it from the pod side. What does it look like if that pod were to ever go down? Bear in mind that one pod may not be enough for resiliency's sake; you would want multiple pods of the same application. In that case, you need additional Kubernetes components to help you out. First is the ReplicaSet, whose main task is to create the necessary number of pods, and then, on top of the ReplicaSet, you would want a Deployment object that manages the ReplicaSet for you, so that you don't have to worry about the direct management, if you will, of those particular pods.
There's a lot more I could unpack about the Deployment object, but know that when you're thinking of resiliency, the more of the management work you can hand over to Kubernetes, the better positioned you'll be to actually get the resiliency you're looking for in your application. And in this case, a Deployment object and a ReplicaSet are going to be crucial for resiliency's sake.
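As a minimal sketch of handing that management over, here is a Deployment expressed as a Python dict mirroring the YAML. The replica count, labels, and image are placeholders; the point is that you declare a desired count and matching labels, and the ReplicaSet it creates does the pod bookkeeping:

```python
# Illustrative Deployment manifest; names, labels, and image are placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-app"},
    "spec": {
        # The ReplicaSet created by this Deployment keeps three pods
        # running; if one dies, a replacement is created automatically.
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web-app"}},
        "template": {
            "metadata": {"labels": {"app": "web-app"}},
            "spec": {
                "containers": [{"name": "web", "image": "web-app:1.0"}],
            },
        },
    },
}
```

Note that the selector must match the pod template's labels; that match is how the ReplicaSet knows which pods it owns.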
But we haven't yet covered resiliency across services. In this particular case, how do I ensure that these pods, wherever they're running, are accessible to other dependent services that need to reach them? If I route traffic to this pod, but this pod's offline, how does the traffic get to this pod or that pod? So there's a question of network resiliency that should also be considered when you're thinking about your application.
It's a similar diagram to what we just saw, but now you have another pod, perhaps managed by another Deployment object somewhere else in your cluster, that needs to access the application running in these pods. How do you ensure there's some kind of resiliency? How do you maintain resiliency along with equal access to that application, wherever it's running?
Well, then you want to bake in what's called a Service object, whose main task is to ensure that wherever these pods might exist, there's a single, consistent point of access across the life cycle of those pods. So should this pod go down, that's fine: the Service will detect that the pod is down and will stop routing traffic its way, while continuing to route traffic to the other pods.
Should that pod get recreated, come online, and become ready, the Service object will know about the new pod and will resume passing traffic to it. There are more points to talk about, but I want to be conscious of time here, so let me get to this last point, which is the storage aspect. Let's say you're dealing with a stateful application that requires data persistence over time. How do I ensure there's a level of resiliency in the data my application depends upon?
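A Service that provides that stable access point can be sketched like this; the name, label selector, and ports are assumptions, chosen to line up with the placeholder Deployment labels used above:

```python
# Illustrative Service manifest; name, labels, and ports are placeholders.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-app"},
    "spec": {
        # The selector matches the pods' labels. Kubernetes keeps the
        # endpoint list in sync as pods fail, get recreated, or scale,
        # so clients always reach a ready pod via the stable DNS name
        # "web-app" rather than any individual pod IP.
        "selector": {"app": "web-app"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```

This is also where readiness probes pay off: a pod that isn't ready is simply dropped from the Service's endpoints until it is.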
That would mean bringing on board what's called a persistent volume and, by virtue of that, a persistent volume claim. Now, a persistent volume is mainly an object that deals directly with network storage. In other words, the backing store could be anything, but the actual storage device itself is networked across the fleet of servers running within your Kubernetes cluster.
So that means anything written to your persistent store will be made available, across your different servers, to the pods running on them. A persistent volume claim, in this case, is simply the way your pods are configured to attach to the appropriate persistent volume that's been created for the application data.
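Sketching that claim-and-attach relationship with assumed names and sizes (the claim name, access mode, and 10Gi request are placeholders):

```python
# Illustrative PersistentVolumeClaim; name, mode, and size are placeholders.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "web-app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# A pod then references the claim by name; it never names the underlying
# PersistentVolume directly, which is what decouples the app from the
# actual storage backend.
pod_volumes = [
    {"name": "data", "persistentVolumeClaim": {"claimName": "web-app-data"}},
]
```

If the pod is rescheduled to another node, the claim follows it, so the data survives the pod's life cycle.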
So this is a diagram to show you how all these components fit together. We don't have enough time to talk about Secrets and ConfigMaps, but I wanted to give you an image of what this could potentially look like, as simple as it is, once it's been deployed within your cluster.
Okay. Considering our time constraints, I need to move on to the next item, which is a summary of what we've talked about so far: just enough Kubernetes. The whole point, when I start thinking about Kubernetes, is: what is the actual need for Kubernetes? Am I using the right tools? Am I using the right objects, and how do I know I'm using the right objects? Well, I need to go back to the software application requirements. What does the application need to be able to run successfully?
And then, equally important: what are the business requirements for ensuring that I'm using enough Kubernetes to meet whatever SLAs my organization has with its customers? The answers to those questions will help you get to a point where you're not just throwing everything in because it's the latest and greatest, but rather being more selective and more intentional with your approach: "Okay, I know I need a pod. I know I need a Deployment object."
As rudimentary as that might be, it's a great way to get to questions like, "Okay, do I need repository management? If so, what does that look like? Do I need Helm charts, as an example? If so, what does that look like?" It leads you down a line of questioning toward, "Okay, do I have enough to be able to do this successfully?" So what I'm proposing is that you think about what the actual requirements are before you go off and start thinking, "What if this, what if that?" and spiral out into someplace that becomes unwieldy and overwhelming from the start.
The other question I want to leave you with is: are you using enough Kubernetes at the right time? What I want you to take away here is that microservices might be the new hotness, and have been, I guess, for the last two to three years, but microservices can exist right alongside monoliths. As I mentioned before, there are certain constraints that might impact the way a team can deploy a microservice application.
So that raises the question: are monoliths necessarily a bad thing, or should we describe them as the old way of doing things? I'd say there's a use case for both. It's about making a decision based on: what are my constraints, and what do I need to get done in the time I have? That should then lead you to one approach over the other.
Now, it may very well be that you start off with a monolith and eventually migrate over to a microservice design pattern. I would argue that's a case-by-case decision, depending on the use case. And the last point is: are you using enough Kubernetes in the right way? That means developing the skills necessary to sustain your platform and your applications, and, of course, following the best practices for doing both of those things.
So, as you can imagine, this is just a sampling, if you will, of thinking about Kubernetes, and of course about resiliency within Kubernetes for your applications. There's a lot more to unpack for you to get on the ground walking, then running, with your deployments in Kubernetes, whether you're just starting out or you're supporting something right now. Ask yourself: are there any particular gaps, if you will, in your understanding of Kubernetes, or of the various tool chains in place for your application deployments?
With that said, I want to turn to what we have available to help fill those knowledge gaps and build out those skills when it comes to Kubernetes. At Mirantis, we focus primarily on cloud native technologies, and we also focus on the OpenStack virtualization platform. What you see here is what the tracks look like. This is what I primarily focus on, but I'm more on the vendor-agnostic side, as you can see down here.
So: Kubernetes Operations and Kubernetes Development, meaning developing your applications and being able to deploy them to your platform. We also have productized training for our opinionated Kubernetes deployments, for customers who need more information about how to manage those particular products. These come in various tiers as well. If you're just starting your journey in Kubernetes, as an example, we have a course just for that. Maybe you need to backtrack and say, "I'm not quite there yet with containers." We have training there for you too, to get ramped up on containers and then move to Kubernetes.
If you say, "Yeah, I'm familiar with these basics, no problem," then we look at Kubernetes more from a production standpoint with our 200-level classes. These are for people who have, or will soon have, direct responsibility for managing and maintaining either the applications or the platform itself, along with the ancillary components needed to ensure you have an environment ready for managing and operating Kubernetes.
Then we have advanced classes as well. Just to preview on that point: while we do have an advanced Kubernetes course, I'm also wrapping up a course on Kubernetes security, where we cover a lot of advanced aspects that touch upon Kubernetes but go beyond it, into what else you need to consider to ensure you're running in a secured environment.
If you want a visual way of looking at this, it's on the Mirantis training website. We have a Kubernetes operations track. If you're wondering, "Hey, is there a way I can bundle these courses into a single shot?" Absolutely. You can start with containerization essentials and move on to what day-one operations look like for you as a Kubernetes operator. And then, as you see here, we have an advanced course.
That one is more geared towards day-two topics. The same is true for Kubernetes Development, starting with essentials and finishing with native application development. And for those folks working with Mirantis products around these technologies, we have a track for you as well, including updated training for Mirantis Kubernetes Engine; if you're familiar with what UCP was, that's what we're now calling it here at Mirantis. Mirantis Secure Registry is its own separate standalone product with its own standalone course; if you're familiar with what was Docker Trusted Registry, it's now branded as Mirantis Secure Registry. And for those who have an interest in OpenStack, or responsibility for managing OpenStack, we have training there for you as well.
Okay. I think these slides, or this recording, will be made available, so I'll skip this slide; it just shows how the certifications align to the coursework we provide. And with three minutes left, I appreciate everyone allowing me to run a bit over. Let's turn it over to questions.
Thank you so much, Peter. I'm going to jump in here now. We've decided, on the fly, that we're going to answer all of the questions that have been queuing up in a follow-up email and blog post. So I do apologize that we're not getting to them live today, but I assure you, you will see your answers by email or in our blog post.
We are officially at the end of our presentation. Thank you, Peter, and thank you, everyone, for taking some time out of your day. As I mentioned at the beginning of the session, we did record this, and we'll share it with everyone. And I'm very excited to announce a promotion we're running right now that will save you 40% on Mirantis on-demand training with ExitCertified. I'm going to link to that promotion in the chat now.
ExitCertified recently launched its newest training modality: on-demand. The platform offers vendor-certified content available to the learner 24/7, with hands-on labs, searchable lesson plans, and budget-friendly training, all done at your own pace. We're very proud of the platform and believe you'll like it too, so check out the link in the chat window to browse the latest course offerings. And again, one more time, thank you. Keep an eye on your emails; we'll be sharing the recording and the answers to all of those great questions posted in the Q&A throughout the session. Thank you, everyone.