Hello and welcome to today's webinar, Automate Everything with Red Hat Ansible. This webinar is being presented by John Walter, Red Hat's Solution Architect.
Ansible is an automation and configuration management technology used to provision, deploy, and manage compute infrastructure across cloud, virtual, and physical environments. This webinar will provide an overview of Ansible Engine and Ansible Tower, as well as dive into various use cases that Ansible can automate.
John will demonstrate writing playbooks, the simple language that powers Ansible Engine. In this webinar our expert will cover automating tough tasks with simple, repeatable playbooks, managing large environments with dynamic inventories, and Red Hat Training's Ansible curriculum.
Stay tuned to the end of the webinar where we will announce an exciting promotion on upcoming Red Hat courses with ExitCertified.
During the webinar everyone's phones will be muted. So if you have any questions, please enter them in the Q&A box at the bottom of your screen. And if you enjoy the presentation today, and are interested in learning more about training anywhere with our interactive, virtual platform, iMVP, please visit our website or contact us.
Today's webinar is being recorded and we'll send a copy to each and every one of you. Now, let's get started. Take it away John.
Awesome, thank you so much. So again my name is John Walter, I am a solutions architect for the Red Hat training and certification team, so my role is primarily around technical enablement and especially how it relates to the training that we offer. As Michelle said, today we're going to be talking about Ansible Automation as a whole. So that includes Ansible Engine, Ansible Tower, and then our Ansible Network Automation Platform.
Let's first start off just establishing a baseline, if you're not familiar with what Ansible is; we're going to cover Ansible from start to finish. When we talk about Ansible, we're really talking about a series of projects. Ansible Automation as a whole really comprises multiple projects, and so a lot of the time when we talk about Ansible, we're really talking about Ansible Engine. And Ansible Engine is the language that you're going to use to automate all kinds of different things. So that may be system administration tasks, it may be network administration tasks. Maybe that's deploying or monitoring network services, maybe that's deploying to your public or private cloud.
And then we also have Ansible Tower, which is a WebUI that allows you to really operationalize all of the different tasks that you are creating these playbooks around. It really allows for scaling, and things like role-based access control, job delegation, and job workflows. So we'll talk about those a little more in depth.
But the first question that I typically get from customers is why they would want to move towards something like Ansible, and we've really boiled it down to these three keywords: simple, powerful, and agentless.
We'll start off with simple. Ansible really requires no coding skills whatsoever. If you're familiar with what YAML is, YAML is an extremely human-readable ... I'd even struggle to call it a language, although it is, but essentially everything is executed in order from start to finish, and Ansible uses a series of modules. And we're going to get into that in just a little bit.
Essentially a module maps to a specific command or task. And so anyone within your organization, even if they aren't system administrators, is going to be able to go through a simple playbook and really understand what's going on. Ansible really emphasizes self-documentation throughout.
I've worked with customers throughout my career, and my career at Red Hat, who are having to migrate from these legacy bash scripts that people who had been in the organization for 20 or 30 years had created and really just continued to build upon. Now those people have left the organization, and so the customers need to find a way to make these scripts a little more readable, a little more digestible, and really more modular as well.
So that's the simple side. It's powerful really because of all the different use cases. And we're going to get into these more in depth in just a little bit. Typically the biggest question I get is why would I choose something like Ansible over some other kind of configuration management tool like Puppet or Chef. And the big reason is, Ansible is not just a configuration management tool.
We have a lot of customers who are using Ansible in conjunction with things like Puppet and Chef, because those tools do configuration management really, really well, but when it comes to things like network automation or application deployment, they just aren't really up to snuff.
We'll talk a lot about the different use-cases for Ansible in just a little bit, but just keep in mind that Ansible is not necessarily a replacement for Puppet, it's something that you can really implement into your CI/CD pipeline.
This next point, how agentless it is, is another really big key as far as security goes. Ansible doesn't require an agent to be on any of the systems that it's interacting with. All that you need is access. So on Linux or UNIX-based systems you need SSH access. On Windows systems, you need WinRM access. Any time that you would be able to run a PowerShell command on a Windows system, you'll be able to run an Ansible playbook in that sense as well.
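Since there's no agent, the connection method is just an inventory setting. As a rough sketch (all host names, user names, and group names below are invented for illustration), a mixed Linux and Windows inventory might carry per-group connection variables like this:

```ini
; Hypothetical inventory showing agentless connection settings per group.
[linux_servers]
web1.example.com
web2.example.com

[linux_servers:vars]
ansible_connection=ssh
ansible_user=devops

[windows_servers]
win1.example.com

[windows_servers:vars]
ansible_connection=winrm
ansible_user=Administrator
ansible_winrm_transport=ntlm
```

Nothing needs to be installed on the managed hosts themselves; Ansible connects over SSH or WinRM using whatever credentials the inventory provides.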
So what does that really mean? The biggest takeaway: no agent to exploit. If you look at Puppet, you have to have a Puppet agent on every system, every endpoint that you want to interact with. That's not the case here. So that's one less thing that can be exploited and really one less thing that you have to worry about managing on all of those different hosts or endpoints.
So let's dive into what you can do using Ansible. We boil it down to these six major use cases here at the top: orchestration, configuration management, application deployment, provisioning, continuous delivery, and then security and compliance. Provisioning and configuration management have been a major focus of Red Hat's since long before we acquired Ansible and adopted it into really all of our products. But application deployment, especially with everyone moving their workloads to the cloud, has become a really big problem, so to speak, for a lot of our customers.
And so especially when they're moving from a test environment to development, to preprod, and then finally to prod, it takes a lot for your administration teams or operations teams to spin up these environments for the developers to then deploy those applications and test them.
Ansible allows you to really automate that entire pipeline. You can create really simple job workflows where you spin up, let's say, a RHEL VM, you deploy your application, you run a series of tests, maybe from a bash script, or invoke that from the playbook. And then depending on what kind of feedback you get back from those tests, you can automatically deploy that from your test environment into your preprod environment and then test again. Essentially we're just taking all of the manual work out of your application deployments.
And then obviously continuous delivery, so many of our customers are now moving away from a waterfall style and towards a DevOps mindset. And so a big part of that is the technology, but another big part of that is the process. And so this allows you to really processify for lack of a better word, that CI/CD pipeline.
And then security and compliance. Red Hat, for a long time, was very hands-off as far as security was concerned. We would obviously secure the OS, but when it came to compliance, we would point to other vendors. And within the last few years, especially during my time at Red Hat, we have really taken a different stance on security and compliance especially.
And so we work with our product security teams to come up with tools to help customers stay ahead of the curve as far as that goes. Obviously our ProdSec team works with various vendors and projects, like NIST or OpenStack, to figure out: how can we automate security, how can we make this something that saves time, especially for the DevSecOps engineers, so that they can actually address meaningful security issues in their environments. Scanning for vulnerabilities or weak configurations, that's something that should be able to be automated.
And then apart from that, we've talked about configuration management, we've talked about orchestration and provisioning. The next slide is really going to cover this a little more in depth, but what you can manage is pretty outstanding. Here you see a list: general server infrastructure, storage, network devices. Network devices are a huge use case for so many of our customers who have these giant environments with tens of thousands of network devices, whether it's switches or VLANs or routers and all these different devices.
We work with all of these different network vendors, like Cisco or Juniper or Arista, to create modules so that the deployment and then the management of those devices throughout the life cycle is something that can be managed with something like Ansible.
This slide really just demonstrates a list of the vendors that we work with. Obviously at Red Hat, our flagship product is our Linux platform. But that's not the only thing from an OS standpoint that you can manage. I mentioned at the onset the different flavors of Linux we can manage with Ansible. Obviously Windows is a big use case for a lot of our customers. So many of our customers are mixed houses. They've got some Windows, they've got maybe some SUSE, maybe they have a lot of RHEL as well, hopefully.
But so many of our customers are also moving towards a hybrid cloud, and that's obviously a huge focus at Red Hat. And so there's the ability to deploy to, and then manage, various cloud vendors, whether it's AWS or Azure, or obviously OpenStack as far as an on-prem option that we offer.
And then take a look at containers as well. So many people are moving their workloads from virtual machines towards containers, things like OpenShift or other container platforms.
Managing storage, Windows, and obviously the network side which I've talked about a lot. And then take a look at the DevOps and monitoring options as well. So many of the tools that you are likely already using, you have the ability to integrate into Ansible, as far as providing notifications, whether that's if a job was run successfully, or having a ServiceNow integration. So if somebody opens up a ticket, you can have Ansible essentially understand what that ticket is, find a playbook, and then run that playbook, so that you don't really need anyone at the service desk to interact with these low-hanging-fruit tickets.
We're going to talk a little bit more in depth when I go through my demonstration, but this is what a pretty simple playbook looks like. What this playbook is doing is installing and starting an Apache web server. Back in the day, if you had 10 web servers that you needed to spin up, you were deploying 10 RHEL systems and then you were going through each and every one. And maybe you were able to create some kind of bash script that would SSH in and run a few commands. And you would need root access to do all of those things.
What this does, working from top to bottom: obviously we've named our playbook, and that's so important for best practices. You want to treat these like code, and so you want to make sure you're documenting all of this from start to finish. But we're going to be targeting a series of hosts, and we'll talk a little bit about how we actually create those host files in just a little bit.
But then you move down to the different tasks. And the text that you'll see in red, and my apologies for those of you that may be red-green color blind, but we essentially have three tasks that we're running. And these tasks map to commands that you're likely using if you're a Linux admin.
Yum, if you're not familiar, is the package management tool that we use in RHEL; it's similar to things like apt-get or DNF if you're using a different flavor. And all that we're doing is targeting and making sure that a specific package is present. So in this case, it's httpd, the Apache web server package.
Next, from our host system, we're copying an index file over to all of our destination systems, and then finally we're starting the service.
And so you'll find that, and we'll go through this in a little bit, each of these tasks, each of these modules, has a series of switches, in the exact same way as if you were running the yum command from the command line. So we'll go through these in just a little bit when we get into our demonstration. I just wanted to show that this is something like 16 or 17 lines of code that is potentially deploying a web server onto hundreds of hosts.
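The playbook described on the slide can be sketched roughly like this. The yum, copy, and service modules are real Ansible modules, but the group name and file paths here are assumptions for illustration:

```yaml
---
- name: Install and start Apache web server
  hosts: webservers        # group name assumed; defined in the inventory file
  become: true             # the tasks below need root access
  tasks:
    - name: Install the httpd package
      yum:
        name: httpd
        state: latest

    - name: Copy index.html to the hosts
      copy:
        src: files/index.html
        dest: /var/www/html/index.html

    - name: Start and enable the httpd service
      service:
        name: httpd
        state: started
        enabled: true
```

You would run this with something like `ansible-playbook site.yml` from the directory holding the inventory and configuration files (the playbook file name here is hypothetical).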
Next we'll talk a little bit about where Ansible fits in. We're going to start off on the left side here. You have your users, so these are the people who are actually creating the playbooks. These playbooks are going to be comprised of a series of different pieces.
We mentioned before that playbooks are written in YAML, and the tasks are going to be executed sequentially. There's a lot of room for various plugins and different logic. You can run them serially, so it'll execute the entire playbook in full on one host before moving to the next, or you can run them in parallel, so it's going to do each task on each host before moving to the next task.
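The serial-versus-parallel behavior described above maps to the play-level `serial` keyword. A hedged sketch, with the group name and task chosen purely for illustration:

```yaml
---
- name: Rolling restart, one host at a time
  hosts: webservers   # illustrative group name
  serial: 1           # finish the whole play on one host before the next;
                      # without serial, each task runs across all hosts
                      # before the next task begins
  tasks:
    - name: Restart the httpd service
      service:
        name: httpd
        state: restarted
```

This is how you get rolling updates: only one batch of hosts is ever mid-change at a time.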
We talked a little bit about the modules, but modules are essentially Python scripts. Some are going to be written in PowerShell, the ones that are targeted for Windows devices, but largely they're Python scripts. So we ship today something like 12,000 different modules as part of the Ansible that comes with RHEL; upstream it's closer to something like 30 or 40,000, and those are going to map to all kinds of different capabilities. So some of them will be tied to monitoring services, some will be tied to various operating systems, et cetera. And what these modules do is essentially replace a command or a series of commands in your workflow.
We talked a little bit about plugins as well, but plugins allow you to really adapt to various operating systems or various platforms. They allow you to be a little bit more flexible, and you can really adapt your, let's say, RHEL-specific playbook with plugins to be more universal so it can work on other flavors as well.
Inventory files are just a series of hosts, and what you're seeing here is an example of a static inventory file. So this would just be one file. It's very, very basic to write, and you can organize various hosts into groups. So for instance we have two web servers here, a database server, a couple of switches, a firewall, and then our load balancer.
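A static inventory along the lines of what's on the slide might look like the following; every host name here is invented for illustration:

```ini
[webservers]
web1.example.com
web2.example.com

[database]
db1.example.com

[switches]
switch1.example.com
switch2.example.com

[firewalls]
fw1.example.com

[loadbalancers]
lb1.example.com
```

A playbook that targets `hosts: webservers` runs against only the first group; targeting `all` would hit every host in the file.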
And the nice thing here is if you think back to what that playbook looked like we were just targeting web hosts in this scenario. So the playbook would run against that grouping. You can also run against a single server, you can run commands ad hoc against a specific IP address that doesn't exist inside the inventory as well. And then if you have a lot of workloads in the Cloud, those IPs are changing constantly, we have the ability to integrate things like AWS and Azure and OpenStack into Ansible so that you can actually get dynamic inventories as well. So those will update automatically anytime those IPs change too.
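As a sketch of what a dynamic inventory configuration can look like, here is a hypothetical source file for the `aws_ec2` inventory plugin; the exact option names depend on your Ansible and collection versions, so treat this as illustrative rather than definitive:

```yaml
# demo.aws_ec2.yml -- hypothetical dynamic inventory source
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # Build groups from each instance's "env" tag,
  # e.g. tag env=prod lands the host in group tag_env_prod
  - key: tags.env
    prefix: tag_env
```

Each time Ansible runs, the plugin queries the cloud API, so the inventory reflects the current set of instances and IPs without any manual editing.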
And then a lot of this can be hosted on, and managed through, these cloud vendors. So things like OpenStack and Satellite from the Red Hat side; VMware, AWS, GCE, Azure. All the major vendors, and some of the minor vendors out there as well. You can host these workloads there and then target those workloads as well.
And then there's integration for things like ServiceNow and Cobbler. A lot of the tools that you're working with right now can be integrated into Ansible to really automate what that service ticketing workflow looks like for your team.
And then ultimately, what can we automate? We talked about this a few times already but, various operating systems, various network vendors. Really all of the things that you're going to be interacting with or responsible for from an administration standpoint, from a development standpoint. We can manage all of those things with Ansible.
So we've talked a lot about Ansible Engine; let's talk a little bit about Ansible Tower next. I mentioned this at the onset, but Ansible Tower provides a WebUI and a RESTful API that really allow you to operationalize all of the playbooks that you've created.
So one of the big benefits of Tower is the role-based access control feature. What this provides is that your operations team can manage not just who has access to specific systems, but also who has access to specific playbooks.
You can even say, this user has access to our development environment, but not to our test environment and not to our production environment. You can say, these users have access to these playbooks and can only run them in these environments. So it just allows for specific operational accountability. You're making sure that only specific users are working with what they should be working with and nothing outside of that. And that's something that's really easy to manage. It's something that's really easy to change on the fly as well.
The really nice thing is that you can also manage who can run which playbooks as which user. So in the playbook example that we used earlier, on a RHEL system, you need root access to use yum and you need root access to enable services. And so you can say, these playbooks are allowed to run as the root user, but these playbooks are not. And so it's something that's really, really fantastic for management and operations teams.
Push-button deployment and central logging are other big key features within Tower, but I think my favorite feature is the job scheduling. You can create these giant workflows that are sort of forks, after forks, after forks. And I think that we have a demonstration of this a little bit later on.
But essentially, I used the example before where you have an application, you are deploying to a test environment and you're running tests. And based on the feedback that you get from that playbook, you can fork it towards a different playbook. So if the test is successful, you can go one direction to actually push that into your production environment; maybe that playbook says, take these 10 servers off the load balancer, deploy the application, run one more test, and if that's successful, add them back into the load balancer and move on to the next 10.
If that initial test had failed, then maybe it pushes it to a different environment, maybe a preprod environment, and tries to run again. Having the ability to create these logical workflows for your deployments is really, really key, especially if you are iterating quickly, if you are pushing updates to these applications continuously in that kind of DevOps mindset. Ultimately it will save you a whole lot of time.
And here, this just kind of breaks that stuff down a little bit more. The role-based access control is a huge, huge, huge feature. The workflows again as well. Having those enterprise integrations, the tools that you're typically already using, whether it's notifications with PagerDuty or Slack, whether it's using authentication with Active Directory, or Kerberos, all these tools that you're already using, having the integration with that into Tower is key as well.
And then ultimately everything is centrally logged. So if you have these logging systems off site, using things like log4j, everything that's logged within Tower you can then push into those log services as well. But ultimately your admin users, your operations teams can look and see who's running which jobs, when they were run, and whether they were successful.
And here you can see the full scope of where Ansible Tower fits with Ansible Engine. You have your administrators who are going to be creating the playbooks. They're feeding those into Tower, and then you have your users, maybe technical, maybe some more non-technical, who will be able to, quote unquote, check out these playbooks. Essentially, if they have access to a certain system, they can request to run that playbook on a specific system. And so it really provides a [inaudible 00:22:43] interface for running jobs, for scheduling jobs, for checking to see whether jobs ran successfully or unsuccessfully. And again, what you're seeing there at the bottom, those are the different things that you can automate, all the different use cases that we've talked about throughout.
So here you can see a few of the features within Tower. This is the dashboard when you log in. So at the top you can see all of the hosts that you have access to. You can see if any hosts are down as well as the failed hosts. You'll see all the different inventories that you have, any sync failures. That's where the dynamic inventories come into play.
In the middle there you'll see the job statuses, so you'll get a nice representation of all the jobs that have run, when they've run and if they were successful or failed. And then various templates and job runs there at the bottom, you'll see how many times they were run, which playbooks were run, which job templates were run and if they were successful or failed.
And here you can see a specific job, and on the right you're seeing essentially real-time playback of a playbook. So here, somebody has gone into a project, which is just a workflow essentially. They've run a playbook and they're seeing real-time feedback as though they were actually at the console itself seeing it run. That's a really fantastic feature, I think, especially for those that are more technical; they can still run these playbooks from a centralized UI, as opposed to having to go into a server and run [inaudible 00:24:22] targeting multiple hosts.
This is what the activity stream looks like, so again, this is where we talk about the centralized logging. We'll see anytime that templates are created, anytime that jobs are updated. You'll get a central log as to all of that activity there.
And then again, we've talked about inventory files a few times here. If you have a small, non-cloud-based environment, then using static inventory files is more than okay. Within our curriculum, we cover both static and dynamic inventories, because sometimes, especially let's say in a test environment, all you really need is a static inventory. But for your larger deployments, when you have all of your dev and prod living across multiple public or private clouds, you're going to want to have dynamic inventories. And so this is the general source of truth as far as pulling in all of these inventory files.
And here you'll see job scheduling. My role before I moved into the training organization was within the support delivery organization, and my job was primarily on the user space side, so a lot of my day job was helping customers understand what these 40,000-line bash scripts were really doing and then trying to break them down so that they could be a little more digestible.
And so what a lot of customers were doing was just invoking these bash scripts inside of cron jobs, and as soon as the person that dropped that cron job in left, nobody really knew that it was there. So this allows for a lot of transparency in that regard: you create these jobs and schedule them in, and then everyone will see when they go into that dashboard that the job is scheduled to run, or has just run.
And then here is the external logging integration. All of this can be pushed to external sources. You can host these if you have a centralized logging server; you can push these all into /var/log, or, I don't know what the Windows equivalent is, unfortunately. But all of this can be managed just within the WebUI, and there's the ability to essentially create a highly available Tower as well, so you can have that failover in there, so there's never a concern of losing access to these logs.
The integrated notifications are a big feature as well; especially on the networking and on the service desk side they've been really, really exciting. Here's just an example of Slack integration: as soon as a job is completed, whether it's successful or failed, it pushes a notification to a specific group of operators.
And so they would know that a job went through, they would get some kind of notification of whether it was successful or failed, and they'd be able to follow up if needed. You can do the same thing, as I've mentioned a few times, with ServiceNow. The ServiceNow integration is really, really fantastic and is something that we've adopted for our internal IT, where traditionally we would open up a ServiceNow ticket, we'd have to wait for somebody to prioritize it one through four, and then depending on what the issue was, we might be waiting 24 or 48 hours for a response.
A lot of the time it would be something like, hey, my login's not working for this. And so you would choose a specific category and you'd describe the problem. Now we have this all automated, so there's a grouping of specific case types where, when you choose them, Ansible runs a series of playbooks to try to resolve the issue, and it's able to test it itself. And then you'll get an automated response saying, hey, we've remedied the issue, or, this will actually require manual intervention. And so for a lot of what we would see as low-hanging-fruit tickets, it's saving us a lot of time, and in turn our customers are using it in a similar way.
So I think at this point I'm going to jump into my RHEL system and we'll actually do a demo. If you're not familiar, this is our learning subscription. We provide, as part of our online learning subscription, web-based VMs. So what we're going to do here is create a few different things.
So, best practice as far as playbooks are concerned: typically you want to have a configuration file and an inventory file mapped to every playbook or series of playbooks, depending on what it is that you're trying to achieve. There is a master inventory and a master configuration file in /etc/ansible, and if Ansible doesn't find one in your present working directory, that's what it will use by default. But what we're going to do is, we have this playbook-basic directory, and we'll just see what's in there already.
So we have our Ansible configuration file and inventory file locally, so that's what our playbook is going to use by default. And then we have this files directory as well, and let's just see what's in files. Looks like it's just an index file. So what we're going to do is actually create that playbook, to an extent, that I showed before.
But let's say you're brand new to Ansible and you're not really quite sure which modules you could use. Ansible has some pretty fantastic documentation that's built in. So if I just do an ansible-doc -l, it's going to list all of the modules that ship. So this is going to be something like ... well, let's just see.
(silence)
So 2,800 that we have just on this stock system right now. And there are so many more that we can download. We'll talk a little bit later on about Ansible Galaxy, which is essentially just an upstream repository of all these different roles and playbooks.
But what we're going to do first is find ... I know that I'm going to be installing something on a RHEL system, so I know that I would typically use yum. And it looks like we have a couple of different yum modules available to us. I know that I'm not going to be adding or removing repositories, so I'm going to use this first one.
So, I'm just going to jump into the documentation here, and this is very similar to a man page. Here I can see if any of the switches or options are mandatory; there would be an equal sign next to them instead of a dash. The nice thing here is that for yum, none of these are required. Not even the name of a package, because what I could be doing is just getting a list of all available packages. So I don't even have to provide the name of a package.
Here, these are all the different options that I can use within the yum module. I can specify a different install root, I can specify that I want to skip any packages with broken dependencies. I can even just say whether I want a package to be present or not. So we'll jump back here.
Another really good resource, and I'm going to come over here real quick, is the Ansible website. If you go to the Ansible docs page, there's a module index, which I think I see right here. And they're all broken up, and hopefully this is zoomed in enough to see, but they're all broken up into general categories.
So let's look just at the Windows modules, for instance. Here I can see a list of all the Windows modules. It's a little easier to read, I think, sometimes than doing it from the command line. Let's go back to all modules and I'll just go down to yum. And so this is the exact same information that I just got from the command line, but it's a little easier to read, I think.
You can see if there are choices that are required within there. And then there are some examples here at the bottom as well. So here I can see, for example, that I could install a specific version of Apache, as opposed to just stating the name of the package and hoping it gets me the latest version.
So let's jump back in here. The other thing that we want to take a look at really quickly is the inventory file. We've got a couple of servers that we're going to target here for our web servers, and then there's the Ansible config. Pretty basic. It has our inventory stated, what we want to use by default. We have a user that we're going to run the playbook as, and then it is going to sudo into root access, because for this specific playbook we are going to need root access.
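The per-project configuration described here, an inventory plus a user that escalates to root, can be sketched in an `ansible.cfg` like the following; the values are illustrative, not the ones from the demo environment:

```ini
; ansible.cfg -- project-local configuration; values are illustrative
[defaults]
inventory = ./inventory
remote_user = devops

[privilege_escalation]
become = true
become_method = sudo
become_user = root
```

Because this file sits in the playbook's directory, it takes precedence over the master configuration in /etc/ansible for anything run from here.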
And then ultimately what we're doing is going to be copying this file locally to all those servers, and this is what it's going to say: this is a test page. So let's actually create the playbook. One thing to keep in mind is that YAML is very particular. You want to make sure that you're abiding by its standards, otherwise you're going to run into some issues.
So we're going to name this: install Apache, move the index.html file, and make sure the service is persistent. If you're not a RHEL user: starting a service does not necessarily mean that if you restart that server, the service is just going to come back up. So we want to make sure that it persists through reboots.
We're going to target those web hosts. And honestly, I could just specify those two host names here as well, but because we have them grouped in an inventory, this makes it a little more efficient; and if later on I want to add a couple of web servers to my environment, all I have to do is update that inventory file and run this playbook again.
And then here it is, it's just a series of tasks. And again, we want to make sure that we are documenting this. So the first one is going to be install the httpd package. As I said before, we're going to use the yum module for that. And the only things that we really need to specify are the name of the package, httpd, and then a state. We had a few options there, if you remember; what we could do is present, to make sure that the package is present.
And this is something ... maybe I'll pause for a second. If you're not familiar with the concept of idempotency, idempotency is essentially getting to the same state no matter how many times you run something. We want to make sure that Ansible is only affecting systems where it has to. So what it really does is check and make sure that the configuration is the way that I want it to be. And if it already is that way, it's not going to run these commands, it's just going to skip the task and move on to the next one.
So in this case I could say I want the package state to be present, it's going to look and see if the package is already there, and if it is, it's just going to move on. What I'm going to specify is latest. And so it's somewhat similar, it's going to check and see what version of the package is there. If there's no package there, it's going to install it. If there is a package there, it will compare it to what the newest available is, and if it's not the latest version, it'll install it. If it is the latest version, it will just move on. So hopefully that makes sense.
That's the first task that we have. The second task is to copy the index.html file to the hosts. We have a copy module for that, and all that it needs specified is the source, which is files/index.html, and then the destination, which on RHEL will be the default /var/www/html/index.html.
And ultimately we need to make sure that the service has started. So we have a service module for that. We need to specify the name, which is httpd, state, started. And then enabled is yes. I mentioned this before, but we need to make sure we're enabling the service so that if that web server gets rebooted, the service will automatically start back up.
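Putting the three tasks together, a hedged sketch of the tasks section being walked through (file names and paths follow the RHEL defaults described in the talk; this is a reconstruction, not the presenter's exact file):

```yaml
  tasks:
    - name: Install httpd package
      yum:
        name: httpd
        state: latest   # install if absent, upgrade if an older version is present

    - name: Copy index.html to hosts
      copy:
        src: files/index.html
        dest: /var/www/html/index.html

    - name: Ensure httpd is started and enabled
      service:
        name: httpd
        state: started
        enabled: yes    # persist the service through reboots
```

A syntax check before running would be `ansible-playbook --syntax-check playbook.yml`, followed by `ansible-playbook playbook.yml` to execute it.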
So this is ... we're at line 20, potentially affecting thousands of systems, although in this case just two. So before we run the playbook, what we want to do is a syntax check. And if all is good formatting-wise, we should just get an output that tells me the name of the playbook. All right.
So now we're just running the playbook and you'll see real-time feedback, the first thing that it's going to do is tell me the name of the overall playbook, we'll get some tasks. The first task is going to gather facts and the nice thing about gathering facts is we can actually get a lot of information from these hosts and use those as variables throughout the playbook.
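As an illustration of using gathered facts as variables, a task like the following (purely illustrative, not part of the demo playbook) could reference facts collected at the start of the play:

```yaml
    # Illustrative only: facts gathered at play start are exposed as variables
    - name: Show the target's OS and default IPv4 address
      debug:
        msg: "{{ ansible_distribution }} {{ ansible_distribution_version }} at {{ ansible_default_ipv4.address }}"
```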
We'll move to the second task, which is installing the package. It's doing these on both server C and server D before moving to the next task, so we should see a "changed" as opposed to "ok". Then it's going to copy that index file over to the hosts. We should again see "changed" because that file isn't present currently. And then ultimately it's going to start and enable the service.
And so we get a nice little play recap at the end, and we can actually make this much, much more verbose. This is a pretty rudimentary playbook here, but we can get a lot more feedback than we are here. Typically, though, this is all you really need. You'll see if any hosts were unreachable, and you'll see if any of those tasks failed. By default, if any task fails, it actually just kicks you out of the playbook run. So here I should be able to curl the server and we get that index file.
So now if I run this playbook again, we'll see that nothing is going to change, we'll just get a few okays, but if I make an update to my inventory file, if I make an update to my index file, if I replace that test page with what I actually want the content to be, only those tasks will run. And there we go, we get our nice play recap with just four okays there.
All right. So we talked a lot about Ansible. Let's talk a little bit about the curriculum that we offer as well. So we really have a few different entry points depending on what your job role is. One thing we have here at the top is essentially a technical overview, it's very similar to what we went over in the first part of this webinar. These are free and they're available on the ExitCertified website if you want to take a look at those. They're about two hours. They're an overview of what you can expect to learn from the course and they're delivered by one of our instructors.
But then depending on if you're network admin, a Linux admin or soon a Windows admin, we have a course that's going to cover everything on the engine side. So understanding how to write playbooks, understanding how to work with things like roles which are essentially Ansible projects, managing static and dynamic inventories, and then we relate it all to hands-on experience with whether it's system administration tasks or network administration tasks, we give you real experience working in that lab environment that I was just demonstrating.
Our Windows administration class is actually going to be coming out in just a couple months. It's something that we've gotten a lot of feedback from our customers who are not really RHEL [inaudible 00:41:16] but they're looking to adopt Ansible in their overall environment. And so we have a few different entry points that you can consider if you want to get up to speed on the Ansible side. And then all those really filter into this advanced automation course. That's going to go into things like managing Ansible projects with Git or implementing a CI/CD pipeline and it's also going to cover Ansible Tower, really all of the feature set.
And so that's really if you've been using Ansible for quite some time, the Ansible Best Practice course is great secondary level for you. And this breaks that down. I mentioned already some of the topics covered are managing your playbooks or inventories with Git, controlling applications through the REST API, there's a URI module that allows you to query APIs from within your playbooks as well, and then ultimately covers Ansible Tower in depth.
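The uri module mentioned here can query a REST API from within a playbook. A hedged sketch, where the URL is a placeholder and the endpoint is hypothetical:

```yaml
    # Hedged example of the uri module querying a REST API from a playbook;
    # the URL and endpoint are placeholders, not from the webinar
    - name: Query an application's health endpoint
      uri:
        url: https://app.example.com/api/health
        method: GET
        return_content: yes
      register: health_result
```

The registered `health_result` variable could then be inspected in later tasks, for example to gate a deployment step on a healthy response.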
So at this point, we'll pause. I think that we have some questions in the Q&A.
The first question is: Is there a module needed for dynamic inventory to work? No, that's actually handled outside of modules. You can either integrate it inside of Tower or query those sources from your inventory configuration, but there's no module you need to invoke for that.
Do you know if Ansible's officially supported for AIX? Yes, there are a series of AIX modules out there, and beyond that there are a few AIX projects on GitHub as well. In traditional Red Hat fashion, we work a lot with these upstream projects and then harden, verify, and certify them before moving them into our products. I mentioned at the onset that there are many modules that we ship, and then there are ten times as many available to you upstream, and those are all available for you to use.
What is the difference between AWX and Ansible Tower? That's a really good question. If you're familiar with what our model looks like as far as the relationship between RHEL and Fedora, we essentially work with the upstream community for AWX, which is the upstream Ansible Tower project, and we harden it. So AWX may have some features that Tower does not, but it's also going to be bleeding edge. If you want something bleeding edge that may not necessarily be 100% tested, that's the AWX project; Ansible Tower is something that has been essentially certified to be enterprise ready. For the most part the feature set is going to be very much the same, but obviously new features will be introduced into AWX before they're adopted in Tower. Our development cycle is very rapid, though, so typically you're not going to see a big delay before features in AWX make it into Tower.
Can you expand on the integrations with test automation tools? Definitely. For things like Jenkins or the various CI/CD pipeline tools out there, Atlassian tools and the like, there are going to be integrations for a lot of them. I would take a look at the Ansible documentation page, because there's a really good breakdown of how to integrate those tools into Tower, and even into Engine or the upstream core project. A lot of it is just taking your existing tool set and finding how it integrates, whether into Tower or into Engine. For the most part that's just a plugin or an API call on the Tower side. It's typically pretty easy.
What is the best fit when choosing Ansible for either continuous integration or continuous deployment? That's a tough question to answer. Ultimately, and we talk about this a lot, moving towards DevOps is not just a technological shift; it's also a process and culture shift. You need buy-in from everyone within your organization, from management all the way down to your engineers. A lot of it comes down to making sure that your operations team is on the same page as your engineering and development teams, and creating workflows that make sense so that you can constantly iterate, improve, and deploy, and so that when you run into issues, you have workflows in place to address them quickly.
All right. Well, it seems like that may be the last of the questions. I'll give it just another few seconds and then I think we'll wrap up. One thing I will mention is that myself and Miles Brown from ExitCertified will be having a second panel at two o'clock around cloud technology, so we'll be talking a lot about containers and the shift towards container platforms like OpenShift, so definitely join us. Oh, it looks like we have one more question.
Can you share a document or URL for dynamic inventory configuration via playbook? Yeah, so let me jump back into ... So the Ansible documentation page I think is fantastic and you can find that either at docs.ansible.com or docs.redhat.com
And you'll find a lot of information there, whether it's Ansible architecture, various plugins, or best practices, and here we have something on developing dynamic inventories. So I would say just go to the Ansible docs page; the search on the left is very good. And I can copy and paste that into the chat. How about that? So let me send that there. All right. So, last question we'll go to ...
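One common way to configure a dynamic inventory is through an inventory plugin file rather than a playbook. A hedged sketch using the `aws_ec2` plugin as an example (the plugin name is real; the region and tag-based grouping are illustrative assumptions):

```yaml
# aws_ec2.yml — sketch of a dynamic inventory plugin configuration;
# region and keyed_groups values are placeholders for illustration
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags.role      # group hosts by their "role" tag, e.g. role_webserver
    prefix: role
```

You could then inspect the generated groups with `ansible-inventory -i aws_ec2.yml --graph`.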
Do we have any video rerun for a user who joined in the middle of the session? So this will be on demand for presumably the next several months, a year or something like that. So yes, you can definitely rewatch this as well.
[inaudible 00:48:15] an e-mail.
And you'll get an e-mail with the recording at the end of this. I think Michelle will confirm that in just a second, but ...
Do we have access to the test environment to practice? So this test environment is part of one of our courses. I'm specifically inside our Red Hat Learning Subscription right now. Obviously, if you have a test environment of your own, you have access to Ansible right now with a RHEL subscription, so that's something you can start to play around with on your own. If you want a little more of a focused environment, then Red Hat Training definitely has options for you. I would say reach out to your ExitCertified rep, and they can definitely talk about some of the different course options that we have. We offer our Ansible curriculum across multiple modalities, so we can bring somebody on site to you, and we have an online self-paced avenue as well where we host the labs for you. So, a lot of different options there.
And on that note, I will say thank you so much everybody for joining us today. We really appreciate it and if you do have the time at two o'clock, we'll be doing another one of these all around cloud technologies.
Thank you so much John. We're at the end of our presentation and thank you to each and every one of you for tuning in. As a token of our appreciation we are offering 15% off Red Hat training from ExitCertified, using the promo code Red Hat 0918. I'm going to post that in the chat right now for everyone. And like John said, we're hosting another Red Hat webinar in just one hour from now on the topic of leveraging Red Hat OpenShift for a multi-cloud strategy. If you're interested in attending, please register using that link that's also posted in the chat.
John mentioned it, and it's true: we are sending each and every one of you a recording of today's presentation, and you should have that by the end of the week. Thank you all so much. I hope you enjoy the rest of your day.