Alright, welcome everyone. Today's webinar is titled Options for Running Microservices in AWS. This webinar is being presented by Myles Brown, Senior Cloud and DevOps Advisor at Tech Data ExitCertified.
A quick summary of what the webinar will entail: you're going to learn how microservices architectures differ from traditional monolithic architectures.
But I'll let Myles jump into all of those details. What I want to tell you, though, is that you do not have a microphone; there's no outgoing audio for you. So if you have any questions, don't hesitate to enter them in the Q&A or chat window at the bottom of your screen.
If you enjoyed the presentation today and you're interested in learning more about training anywhere with our interactive virtual platform, iMVP, please visit our website or feel free to contact us. Today's webinar is being recorded, and we're going to send each and every one of you a copy of that recording. All right, let's get started. Myles, you can take it away.
Myles Brown:
Alright, thanks, Michelle. So first off, for anybody who might not be familiar, ExitCertified is our go-to-market brand for technical training. This is a couple of us receiving the award for AWS Americas Training Partner of the Year for 2019, and we've actually won awards from a bunch of different vendors: IBM, Red Hat, and some others. Technical training is mainly what we do at ExitCertified, but we're part of a much bigger company, Tech Data, which is a Fortune 100 company.
My name is Myles Brown, and I'm the Senior Cloud and DevOps Advisor. I was primarily an AWS instructor until about a year ago, when I moved more into this advisory role, but I still teach the AWS curriculum and some Google Cloud material as well. I've got my Professional Cloud Architect certification for Google, and I have the two professional and a couple of specialty certifications for AWS. I've been working in the cloud for about five and a half years now as an instructor, and a little while before that doing some real work.
What we're going to look at today, as Michelle mentioned, is some of the problems that we ran into in traditional software development building these monolithic applications, and how microservices solve some of those problems. Then we're really going to focus on the technologies in AWS that make it easy to implement those kinds of microservices architectures. We're going to focus on two: one is the idea of containers and containerization (many of you have heard of Docker and Kubernetes and those ideas), and the other is more of a pure serverless idea, using things like AWS Lambda. We'll take a look at all of those. So let's just start with a way-back history lesson.
Up until the 2000s, most software was developed as large monolithic applications. We started to break them up a little bit when the web came along, and so you started to see those n-tier, maybe three-tier applications. A lot of times the systems really mirrored the organization around specialized job roles. So you had some people who were web developers, and they would build that part of it; then there'd be the back-end or business-logic people, who would build that part; and then there'd be the DBAs, and maybe data developers as well, working on the data side. And this comes back to something
that's now called Conway's Law. Melvin Conway was around in the late '60s and the '70s and worked on a lot of different programming languages. He said that organizations which design systems are constrained to produce designs which are copies of the communication structures of those organizations.
And so, very often, you saw things like this: we had our teams split up, and then our applications split up the same way. Anything we're doing these days around microservices is actively working against that idea, trying to say no, let's bust that apart.
Now, the other thing that we ran into in large software development houses was the siloed development and operations sides of things. In a smaller shop, you'd be a jack of all trades: gather the requirements, code it, do everything yourself. But in larger development houses, you would get distinct teams: one for programming the code and maybe doing unit testing; then a separate group just doing quality assurance and testing; then a completely separate group whose job it is to take that unit of deployment, put it into production, and manage and monitor it there; and then a whole separate team, probably just for dealing with the database.
Now, from the '90s on, we saw a lot of interesting things happen on the development side: a lot of agile techniques, Extreme Programming, and then techniques like Scrum and Kanban got very popular. That meant the development team was doing this iterative and incremental building of the code. But pushing to production was still only happening maybe every six months; you would hand over this unit of code and say, okay, we've got a new version. And so those deployments to production were very large, rare, and often disastrous. What happens then is the teams point fingers at each other: the developers say your deployment environment isn't the way you said it would be, and the deployment people say the developers screwed this all up. There are more problems than just those, but those are some of the culture problems that DevOps tries to help with.
We had some other problems with those big monoliths. One was that you couldn't just change one aspect of the system; you had to wait until we did a big push to production. And you probably couldn't easily scale independent parts of that monolith. So what you ended up with was, say, Amazon.com back in the early 2000s: that huge website was just one big monolithic C++ application. If somebody asks what a monolith is, it's just one big unit of deployment instead of little modular pieces; it's one big chunk. It's one big program that everybody has to work on, and therefore you kind of standardize and say, okay, we're going to use one technology for this thing. Even though, if you broke it up into pieces, there might be some benefits: this language and this framework would be better for this piece, that language and that framework would be better for that one. We couldn't take advantage of those things.
Until the concept of microservices really came along. It's a term that's kind of hard to pin down, and there's a pretty good description from Martin Fowler. He says there's no precise definition, but there are certainly some common characteristics: each microservice is one small business capability, there's some sort of automated deployment, there's intelligence in the endpoints, and there's decentralized control of languages and data. Those are some of the big ideas of microservices.
Now, if we do a basic comparison, monolithic had the advantage of very simple deployment: you just deployed one big thing. But the problem was that so many changes went into that one big thing that when you put it into production, you'd spend two weeks fighting fires, with everyone running around figuring out what went wrong. When you have very small microservices, you tend to deploy one little thing with one little change; if something goes wrong, you know exactly what the problem was, and you can roll it back right away and fix it. So we can do these partial deployments, which are a little more complex but turn out to be a little friendlier
because instead of the big binary failure mode, where either the whole thing works or it doesn't, we can have some microservices broken while other ones are still working. And so maybe we build in this concept of graceful degradation, where we say: well, that element of our application isn't available today, but it's not the end of the world. Look at Netflix: if the recommendation engine isn't working, they don't just say Netflix is broken today. They say, here are the top 10 most popular things that you might be interested in. It's just missing the personalization part of the recommendation engine.
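That fallback pattern can be sketched in a few lines of Python. Everything here is hypothetical: the service call is simulated as always failing, and the titles are placeholders.

```python
# Graceful degradation: fall back to a generic "top 10" list when the
# personalization microservice is down (all names here are hypothetical).

TOP_10_FALLBACK = ["Popular Title 1", "Popular Title 2", "Popular Title 3"]

class ServiceUnavailable(Exception):
    """Raised when a downstream microservice can't be reached."""

def personalized_recommendations(user_id):
    # Stand-in for a network call to the recommendation microservice;
    # simulated here as always failing.
    raise ServiceUnavailable("recommendation engine is down")

def recommendations(user_id):
    try:
        return personalized_recommendations(user_id)
    except ServiceUnavailable:
        # The rest of the site keeps working; only personalization is lost.
        return TOP_10_FALLBACK

print(recommendations("user-42"))  # prints the fallback list
```

The point is that a broken dependency degrades one feature instead of failing the whole application.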
The way we make changes to code is very different too. In a monolith, you're refactoring the whole thing as one piece. With microservices, each microservice has a very specific module boundary where you say: here are the inputs and outputs to this service; as long as I don't change those, I can change whatever's going on inside, and nobody needs to know. So I can make little changes to how things work, and they don't have those big ripple effects.
I mentioned earlier that with a monolith, usually the whole development team across the board is using the same technology stack. You say, okay, we're Java developers using the Spring framework, and so on. With microservices, you have a lot more technology diversity. You might say: hey, your job is to make this service, and I want a RESTful web service API to it. How is it implemented? That's up to your team. I don't know, I don't care; use whatever's best for you.
When it comes to scalability, with those big monolithic apps, when you needed to scale bigger because you had more requests coming in, you'd typically end up putting the app on larger and larger machines with more horsepower: more CPU, more RAM, more disk. Microservices are typically set up in a way that allows horizontal scaling, so we don't have to find bigger and more expensive machines; we just run more small machines in parallel. And that works really nicely with the economics of the cloud.
When it comes to the data side of things, these monolithic applications generally have one huge shared data store, and for many, many years that meant a relational database. We stored all kinds of data in a relational database, whether it was transactional data, big data, or your analytics-type workloads where you're doing full table scans, which is maybe not the best fit for a relational database. I even stored binary large objects in an Oracle database. These days, the idea is to use the right tool for the right job. Instead of the relational database being the Swiss Army knife where you store everything, we've got all kinds of NoSQL databases: MongoDB and Cassandra, and DynamoDB in the Amazon world. We've got so many different options that for a given microservice you can choose the right one for the job.
And then finally, the big monolith was sort of owned by everybody; everybody was working on it. With microservices, typically there's a small team in charge of one or several microservices, and so they have ownership from end to end. That's because microservices often go hand in hand with a culture shift, which we call DevOps. And that also goes hand in hand with the cloud. So these are three trends that we typically see all happening together as people move to modernizing their applications. A lot of companies are already in the cloud, even if they're just testing the waters, doing non-mission-critical things like dev/test environments. We've had students coming through over the last six years taking AWS classes, and in the first wave a lot of them were just trying to take their existing apps and move them to the cloud. Now they're saying: I really want to take advantage of all the benefits of the cloud and build what we sometimes call cloud-native applications. This concept of cloud-native is one where the applications are built for the cloud, maybe for any cloud: not just AWS, but maybe Azure or Google Cloud too. So we want to build things in a way that lets me move them around, but also build and test them on my laptop. That's one big move. The DevOps move, of course, is a culture shift in how we build code within our companies. And it's not just a culture shift; there are also practices and tools around it.
One of the ways to achieve that better communication between the developers and the operations teams is to make small cross-functional teams. Amazon did that in the mid-2000s, when they coined the term "two-pizza team": maybe eight to ten people, enough that you could feed them with two pizzas. It would be made up of some developers, maybe a couple of testers, and maybe one or two operations people
And that group would be in charge of a set of microservices. That's exactly what Amazon went through. They took their big flagship product, Amazon.com, and they blew it up into thousands of little services; at that time, the term microservices hadn't been coined yet, so they called them primitives. Then they completely changed how they developed applications, moving to these small cross-functional teams, where one team would be in charge of a set of microservices from end to end. They would gather the requirements, build the thing, build a roadmap of the new features to add over time, and they would code it, test it, put it into production, and maintain it there. Now, they did have to interface with other teams that wanted to use each service, and decide what the interface to the service looks like, and hopefully not change that very often.
Now, all of this is about trends in how software gets made. At the same time, we've seen some trends in how the technology has changed. Back in the old days, when I started in the '90s, we mostly had bare-metal applications running on some operating system, and if you wanted five web servers, you went and bought five physical servers. All that changed in the 2000s, when VMware and some other companies really started to push this idea of virtualization, where I could get just a few large servers and stack all kinds of virtual machines on top of them, and really squeeze the most out of the hardware I was buying; it was running at high utilization all the time. So that's the big change in data centers that happened from the '90s into the 2000s, and a term came out of it: we talk about the old style as pets versus cattle. If you think about it, when you've got a physical machine and there's some file on it that's corrupt, you're going to take some time and nurse it back to health, and you give it a name. Those servers would have names, and an administrator would say, hey, what's wrong with that one, and they'd go fix it. In the virtual world, you don't think of these virtual machines as important things worth naming, and they might not last long. If there's something corrupt, what do you do? You kill it, get rid of it, get a new one. They're very lightweight; we can launch them quickly and get rid of them quickly.
And that is really the basis for cloud computing. All the public clouds, like AWS, Microsoft Azure, and GCP, are saying: not every company has to run its own data center, deal with the physical machines, and then launch the virtual machines; we'll just rent you virtual machines for pennies an hour.
And so that's the world in which the cloud grew. Now, the big shift lately has been the idea of containerization. When Docker came out in 2013, that's what changed the game; it made it very easy to build these little containerized applications, and they're even much more lightweight than a virtual machine. You can see in the picture here that each virtual machine has an operating system and then the app running on it. The difference with containers is that those apps run in a container runtime like Docker, and they all share the kernel of the host operating system.
So they're much more lightweight; you can launch them and get rid of them really quickly. In the cattle-versus-pets analogy, maybe you think of them as chickens: they're a little like cattle in that you're not necessarily naming them all, but their lifespan is much shorter, they come and go quickly, and they live in their little coop. I guess the last one, where the analogy kind of breaks down a little bit, is the idea of serverless apps. We're going to talk a little bit about this coming up: things like AWS Lambda, where you say, I have some code that I want to run, but I'm just a developer; I don't want to have to go and launch a Linux machine and pay for it 24/7 when I only want this code to run every once in a while. And then I've got to manage that virtual machine, with an operating system and a file system that I've got to patch; over time there will be security problems, and you have to go and apply patches. There's just a lot of basic management of the infrastructure that, really, I don't want to do. Instead, I just want to write this code and say: when this event occurs, call the code, and I'll pay while it's running. So, in effect, the servers are there, but we don't see them; we don't think about them. It's a little bit like microbes: there are microbes all around, and they're essential for life, but you don't see them and you don't think about them every day. That's sort of the analogy, I guess.
Alright, so a little bit more about containers. We'll talk about containers and the options there in AWS, then we'll come back to the serverless piece. Containers have been around in Linux for many years now. A container is basically a standard unit of software that packages the code and all its dependencies in a way that lets me build this application on my laptop and then deploy it anywhere: onto some physical machines in my on-prem data center, or into the cloud. They're much more lightweight than a virtual machine because they share the operating system kernel of the host. So they've been around for a long time, but it was really when Docker came along that it became easy to build and share images, and images are the things from which we say: hey, go launch a container based on that image. Docker made it really easy; they came up with this concept of a Dockerfile, and it worked pretty well.
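As a sketch of what that looks like, here's a minimal Dockerfile for a hypothetical Python web service; the file names and port are invented for illustration:

```dockerfile
# Minimal image for a hypothetical Python web service.
FROM python:3.9-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to run it.
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

From this, `docker build -t my-service .` produces an image, and `docker run -p 8080:8080 my-service` launches a container from that image, the same way on a laptop, on-prem, or in the cloud.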
We said the idea is that these containers will run the same wherever they are, which is a good thing. I guess the last thing to say is that while Docker made containers really popular, and it is still by far the most popular container option, it isn't the only game in town. There's CRI-O, rkt, and Podman, and these are all gaining in popularity. So now we have the OCI, the Open Container Initiative, so that if you're building infrastructure that can run containers, it can run containers other than Docker's.
Now, the only problem with building your applications and deploying them using containers is that a lot of times, for complex things, we end up with maybe hundreds, maybe thousands of containers that we need to run. If you think about it, the isolation these containers provide makes them perfect for encapsulating a single microservice. But a complex application will require tens to hundreds of images and maybe hundreds to thousands of running containers, and trying to orchestrate all those containers yourself is very difficult. So most companies that get into containers pretty quickly find themselves saying: we need some sort of container orchestration tool. By far the most popular one is called Kubernetes. It was originally developed at Google and then open-sourced, and it's become kind of the de facto standard. It's not the only game in town either, but it's very popular. Sometimes you'll see it abbreviated K8s, because it's a K, then eight letters, then an S.
So when it comes to running containers in AWS, you have a few options. You can go it alone: just launch a bunch of EC2 instances, install whatever container runtime and whatever container orchestration tool you want, and manage all of it. That gives you great flexibility, but it requires a lot of administration. You have to understand really well how Kubernetes works to be able to say, okay, I need these pieces and those pieces, and I've got to install it all. It's a lot of work.
That's one option. The other option is to ask what AWS has in terms of managed services, because that's a big thing cloud vendors do. They say: don't worry your pretty little head about it; we can take care of part of that infrastructure for you. To that end, there are two managed services for containers from Amazon. The first one they came up with was ECS, and that was a somewhat Amazon-specific idea, because they started on it before Kubernetes was the clear de facto standard. Eventually they realized people didn't love that the option was a bit AWS-specific, and that if customers already had Kubernetes on-prem and wanted to move to the cloud, AWS should be able to help them with that. And that's what EKS is. We'll come back to both of those in a couple of seconds.
Now, no matter whether I'm going it alone or using ECS or EKS, I'm typically going to need some sort of registry to store these container images. For that, Amazon has ECR, the Elastic Container Registry. It just makes things easy: I don't have to go launch an EC2 instance, manage it, and install something like Docker Registry on it. So it's a nice, easy way to do it. The first big option we mentioned was ECS.
What we like about ECS is that it integrates really well with all the other AWS services: IAM, ELB, CloudWatch, CloudTrail, and the list goes on. Now, it is limited to just Docker containers, and it kind of locks you into an Amazon-specific solution. But this is the picture: we can use API Gateway (we'll talk more about that later) in front of a load balancer. When somebody comes in and says, hey, I want to make this call, it goes in and finds the service, which is a load-balanced bunch of EC2 instances, and it'll figure out on which instance this particular container is running and go talk to it. And of course, ECS hooks in with ECR for grabbing those images out of the registry and launching the containers. It's really all about defining tasks, where a task is one or more containers plus any shared resources they need.
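To make the task idea concrete, here's a minimal hypothetical ECS task definition with a single container; the family name, image URI, and sizes are all made up:

```json
{
  "family": "orders-service",
  "containerDefinitions": [
    {
      "name": "orders",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 8080 }],
      "essential": true
    }
  ]
}
```

Note that the image is pulled from ECR; registering a task definition like this is what lets the ECS scheduler place the container on an instance with enough CPU and memory.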
Now, under the covers of ECS, as I mentioned, there's usually a cluster of EC2 instances. It depends: with ECS you have two different launch types. The first launch type is EC2, where you tell it, hey, I want to launch X number of EC2 instances. In fact, if you go through the wizard in the management console with ECS, it can actually provision that cluster of EC2 instances for you. Then it installs this little ECS agent, which is how they talk back and forth to see what containers are running and how much room there is to run others. It'll also install the Docker runtime, the container runtime, all that for you. Now, that's kind of where the managed service stops. What that means is I still have this cluster of, say, three EC2 instances that I have to separately patch and scale and monitor. Figuring out where there's room to run a container with this much RAM and this much CPU, that's ECS; that's what it does. But actually running that cluster of machines is still on me. So the EC2 launch type is one option, and there's another option called Fargate. Fargate is the other launch type, and basically what it does is launch the containers and manage all that underlying infrastructure for you. There's some interesting stuff that comes out of that. It'll launch and manage the containers and the actual infrastructure, so you don't even notice the EC2 instances, and in the end that's pretty good stuff. Now, what you'll find with Fargate is that there are some trade-offs with the pricing; we'll talk more about Fargate after we cover EKS.
So that's ECS. I think Amazon tried to get out there really quickly and build a container orchestration tool, and if they had waited a little longer, they would have realized that Kubernetes had sort of already won the race. So afterwards they had to catch up, and that's when they built EKS. It lets you run Kubernetes in AWS without having to be a real expert in operating Kubernetes. You do still have to learn the basics of Kubernetes: your basic unit of deployment in Kubernetes is called a pod, and it could be just one container, or maybe a couple of containers and some shared resources.
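A minimal pod manifest looks something like this (the names and image are placeholders); the same YAML works on EKS, on another cloud's managed Kubernetes, or on-prem:

```yaml
# Minimal Kubernetes pod: a single container (names and image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: orders-pod
  labels:
    app: orders
spec:
  containers:
    - name: orders
      image: example.com/orders:1.0
      ports:
        - containerPort: 8080
```

You'd deploy it with `kubectl apply -f pod.yaml`; in practice pods are usually created indirectly through a Deployment, but the pod is still the basic unit.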
Similar to ECS, you can deploy on a fleet of EC2 instances that you manage, or use Fargate to take care of the infrastructure. And here's that Fargate trade-off: there's a lot of ease of use, but it comes at the loss of customizability, because they decide how many servers run under the covers. Fargate has a higher cost per hour than ECS or EKS running on plain EC2, but it might actually save you money, because you only pay while your containers are running; you don't pay for idle EC2 instances. It's basically growing and shrinking the cluster under the covers for you. When you're not doing much, there's not much to pay for. But if you just end up running stuff 24/7, it'd be cheaper to run the EC2 instances yourself.
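That break-even argument is just arithmetic. The hourly rates below are invented for illustration (not real AWS prices); the shape of the comparison is what matters: Fargate bills only while tasks run, while a self-managed EC2 cluster bills around the clock.

```python
# Back-of-the-envelope Fargate vs. self-managed EC2 comparison.
# Rates are hypothetical, chosen only to illustrate the trade-off.
FARGATE_RATE = 0.05  # $/hour, billed only while tasks are running
EC2_RATE = 0.03      # $/hour, billed 24/7 whether busy or idle

def monthly_cost(busy_hours, hours_in_month=730):
    """Return (fargate_cost, ec2_cost) for a month with the given busy hours."""
    return FARGATE_RATE * busy_hours, EC2_RATE * hours_in_month

# Spiky workload, busy 100 hours a month: Fargate comes out cheaper.
fargate, ec2 = monthly_cost(100)
print(f"spiky:  fargate=${fargate:.2f}  ec2=${ec2:.2f}")

# Steady 24/7 workload: running the EC2 instances yourself wins.
fargate, ec2 = monthly_cost(730)
print(f"steady: fargate=${fargate:.2f}  ec2=${ec2:.2f}")
```

With any pair of rates shaped like this, there's a crossover point in busy hours per month below which the pay-per-run model wins.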
So there are some really interesting things in there. Now, there are a couple of questions in the chat which might be worth talking about. The first one: how does EKS compare to the Google offerings? Well, Google and Microsoft Azure both have managed Kubernetes services as well. In Google, it's called GKE, Google Kubernetes Engine; in Azure, it's AKS, Azure Kubernetes Service. They're all very similar in the idea that they're going to take care of the management plane. So instead of you launching some servers to handle the management of Kubernetes, they take care of that, and you really just tell them: here's what I want to deploy.
I think Google knows a lot about Kubernetes, so they do a pretty good job of it; it was created there, and everything they do is Kubernetes at this point. But I think they all do an admirable job. And the idea is that this is my path to being really cloud-agnostic, because the way my code gets developed and deployed looks the same whether I'm running it on my laptop, on my on-prem Kubernetes cluster, or in any one of these public clouds. I have to know a little bit about how to manage each of those services, but the actual deployments, these pods, are going to look the same no matter what. And that's really the big selling point.
Now, somebody asked: what are the limitations of using Kubernetes for large implementations? Well, ask Google; would you say that they have a large implementation? Once you embrace it and learn how Kubernetes works, you can do a lot with it. It's a bit of a steep curve; there's a lot to learn around Kubernetes, and it's not an easy switch, but if you do understand it and embrace it, it scales very well. The hard part is saying we're going to go completely over to it: what about all these existing apps that aren't containerized? Are we going to somehow try to containerize them? That's the part that's difficult for a large organization. It's the growing pain of saying, well, we're not going to spend all this time making changes to our huge code base that's deployed and running just fine. So maybe you look at it for just the new stuff. One of the things Google is doing with their Anthos project is trying to build a way to make that transition easier. But I think that's where some of the problems with deploying Kubernetes in a big organization lie: are we really trying to shift everything over, or just new development?
So that's a little bit about the container side. The side that Amazon seems maybe a little more excited about is serverless compute. Werner Vogels, who's the Chief Technology Officer at Amazon.com, is famous for the quote "there is no server better than no server." The idea is that right now, if all you do is use AWS for infrastructure as a service, you're saying: let them take care of the physical machines, everything up to and including the installation of an OS. So when I launch an EC2 instance, I pick some AMI; that AMI comes with either Linux or Windows, and it gets installed. But from there on, I'm in charge of that virtual machine: the operating system, the file system, all of it. What serverless compute is saying is: what if we just let Amazon be in charge of more of that stack?
Then they just hide the server from me completely. It's really a developer-centric idea: I write my code and bring it in, in a number of different languages. At first there were only a few supported languages; now they've opened it up quite a bit. So I bring my own code, and then I configure the trigger: what's the event that causes this code to run? And then I don't worry about servers or capacity or scaling or fault tolerance; Amazon's going to find somewhere to run this code. And I only pay for how long the code runs, not for all the idle time.
And if you think about it, that idea of DevOps said: let's make those small teams with developers and testers and operations people. Well, forget DevOps; this is "NoOps," where we've gotten rid of a lot of the operations work. So it's easier to build those small cross-functional teams, or even to have that sort of full-stack developer, where the developer is able to deploy to production and monitor and manage it, because we already took away all the physical machine concerns, and now we're taking away the virtual machine, the operating system, the file system, all of that.
So it's an interesting idea. AWS Lambda has been around for a while, and it's very similar in Azure and in Google Cloud: they both have serverless compute options, called Azure Functions and Google Cloud Functions.
I don't think they're quite as mature at this point; the Azure one might still be sort of in beta, although it's been in beta for like three years, so people use it all the time.
So if you are going to try the microservices thing with serverless, then each microservice is essentially implemented as a single Lambda function.
And that Lambda function might be the back-end business logic for one little piece of the service. It might be doing some sort of event handling, like: hey, whenever somebody uploads a file to this S3 bucket, go run this code.
It might be talking to our data stores. And so those microservices need to have strong module boundaries, and ideally,
if we want to embrace that idea that this part's written in this language and that part's written in that language, then the API boundaries have to be
language agnostic. And so this is where Amazon API Gateway comes in nicely, where we can put a RESTful web service in front of a Lambda function.
So you don't know or care what language the Lambda function is written in; all you know is, hey, this is the RESTful API that I use to talk to it.
And then I can build larger applications out of these little microservices by composing them together using something called Step Functions. So let's take a look at API Gateway and Step Functions.
API Gateway is essentially this:
it's a fully managed service that makes it really easy to create, publish, maintain, monitor, and secure APIs. When we talk about APIs we're typically talking about RESTful web services, but it could be WebSockets too.
And those are the front doors for these microservices that would be written in, say, Lambda functions. API Gateway also supports some nice
side benefits.
At the API Gateway level there's something called the gateway cache. Say somebody comes in and makes an HTTP GET request to a URL, and then does it again five seconds later.
There's really no reason to go call that Lambda function again, unless it's something time dependent.
A lot of times you can say, well, let's cache the GET requests on certain URLs for a period of time. And so it saves me
money: if I have to pay every time a Lambda function is called, then I can avoid calling the Lambda function a bunch of times for common stuff that gets hit all the time.
And the other thing is, we can also do throttling and all kinds of stuff right there at the API Gateway level.
Yeah, so I think here they're showing that I can have all kinds of things coming in making RESTful web service calls, and API Gateway says, okay, you want me to go call a Lambda function, or maybe something else.
It has all kinds of back ends it can talk to: it could talk to an application on EC2, it could talk to some third-party RESTful web service. But typically in a microservices model, API Gateway is sitting in front of a Lambda function.
So Alex asked: can I call a Python Lambda function from Python code without using the gateway, or is the gateway required? Yeah, you can. With Lambda, there are a bunch of different models for how you call it.
You can even, for the same Lambda function, have multiple different ways to trigger it. It could be event driven, like whenever somebody uploads a file to an S3 bucket, that's an event:
call this Lambda function. And I could also just explicitly call it through an API call,
using the SDK for Python or whatever language. I'd have to have the right access key and secret key from AWS, and permissions on that user to be able to execute the Lambda function, but I can certainly do it. And yeah, the boto3 client is the SDK for Python.
Boto or boto3; different people call it different things.
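As a sketch of that direct-call route with boto3 (the function name and payload below are made up for illustration, and the actual invoke requires real AWS credentials, so it's shown as a comment):

```python
import json

def build_invoke_args(function_name, payload):
    """Build keyword arguments for boto3's lambda_client.invoke().

    "RequestResponse" is a synchronous call; "Event" would be
    fire-and-forget.
    """
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse",
        "Payload": json.dumps(payload).encode("utf-8"),
    }

# With credentials and lambda:InvokeFunction permission in place,
# you would then call (not run here):
#   import boto3
#   client = boto3.client("lambda")
#   resp = client.invoke(**build_invoke_args("my-function", {"name": "Alex"}))
#   result = json.load(resp["Payload"])
```

No API Gateway in the picture at all; it's just an authenticated SDK call straight to the Lambda service.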
So that's the API Gateway part. Step Functions is a little more complex, but it's for when you have really complex workflows,
where you say, hey, I want to do this step, then that step; maybe there's some branching,
maybe there's some choices: if this is the case, then do this, if not, then do that. And we can build that with timeouts and retry logic and all that.
And then the individual functions are little Lambda functions, and their logic can change, but the overall flow is dictated by Step Functions.
And so they have sort of a nice way of building that out.
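To give a feel for what that looks like, here's a hedged sketch of a state machine definition in Amazon States Language, built as a Python dict. The state names, branching logic, and Lambda ARNs are all invented for illustration:

```python
import json

def order_workflow(validate_arn, charge_arn, refund_arn):
    """Sketch of an Amazon States Language definition: Task states for
    individual Lambda functions, a Choice state for branching, and
    retry logic on one step."""
    return {
        "StartAt": "Validate",
        "States": {
            "Validate": {"Type": "Task", "Resource": validate_arn,
                         "Next": "IsValid"},
            "IsValid": {
                "Type": "Choice",
                "Choices": [{"Variable": "$.valid",
                             "BooleanEquals": True, "Next": "Charge"}],
                "Default": "Refund",
            },
            "Charge": {
                "Type": "Task", "Resource": charge_arn,
                # Retry this step up to twice on failure.
                "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                           "MaxAttempts": 2}],
                "End": True,
            },
            "Refund": {"Type": "Task", "Resource": refund_arn, "End": True},
        },
    }

# The JSON string is what you'd hand to Step Functions when creating
# the state machine.
definition = json.dumps(order_workflow(
    "arn:aws:lambda:us-east-1:111111111111:function:validate",
    "arn:aws:lambda:us-east-1:111111111111:function:charge",
    "arn:aws:lambda:us-east-1:111111111111:function:refund"))
```

The individual Lambda functions can change freely; only this definition fixes the overall flow.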
So I would say that these are some of the pieces that we often see in the serverless microservices model.
Now, there are some challenges to implementing microservices architectures, for sure. Probably the first big one is getting buy-in from pretty high up that
we want to make these small autonomous teams responsible for a set of microservices.
I'd say the next big challenge is identifying what a microservice is. How big is it? There are typically some concepts called Domain-Driven Design and bounded contexts; we don't have time to get into them in this little hour-long webinar.
But another big idea that we see in microservices is the decentralized data store. And sometimes that's tough,
because within your organization there's a data group that's in charge of the relational databases, and they always have been, and we always do things
a certain way, and they don't understand that sometimes it might make sense to use a specific little NoSQL database for this one microservice to store its data and get it back, and nobody else needs to know about it.
We sometimes call that polyglot persistence, where you use the right tool for the right job: use whatever data store makes sense for that service.
So each service owns its own data, so the schemas can easily change and storage can be independently scaled from the rest. And then there are a lot of these DevOps ideas; sometimes you're talking about the Twelve-Factor App methodology.
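As a small illustration of that ownership idea, here's how one hypothetical order service might shape its own records for a DynamoDB table. The table schema and field names are invented; the point is that only this service knows this schema, so it can evolve it freely:

```python
def to_dynamodb_item(order):
    """Shape an order dict into DynamoDB's attribute-value format.

    Other services never see this schema; they go through the order
    service's API, so the storage layout can change independently.
    """
    return {
        # Composite key: partition on order, sort on customer.
        "pk": {"S": f"ORDER#{order['order_id']}"},
        "sk": {"S": f"CUSTOMER#{order['customer_id']}"},
        # DynamoDB numbers are transmitted as strings.
        "total": {"N": str(order["total"])},
    }
```

A different microservice might keep its data in a relational database or a graph store; that's the polyglot persistence idea in practice.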
And so with some of those ideas, it's hard to get that sort of culture change in; you need to educate the higher-ups to be able to get that buy-in.
But if you do want to see a little bit more of this stuff, again all from the AWS angle: there were a couple of questions asking for a vendor comparison of the Google, Microsoft, and Amazon clouds, and that's really not the scope of this session.
We're really focused on AWS, and there is a fairly new AWS class that we run called Advanced Developing on AWS.
We'll send you all these links, but this is a three-day class, and if you go and take a look at it,
the skills gained are a lot about analyzing a monolithic application to determine the logical or programmatic break points,
applying the Twelve-Factor App manifesto concepts, and then figuring out what are the appropriate AWS services to develop a microservice-based, cloud-native application.
And then you go and build it. So it is sort of an advanced AWS class; the prereq is that it assumes you
already know AWS. Maybe you've taken the Architecting on AWS or the Developing on AWS class, and those have a prereq of AWS Technical Essentials.
But this is essentially what you're going to look at in that class, if you decide to take it. The other way to go is...
oh, and Michelle has put those links into the chat. If you want to, there's a way to save the chat: if you go to the chat, there's a dot-dot-dot menu with a save option, because when you leave the
webinar, it's going to be gone. So you might want to save it right before you leave.
The other thing that we have classes on is Kubernetes. Amazon doesn't really have a very great class on EKS yet;
I think maybe there's one coming on the roadmap. But for now, if you just want to learn the basics of Kubernetes,
Either from an administration point of view or from an app developer point of view.
we have classes from the Linux Foundation. The Linux Foundation works with the Cloud Native Computing Foundation, and they've built two different
certifications for Kubernetes, one from an administrator point of view, one from a developer point of view. And these are the classes that also help you prep for those certifications.
But we run those classes fairly often. You know, they're now running at least monthly
And we do a lot of classes virtually. We have training centers around North America, but we also have virtual sessions; we typically teach in a hybrid kind of way.
So that's a few different options, and Michelle put those links in the chat as well. Finally, we've got a lot of AWS people here: if you're going to re:Invent, that's coming up in
about a month,
in Vegas, the first week of December. I'm going to be there. So if you do want to meet, we've got either half-hour or hour-long
slots in a meeting room we can get booked into.
Oh, and you're automatically entered to win one of 20 pairs of Apple AirPods. Nice.
So we're doing some of these meetings to figure out where you are in your cloud journey, how you get to that next level, and what training you might need.
We're always going to kind of bring it back to training, because that's mainly what we do, but I can give some guidance. Also, if it's your first time going to
re:Invent, I made a little blog entry: How to Do AWS re:Invent Like a Ninja. You can find that on our blog pretty easily. Actually, you know what, I'll put that into the chat as well, if anybody's interested.
That's not me, by the way; that's just some guy they used for the photo.
So that's sort of what's coming up. And the rest of the time we're going to spend on Q&A. Now, I did see a question.
Scott asked: what tools are best to use for CI/CD and deployment of microservices? Is Terraform good for deploying microservices, Lambda, API Gateway? So yeah, this is a good point.
What you'll find is that there are hundreds of tools in the DevOps space. And when it comes specifically to CI/CD, the main one you run into in most organizations is Jenkins. It's a wildly popular CI server, it's open source, and it's got a huge list of
plugins that people have built. It's really easy to build your own plugin; that's why there are so many out there. But before you build your own, don't reinvent the wheel: there's a big list out there.
Now, it's really good at: hey, somebody checked code in, I want to take that, go build a deployable artifact, and then run it through a series of tests, maybe launch a production-like environment and run the tests in there.
And launching a production-like environment is the interesting part in the cloud. Each cloud vendor has their own tools: in AWS you've got CloudFormation, in Azure you've got ARM templates, and so on. Now, there are some that are vendor
neutral, and that's what Terraform is. Terraform is wildly popular these days. When people hear multi-cloud, though,
it's not like you're going to build one huge set of scripts using Terraform that launch everything in AWS, and then one day you just snap your fingers and say, we're moving it all to Azure.
It's just that if you do decide to also deploy to Azure, you don't have to learn a new templating style.
That's the big idea: it's the same templating style no matter which one of these options you're going to be deploying to.
And so they do help with microservices: you can launch things like Lambda functions and Step Functions and API Gateway and all that stuff using Terraform.
And that's a very popular one. But on its own it's really just the infrastructure-as-code part, for building up the environment in which you're going to test things.
You typically still need other things in your CI/CD pipeline for managing the whole thing, like Jenkins.
All right, any other questions?
Will Fargate containerized applications be able to interact with on-prem data dependencies? That's an interesting question.
Yeah, I don't see why they wouldn't be able to.
When you run Fargate,
you have to tell it which
subnet to run in. You run it in a VPC, and then you can set up your VPN back to your data center or something. So I think, yeah, that's how you'd do it.
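A rough sketch of that with boto3's ECS run_task parameters; the cluster name, task definition, subnet, and security group IDs below are placeholders. Placing the task in private subnets whose route tables point at a VPN (or Direct Connect) gateway is what would let it reach on-prem data stores:

```python
def fargate_run_task_args(cluster, task_def, subnet_ids, sg_ids):
    """Build kwargs for ecs_client.run_task() on Fargate.

    The awsvpcConfiguration pins the task into specific VPC subnets,
    so its traffic can ride the VPN back to the data center.
    """
    return {
        "cluster": cluster,
        "taskDefinition": task_def,
        "launchType": "FARGATE",
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnet_ids,
                "securityGroups": sg_ids,
                # Private subnets: no public IP needed.
                "assignPublicIp": "DISABLED",
            }
        },
    }

# With credentials in place you'd pass this to:
#   boto3.client("ecs").run_task(**fargate_run_task_args(...))
```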
Alex asks: is Vagrant still the tool to use for setting up a Kubernetes cluster that would work on prem or in the cloud? Vagrant definitely works with that. So Vagrant is a tool from...
what's the company... HashiCorp,
I think that's the company. Yeah, anyway, Vagrant's pretty good at that.
And Dina, like I said, for every one of these categories of DevOps products there are probably
20 different options: there are the two or three very popular ones, and then there's all the rest. I can't speak all that well to each one of the tools and which is best; a lot of times it comes down to what you have licenses for.
So I think that's about all I can say about that.
default user avatar Unknown Speaker
I think somebody asked the same question again.
user avatar Myles Brown
All right, any other questions?
Give me a couple of minutes.
user avatar Michelle Coppens :: Webinar Producer
While we are waiting for a few more questions to come in. I just want to thank you, Miles and all of our attendees today for joining us.
A gentle reminder that we will be sending a copy of the recording to each and every one of you, and we'll include all those links that we shared in the chat today.
We'll hang out in the room for a little bit longer. In case there are any more questions.
user avatar Myles Brown
Alex asked the question: what's the preferred way to have microservices communicate with each other in Kubernetes?
I mean, there is no preferred way; different people prefer different things. What we are finding is that,
as the complexity of these huge containerized applications increases, we're starting to see a new kind of product pop up called a service mesh. So you might have heard of Istio and Envoy; these are some of the tools we're starting to see as an easy way to communicate between services.
So I guess that's it. But
other than that, most people opt for essentially RESTful web services; that's your easiest way,
because sure, these microservices are communicating between each other within this app, but what about when some other app needs this one part? That's sort of the idea there:
it's a vendor-agnostic way that they can communicate that
makes it nice and easy.
Okay: can an ECS system be auto scaled? Do you have to create CloudWatch monitoring to handle the auto scaling of EC2, or is it part of ECS?
Yes, ECS can launch your cluster for you.
I think you can launch it using an Auto Scaling group, so you don't have to go and build the CloudWatch metrics and alarms yourself.
But like I said, it's not fully managed, in that once it launches them, it's still on you to apply patches and things like that.
But if you do set it up with the Auto Scaling group, at least you don't have to worry about when to grow and shrink, and if one dies, it gets automatically replaced. So there's part of that.
CloudWatch is the service in AWS that
collects metrics on all the other services. We generate these metrics, and then we can build alarms based on them.
So if you had a cluster of 10 web servers, you could say: hey, when the average CPU of these web servers is over 60%, add two more nodes to the cluster.
And so that's the idea there. In a Kubernetes cluster it makes sense too; with EKS or ECS, using auto scaling on that cluster is kind of a no-brainer.
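That 60% CPU example can be sketched as CloudWatch put_metric_alarm parameters. The Auto Scaling group name and thresholds here are illustrative, and the scale-out action itself would be a scaling policy ARN you attach via AlarmActions:

```python
def cpu_scale_out_alarm(asg_name, threshold=60.0):
    """Sketch of kwargs for cloudwatch_client.put_metric_alarm():
    alarm when average CPU across an Auto Scaling group stays above
    the threshold."""
    return {
        "AlarmName": f"{asg_name}-cpu-high",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        # Aggregate the metric over all instances in the group.
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "Statistic": "Average",
        "Period": 300,            # evaluate over 5-minute windows
        "EvaluationPeriods": 2,   # must breach for two windows in a row
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        # "AlarmActions": [scaling_policy_arn]  # attach the scale-out policy
    }
```

An EC2 Auto Scaling group wired to an alarm like this is what grows and shrinks the cluster, and replaces a node automatically if one dies.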
Well, it looks like questions have slowed down, and we're pretty much at one o'clock Eastern, so I think we're going to call it a day there.
user avatar Michelle Coppens :: Webinar Producer
Thanks so much, everyone. Miles. Again, thank you. Alright, thanks.
Powered by Otter.ai