Hi, everyone. Welcome to today's webinar, Microservices Architecture: Benefits and Challenges. This is the second in a series of three that are going to be presented by Randy Abernethy, Managing Partner at RX-M, a leading cloud native consulting firm based in San Francisco. Before Randy gets started, I have a couple of housekeeping items to go over.
During this webinar, everyone's phones will be on mute. If you have any questions, please enter them in the chat box on the left and we'll answer your questions at the end. Today's webinar is being recorded and we'll be sending a link out to all of our registrants. Now, let's get started. Randy, you can take it away.
Great. Thank you very much. Welcome, everybody. This is the Microservices Architecture Seminar, and I'm just going to kind of go through the presentation here. If you've got questions as we run through the material, feel free to drop them into the Q&A. I'm zoomed in, however, so I won't see them right away, but at the end, we'll take some time to answer any specific questions and make sure that we've got a little window to cover the things that people might bring up.
Again, this is a seminar presented by ExitCertified Tech Data and also RX-M, partnering to deliver some breadth of cloud native training and consulting. To kind of kick it off here with the microservice architecture concept: MSA, or microservice architecture, is an architectural pattern, so there isn't one particular style or one given approach to building quality microservice systems. There are a lot of different viewpoints and perspectives.
The best practices are really only just emerging today. There are a lot of firms that have done this at scale, but the tooling and the infrastructure and even the people processes around microservices are in flux and developing. There's a lot of great insight today on how to do microservices successfully, but there's also a lot of new learning and experience to come that the industry is certainly going to reap from the continual innovation in the space.
We can say some things about microservices, certainly. Number one, it's an architectural pattern. Number two, the microservice application architecture is generally business-aligned, so unlike the technology-centric patterns of previous generations of application architecture, we don't build our microservice systems around technology stacks or around technology divisions. We build them around business concepts and business targets.
Most everybody can probably recall the LAMP stack, where you had the database tier and the people that worked there were the DBAs, and you had stored procedures and schemas and things like that that kind of fit into this layer. Then, in the middle tier, you had business logic, and so you might have C# applications or Tomcat Java-based stuff, JBoss, what have you. Then, up at the top layer, you might have a bunch of web kinds of technologies or UI stuff, front end types of things. The teams and the things that they did really kind of broke out into these layered types of systems. You ask somebody what they do, they might say, "I'm a middle-tier guy," or, "I'm a DBA."
One of the problems with this is when you want to add a new business function and it requires some state, you have to add something here, these people have to do something, and these people have to do something. It ends up drilling through your entire platform and requiring coordination across all of these levels and all of these engineering teams. Another problem is that if you're going to add some new billing feature, these folks aren't particularly billing experts, given their team and the things that they work on.
The actual physical things that they deploy are pieces of software that are for front end technology and that's their expertise. These guys don't maybe know everything about billing either because they don't know the front end part, they don't know the back end part. They just know the middle tier part, and so they may know a lot of the business logic around billing, but they don't particularly know how it's stored and scaled or how it's presented. Then, the same thing happens down here.
This distribution of knowledge about a particular business function is something that causes coordination overhead, and if you know anything about Agile, one of the key things that we're typically trying to do in an Agile system is not flog developers to go faster because they can only go so fast. What we're really trying to do is remove waste. Get rid of the muda. Pull out overhead and things that slow us down.
Microservices give us this business-aligned structure and that is really, in my mind, one of the most important things. A lot of this is informed by many things that we've been doing for years in software engineering in different ways. For those of you familiar with Domain-Driven Design, the seminal work by Evans speaks a lot to this business alignment of software. Microservices really is predicated on that and that's pretty important.
When you break applications into small pieces, for example, if we look at the little diagram down here, and give credit to the WordPress blog where the diagram comes from, we've got pre-SOA architectures where everything's all kind of wired to everything else. We have these monoliths that are very tightly coupled. Lots of teams involved in trying to release this thing. Lots of coordination overhead. Move to service-oriented architecture and we get sort of monolithic services, so they're smaller. They're more decomposed.
Each service has a single business responsibility. That was sort of the idea, but in many ways, the vendors hijacked this whole movement. We ended up with enterprise service buses: big, heavy, slow, single points of failure, vendor-controlled environments that take you a long time to adopt. Then, you're kind of wired into them. We had some technologies that were great in some of the things that they brought to the table, like SOAP, but the world has moved on. We've found some better solutions and we've incrementally improved on a lot of these things, and that brings us to the microservice world, where we've got lots of small services.
We don't have any kind of layering or central communications facility. You can have these things, of course, but it's not a requirement. The idea of a microservice platform is symmetry. Any of these services can communicate with any other service. However, by taking some of the insights from Domain-Driven Design and business alignment, you're going to have contexts within which these microservices live, so this might be a particular part of your application. You might have some controlled access to that particular part of the application, and so over here, we've got maybe another part of the application and so we can create gateways that give us access into these different subsystems that we might have built.
If we say, "Hey, you know, this service is a little bit too complex. Let's break it up into two separate services," that'll give us more flexibility. Nobody outside of this context needs to care about that because we've got this gateway that provides a stable interface that allows you to get into the functionality of this application. Within this application, all of the microservices are within a small subsystem and can all take advantage of the fact that we've now got these two services rather than one.
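To make that concrete, here's a minimal sketch, with invented service names, of how a gateway's routing table can absorb a service split: external clients keep calling the same stable paths while the table behind the gateway changes.

```python
# Hypothetical sketch: a gateway's routing table maps stable external
# paths to internal services. All names here are invented for illustration.

# Before the split: one "orders" service handles everything.
routes_v1 = {"/orders": "orders-svc"}

# After splitting orders into two services, only the gateway's table
# changes; external clients still call /orders and /orders/history.
routes_v2 = {
    "/orders": "order-intake-svc",           # accepts new orders
    "/orders/history": "order-history-svc",  # serves past orders
}

def resolve(routes, path):
    """Longest-prefix match, the way many gateways route requests."""
    best = None
    for prefix, service in routes.items():
        if path == prefix or path.startswith(prefix + "/"):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, service)
    return best[1] if best else None
```

So `resolve(routes_v2, "/orders/history/42")` routes to the new history service, and callers outside the context never notice the refactoring.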
Evolution and self-organization is really important in microservices. We don't try to design everything up front. We try to build pieces of the application and learn as we go and improve the architecture over time. Another important thing about microservices is they define an application, not an enterprise, so we're not trying to create a model that everything in the enterprise will have to fulfill. We're looking more for a solution for a particular application because that's what gives us the latitude to design the right solution for that application. It constrains the scope of the overall problem.
We got here through a very interesting series of events and there are a number of vendors that have cropped up in the timeline of technology evolution from maybe 2000 to present, and this is from the CNCF, this little model that you see down here at the bottom. Dan Kohn is the Executive Director there and was kind enough to let us use this. Great model, great kind of mental model for how things have evolved. We started out with the dot com era in the 1990s, building things that were mostly on bare metal servers. Then, kind of in the later end of that era, virtualization started to show up. We had non-virtualized hardware, Sun being a popular platform among others.
By the way, these brands aren't the guys that invented this stuff necessarily. They might be, they might not be, but they're the ones who made it really popular, who were sort of the landmark firms that drove a lot of market value with this particular thing. When it comes to virtualization, of course, VMware 2001 makes it possible for us to have that web server run in the VM instead of on a physical machine, so our packaging changes from a physical computer to a virtual machine and we can increase our server density and yet still have all of the different parts of our application nicely isolated. Infrastructure as a service shows up in 2006. Thanks to Amazon, we can now pay as you go. We can do experiments and deploy things much more quickly and easily. Then, that was followed by platform as a service.
Interestingly, other players like perhaps Google App Engine and ZenKey and so on have introduced kind of these platform as a service sort of concepts earlier than Heroku, but Heroku found the secret sauce. They found Ruby on Rails and people were building web apps with Ruby on Rails. It was a phenomenon and Heroku said, "Well, gee, you know, it's nice to have a computer, it's nice to have storage, and it's nice to have a network that connects you to the world." That's what infrastructure as a service gives you, but if people are building rails applications, they need a web server. They're going to need a database and they might need some messaging queuing facilities. Then, they just simply need to push their Ruby code up into the platform and have it run.
If we just provided all of these things for people and they could just focus on building their application and not worry about all of this other stuff, wouldn't that be magical? It was and it was a huge success, and now Heroku is a property of Salesforce and still a very important cog in the machinery over there. At this point, a very interesting thing happened. The world started going open source and we got OpenStack, which is an open source cloud. We got Cloud Foundry, which is an open source PaaS, so open source IaaS, open source PaaS.
Then, once we had these open source platforms and we had the ability to deploy applications easily, another phenomenon started to take place. Many of the hyperscale companies had already decomposed many of their applications into small little bite-sized components, single-responsibility packages that they could distribute throughout their infrastructure. They'd created tooling to make all of this work, but the rest of the world didn't have these tools. We didn't have a way to package up small services as reliable, deployable units. Instead, the rest of the world was using Puppet and Chef and things like that.
Docker came along and said, "Gee, you know, Google has basically pushed all of this stuff into the Linux kernel, so anyone can use it. Why don't we build a system that makes it easy for people to package up their apps and ship them to us?" One of the problems with PaaS historically was that if the PaaS didn't have an environment, a build pack, for your programming language, in fact for your specific version of your programming language, then you couldn't use that PaaS. If you were using Haskell, it was going to be tough luck in the PaaS world. The same if you were using OCaml or something like that, or even Go; Google invented Go and they didn't support Go in App Engine for quite a while.
Docker started as a company that was actually a PaaS, dotCloud, and they built this Docker thing to make it easy for you to push whatever you wanted. You just packaged it up into a container, sent it upstairs, and they'd run it, because everything that the app needs to run is inside this container. Well, that was hugely liberating. All of a sudden you had the ability to actually reliably deploy something, because anybody who's used Puppet, Chef, Ansible, or Salt to manage a complex host knows that there are no guarantees. If you run a bunch of configs on a host today and you do it again tomorrow, there are externalities. We're pulling things down from package managers' repos and so on, and the conflicts that take place on those hosts are not something that you can anticipate.
If you have a static environment, if you build a container that is immutable, that can't change, and that image is shipped over to a machine and runs inside its own private Idaho, it's kind of going to just work, like VMs, but VMs are big and heavy. Every virtual machine needs a virtual operating system. It needs a full-blown kernel running inside it. In the case of containers, we're virtualizing the operating environment. We're sharing the kernel, we're sharing the computer, and we don't actually need a virtual machine.
We get the same kinds of isolation and features that we would get in a virtual machine. Not as robust, perhaps, but for an individual application or service, it's a perfect fit. It's lightweight, it's fast to deploy, and that was one of the big liberators that made it possible for everyone, not just the hyperscale folks, to do microservices. Not Docker specifically, but these were the kinds of tools the hyperscalers were already using in their own way.
Then, the last piece of the puzzle is when you start creating a microservice application, instead of having maybe five or 10 different monoliths, you have hundreds or thousands of these little services. They are bite-sized little chunks of functionality that collaborate to create an application. One of the great things about this is that it realizes many of the benefits of object technology at another level of abstraction. One of the problems with object technology is I can build a really great class, really make sure that it's reusable, it's going to be perfect, but if I build it in C++, you can't use it in Java. You can't use it in Python. You potentially can't even use it in a C++ application that's built against a different version of the language: C++98, C++11, and so on.
If you package that into a microservice and you give a language-agnostic API, now anyone can use it. Some of the reuse that we were always hoping to get with object technology comes to life in microservices because these little services are composable. The challenge, again, with having thousands and thousands of these things running around is when this machine breaks, how do we restart all of those? Where do we run them? When these guys are under pressure and we need to scale up and run a bunch more of them, who's going to handle that? Well, all of that falls into the dynamic orchestration piece of the puzzle.
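As a toy illustration of that language-agnostic reuse, here's a hypothetical sketch (the endpoint, function, and numbers are invented) that wraps a bit of business logic in a tiny JSON-over-HTTP service using only the Python standard library; any language with an HTTP client can then call it.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def price_with_tax(amount, rate=0.08):
    """The reusable business logic we want to expose across languages."""
    return round(amount * (1 + rate), 2)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect a path like /price?amount=100
        query = parse_qs(urlparse(self.path).query)
        amount = float(query.get("amount", ["0"])[0])
        body = json.dumps({"total": price_with_tax(amount)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_in_background():
    # Port 0 asks the OS for any free port.
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A Java or Go client would simply issue `GET /price?amount=100` and parse the JSON response; the caller never knows or cares that the service happens to be written in Python.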
The Cloud Native Compute Foundation has been a big champion of that and we've got orchestrators that have come from many places, but Kubernetes has sort of become the dominant orchestration platform in the space. That can be good or bad. The good news is you've got a community of many, many vendors behind The Cloud Native Compute Foundation. It's a nonprofit foundation, and Kubernetes is this really, really great tool and great facility for orchestrating systems, but it's always nice to have competition, too. We'll see where that goes. At this point, there's not much competition out there for Kubernetes. It has become pretty much the standard for orchestration.
The precursors to microservices are many. Service-oriented architecture, obviously: we went from apps running on a box to apps running in VMs to apps running in containers, and in this apps-running-in-VMs era, a lot of the things we learned about packaging and deploying applications and running independent services came from SOA. Some people call microservices just SOA done right or the next iteration of SOA, but really there is something fundamentally different about a microservice-based application compared to an SOA-based application. The granularity is finer. The tools that we have today are different. I mean, SOA existed before DevOps and Agile, so the way we think about building applications is pretty different today. Microservices, when you use that name, it really speaks to the modern DevOps and Agile types of worlds and fitting into those.
Domain-Driven Design, decomposing applications into business functions; again, SOA promoted that as well, but Domain-Driven Design gave us a lot of new things to think about, contexts and contextual boundaries, that sort of thing. Hexagonal architecture: Alistair Cockburn argues for breaking down the layers that can hide business logic and really creating services that are symmetric and can communicate with each other directly. The whole Agile movement, continuous integration and delivery.
If you want to do continuous integration and delivery and you've got a team building this component and a team building that component, and those things have to be built into a single component and then that has to be deployed, there are all these mutations taking place in order to get this thing out to an environment: in production it runs one way, here it runs another way, and in the developers' environment it runs another way. When you introduce containers into this model, CI/CD takes on a different tenor, and you can lift the level of abstraction for your deployments up to the point where you're just deploying your microservices and you really don't care about all of the other stuff; it's being provided for you as a service, through Kubernetes as a service, for example.
All of the major cloud vendors offer Kubernetes as a service today, whether it's IBM Bluemix or Google Cloud or Azure. The one exception would be Amazon, but curiously, Amazon just joined The Cloud Native Compute Foundation last month or the month before, and so there are some wheels turning there and we'll see what happens. Even if you're using AWS today, the number one cloud deployment tool for Kubernetes, kops, has AWS as its native, first target, so it's very easy to get Kubernetes running in any of these environments. Now, all of a sudden, at the click of a button you've got a 72-node cluster that you can deploy apps onto without any work on your part from an infrastructure standpoint. Virtualization, DevOps, all of these things have definitely informed our microservice progress.
One of the things that, from an architectural standpoint, has helped shape the thinking of a lot of people building services is the 12-Factor App. The Heroku folks, being probably the first really successful PaaS, ran into a lot of bumps in the road, and they found that there were certain things that you as a developer needed to do when building your services in order to deploy them and make them work well in the cloud under a platform as a service. If you did these things, then everything would be a lot smoother.
Now, the world of pushing code is sort of the PaaS world, so when we created a service-oriented architecture service running in a VM and we pushed that VM to the cloud, that's a big, heavy thing. That might be a gig, 5 gigs. I've seen companies distribute 8-gig VMs to their customers. It's hard to call it Agile. You can't download that thing in real time and run it. It takes a while to get it moved from point A to point B. It takes a while to boot it, but this was the infrastructure as a service world.
Then, when we moved on to the platform as a service world where Heroku lives, this world, we were pushing code. Wow, we went completely the other direction from a completely packaged, self-contained environment with the operating system, the file system, environment variables, command line arguments, everything is in there, to none of that. We just pushed some code. A code commit causes Heroku to trigger a pull and deploy your application. Wow. Okay, we're going to need some help there, so what did we need in order to make this all work? Well, we needed a bunch of things, and some of these things are relevant to microservices and some of them aren't.
One of the things was a code base. The thing you push should be one code base and I think that has some bearing in the microservices world, too. If you're going to build an application, you can create a single repo for that app and have different directories for different projects in that app. There's also some argument to be made for having each microservice have its own repo because then the team that's working on that service can care about every single commit and wire those commits into their Slack or their Spark feeds so they can see what's happening with the code base and how it's evolving.
Also, it means that if you are trying to wire that code base into a CICD system, the thing that you deploy in a microservice world or in the PaaS world is that code base, but in the microservice world, packaged in a container. That code commit to that code base could trigger an event that causes the code to be cloned, built, packaged into a container, and deployed into an automated test environment or an integration environment, or what have you. Single code base is a benefit.
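That commit-triggered flow can be sketched as a hypothetical function (the service name, registry, and commands are invented for illustration, not any particular CI system's API) that lays out the steps a pipeline might run for one commit:

```python
# Hypothetical sketch of a commit-triggered CI/CD pipeline: clone,
# build into a container image, publish, deploy. Names are invented.

def pipeline_steps(repo_url, commit_sha, registry="registry.example.com"):
    """Return the shell commands a CI system might run for one commit."""
    image = f"{registry}/orders-svc:{commit_sha[:7]}"
    return [
        f"git clone {repo_url} src && git -C src checkout {commit_sha}",
        f"docker build -t {image} src",   # package the code base as an image
        f"docker push {image}",           # publish the immutable artifact
        f"kubectl set image deployment/orders-svc app={image}",  # roll out
    ]
```

Each commit produces an immutable, uniquely tagged image, so the thing that passed integration tests is byte-for-byte the thing that gets deployed.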
Some dependencies: making sure that all of the dependencies that you've got for your build are explicit. This is in a world where Heroku opened the door with Ruby, so we specify the Bundler dependencies for the gems we're going to need and things like that. Then, they added support for Java, so now it became Maven dependencies and you have to set up your POM with the things that you will need to pull down and include in your project. Finally, it became npm and specifying your npm dependencies. The idea here is that Heroku, when they deploy your microservice or your PaaS service, they're going to deploy it in some sort of an isolated environment.
Well, it wouldn't do them much good to create a virtual machine for every one of these dinky little things that you're going to deploy. They began using Linux kernel isolation technology, cgroups and namespaces, just like Google was doing. Google, of course, contributed most of this stuff to the Linux kernel. Other people followed on, but at the end of the day, rather than the overhead of a VM, they needed to create a really lightweight container to deploy your service into. That container needed to be set up with some dependencies, and so you would describe these.
If you describe them in a very programming language ecosystem-specific way, you would describe them with Maven or your package.json or something like that. You have all of these different ways of doing this, and each one of these things, then, would pull all of those dependencies into this package that Heroku would run off and deploy. Whether it was called a cartridge or a build pack or a container, it was a container basically at the end of the day. The next step from PaaS was really to say, "Look, you know, you're going to build my container for me. I'm going to describe my dependencies to you, but what guarantees me that when you build it it's the same thing that I built?"
I've seen different versions of packages pulled in Europe than in the U.S. I do something one day in California, fly to Munich and do it again, and I get a different package. What's going on there? That's the problem. You have externalities; you're not dealing with a static binary. If we go to containers as a service, now when we build our microservice, we package it up ourselves. That's the thing that goes up to the cloud: the thing that we've tested, that we built, that is exactly what we want.
While PaaS was reaching for this dependency isolation and using container technology, there were some bits that were missing, and a PaaS only supported the build packs it supported. If you were using a language or some technologies or some libraries that they couldn't handle, you started running into problems. Your configuration, anything likely to differ between deployments, needed to be injected via environment variables. We see this carrying over to microservices and containers.
If you have a configuration file in your container, remember that's a static image that you're using to run your microservice. Having somebody launch it and have to change a configuration file to make it launch correctly, that's not very convenient. Having command line arguments passed to your container is easy to do, but the problem, at least with Docker containers and the way that container tech presently works, is this: you can run a program, which is called an ENTRYPOINT in a Dockerfile, and you can give it a bunch of arguments, which is called a CMD in a Dockerfile, but if I provide even one switch on the command line when I run a container, none of those CMD arguments apply. It's an all-or-nothing deal with the CMD.
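That all-or-nothing behavior can be modeled in a few lines; this is a sketch in Python of the semantics, not Docker's actual code, and the service name and flags are invented.

```python
# Sketch of how Docker assembles a container's command line from
# ENTRYPOINT, CMD, and any arguments passed to `docker run`.

def container_command(entrypoint, cmd, run_args):
    """If the user supplies ANY run arguments, CMD is replaced wholesale."""
    args = run_args if run_args else cmd
    return list(entrypoint) + list(args)

# With no run arguments, the CMD defaults apply:
#   container_command(["my-service"], ["--port", "8080", "--verbose"], [])
# With even one run argument, every CMD default is dropped, so the
# --verbose flag silently disappears:
#   container_command(["my-service"], ["--port", "8080", "--verbose"],
#                     ["--port", "9090"])
```

This is exactly why per-variable overrides via the environment are more convenient than CMD arguments: you can change one setting without losing the rest.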
Command line arguments, configuration files, they're all popular in static environments, but they don't work so well in a container space or even in PaaS. What we would prefer is environment variables. Environment variables give us the ability to, A, make sure that important things like secrets and secret keys don't go to disk. There are also ways to mount secrets as files on a tmpfs so they don't hit disk, and that happens sometimes, but environment variables have a lot of these great properties. I can have environment variables baked into my config and then I can add new ones, or I can override the ones that are there without losing all of the work that the standardized variables have done to set things up for me. Config needs to be considered and made mobile and portable.
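A minimal sketch of that override-without-losing-the-rest property (variable names and defaults are invented for illustration):

```python
import os

# Environment-based configuration: sane defaults baked in, with the
# environment overriding only the keys it actually sets.

DEFAULTS = {"DB_HOST": "localhost", "DB_PORT": "5432", "LOG_LEVEL": "info"}

def load_config(environ=None):
    """Merge the process environment over the baked-in defaults."""
    env = os.environ if environ is None else environ
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}
```

An operator who sets only `DB_HOST` gets a production database host while every other default continues to do its job, which is exactly what a config file swap or a CMD override cannot give you.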
Backing services: we generally don't run backing services inside our own containers. The whole idea of a PaaS service, of pushing code, is that you've got this ephemeral thing that you can run as many copies of as you need. If you have one of them and the pressure gets too big from your users, you spin up another and use a routing mesh or a load balancer or something to distribute the load, and so on and so forth. By the same token, if the load goes down and you don't need this instance anymore, you should be able to just delete it.
Building things around the cloud, we focus more on this dynamism, this ability to extend things and shrink them back, elasticity, and also, we focus on the fact that the things we are using are commodity. It's possible for a VM to just disappear in the cloud, or for a network connection to go down if you're coming over the internet. Engineering resilience into the application is part and parcel of building web apps, for sure, and definitely microservices as well. We don't want to spend the money for that last 0.001 of reliability that we would need for a mission-critical application. Instead, we want to get resilience through replication and redundancy, because it's a much better guarantee.
Complex systems tend to be the first ones to break and they're the hardest ones to fix. Simple systems tend to be really reliable and work really, really well, but if you didn't spend a lot of money for the best parts in the computer, maybe it goes down. If you've got 17 copies of the thing, you don't care so much. Building for a commodity world is something that we do, and we have these stateless services that we can scale quite nicely. The idea of the backing service is what changes the tenor: as soon as you start talking about backing services, you're really talking about state. Your state is critical, and databases are the place that we keep state in applications.
Do we really care about disk volumes? Do we really care about hard drives and storage arrays and all of that sort of stuff and when we're building applications if what we're trying to do is build applications? That's really kind of an infrastructure function, and so the idea in the past was, "Look, we're going to make all of this stuff available to you." You just click a button and you get a database, and our guys know how to set up and manage that database like nobody's business because that's what we do.
What you do is build apps and you just consume this database. Many of these databases weren't relational anymore because relational databases were designed to run on a single computer. They don't scale out well, and so we started seeing things like Cassandra and MongoDB and DynamoDB on Amazon, where you just click the button and add more instances and you get more scalability, more transactions per second. Also, a lot of people forget about message brokers. Messaging is state. If you put a message in a queue, that's state. It may be somewhat transient, but it's state, just like your databases. This is another thing that you don't want to build into your application. You want to consume it from the platform if you possibly can.
At this stage of the game, all of the major cloud vendors provide topic-based and queue-based messaging systems, and also traditional relational databases, because those are wonderful tools even though they don't handle cloud scale very well. Depending on what you're doing, you might want that, because there's a lot of great technology built around relational databases, or you might want a scalable document store or key-value store or column family store to support your application. Then, we've got our stages: build, release, run.
Another point is that each of the things that we're going to push is going to be a process, so microservices tend to be a single process. We package up a single process and we run it. That's our microservice. We don't build crazy, complicated multi-threaded servers that are doing 15 things. Each service has a single responsibility. If you're using Node.js or Python or Ruby, you only have a single thread running through your user-space code at any given time anyway, so there's no parallelism, so you might as well get your scaling by horizontally scaling these processes. That's been the model in the PaaS world and it's also the model in the containers as a service world and in the microservice world.
Bindings, concurrency, disposability, all of these things are bits that we've talked about. I will also mention logs, though. You don't want to be logging to disk, and you probably don't want to be logging through some complex wiring, either. You want your logs to go to standard out or standard error. That way, the platform can tap into standard out and standard error and hand all of the traffic to a log forwarder, which might then send all of your logs to an aggregator, where each stage could filter, mutate, and do what it needs to do.
Then, your data goes into an analytics platform. Maybe it goes into HDFS, so you can run some Spark jobs on it and look for anomalies and things like that. Or maybe it goes into Elasticsearch and you run Kibana dashboards and alerting on it. Or maybe it goes into Influx or the TICK Stack, what have you. The idea is that we want to decouple our applications from the infrastructure and the platform services that every application is going to need anyway, but that you don't want to be specifically wired to.
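In code, that decoupling is as simple as pointing your logger at standard out and nothing else; here's a minimal sketch (the service name is invented) of what a 12-factor-style logger looks like in Python.

```python
import logging
import sys

# Sketch: the service writes structured lines to stdout and lets the
# platform handle shipping, aggregation, filtering, and storage.

def make_logger(name="orders-svc"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)  # stdout, never a file
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

The service never knows whether those lines end up in Elasticsearch, Splunk, or HDFS; swapping the analytics backend requires no code change at all.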
What if you're using Splunk? What if you move over to Amazon and you're going to use some of their services, CloudTrail and so on and so forth? We want independence, we don't want coupling there, and we also don't want to use anything language-specific. Part of this is because the cloud native world, the microservice world, tends to be a lot more polyglot, and we want services that work across platforms and languages without impediments.
Out of PaaS really came this whole cloud native approach to doing things for the consumer. It was sort of a merger of interests. The hyperscale people were using microservices and building their own packaging for containers and orchestration tools. Platform-as-a-service systems were allowing people who just pushed code to get these sort of high-level abstractions that let them build applications rapidly and deploy them quickly without worrying about underlying services. Those things handled the scaling and the load balancing for them.
Well, taking the concepts of the PaaS for the enterprise community and the microservice approach from the hyperscale organizations and bringing this to the broad world is something that the Cloud Native Computing Foundation has been focused on. These guys created a reference model to help people kind of get their heads around the different layers and they also created some high-level definitions of cloud native systems. Cloud native systems are composed of microservices, with those microservices being dynamically managed, whether it's through Marathon on Mesos or Aurora on Mesos or Swarm on Docker or Kubernetes or what have you. This is the way that we run these modern microservice applications. We package each of those little services up in these containers.
We need some infrastructure to run on, bare metal or virtual machines, either way, with storage and networking and all of that. There might be some interest in discussing or socializing the provisioning mechanisms that get us that infrastructure. Then, once we've got the infrastructure, we need to be able to run our services, so we need a run time, and that run time has largely been Docker in this space, but there are lots of interesting new initiatives. Docker is a multilayered system, and Docker actually contributed a layer of their stack, containerd, to the Cloud Native Computing Foundation, while the thing that actually runs the containers underneath is a tool called runC, which is part of the Open Container Initiative.
The manager that watches them could be any number of things. There's containerd, of course, and at a higher level, Docker itself; there's rkt from CoreOS and other tools as well, like the Mesos containerizer chain, all of which map to standard containers. The ecosystem is really converging around this standard container format, the OCI format. That's promoted by the CNCF, but it's sort of a separate project. Then, you have the orchestration and management piece. That's where Kubernetes, Swarm, Marathon, Aurora, something like that fits in there. Rancher, perhaps. Rancher version 2 is built around Kubernetes and other platforms.
Then, at the top, you've got application definition and development. This is where you specify the way that you want your application deployed. If you think of everything as code, then your microservice is written in code. It's Scala or it's Go or it's Rust or whatever, and then you think of, how are you going to package that? How are you going to create that static run time environment? Instead of Puppet or Chef or what have you, we're going to create a Dockerfile or something like that that creates our container packaging.
Then, finally, how are you going to specify the deployment and interface contract components of your service? You're going to need a name for your service and an endpoint. In Kubernetes, you define that as a service. Then, you're going to need a deployment that runs all of your containers and actually implements it. In Kubernetes, they call that a deployment, so you have these different resources that you can use at this high level to describe your application. All of this is code. It can all be checked into source code control and meticulously groomed and managed and monitored and incrementally improved and reviewed and all of those good things because the whole thing is code. That's pretty magical.
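As a sketch of those two Kubernetes resources, here's a hypothetical manifest (the `billing` name, port numbers, and image are invented for illustration, not from the talk) pairing a service, the stable name and endpoint, with the deployment that implements it:

```yaml
# Service: the named endpoint that clients discover and connect to.
apiVersion: v1
kind: Service
metadata:
  name: billing
spec:
  selector:
    app: billing
  ports:
    - port: 80
      targetPort: 8080
---
# Deployment: runs the containers that actually implement the service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: billing
  template:
    metadata:
      labels:
        app: billing
    spec:
      containers:
        - name: billing
          image: example.com/billing:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Both resources are plain text that checks into source control, which is exactly the everything-as-code point.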
When we look at microservices and really kind of focus in on these guys, what are their attributes? What do we get by going from this thing, this monolith thing over here, to the more microservice model? Well, we get well-defined interfaces. That's a big one. Each one of these services is going to have an interface, and that interface is going to be a contract. If it's an interface that is properly built for microservice use, it's going to be platform- and language-agnostic, too. This interface could be any number of things. I'm using the lollipop as an example here, but it could be a message.
Maybe you use JSON to create a standardized message. Or, it could be an RPC system like gRPC, where we have protocol buffers that work with all of the languages. Or, it could be a REST API. All of those are viable options, but each one of them has a way of codifying your interface, making it a contract. If it's a REST API, you would probably use OpenAPI (Swagger). If it's a gRPC interface, you're going to use Protobuf to describe the actual messages that go back and forth, so you're going to have an IDL, an interface definition language.
We have these abstract tools that we've had for a long time that we can use to define these interfaces, but now, the beauty of this is in a system like this, you might have a module here. Do you think people describe the interface of that module crisply and cleanly? In my experience, some do, but with most you rely on the code, and there are times where you might have coupling here that is hard to unwire and hard to understand. You could make a change in this module and it might affect something in an adverse way that's hard to determine.
In a microservice environment, if I make a change to this guy, I should have acceptance tests that guarantee my contract is still intact. I can switch out memory structures, I could change the programming language altogether, I could do any number of things. As long as this contract holds, none of my clients should care, so that decomposition of the application into these reusable components that have very crisp boundaries is a highly valuable thing in my experience.
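Here's a minimal sketch of that kind of contract-pinning acceptance test, in Python with an invented in-process function standing in for a real REST or gRPC call; the point is that the test exercises only the contract, never the internals.

```python
# Hypothetical service operation; in a real system this would be a
# network call to the service's endpoint.
def get_invoice(invoice_id: str) -> dict:
    # Internals (storage engine, language, data structures) are free
    # to change without touching the test below.
    return {"id": invoice_id, "currency": "USD", "total_cents": 1999}

def test_invoice_contract():
    """Acceptance test that pins the contract, not the implementation."""
    invoice = get_invoice("inv-1")
    # Clients depend only on these fields and their types.
    assert invoice["id"] == "inv-1"
    assert isinstance(invoice["total_cents"], int)
    assert invoice["currency"] in {"USD", "EUR"}

test_invoice_contract()
```

If you rewrite the service in another language tomorrow, this test should still pass unchanged; that's the substitution guarantee being described.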
Being technology agnostic, this is something that really can be a powerful tool because technology-agnostic systems allow you to experiment. They're polyglot-friendly, so you can use the right language for the job. I mean, too much polyglotism is a bad thing, of course. You want to be able to switch developers between teams and have cross-pollinating knowledge and things like that, but you don't want to have front end developers having to use C++, either, so polyglot-friendly is a real big win.
Another thing that's interesting about these guys is they're designed to be replaced. We build microservice applications so that if this guy has a CVE, we roll out a new version and destroy this one. We don't patch or update or modify or mutate our services. Why? The thing you tested is that immutable service. How long do you run your tests? Five minutes? 30 minutes? An hour? This thing could have been running for five hours and it could have had all sorts of mutations, and then you're going to go in and modify it again and expect that it's just going to work okay. That's specious in my view.
What you probably would be better off doing is just flat out replacing that service with a new service that was fresh, brand new, created from scratch, tested, and all of that. That's really what we want to do with these guys. Plus, this gives us the dynamism to be able to scale them out and destroy them and roll them back. Services should be capable of being started quickly, shut down quickly. That means that the clients that use them need to be loosely coupled. Clients need to be able to discover the service endpoint, connect to it, which is, of course, going to go through a routing mesh and give you one of the many instances of that service that's running.
Then, if that guy gets scaled out and goes away, you just reconnect. It's both sides of the equation. We have to have loose coupling for this to work. Then, services have single responsibilities, so they're highly cohesive. When you combine loosely-coupled and highly cohesive, you get the Liskov substitution principle. That means that I can take this guy, just whack it, drop a new one in its place, and the clients that used to be connected here that then switch and connect here don't know the difference.
Another really important thing about microservices is that state is decomposed by the service. In the monolith, you have one big back end that everybody boils down to, and if this thing breaks or you have a problem with scalability or if you need to modify it, it affects the world. Whereas in a microservice, if this is the billing app, it has the billing state. If this is the customer app, it has the customer state. If this is the stock trading component, this has the stock state, whatever. I'm just making up some examples, but you get the idea.
We decompose the state by the services. That's actually one of the hardest things that we probably can't get into too deeply in this one-hour talk, but in my view and in my experience working with customers, the two hardest things about microservices are getting the culture right. A lot of firms are Agile-ish. You really need to drive this from the business, all the way through to the technology, all the way through to the platform. Everybody's kind of got to buy in and be aligned to make this a success. It's pretty hard to do if only part of the organization is going to go there.
The other thing that I see a lot of people stumble on is the state, because microservices should be stateless, but that doesn't mean you don't have state. Applications have state, so you need platform services that are scalable and that can handle the state requirements of the apps. You don't want every microservice running off with its own database, so we have CouchDB over here, MongoDB over there, Redis over here, Riak over there. That would be a nightmare. It's your state. It needs to be backed up. It needs to be preserved. It needs to be curated. It needs to be secured. It needs to be part of a much more controlled, carefully designed platform component, but that doesn't mean it has to hobble the application development and the ability to generate microservices quickly.
If you've got a giant Cassandra cluster that's capable of scaling to massive throughput, you can let each microservice have its own keyspace. Remember that a database is not a database server. You can have one massive database cluster, whether it's Cassandra, Mongo, whatever it is, and you can have each microservice have its own chunk of that thing, yet it's still scalable and it can be architected and designed to suit that service.
On the microservice side of things, how micro is micro? Well, if you're looking at Agile teams, they're typically supposed to be within the six-to-12 zone, seven plus or minus two, whatever your metric is, but the reason for it is the number of connections between individuals grows geometrically with the size of the group, and so it's N times N minus one over two, and it looks like this. You get up into 50, which might be five teams trying to coordinate on putting together a monolith that has a bunch of different modules that they need to ship as a single unit and deploy. You now have somewhere around 1,225 connections, so 1,225 opportunities for a misunderstanding.
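That connection count follows directly from the formula; a quick Python check, using the team sizes from the talk:

```python
def connections(n: int) -> int:
    """Communication paths among n people: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# A seven-person "two-pizza" team stays cheap to coordinate;
# five such teams shipping one monolith together do not.
assert connections(7) == 21
assert connections(50) == 1225
```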
That's why a single team that owns a single service that can run it all the way to production, plus the Liskov substitution principle, means it should not break the world if you roll out a new version that has new functionality. All of the old interface components are supported, and that's why REST, Thrift, gRPC, and messaging systems that are based on JSON support evolutionary interfaces. We don't break the contract by adding parameters, by adding attributes to structures, or by adding methods if we're using these technologies. Per Jeff Bezos, never hold a meeting that you can't feed with two pizzas.
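A tiny Python sketch of that evolutionary-interface idea with JSON (the field names are invented for illustration): the service adds a field in v2, and an old client that only reads the fields it knows keeps working.

```python
import json

# Old client, written against v1 of the message contract.
def old_client_parse(payload: str) -> str:
    msg = json.loads(payload)
    return msg["name"]   # reads only the fields it knows about

# v2 of the service adds a field; the contract is extended, not broken.
v2_message = json.dumps({"name": "Ada", "tier": "gold"})
assert old_client_parse(v2_message) == "Ada"
```

gRPC and Thrift give you the same property at the IDL level: new fields and new methods are additive, so old stubs keep functioning.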
All right, so we got maybe a few more minutes to cover a couple of other things. I'm going to mention some benefits perceived by companies that get it right and thought to be benefits by most people, but not always realized. Not everybody has the right problem and not everybody gets the benefits because maybe there's a failure to execute or something. In general, people are looking to deliver software faster. That's why they want microservices. That's the thing driving enterprises to microservices. We need to be able to roll things out faster.
By breaking these problems down into microservices, we take small risks, get them to production, and we have the ability to innovate rapidly. Remember, any microservice should be able to be rewritten from scratch in two weeks. How long could it be? If you got it completely wrong, it's two weeks to rewrite. It's not the endeavor of a lifetime, building some giant, monster, interconnected thing. Fine-grained scaling, so scalability, that's another reason people like these systems. If you do get lucky and it becomes a giant commercial hit, it can handle cloud scale, this architectural style. It gives you the ability to embrace new technologies.
Now, if you're using a monolith and somebody wants to use some new framework or messaging system or gizmo or whatever it is, language, it impacts potentially everybody and you get a lot of vetoes. Risk-averse companies are just not going to do that very often. On the other hand, if you've got a bunch of microservices and this team of seven people has autonomy and they say, "We're going to try this new thing," nobody else cares as long as they maintain their service contract. They might stumble on something that is this amazing speed to market unlocking magic thing that can then cross-pollinate throughout their whole system. That's nice.
Organizational alignment, my microservice environment over here and my monolith environment over here, the guy who's in charge of customer experience for billing comes over here and says, "Gee, who do I talk to?" Maybe the business function is distributed throughout this thing and several other monoliths, but in the microservice world, it's like, "These guys do billing." They're in the billing context and they maybe have two or three services, maybe one that does billing. When you ask them for something, you've got one group that can handle that problem, that can add that feature.
Now, of course, we don't want to be Pollyanna about this. If you build something in the back end and that service is now in this interface, but users come in through another service that provides HTML pages, well, yeah, these guys are going to have to update their stuff, too. The idea is you can atomically deploy that new functionality and people consume it when they want and that organizational alignment allows you to get these things to market faster. Responding faster to change, composing your applications from these services and reusing them, and the resilience to scale out and scale back.
Communications options, there are a lot of them, but actually, I have a diagram here that didn't show up, which is kind of interesting. In the microservice world, we try to use platform-agnostic communication systems. Imagine you have the internet, and if you want people to access you through mobile devices, phones and tablets, or you're expecting IoT gizmos like thermostats and what have you to be sending you telemetry, whatever the case, you've got a lot of infrastructure out there that knows how to do HTTP: firewalls, load balancers, proxy servers, reverse proxies. It's hard to beat REST when it comes to communicating over the internet. A lot of the APIs that people build for the internet are REST-based.
When you get inside your data center, if this is your gateway service, let's say, you get inside your data center, and this guy is calling this guy, which is calling that guy, in the old days of the monolith, you hit the monolith and it just called some functions and returned to you. In the days of the microservice, you're making tens of network calls for one of these responses or inquiries over the internet.
People have often gone to RPC on the inside for two reasons. Number one, inside your data center there aren't any proxies, reverse proxies, firewalls, web servers, things like that. Or, there are often fewer of them. Services can talk to each other directly. These technologies, gRPC, or Protobuf with some RPC system, or Apache Thrift, are going to be an order of magnitude faster than REST, and they're often going to have a much smaller network footprint payload-wise.
They could be used to provide this next layer. Then, you're also going to want to have a responsive, fast kind of system so that when all of these results come back, the user has a great experience and they get their response back. That means you need to decouple slower services or services that depend on external resources. You may end up sending messages to some of these guys. In the messaging world, we've got all sorts of ways to decouple ourselves through async schemes, and some of the really big tools that you see here are Kafka and NATS, for example, for messaging in these platforms. Those are some of the common communication options you run into with microservices.
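A minimal Python sketch of that async decoupling, using a stdlib queue as a stand-in for Kafka or NATS (the trading example and all names are invented for illustration): the fast path enqueues and responds immediately, while a slower consumer works through the backlog on its own schedule.

```python
import queue
import threading

trade_queue: "queue.Queue[dict]" = queue.Queue()
settled = []

def settlement_worker():
    """Slow consumer; external calls and heavy work would live here."""
    while True:
        trade = trade_queue.get()
        if trade is None:            # shutdown sentinel (unused in demo)
            break
        settled.append(trade["id"])
        trade_queue.task_done()

threading.Thread(target=settlement_worker, daemon=True).start()

def accept_trade(trade: dict) -> str:
    trade_queue.put(trade)           # fire-and-forget onto the bus
    return "accepted"                # respond to the user right away

assert accept_trade({"id": "t-1", "symbol": "MSFT", "qty": 500}) == "accepted"
trade_queue.join()                   # demo only: wait for the consumer
assert settled == ["t-1"]
```

With a real broker, the queue would also survive process restarts and fan out to multiple consumers, which an in-process queue obviously does not.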
Some of the history of RPC: a lot of RPC stuff has gone on, but the guys that have sort of stood the test of time are Google's gRPC. Now, that was just open-sourced in 2015 and it's now a Cloud Native Computing Foundation project, but the underlying serialization scheme is protocol buffers, so these guys kind of go together. Protobuf's been around for a long time. Then, Apache Thrift is a very, very similar thing. Apache Thrift came out of Facebook. It's now an Apache project, and so you've got CNCF gRPC, you've got Apache Thrift, and you've got serialization schemes built into both of those, Protobuf on one side, and Thrift has its own pluggable serialization as well. Those are both good options in the microservice world.
Other things tend to be more break-y. When you add something, it breaks the interface, or it's language-specific, like Java RMI. When we decompose monoliths, another thing that's nice about RPC is that if you've got a module inside a monolithic system, it's going to have functions. It's not going to have resources and verbs that you use to interact with it. It's going to have Java functions, C# functions, C++ functions. Taking that thing and putting it into a microservice is very easy in these RPC systems, whether it's gRPC or Thrift. You just get one of the packaged servers, you describe the interface that this thing has in the interface definition language, and then you build stubs for whatever languages you want to use.
Literally, to build a microservice, all you need to do is describe the interface and you're done. Now, you package it up in a prebuilt server. You've already written the code that does the thing that you want it to do, and then you just hook on the stub. Then, at this point, anybody in any programming language can now use the client stub to invoke that function and it happens over a network. When you're taking a brownfield monolith and you're peeling out pieces of it so that you can get better scalability or faster innovation, that's a really nice way to go. Another reason why REST is great for some things, but RPC still has some value-add.
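Here's a hedged sketch of that describe-the-interface-and-hook-on-a-stub workflow, using Python's stdlib `xmlrpc` as a stand-in for gRPC or Thrift (the tax function is invented; a real gRPC setup would start from a `.proto` IDL file and generated stubs instead):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def compute_tax(amount_cents: int) -> int:
    """The existing monolith function we're pulling out as a service."""
    return amount_cents * 8 // 100

# Packaged server: expose the function, run it in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(compute_tax)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the stub makes the network call look like a function call.
stub = ServerProxy(f"http://127.0.0.1:{port}")
assert stub.compute_tax(1999) == 159

server.shutdown()
server.server_close()
```

The shape is the same in gRPC or Thrift: the module's functions map directly onto RPC methods, which is why brownfield decomposition is so much easier this way than recasting everything as REST resources and verbs.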
This talks a little bit about the evolution of architectures and bounded contexts. I'm going to leave that for your reference. Then, gateways: anytime you cross a contextual boundary, a bounded context, going into an application from the internet, or going from one subsystem to another, you might want to have a gateway. Gateways can do things like monitoring and dynamic routing. You can use them for acceptance and stress testing. They can do circuit-breaking and caching, so a lot of useful features like that show up in microservice architecture.
Envoy is just one example. Linkerd, Envoy, there are lots of these sorts of gateway services out there that you could just deploy in the microservice world. Because microservices are becoming so important, many companies have had to solve this problem, and so it's about time for us to stop solving this problem and start using some of the tools that many of these great companies have open-sourced for us to use. Linkerd, for example, is a Cloud Native Computing Foundation gateway product we can use to cross contexts. Envoy is another one. Envoy is being considered for inclusion in the CNCF [inaudible 00:56:52]. My suspicion is that they'll do well.
Then, this last slide is where I'll wrap up. What are the problems? Nothing's magic. There's always issues. Well, if you're building something small, creating all of this container infrastructure and dynamic orchestration, it's not easy. It's a whole new set of technologies to learn and investing in that to build something simple and small is probably a waste of time. I mean, in the long run, those are good skills to have, but if you're trying to build an application and get it to market and it's something simple and it's something small, microservices are for building big applications with multiple teams. It's designed to solve some of the problems of gridlock that you get when you have 15 different teams trying to integrate a single monolith.
If you've got one team, well, I just don't think microservices are often the right fit if you have one team, if you're building one thing. Monoliths are faster to build, they're simpler to build, and if you don't have scalability or time-to-market problems because it's small and it's new, this is a great way to go, and you're not dead if you decide that down the road you need to do microservices. You can take this piece and peel it out and you could take this piece and peel it out and, boom, you've got a candidate microservice that owns its own state and has its own capability and provides an interface.
It's certainly possible to decompose monoliths, and if you know enough about microservices, you can sort of watch yourself and realize when it's time when you hit the tipping point. You can also build things a little bit more carefully, which is just good software engineering that will make it easier for you to peel these things out into microservices.
Another place where small things work well as monoliths, where maybe microservices don't make sense, is in scenarios where you need things that are low latency. When you have microservices, these guys are communicating over networks with each other, and so a single request from the outside world might cause seven requests on the back end. I think that's the Netflix statistic. If you need things that are really low latency, obviously, calling a function inside your process is a lot faster than putting a packet over the network and having somebody do something and having them send the packet back to you. Definitely some things to think about there.
Also, there's an explosion in the number of processes to manage, so you've got to have tools. You're going to need to learn how to use Swarm, Kubernetes, Rancher, something in order to manage these guys. Also, this is new technology. Not a lot of teams have good experience with this stuff, so there's going to be a lot of learning and you have to be ready for that. Heavy network utilization. We're in a world where 10-gigabit Ethernet is sort of standard in the data center and uplinks are 40 and a hundred, so this works well. If you're going to run this on some old infrastructure, you might be in for some surprises on the problems that these links will cause.
Small-to-medium applications, again, are easier to build as monoliths, and integration is no longer any one development team's problem. That's another big one. You have to have the culture right. You've got to have that piece figured out. You have to have integration tests because if I have a pipeline for my team to directly roll this thing to production, we have to be very explicit about these contracts and know that those contracts are holding and that people aren't depending on things that are outside the contract. The whole community's got to have a resilience and a different way of thinking about things here.
All right, so that was sort of where I wanted to wrap up. There's some other things for your... food for thought here in the deck that get into some ecosystem stuff and things like, but really talking about the pros and cons I think sums up a lot of the things that I wanted to get to. What I'll do is break here. I know we're kind of right at the end, but if there are any questions that folks would like to add, I'd be happy to take them.
Thanks, Randy. As you said, we're at the end, so if you need to drop, thank you for joining and just remember that we will be sending out the recording to everybody next week. We do have a couple of questions that came in, but if you have any others, please drop them into the box on the left.
Randy, the first one, could you just go over... On slide four, you were talking about API gateways. Do they replace ESBs in microservices?
Great question. In a microservice application, a small application is going to probably have... Maybe I'll just do a little drawing here on this last slide. A small application is probably going to have one context, so it's an app. Let's say it's Uber or it's Lyft or something like that, and so we have billing and maybe this is going to go out and interact with some external service. We had notifications. That's going to interact with some external service. They have rider management, driver management. We have accounts tracking. Then, we have some UI components.
Well, if we've got mobiles and tablets and things like that coming at us, we want those things to be isolated from the architecture of this microservice application. We don't want them to have any coupling to these microservices because we want to be able to evolve the application. We want to be able to have cellular division over here and we want to be able to combine these two services and change the contract of this interface and update all of the clients and things like that if we need to without breaking the world. What people end up doing here is they create a gateway application. I think it's a lot richer, this gateway concept, in a microservice system than an enterprise service bus or even just like a load balancer or reverse proxy or something.
You're generally going to want to say, "This is the interface that these guys want." Netflix has done a great case study on this and they found that if it's a TV, they need to supply a certain interface to the TVs. If it's a phone, don't give the phone a hundred records back. Give them 10 because they can only show three on the screen and 10 is enough for them to scroll backwards and forwards and they usually have a crummy connection. If it's a tablet, give them 20 records, and if it's a desktop computer, give them a hundred.
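A toy Python version of that device-aware gateway decision, using the record counts quoted in the Netflix example (the TV case isn't given a number in the talk, and the fallback for unknown devices is my own assumption):

```python
# Page sizes per device class, from the numbers quoted in the talk.
PAGE_SIZE = {"desktop": 100, "tablet": 20, "phone": 10}

def page_size_for(device: str) -> int:
    """Gateway decision: how many records to request for this device.

    Unknown devices fall back to the smallest page (an assumption,
    not from the talk).
    """
    return PAGE_SIZE.get(device, 10)

assert page_size_for("phone") == 10
assert page_size_for("desktop") == 100
```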
They would actually implement different gateways for different devices, and that's exactly the concern of the gateway. This guy's concern is billing. It's got a single responsibility. It's a microservice with one responsibility and that's billing. This guy's responsibility is providing a good interface to a desktop. This guy's responsibility is providing a good interface to a phone. Whether that's one microservice or two really depends on your architectural decisions, but these guys sit on the event horizon of your bounded context. In the case of a simple application, this might be exactly what you've got.
In the case of a more complex application, you might have two subsystems. You might have the blue subsystem and you might have the red subsystem. In the red subsystem, you've got a bunch of microservices. Well, this is a team of people that... maybe two teams, one team that handles these first two, and then a team that handles these second two and they work with each other and they interact a lot and they're all building stuff for the support part of your business. They're building the support app, and so while you might think of this other subsystem of the app as being a different app, that might be the case, or maybe you think of it as just another part of the same app. You also might want to have a gateway service here so that when the red guys want to consume features from the blue system, maybe the blue system is sales, and so support needs to call sales from time to time to get certain capabilities.
The blue team can provide an external interface that's just enough for red to get the features that they need and no more, and that gives the blue guys, again, this substitution principle kind of ability to change their microservice structure and do what they need internally. When they make those changes, the only thing that they have to update is the gateway because they don't change the contract with the other subsystems that are making use of them. They just change the internals, and so this guy can fix up any differences, obviously, to some degree.
It's really an isolation layer that allows people outside the context of your subsystem or your application to get in. Then, there are a lot of typical operational concerns that naturally fit at that location like circuit-breaking, geo load balancing and failover, acceptance testing, and things like that.
Okay, thanks. We also had a question. "Is there a standardized communication mechanism between microservices?"
Yeah, a great one. I'm guessing that probably dropped in before that discussion on the communication stuff, but I'll just reiterate. There isn't a standardized one. No, there's no standard for microservices, but there are popular APIs, which are REST, and so this is... Ooh. I accidentally clicked there. This is the one that's used in the outside world to come in most commonly. If you're building a simple app and REST is fast enough, then I just say use REST everywhere because it's so widely adopted and understood. There are times when you're decomposing monoliths and when you need things to be fast inside your platform, you might want to use some form of RPC for synchronous calls.
A lot of times at the perimeter, you'll have a REST API, and then internally, you'll have a bunch of RPC-based services, but these guys tend to do the synchronous stuff. There's usually some point within your application, when requests come from the outside, for tasks that are long-running or queued. Like, if I'm a trading system and this guy says, "Hey, I'd like to buy 500 shares of Microsoft," we're going to make sure they're allowed to. We're going to accept the trade and verify with a couple of different services maybe, and then we're going to send a message, "Do that trade," to a decoupled piece of the application behind the scenes that then continues working. We're going to respond and say, "Hey, I got your trade in. It's working."
Then, if these guys progress, they're going to send messages back and then that's going to flow out through WebSocket or something and you're going to see updates and things like that, or maybe it's long polling, chunk-based communications to the outside. There are lots of possibilities there, but those are the three kind of main mechanisms: resource-oriented like REST, functional like RPC, and then async messaging.
The big platforms on the async side are Kafka and NATS. In my experience, there are other things that are great, but tools like RabbitMQ and ActiveMQ, that was a different era. Great, great beasts. They're awesome, but they're designed to be like relational databases. They run on a single machine. You have to do some pretty hairy stuff to set them up HA and they don't scale out very well, whereas NATS and Kafka are cloud native systems that scale very easily to whatever size you need them to be.
Okay. Could you also further elaborate on how each microservice can have its own data store without replicating a DB for each MS?
Sure, so let's take a quick look at that. Imagine you decide you're going to use AWS (you can do this on-prem, too; I'm just using AWS as an example). Let's say you use Amazon Web Services as your deployment environment and you use kops to deploy a Kubernetes cluster. Now you've got this awesome hundred-node, hundred-VM Kubernetes cluster and you can just drop your microservices onto it with specs and so on and so forth. All of our microservices are running as containers somewhere on these VMs, controlled by Kubernetes, and some of them are stateful. If this service is stateful, and this one, and this one, they're different services but they're all stateful, and we don't want a database embedded in each one of them. That would be crazy.
Instead, what we want to do is we want to have some sanity because backing up databases is critical. Securing data, data loss prevention, all of that, these are critical things that need a lot of thought to get right. What we often would do is say, "Look, let's figure out what our teams are going to need and let's pick one good solution that's going to work for them, or two, maybe three. Key value store, column family, document database. Worst case, add a relational database as well." Relational databases more and more show up in the data analytics space as like a warehouse kind of a function and less and less in the online transaction processing side of things because they can't go to cloud scale, at least not at a reasonable price, anyway.
We could say DynamoDB on Amazon, awesome. We can scale it as big as we need. We use that as our platform, we establish best practices for backing it up and making sure the data is secured, and we look at all of their SLAs and say, "Aha, they're replicating our data and everything looks good." Each one of these microservices is going to have an aggregate, its own private space in that DynamoDB, maybe with policy configured such that it's the only one that can even access it. If my microservice wants information that lives in another service's store, I have to call that service; it owns that state, it gets the data, and it returns it to me. Same in the other direction: if another service wants my information, it has to ask me, and I fetch that state and return it.
The obvious disadvantage is I can't go to one database and get all of the information I want, but the advantage is that now if this state needs to change or needs to be reorganized, there's only one microservice that you impact when you do that. This state is perfectly aligned with that microservice's needs. When we use relational databases, everything's normalized and so it's not good for anybody. Everybody has to do a join to get the data that they want and that's the most expensive thing you can do in that database, where in this world, you get a document. You get a blob, you get a column family that's the exact data you want.
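The ownership rule described here can be sketched as a tiny Go service type. This is a hypothetical example: `OrderService` and its methods are invented for illustration, and an in-memory map stands in for the service's private DynamoDB table or Cassandra keyspace.

```go
package main

import (
	"errors"
	"fmt"
)

// OrderService owns its aggregate's state outright. The map stands in
// for a private datastore that only this service's credentials reach.
type OrderService struct {
	store map[string]string // orderID -> status
}

func NewOrderService() *OrderService {
	return &OrderService{store: make(map[string]string)}
}

// Place records a new order as "working".
func (s *OrderService) Place(orderID string) {
	s.store[orderID] = "working"
}

// Status is the only way other services may read order state: they
// call the owning service's API instead of querying its tables, so a
// later reorganization of the store impacts only this one service.
func (s *OrderService) Status(orderID string) (string, error) {
	st, ok := s.store[orderID]
	if !ok {
		return "", errors.New("unknown order")
	}
	return st, nil
}

func main() {
	svc := NewOrderService()
	svc.Place("T-1001")
	st, _ := svc.Status("T-1001")
	fmt.Println(st) // prints "working"
}
```

No other service ever touches the map directly; the trade-off is exactly the one described above: no cross-service joins, but a stable, single point of impact when the schema changes.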
Column family databases can store jagged rows, so you could have one row with five columns, another row with 30 columns, another with 300 or 3,000. Document databases can store JSON documents of any shape and size. You have a lot of flexibility because you're working with a schemaless database. Of course, your application still has a schema in practice: it has things it needs, and they need to be stored where it expects them, so it's not like we can ignore everything we learned about good data design and organization. We still want entity relationship diagrams, we still want to understand how to break up the aggregates and all of that, but we can use a single database cluster. In that example, you could have substituted a Cassandra cluster for DynamoDB, and each of those services could have had its own keyspace.
Is there a framework or a language that you would say is more suitable than another for microservices?
That's a great question, and I'm glad this is virtual so no one can throw eggs at me, but I'll just give you my personal experience. There are a lot of people with a big, deep devotion to certain languages. There are a lot of languages I love, but I fully admit that maybe they're not perfect. The more modern the language, the more likely it is to be a great fit for microservices, and containerization is a big issue.
Go, for example, is statically linked by default, so you get a single Go binary. I don't know if you've ever installed a Go program on your computer, but if you install a C++ program built on Red Hat, it might not work on Ubuntu because it depends on a different glibc, or on some other shared library. Whereas when you compile a Go program, you can just wget it from the internet, chmod it executable, and run it, and it just works, because the only thing it depends on is the Linux kernel.
This is exactly what you want in a containerized microservice. Go also has really great, straightforward concurrency; it uses the communicating sequential processes (CSP) model, so that's a nice language. Scala is a great language because it gives you functional features, which map nicely onto service calls since you're calling functions, so that's nice. It gives you enough object technology to really get the benefits from that, and it also supports the actor model, so it's got a great concurrency model.
The downside of Scala is that you need a JVM, so if you're going to run 15 Scala containerized apps on a VM, you're going to need 15 copies of a JVM, so you get a bit of bloat. The tool chain, too: Java did some great and amazing things, but it's sort of last generation. It was all about, "Hey, our package is the WAR file or the JAR file, and our runtime is Tomcat or WebSphere," or something like that. We're moving up one more notch of abstraction from that, to a level that isn't specific to any one language. There's a little bit of baggage with Java and Scala that is kind of a bummer.
Then you've got Node.js. Node.js is single-threaded, but you don't have to worry about that when you're containerized, because you can scale horizontally with all of these platforms. You can get the scale you need, and Node is event-driven. Because it's event-driven, everything inside that system is async, and it becomes very, very efficient because you're not switching threads. Go does something similar because it's using goroutines under the covers. To me, Node and Go are the languages I see people driving rapid, rapid change with, and Go is the language of container tech.
Kubernetes is written in Go, Docker is written in Go, and the list goes on, but Node is skyrocketing, too. So Node and Go, and Scala's great. Rust is another one that I like. C++ is a language that only an engineer could love; I love it, but I fully admit it's evil. Rust is a rethinking of C++ for this new era, so I think that's another neat language. The great thing about container technology is you can use the language you have great skill in, so you see stuff in Ruby, and Python has had sort of a resurgence because of its strength in machine learning and data science, so Python shows up in containers a lot. That would be my long answer, sorry.
Thanks, Randy. We do have a few more questions here, but we're getting pretty short on time here, so if we haven't answered your question yet, we're going to send the answers out with the recording next week. We'll get Randy to answer those for you offline. Thank you, everybody, for joining. We hope you enjoyed today's webinar, and you can now disconnect. Thank you.