By now you no doubt understand the advantages of using a microservices architecture, especially in greenfield applications, and in new organisations that need to achieve efficiencies wherever they can. But what about your legacy code and applications? Do you totally rewrite the monolith, or do you chip away at it with new functionalities, added as microservices, over time?
You could pull out some functionality from the monolith, something that isn’t scaling well that you need to rewrite anyway. You might choose to implement it as a standalone service. But then you’ll have a hybrid application that presents delivery challenges. How do you deliver those hybrid apps while they’re in transition to a more flexible architecture based on microservices?
These are just some of the challenges organisations face when planning future products. Fortunately, there are several strategies you can use when planning a microservices-based system. But the issues can be complicated when existing monolithic systems are in play.
Be a monolith…at least for now
Whether you’re working with legacy systems, trying to incorporate microservices alongside a monolith, or you’re working in a greenfield, the process can get complicated.
Boundary decisions can be tricky when you're designing a greenfield app; you don't want to spend all your time changing the boundaries of your services. Service A is responsible for a particular thing, but what if you realise that's wrong? The thing really belongs in service B, because it needs access to another service that only B touches.
When starting out with a greenfield application, Martin Fowler and several other experts recommend letting it be a monolith, initially. Don’t try to separate everything out into microservices before you know where all the dependencies lie, they say. That period might only last for the first few sprints. But until patterns start to emerge, and you understand the problems you’re trying to solve, don’t spend time on a microservices architecture.
Of course, there will be a few obvious things to separate out, such as an identity service, or a login or profile service. There will be a few obvious, fairly common things you can carve off and start working into a microservice right away. But you want to wait to see how things shake out.
Observe patterns before setting services in stone
If you haven’t solved a problem before, see how patterns emerge before you start breaking down the dependencies into microservices. Data tends to be difficult, whether you’re working with greenfield apps or interfaces to legacy apps. Whenever you’re dealing with relational databases, you’re going to run into interesting data structures that you must resolve or overcome.
You may have in your monolith (or in your idea for an application you’re about to write) things that you want to do to, or with, a chunk of data. But there are foreign keys involved, and you have two tables, with a foreign key from one to the other. What if you want these two tables ultimately to be serviced by two separate microservices?
All of a sudden, you must load a foreign key from one table to the other, but if you don’t have the code for the object to which the foreign key refers, you’ve got lots to think about: Am I caching things here? Can I ensure transactional integrity? These are the kinds of sticky problems you must resolve when dealing with relational database issues.
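A sketch of the shape this takes in code, assuming hypothetical order and customer services (none of these names come from a real API): the foreign key survives only as an ID, and the JOIN becomes a remote lookup you may choose to cache.

```python
# Hypothetical sketch: `orders` keeps only the foreign key (customer_id);
# the customer row now lives behind another service. All names here are
# illustrative, not part of any real API.

class CustomerClient:
    """Stand-in for an HTTP client talking to the customer-owning service."""

    def __init__(self, fetch):
        self._fetch = fetch   # e.g. a function wrapping GET /customers/{id}
        self._cache = {}      # local cache: faster, but possibly stale

    def get(self, customer_id):
        # Caching answers the "am I caching things here?" question with
        # "yes" -- and accepts stale reads as the price of losing the JOIN.
        if customer_id not in self._cache:
            self._cache[customer_id] = self._fetch(customer_id)
        return self._cache[customer_id]


def order_summary(order, customers):
    # What used to be a JOIN across a foreign key is now a remote lookup.
    customer = customers.get(order["customer_id"])
    return f"order {order['id']} for {customer['name']}"
```

The trade-off is explicit here: the cache buys you speed and resilience, but you have given up the transactional guarantee the single database gave you for free.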
In fact, there are many things in your problem space that don’t have to be transactional. For example, if you’re using a recommendation service within a shopping cart application, that data doesn’t have to be transactional. If the system hasn’t finished processing the last 1,000 searches and a particular recommendation doesn’t yet reflect them, nobody really notices. If my Twitter or Facebook feed doesn’t have the very latest post from someone’s wall, well, that’s ok.
But that’s very different from a banking application, where I deduct money from one account and send it to another. That had better be a consistent, durable, and atomic operation.
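A minimal sketch of that atomicity requirement, using SQLite as a stand-in database (the table and account names are invented for illustration): both updates commit together, or neither does.

```python
import sqlite3

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts atomically: both updates commit,
    or neither does. Table and column names are illustrative."""
    with conn:  # sqlite3: commits on success, rolls back on any exception
        cur = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE name = ? AND balance >= ?",
            (amount, src, amount),
        )
        if cur.rowcount != 1:
            # Raising inside the `with` block rolls the transaction back.
            raise ValueError("insufficient funds or unknown account")
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE name = ?",
            (amount, dst),
        )
```

Once the debit and the credit live in two different microservices, this one-line `with conn:` guarantee is exactly what you lose, and what patterns like sagas or two-phase commit try to win back.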
Understand the benefits of the monolith
But many problems are not of that nature. Companies that start investing in microservices very rapidly gravitate toward messaging, key-value stores, and other technologies that facilitate communication and data sharing between services. And suddenly, this becomes a difficult problem.
In one sense, it is easier to construct a monolith, since you don’t have to worry about some of these problems. If you load all of the data and two services run in the same process, that’s ok. But once they’re in two separate processes, in two separate containers, on separate virtual machines, you must worry about data coordination. So there are definitely some problems you must solve as a result of choosing to go with microservices.
Consider the benefits of microservices and containers
How can a decision to use containers help solve some of the architectural problems that you face? Containers are a good fit for microservices because they serve a single purpose. A container’s mission is to run one process and to listen on one port, although there can be exceptions. But generally it’s one container, one service, one port. Containers boil things down to the essence of what a microservice really is, and they match the delivery pattern quite well.
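As a sketch of what "one container, one service, one port" means at the process level, here is a minimal single-purpose service in Python; the endpoint, handler name, and port are all illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecommendationHandler(BaseHTTPRequestHandler):
    """A single-purpose handler: one service, one responsibility."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def serve(port=8080):
    # One process, listening on one port -- matching the container model.
    HTTPServer(("0.0.0.0", port), RecommendationHandler).serve_forever()

if __name__ == "__main__":
    serve()
```

The whole service is one process bound to one port, which is exactly the unit a container is designed to run.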
One of the reasons that containers caught on so quickly within development organisations is that they are so fast. They’re quick to create and tear down, because you’re just starting a process. If you don’t have to build an image that already exists, and you don’t have to download it from the repository, then starting a container is just as fast as starting your process.
If I’m an engineer working on a service and I need to do integration testing or profiling, I need the 10 to 30 other services my service depends on, and I need to deploy them. If this were a monolithic application, I’d have to deploy the whole thing, which is slow and error-prone.
Often, there will be one server that everybody shares. But is it up to date? Yes, the engineers run continuous integration (CI) and test against each new CI build, but that’s really inefficient, especially since you end up interrupting other people’s work.
But if you have a collection of services you can start, those will start very fast, they don’t usually consume many resources, and they go away quickly. That’s a more efficient way for development organisations to work. What’s more, in that container I can deliver the entire tool-chain on which the microservice depends. I don’t have to read a wiki page to figure out what the 12 environment variables are that I must get right in order to successfully start my application. So there’s a great match between microservices and containers.
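That wiki page of environment variables is also a failure mode you can at least make loud. A small sketch of failing fast on missing configuration (the variable names are invented for illustration):

```python
import os

# Illustrative names -- the point is the pattern, not these variables.
REQUIRED = ("DB_URL", "QUEUE_URL", "SERVICE_PORT")

def load_config(env=None):
    """Fail fast with one clear error instead of a half-started service."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED if name not in env]
    if missing:
        raise RuntimeError("missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED}
```

When the container image bakes in sensible defaults and the service refuses to start on incomplete configuration, the wiki page becomes unnecessary: the service itself documents what it needs.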
Microservices want to be the very essence of one capability, and containers just want to serve a single process. They go hand in hand.
While you could run a monolithic application inside a container, you would not reap the benefits that containerisation offers.
The impact on IT Ops
While many operations teams are embracing containers, there is less adoption in older, brownfield organisations, because the tooling is very different. The tooling you use for a monolithic app on a WebSphere application server, one that runs on a cluster, talks to a database, and sits behind a front-end Apache server, is massively different.
You might manage that stack by using Chef or Puppet to configure the virtual machines your app runs on, but those are very different tool sets from Kubernetes, Mesos, or Amazon’s Elastic Container Service. From an ops perspective, you’ll need to learn a whole new suite of tools.
Something interesting is happening with the infrastructure vendors. If you take Docker containers, for example, the image built from a Dockerfile is similar to a Chef or Puppet recipe. So you’ll have some choices to make when you stop using some of the more common infrastructure tools and move to containers. Otherwise, infrastructure vendors will have to learn how to play within the container ecosystem, and some have already begun to move in that direction.
On the container side of things, your operations team’s ability to monitor a microservices application is important, because if something goes wrong, every outage turns into a “treasure hunt,” as one of my colleagues on Twitter stated recently. If that happens you’ll be asking which of the 50+ services is to blame.
Monitoring and logging become much more critical in the age of microservices. They’re already critical, of course, but the job gets harder in a microservices architecture, and you’ve got to be on top of your game.
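One common way to shorten the treasure hunt is to attach a correlation ID to every log line, so a single request can be traced across all the services it touched. A minimal sketch (the field names are illustrative):

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, carrying a correlation ID."""

    def format(self, record):
        return json.dumps({
            "service": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        })

def handle_request(logger, correlation_id=None):
    # Reuse the caller's ID if one arrived (e.g. via an HTTP header),
    # otherwise start a new trace here and pass it to downstream calls.
    cid = correlation_id or str(uuid.uuid4())
    logger.info("handling request", extra={"correlation_id": cid})
    return cid
```

With every service logging the same ID in a structured format, "which of the 50+ services is to blame?" becomes a query instead of a hunt.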
Understand the workloads of the business
When you are working with 5,000 or 6,000 applications, as some of our Fortune 20 banking customers do, you’re not going to convince the business to rewrite everything. But they still want to use core technologies as part of new applications. This is the conundrum.
As they’re making the shift toward microservices, do they need completely different technology stacks for the software pipeline, depending on whether they’re doing traditional monolithic applications, or newer microservices applications? Or a hybrid approach, where they have monoliths actually using microservices?
Enterprises have a choice. Even if you don’t want to switch your runtime container service, you’re probably going to want to run containers locally on your machine, so you have more control while you’re doing QA, debugging, and tracking down problems.
One nice thing about containers is that they tend to be alike. If you use the same Dockerfile, for example, your container becomes a cookie-cutter template that helps you in production: you have the same infrastructure and the same tooling you had in testing.
We want to help our customers shift the workload to where they want it to be, whether that’s in the cloud, in their virtual infrastructure, or into containers that they can push to the cloud. At Electric Cloud, we’re working on being agnostic to container hosting services: to model our customers’ microservice applications inside our product and allow them to deploy one day to Kubernetes, and the next day to Amazon. In this way, container services can help you avoid the classic vendor lock-in issue.
Anders Wallgren, Chief Technical Officer, Electric Cloud
Image source: Shutterstock/Kalakruthi