Software applications have followed a standard design approach for decades: teams build and configure databases, implement server-side services and features, and develop a user interface that lets users interact with the application. These three components are usually complex and carry many interdependencies. As applications evolve and software teams experience attrition over the years, these systems often turn into monoliths that are difficult to maintain and upgrade. Challenges such as “dependency hell” can emerge, where it becomes difficult to track how components interact and exchange data. Dependency management then becomes a full-time job, because modifying one area of the application can cause unexpected behavior in another. Adding new features is another challenge: with so many interdependencies across components, it is hard to understand each component’s responsibilities and where one responsibility begins and ends. All of this increases the burden on teams maintaining the application and scaling it for future requirements.
Microservices are a design approach in which the components of an application are broken down into lightweight, independent services that communicate through Application Programming Interfaces (APIs). Each service can maintain its own state and manage its own database, or remain stateless. Microservices should be granular, each focused on a specific domain or business capability. This approach comes with many benefits, including:
Ensuring a modular design
Decreasing the risk of failure in one service impacting another
Making updates and enhancements straightforward and focused
Deploying services independently and easily
Selecting the technology that best fits the requirements of that service
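To make the idea concrete, here is a minimal, hypothetical sketch of such a service in Python using only the standard library. The “status” capability, service name, and JSON shape are illustrative assumptions, not part of any real system: the point is that the service owns one narrow responsibility and is reachable only through its API.

```python
# Minimal sketch of a single-responsibility microservice (hypothetical
# "status" capability), reachable only over a JSON HTTP API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Stateless handler: one narrow capability, nothing else."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "status", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet in this sketch

def serve(port=0):
    """Start the service on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Because the service exposes only an API, another service (or a test) can consume it without knowing anything about its internals, and it can be replaced or redeployed independently.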
In contrast, traditional monolithic applications must be completely rebuilt and redeployed when components change, lose their modular structure over time, require scaling the entire application rather than individual components, and limit flexibility in technology choices. There are many topics to think through when approaching microservices, such as the design of each service and whether you’re migrating a monolithic application or building one from scratch.
The Integration of Microservices & Containers
Containers have become ubiquitous in software development and deployment, and our Federal clients have embraced containers over traditional virtual machines. Containers allow development teams to build features and services for an application and ensure that those features behave consistently in every environment – development, testing, and production, on both virtual and physical servers. Containers are isolated from one another while sharing the host’s resources, so multiple containers can run on the same server without a technical issue in one impacting the others. Containers can also be ephemeral, created and destroyed easily. This lets teams deploy and test new features in isolation, in any environment, without impacting another developer’s workflow or other components of the application.
A container maintains its own runtime environment, tools, databases, and APIs – creating a completely isolated environment for service development. This provides a natural approach for creating and deploying microservices while incorporating microservice development into a team’s DevSecOps pipeline and workflow. A developer can use Docker or OpenShift to create a container in seconds to run, test, debug, and deploy their microservice. Once finished, they can destroy the container instance just as quickly, with no impact on other team members or other features of the application. This process shortens the development cycle and the time to market for new features and enhancements.
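As a hedged sketch of that workflow, a developer might package a single microservice with a Dockerfile like the one below. The base image, file names, and port are illustrative assumptions, not a prescribed setup:

```dockerfile
# Isolated runtime for this one service (base image is an assumption)
FROM python:3.11-slim
WORKDIR /app
# Install only this service's own dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the service code into the image
COPY . .
EXPOSE 8000
# Hypothetical entry point for the microservice
CMD ["python", "service.py"]
```

A `docker build -t status-svc .` followed by `docker run --rm -p 8000:8000 status-svc` brings the container up in seconds; the `--rm` flag tears it down on exit, leaving nothing behind for teammates to clean up.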
With tools like Docker Compose, teams can define each microservice as a container within a single file and run the resulting multi-container application in any environment (e.g., a staging or testing environment). The containers can then be deployed to Docker Swarm or Kubernetes for orchestration, deployment management, and automatic creation and teardown of containers as demand changes (i.e., scaling). Using Docker in conjunction with Docker Compose provides complete container and microservice integration, as each service is configured and managed in the container ecosystem.
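A minimal, hypothetical `docker-compose.yml` along these lines might look as follows – the service names, build paths, and ports are assumptions for illustration only:

```yaml
services:
  messaging:             # one microservice per container
    build: ./messaging   # each service has its own build context
    ports:
      - "8001:8000"
  alerts:
    build: ./alerts
    ports:
      - "8002:8000"
    depends_on:
      - messaging        # startup-order hint between services
```

A single `docker compose up` then builds and starts the whole multi-container application, and the same definitions can later feed an orchestrator such as Swarm or Kubernetes.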
Drinking Our Own Champagne
At EGlobalTech, we’ve developed microservices for multiple clients, including building new systems and migrating monolithic applications to lean, microservice-powered systems. One recent client success involved an existing system whose dozens of interdependencies and monolithic architecture made maintenance and upgrades cumbersome. Our team leveraged the Strangler Pattern – a design pattern for incrementally migrating legacy components to microservices – to develop, test, and deploy dozens of new microservices that powered system messaging, alert aggregation and notification, and data formatting across multiple data formats. This enabled us to test our microservices against the existing services side by side and transition each service to its new microservice without any interruption to users.
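The heart of the Strangler Pattern is a routing facade that sends migrated request paths to the new microservices and everything else to the legacy monolith. The sketch below is a simplified illustration of that idea, not the client’s actual code; the backend hostnames and path prefixes are hypothetical:

```python
# Strangler Pattern routing sketch: a facade decides, per request path,
# whether the new microservice or the legacy monolith should serve it.
LEGACY_BACKEND = "http://legacy-app:8080"      # hypothetical monolith host

# Paths already migrated to microservices (hypothetical hosts/prefixes)
MIGRATED = {
    "/messages": "http://messaging-svc:8000",  # system messaging
    "/alerts": "http://alerts-svc:8000",       # alert aggregation/notification
}

def route(path: str) -> str:
    """Return the full backend URL that should serve this request path."""
    for prefix, backend in MIGRATED.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend + path              # strangled: new microservice
    return LEGACY_BACKEND + path               # not yet migrated: monolith
```

As each legacy capability is rewritten, its prefix is added to the migrated set; users keep hitting the same facade URL throughout, which is what allows the transition to happen without interruption.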
Even though microservices aren’t a silver bullet and require careful thought in their design and implementation, they establish a foundational application design that is modular, scalable, and extensible as the system evolves.
Stay tuned for Part 2 of this blog series, which will cover common microservice design techniques.
Contact info@EGlobalTech.com to learn more!
Copyright 2019 | EGlobalTech | All rights reserved.