Refactoring Towards a Composable Architecture: A Technical Perspective
As digital transformation accelerates, modern organizations are seeking flexible, scalable, and efficient approaches to software development. Composable architectures, built on Packaged Business Capabilities (PBCs), offer a compelling alternative to traditional development methods. This deep technical dive is intended for seasoned developers and IT professionals who are interested in how to refactor towards a composable architecture.
What Are Composable Architectures?
As digital environments evolve, so too must the architecture that supports them. One such evolution is the rise of composable architectures: a maturation of the microservices and serverless architectures that initially displaced monolithic applications, one that allows us to bundle similar services into headless components that can be deployed rapidly and efficiently.
Composable architectures leverage Packaged Business Capabilities (PBCs) - self-contained business functionalities that can be used, reused, and interchanged based on an organization's dynamic needs. The fundamental principle is creating these PBCs in a modular, decoupled way, enabling businesses to assemble, disassemble, and reassemble their digital ecosystems at will.
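To make "interchanged" concrete, here is a minimal TypeScript sketch. The payments capability and both implementations are purely illustrative names, not a prescribed PBC interface:

```typescript
// Hypothetical contract for a "payments" PBC; all names are illustrative.
interface PaymentsCapability {
  charge(orderId: string, amountCents: number): Promise<{ receiptId: string }>;
}

// Two interchangeable implementations of the same capability.
class HostedProviderPayments implements PaymentsCapability {
  async charge(orderId: string, amountCents: number) {
    // A real implementation would call an external provider here.
    return { receiptId: `hosted-${orderId}-${amountCents}` };
  }
}

class InHousePayments implements PaymentsCapability {
  async charge(orderId: string, amountCents: number) {
    return { receiptId: `inhouse-${orderId}-${amountCents}` };
  }
}

// Callers depend only on the contract, so the PBC behind it can be
// assembled, disassembled, and reassembled without touching this code.
async function checkout(payments: PaymentsCapability) {
  const { receiptId } = await payments.charge("order-42", 1999);
  console.log(`charged, receipt ${receiptId}`);
}

checkout(new HostedProviderPayments());
```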
The Power of Composable Architectures
Composable architectures facilitate the rapid development of business systems with interchangeable components, enabling organizations to adapt to changes in the business environment swiftly and at a lower cost. At its core, this approach splits monolithic systems into independent components or modules that can be deployed and operated separately.
The real power of PBCs in composable architectures is already visible, with tremendous success, in the retail and e-commerce market, where they have reduced complexity, increased personalization, and cut license maintenance costs. For further analysis, see the original article explaining Composable Architectures, and go a step further to explore how Composable Architectures accelerate chatbots within the enterprise.
Navigating the Transition
While the move towards composable architecture is filled with promises of increased speed, flexibility, and scalability, transitioning from monolithic applications is not without its challenges. It requires a deep understanding of your business requirements, existing architecture, and a roadmap for implementation.
Let me repeat: it requires a deep understanding of your business requirements. This is the single most important precursor to any transition or transformation. If you do not firmly and deeply understand the target, desired state, it is impossible to know what will best align with those goals. You will end up building or refactoring something that, at best, won't be used and, at worst, will negatively impact your overall business. So it bears reiterating: take the time to understand, deeply, what the actual business need is before making any decision, composable or otherwise.
Key Considerations in the Transition
As you embark on this transformative journey, here are ten key considerations to guide your path:
- Understand Business Needs: Know your business requirements and processes thoroughly. Composable architecture is all about aligning technology with business capabilities, and this alignment will help you decide which components should be packaged and exposed as services.
- Assess Current Architecture: Evaluate your existing system. Identify its strengths, weaknesses, and the areas of complexity that could benefit from a composable architecture, and pinpoint the components that can be isolated and made into microservices.
- Granularity of Services: Finding the right size for your services is essential. Too coarse, and you lose many of the benefits of a composable architecture; too fine, and you risk creating a distributed monolith with high inter-service communication overhead.
- Data Management: In a monolithic application, data is often centralized and accessed through a single database. In a composable architecture, each service typically manages its own data, requiring a shift in how data is stored, accessed, and managed.
- Inter-Service Communication: Composable architectures rely heavily on inter-service communication. It's crucial to decide on a communication protocol (like REST, gRPC, or GraphQL) and consider how services will discover and interact with each other (see the first sketch following this list).
- Service Coordination: In a distributed system, coordinating changes across multiple services can be challenging. You'll need to implement patterns for distributed transactions and consider how to handle eventual consistency.
- Testing Strategy: Testing becomes more complex in a distributed system. You will need to design a robust testing strategy that includes unit testing, integration testing, and end-to-end testing (see the second sketch following this list).
- Monitoring and Observability: With many moving parts, it becomes essential to implement comprehensive monitoring and observability to understand system health and behavior.
- Deployment and DevOps: Adopting a composable architecture means more frequent deployments, so it's crucial to have robust DevOps practices, including automated deployments and continuous integration/continuous deployment (CI/CD).
- Cultural Shift: This is more than a technical transition. It requires a shift in mindset from large, project-based development to smaller, more frequent updates and deployments. Building a culture that embraces this change is crucial to your success.
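To ground the data-management and inter-service-communication considerations, here is the first sketch: two hypothetical Fastify services, each owning its own data, interacting over plain REST. It assumes Node 18+ (for the global fetch), run as an ES module; the service names and ports are illustrative.

```typescript
// Two PBC-style services talking over REST. Requires Node 18+ and
// `npm install fastify`; run as an ES module.
import Fastify from "fastify";
import { randomUUID } from "node:crypto";

// Inventory service: owns and manages its own data.
const inventory = Fastify();
const stock = new Map<string, number>([["sku-1", 5]]);

inventory.get("/stock/:sku", async (request) => {
  const { sku } = request.params as { sku: string };
  return { sku, available: stock.get(sku) ?? 0 };
});

// Orders service: discovers inventory via configuration, not hard-coding.
const orders = Fastify();
const INVENTORY_URL = process.env.INVENTORY_URL ?? "http://localhost:3001";

orders.post("/orders", async (request, reply) => {
  const { sku } = request.body as { sku: string };
  const res = await fetch(`${INVENTORY_URL}/stock/${sku}`);
  const { available } = (await res.json()) as { available: number };
  if (available < 1) {
    return reply.code(409).send({ error: "out of stock" });
  }
  return { orderId: randomUUID(), sku };
});

await inventory.listen({ port: 3001 });
await orders.listen({ port: 3000 });
```

In a real deployment each service would live in its own container; running both in one process here only keeps the sketch self-contained.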
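The second sketch addresses the testing consideration: Fastify's built-in inject() exercises a route through the full request pipeline without opening a socket, which keeps integration-style tests fast. The route and values are illustrative.

```typescript
// Testing a service boundary with Fastify's inject() and Node's
// built-in test runner (Node 18+): no network socket required.
import { test } from "node:test";
import assert from "node:assert";
import Fastify from "fastify";

function buildInventoryService() {
  const app = Fastify();
  app.get("/stock/:sku", async (request) => {
    const { sku } = request.params as { sku: string };
    return { sku, available: sku === "sku-1" ? 5 : 0 };
  });
  return app;
}

test("known SKU reports stock", async () => {
  const app = buildInventoryService();
  const res = await app.inject({ method: "GET", url: "/stock/sku-1" });
  assert.strictEqual(res.statusCode, 200);
  assert.strictEqual(res.json().available, 5);
});
```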
The Future is Composable, So Leverage It
The MACH Alliance, a global community of tech companies advocating for open, best-of-breed technology ecosystems, is paving the way for a future where composable architectures will be the norm. As organizations prepare for that future, the key question is: are you ready for the transformation?
Traditionally slower markets entrenched in monolithic applications, like the public sector, are ripe for disruption by the composable revolution. The momentum of composable architectures is already palpable, and the trend is only going to accelerate. It is absolutely critical to survey and evaluate existing headless platform options before attempting to build one yourself. Shuffling off Not-Invented-Here (NIH) Syndrome allows organizations to rapidly develop and deploy full solutions that continue to evolve even beyond their own engineering teams, leveraging the full capability of the open source and commercial communities.
Refactoring to Composable Architecture: A Technical Example
To better understand how to refactor towards a composable architecture, let's look at a code example of a simple API built using Node.js with Fastify, a high-speed web framework, and Node-EventStore, a Node.js event store implementation with a focus on Domain-Driven Design (DDD) and Event Sourcing.
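As a starting point, consider a deliberately simplified monolithic endpoint in which ordering and inventory logic share one codebase and one data store. The domain, names, and in-memory map are illustrative stand-ins, not a real production design:

```typescript
// A representative monolithic starting point: one Fastify app where
// ordering and inventory concerns share a codebase and a data store.
import Fastify from "fastify";
import { randomUUID } from "node:crypto";

const app = Fastify();
const stock = new Map<string, number>([["sku-1", 5]]);
const orders = new Map<string, { sku: string }>();

app.post("/orders", async (request, reply) => {
  const { sku } = request.body as { sku: string };
  const available = stock.get(sku) ?? 0;
  if (available < 1) return reply.code(409).send({ error: "out of stock" });

  // Inventory and ordering logic are entangled in one handler.
  stock.set(sku, available - 1);
  const orderId = randomUUID();
  orders.set(orderId, { sku });
  return { orderId };
});

await app.listen({ port: 3000 });
```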
Refactoring towards Packaged Business Capabilities (PBCs)
After surveying the available headless options that may align with your required PBCs and finding none that fit, it will be necessary to refactor, or build net-new, PBCs fit for your purposes. To accomplish this, you will need to extract existing logic into separate but logically co-located services; these serve as the basis for internal PBCs. Each PBC will likely have its own codebase (you could use a monorepo, but it is not recommended) and will run on its own server or container, and therefore should have similar, if not identical, scale, security, and operating parameters. With that in mind, find the natural business "fault lines" within your application systems, whether based on how the system is used (frequency analysis, etc.) or how it is assembled (where Object-Oriented Programming once shined). Along these fault lines, begin by sectioning the code off into logical PBCs within the existing code base, and eventually transition them into physically separate composable elements that interact with the code base via API calls or event sourcing, as in the sketch below.
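Here is a minimal sketch of that end state for the same illustrative ordering and inventory domains. Note that a tiny in-memory, append-only log and an EventEmitter stand in for Node-EventStore, whose actual API is not shown here:

```typescript
// The same flow refactored along a business fault line: ordering emits
// events, and the inventory PBC reacts to them, with no direct coupling.
import Fastify from "fastify";
import { randomUUID } from "node:crypto";
import { EventEmitter } from "node:events";

type OrderPlaced = { type: "OrderPlaced"; orderId: string; sku: string };

// Stand-in event store: an append-only log plus a subscription channel.
const log: OrderPlaced[] = [];
const bus = new EventEmitter();
function append(event: OrderPlaced) {
  log.push(event); // a durable store in a real system
  bus.emit(event.type, event);
}

// Inventory PBC: owns its data and reacts to events.
const stock = new Map<string, number>([["sku-1", 5]]);
bus.on("OrderPlaced", (event: OrderPlaced) => {
  stock.set(event.sku, (stock.get(event.sku) ?? 0) - 1);
});

// Ordering PBC: records intent as an event instead of mutating
// inventory state directly.
const orderService = Fastify();
orderService.post("/orders", async (request) => {
  const { sku } = request.body as { sku: string };
  const orderId = randomUUID();
  append({ type: "OrderPlaced", orderId, sku });
  return { orderId };
});

await orderService.listen({ port: 3000 });
```

The trade-off this surfaces is that the synchronous stock check is gone: inventory is now eventually consistent with orders, which is precisely the service-coordination consideration raised earlier.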
SLO-Fast Transformation
In the early adopter phase of "Digital Transformation", teams would immediately saddle up and start rewriting and refactoring, and... well, unfortunately, most times they would miss the mark and/or deliver way past the timeline and budget, if at all. This sad state of affairs is generally due to a lack of understanding of the objectives for the service. It is for this reason that Service Level Objectives (SLOs) have rapidly become no longer just an instrument of the operations team, but an incredible driving force within the development and refactoring space. Just as code quality generally rises when teams adopt a Test or Behavior Driven Development philosophy, so too do SLOs drive quality and success in refactoring projects. The OpenSLO Language provides a declarative way to describe the objectives that existing services have, enabling teams to understand the operational requirements and build against a test harness embodied in the SLO itself.
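As a sketch of what that declarative description can look like, the following is shaped after the OpenSLO v1 spec; the service name, Prometheus queries, and target are illustrative assumptions rather than values from a real system:

```yaml
# Illustrative OpenSLO-style definition for the ordering fault line;
# names, queries, and targets are assumptions, not real service values.
apiVersion: openslo/v1
kind: SLO
metadata:
  name: orders-availability
  displayName: Orders API availability
spec:
  service: orders
  description: Successful order placements over a rolling 28 days.
  budgetingMethod: Occurrences
  timeWindow:
    - duration: 28d
      isRolling: true
  indicator:
    metadata:
      name: orders-success-ratio
    spec:
      ratioMetric:
        counter: true
        good:
          metricSource:
            type: Prometheus
            spec:
              query: sum(rate(http_requests_total{service="orders",code!~"5.."}[5m]))
        total:
          metricSource:
            type: Prometheus
            spec:
              query: sum(rate(http_requests_total{service="orders"}[5m]))
  objectives:
    - displayName: 99.5% of requests succeed
      target: 0.995
```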
In practice, defining the SLOs for each natural business fault line allows teams to rapidly and accurately target and drive change with minimal disruption. Furthermore, it accelerates the Mean Time To Recovery (MTTR) of not just the focused PBC, but of all the PBCs, formed and unformed, that rely on and interconnect with it, directly and indirectly. This provides incredibly powerful insight into what changed and what was impacted throughout the transformation from monolith to composable architecture, while also ensuring operational parameters are demonstrably satisfied or improved.
Refactor Doesn't Always Mean Rewrite
Amid the great hype around new technologies, a critical element is often missed: the value of production time. Just because a new programming language offers a potentially faster or nicer development experience does not automatically mean it will be better for every situation. In fact, more times than not, treating transformation as a rewrite trends toward a never-ending spiral of incomplete or misaligned implementations that are worse than the original, despite all of its unsightly parts. This happens due to a lack of appreciation for the value of production time: the longer a system has been in production, the more validated the accuracy of the service it provides.

Note that this doesn't guarantee accuracy of the service, but it does imply that if the service has been in play for an extended period of time, for better or worse, the ecosystem around it has made it work, and thereby the overall accuracy is "dependable". The quotes around dependable are deliberate: the result may not be ideal or completely accurate, but it is dependably the result that is provided. Systems that rely on that response, both directly and indirectly, will have built transforms that make it accurate for their purposes, and because of this, rewrites, even those intended to improve accuracy, often trend toward broken systems.
Conclusion
Transitioning to a composable architecture is not a trivial task. It requires careful planning, significant refactoring, but not necessarily rewriting, and a commitment to continuous learning and improvement with SLOs as guardrails. However, the benefits — enhanced scalability, flexibility, and speed of development — make the transition worthwhile. As the digital world continues to evolve, the ability to compose and recompose software quickly and efficiently will be a key competitive advantage.
Remember that moving towards a composable architecture isn't just about technology - it's also about changing the mindset and culture of your organization. Be prepared for challenges, but remember the potential rewards are worth it.
Let's Make This Real For You
If you're looking to explore how these concepts and technologies can be applied to your organization, reach out to Chris Williams, the maker of improbable things and author of this article. Chris can provide the insights and expertise you need to turn the improbable into the achievable and propel your organization into the future.