Blog 10.23.2018

Evolving a Software Architecture

Where Were We?

A few years ago, I was asked to help a Client migrate their monolithic applications to a more scalable and maintainable architecture. The existing design had been experiencing performance issues, and there was nowhere left to scale: the apps were running on the best hardware available, but being monoliths, they could only run on a single server. They also had issues with application deployments and reliability. Any change required deploying an entire app, and a single error could crash the entire application. We decided that a microservices-based design made sense. That was then. I was recently asked to return and help with a reassessment.

Just Enough Intentional Architecture

In the “old days”, software architects could spend months defining all the organizational rules that were required to develop software. Intentional Architecture is meant to be much more lightweight. It is intended to be a set of design guidelines that give developers sufficient guardrails to start building software in a consistent manner. Correspondingly, Emergent Design is what you learn when working in sprints, building said software. The idea is for Emergent Design to inform and improve the Intentional Architecture. Defining the Intentional Architecture becomes an iterative process. We defined “just enough” Intentional Architecture to kick-start the sprint teams and have them start their work. Our initial Intentional Architecture can be summarized by two main facets:

  • Use component-based design: break applications into single-purpose, self-contained, reusable components
  • Leverage Data Services: applications must access data through Data Services

Components would give us the ability to scale across multiple servers, improve reliability by better segregating faulty code, and deploy the application in smaller chunks. Building a data services layer also improved the deployment process by allowing us to separately deploy database changes.
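The second facet above, components reaching data only through Data Services, can be sketched roughly as follows. The `Customer` entity, the service names, and the in-memory implementation are all invented for illustration; a real implementation would sit behind the data services layer:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical domain object; the fields are illustrative only.
@dataclass
class Customer:
    customer_id: int
    name: str

# The Data Service contract: components depend on this interface,
# never on the database schema directly.
class CustomerDataService(ABC):
    @abstractmethod
    def get_customer(self, customer_id: int) -> Customer: ...

# A stand-in implementation; a real one would call the data services layer.
class InMemoryCustomerDataService(CustomerDataService):
    def __init__(self):
        self._rows = {1: Customer(1, "Acme Corp")}

    def get_customer(self, customer_id: int) -> Customer:
        return self._rows[customer_id]

# A single-purpose, self-contained component that only knows the contract.
class BillingComponent:
    def __init__(self, customers: CustomerDataService):
        self._customers = customers

    def invoice_header(self, customer_id: int) -> str:
        return f"Invoice for {self._customers.get_customer(customer_id).name}"

if __name__ == "__main__":
    component = BillingComponent(InMemoryCustomerDataService())
    print(component.invoice_header(1))  # Invoice for Acme Corp
```

Because the component sees only the interface, database changes can be deployed behind the Data Service without redeploying every component that uses it.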

We also defined a technology platform along with coding and database standards, so any developer could work on any module, keeping what they needed to relearn to a minimum.

Continuing To Do Better

The team had done fairly well. They had made great progress stripping functionality away from the monolithic legacy apps and moving it to smaller, microservices-based apps. The new apps were significantly easier to deploy, and reliability and performance were vastly improved.

There were three areas identified where progress still needed to be made:

  • Easier and more frequent deployments
  • Leveraging Cloud services more effectively
  • Migration from the monolithic database

Application deployments were usually scheduled once a week because they still took a lot of resources to test. The team also wanted to migrate to the Cloud more quickly and spend more intelligently on Cloud provider services.

How Do We Get There?

I suggested a two-step process:

  • Rethink the database
  • Leverage serverless functions

Rethinking the Database

The existing database is monolithic and has a fairly complex schema. Many of the tables are interdependent, making deployments difficult: any schema change could impact any of the services. The idea is to break up the database into separate databases, each one dedicated to a specific microservice. This would confine schema changes to a single microservice, simplifying deployment and improving reliability. I would also suggest leveraging DBaaS (Database as a Service) where possible. We would simplify database administration and possibly lower costs by going to a “pay for what you use” model.
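As a rough sketch of the database-per-microservice idea (the table and service names are invented, and SQLite stands in for whatever engine or DBaaS is actually used):

```python
import sqlite3

# Each microservice owns its own database; a schema change in one
# cannot break the other. Table and column names are illustrative.
def make_orders_db() -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, total REAL)")
    return db

def make_inventory_db() -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE stock (sku TEXT PRIMARY KEY, on_hand INTEGER)")
    return db

orders_db = make_orders_db()
inventory_db = make_inventory_db()

# Evolving the orders schema touches only the Orders service's database;
# the Inventory service keeps running untouched.
orders_db.execute("ALTER TABLE orders ADD COLUMN currency TEXT DEFAULT 'USD'")
inventory_db.execute("INSERT INTO stock VALUES ('WIDGET-1', 42)")
```

The point is the isolation boundary: each service deploys its schema migrations on its own schedule, which is what makes the smaller, more frequent deployments possible.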

Leveraging Serverless Functions

Migrating appropriate business logic to serverless functions could also simplify deployments as well as give us additional alternatives for improving application scalability, resilience, and reliability. It’s not always possible but should be actively explored.
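A minimal sketch of what such a function might look like, using an AWS-Lambda-style Python handler. The event shape and field names are assumptions for illustration, not part of the original design; real payloads depend on the trigger (e.g. API Gateway):

```python
import json

# A stateless, independently deployable unit of business logic.
def handler(event, context):
    order_id = event.get("order_id")
    if order_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    # Business logic would live here; the platform handles scaling,
    # which is where the resilience and cost benefits come from.
    return {"statusCode": 200,
            "body": json.dumps({"order_id": order_id, "status": "accepted"})}
```

Because each function deploys on its own, a change here never requires redeploying an entire application.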

What’s Next?

We need to start small, learn, and repeat. It’s very difficult to break apart a monolithic database. It can’t be done all at once. We need to pick an entity that can more easily be pulled away, one with the least number of dependencies, and build a self-contained service and database.
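One rough way to shortlist that first entity is to rank tables by how many dependency relationships they participate in, inbound plus outbound, and start with the lowest. The dependency map below is an invented example, not the Client's actual schema:

```python
# Maps each table to the tables it references. Invented for illustration.
dependencies = {
    "customers": ["orders", "invoices", "support_tickets"],
    "orders": ["customers", "inventory", "invoices"],
    "audit_log": [],          # nothing references it; an easy first candidate
    "invoices": ["customers", "orders"],
}

def extraction_order(deps: dict[str, list[str]]) -> list[str]:
    """Rank tables by outbound + inbound dependency count, lowest first."""
    inbound = {table: 0 for table in deps}
    for links in deps.values():
        for target in links:
            if target in inbound:
                inbound[target] += 1
    return sorted(deps, key=lambda t: len(deps[t]) + inbound[t])

print(extraction_order(dependencies))  # audit_log first
```

In practice the ranking is only a starting point; transactional coupling and team boundaries matter as much as raw foreign-key counts.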

I hope to get into more gory details in future blog posts. Let me know what you think and what you would like to see.

This Blog, written by a former Senior Principal and Thought Leader, is used with permission from the author. Trexin welcomes comments and discussion on this topic. For additional information email Trexin at

Tagged in: Blog, Technology


  1. Mike Kelly

    How do you handle transactional integrity with the multiple-database model? E.g. imagine a service for billing and a service for inventory. A purchase involves debiting the inventory and creating a billing record. Traditionally, we’d use a transaction to ensure bills aren’t created if inventory couldn’t be debited, and inventory wouldn’t decrease if billing failed. How do you handle this problem? Do you just need an “undo on failure” approach?

  2. Lorenzo De Leon

    Mike, that’s a question that comes up a lot. Us “more mature” folk need to think more in terms of services as opposed to database tables. I would argue that tables are too limiting. Given that, what should we do when a service fails? I think a pending/success/failure pattern may be best for most cases.
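The “undo on failure” idea discussed in the comments is essentially a compensating transaction, a building block of the saga pattern. A sketch, with all services, operations, and quantities hypothetical:

```python
class InsufficientStock(Exception):
    pass

class InventoryService:
    def __init__(self, on_hand: int):
        self.on_hand = on_hand

    def debit(self, qty: int):
        if qty > self.on_hand:
            raise InsufficientStock()
        self.on_hand -= qty

    def credit(self, qty: int):  # the compensating action for debit
        self.on_hand += qty

class BillingService:
    def __init__(self):
        self.bills = []

    def create_bill(self, order_id: int, amount: float):
        self.bills.append((order_id, amount))

def purchase(inventory, billing, order_id, qty, amount):
    """Run the steps in order; on failure, undo the steps already done."""
    try:
        inventory.debit(qty)
    except InsufficientStock:
        return "failure"          # nothing happened yet; nothing to undo
    try:
        billing.create_bill(order_id, amount)
    except Exception:
        inventory.credit(qty)     # compensate for the completed debit
        return "failure"
    return "success"
```

Each service still uses a local transaction internally; consistency across services comes from the ordered steps and their compensating actions rather than a distributed transaction.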

