Business logic
Here’s an example.
“There shall be no business logic implemented in the integration layer.”
This makes sense, right? Having business logic in an integration layer violates the separation of concerns principle. Integration should be purely about connecting systems that are typically unaware of each other, so that you can change a particular system without impacting all the systems that somehow interact with it. Integration logic should be clean, stateless, and change only with the systems it connects, not with changing requirements from the users of those systems.
For instance, think of a change to a database schema. Suppose a certain property has a fixed set of possible values, and that set gets extended, say, because you introduce a product of a new category. You can implement the change in a number of ways: as a validation in the user interface tier, as a business rule in the business logic tier, or even as a constraint in the database. But the integration layer should be insensitive to the meaning of a value and just pass it through.
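To make that concrete, here is a minimal sketch in Java, with invented class and category names: the business tier owns the set of valid values, so the change lands there, while the integration tier only maps the data and never interprets it.

```java
import java.util.Set;

// Business tier: the set of valid categories lives here. Introducing the
// new category means extending this one rule.
class ProductService {
    private static final Set<String> VALID_CATEGORIES =
            Set.of("ELECTRONICS", "CLOTHING", "GROCERY"); // "GROCERY" is the new one

    void register(String name, String category) {
        if (!VALID_CATEGORIES.contains(category)) {
            throw new IllegalArgumentException("Unknown category: " + category);
        }
        // ... persist the product ...
    }
}

// Integration tier: translates between message formats but never interprets
// the category value, so the schema change does not touch it.
class ProductIntegration {
    String toTargetFormat(String name, String category) {
        return "{\"productName\":\"" + name + "\",\"productCategory\":\"" + category + "\"}";
    }
}
```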
Not so fast
Well, actually, there can be a number of reasons why this principle is a problem. Security, for instance, where malicious users try to derail your systems by feeding invalid values into them. How can you tell a valid value from an invalid one if you know no business logic? Or privacy, where credit card numbers have to be masked or filtered out for certain users. Or routing rules, where different values have to be routed to different servers. A partitioned topic in your event broker may come to mind, or shards in your database. Again, doing the right thing requires a certain amount of business logic.
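The routing case is easy to sketch with Kafka's pluggable Partitioner interface. The "premium customer" rule below is invented for illustration, and the sketch assumes string keys and a topic with at least two partitions:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Routes premium customers to a dedicated partition and spreads everyone
// else across the remaining ones. The rule itself is business logic.
public class PremiumAwarePartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionCountForTopic(topic);
        if (((String) key).startsWith("PREMIUM-")) {
            return 0; // dedicated partition for premium traffic
        }
        // hash the rest over partitions 1..n-1
        return 1 + Math.abs(key.hashCode() % (numPartitions - 1));
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```

Registered through the producer's partitioner.class property, this keeps the routing rule in one place, but notice that it simply cannot be written without business knowledge.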
Being clueless about the meaning of the data in transit seriously constrains the potential power of your integration services.
Tradeoff analysis
In architectural speak, these are the tradeoffs of not having business logic in the integration layer: you trade security, privacy, and scalability for maintainability. So it makes sense to ask why maintainability would be negatively impacted by having business logic in the integration layer at all. Sure, it’s complicated to manage a change when there are interdependencies in the deployment of systems.
Updating a shared integration bus is a sensitive procedure in the first place, because many business processes depend on it. Hence, you don’t want to update it often, and you do want to make sure that you can update it without impacting surrounding systems. And yet, putting the brakes on updates opens a whole new can of worms.
Why?
Now, if the problem lies in having shared integration software, the question becomes: why did we choose to centralize this in the first place? Perhaps you once decided that developing integrations is complex, that adequately skilled people are scarce, or that your middleware is expensive. Or you simply value reuse of solution components. But is that reasoning still valid? In other words, if we can take away these downsides, would it make sense to take a more distributed approach to your integration competence?
Changing realities
In case you haven’t gotten the memo: the integration field has changed drastically in recent years. The heterogeneity in protocols and styles has largely gone away, and with it the complexity we used to face. The industry has instead adopted RESTful APIs, which are pretty straightforward to implement. And with mature API management portals, engaging the developers who provide and consume those APIs is a solved problem.
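How straightforward? As a sketch, a read-only JSON endpoint takes a handful of lines using nothing but the JDK’s built-in HTTP server; the path and payload here are placeholders:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

public class ProductApi {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // GET /products returns a hard-coded JSON list; a real service
        // would query its data store here.
        server.createContext("/products", exchange -> {
            byte[] body = "[{\"name\":\"widget\",\"category\":\"ELECTRONICS\"}]".getBytes();
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}
```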
Having to update multiple container images simultaneously, say an integration container holding the anti-corruption logic next to a container holding the context-independent implementation, is handled gracefully by Kubernetes, with zero downtime. You can even run them in one pod if you want. Additionally, there’s a rich choice of powerful, cloud-ready open source middleware technology nowadays. All in all, the distributed nature of integration systems is no more problematic than the distributed nature of the worldwide web. What’s the problem, again?
All in all, with modern technologies it’s no longer a given that you must trade security, privacy, and scalability for maintainability. You can have it all. Moreover, the massively distributed microservice architectures we see today would not be possible if they had to rely on a centralized integration team.
Integration service
Obviously, with the integration logic owned by the same team that implements the core logic, we can see the discussion about where to implement ‘business logic’ in a different light. It is no longer unusual to separate generic logic from implementation-specific logic, and thus make reuse work in a different way. Everything implementation-specific, be it technological or functional, gets captured in the integration service. This works with off-the-shelf software, home-built software, and cloud services alike. If contextualization includes business logic, so be it. Who cares?
The demise of the enterprise
Enterprise integration is just one of the functions being decentralized. Master data management is another, and so is metadata management, all for similar reasons. The demise of the enterprise and the proliferation of all things agile do put a new strain on governance, but that’s a topic for another blog post.
Blown to bits
Some developers take the new paradigm to the extreme by positioning the event broker middleware as the single source of truth, an approach known as event sourcing. I get the logic: replaying a series of immutable events gives you the current state of an object and, as a bonus, its state at every point in its history. It’s hard to argue with once every state change is triggered by an event. For an architect, this is certainly a new pattern in the toolbox to take seriously, especially for master data management, case management, and other object types that don’t change frequently. Another principle blown to bits.
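A sketch of the idea, with an invented domain and event types: the current state is never stored, it is derived by folding over the immutable history, and replaying only a prefix of that history yields the state at any earlier point for free.

```java
import java.util.List;

// An immutable fact about the account, e.g. "DEPOSITED" or "WITHDRAWN".
record AccountEvent(String type, long amountCents) {}

class AccountProjection {
    // Replay the full history for the current balance; replay the events
    // up to some timestamp for the balance as it was back then.
    static long balance(List<AccountEvent> history) {
        long balance = 0;
        for (AccountEvent e : history) {
            switch (e.type()) {
                case "DEPOSITED" -> balance += e.amountCents();
                case "WITHDRAWN" -> balance -= e.amountCents();
            }
        }
        return balance;
    }
}
```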
Streams
Speaking of stateful integration, I have to briefly discuss streaming integration too. This is perhaps the best example of how keeping state in the integration layer and being extremely scalable go together these days. It works best where meaningful data has to be extracted from volatile data and acted upon. Think of sensor data, click streams, video surveillance, and log file processing as typical application areas.
The kinds of things you can do when processing massive amounts of data in a streaming fashion simply weren’t imaginable using traditional paradigms. You can use machine learning, for instance, to recognize unusual patterns automatically and reliably. That even works on a stream of API invocations.
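As a sketch of that last point, assuming invocation records keyed by client id on a topic named api-invocations, a Kafka Streams job that flags unusually chatty clients fits on a page. A fixed threshold stands in here for the machine-learned notion of “unusual”:

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;

public class InvocationSpikeDetector {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // One record per API call, keyed by client id.
        KStream<String, String> calls = builder.stream("api-invocations");

        calls.groupByKey()
             .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
             .count()
             .toStream()
             // Stand-in rule: more than 1000 calls per minute is "unusual".
             .filter((windowedClientId, count) -> count > 1000)
             .foreach((windowedClientId, count) ->
                     System.out.printf("Spike: %s made %d calls in one minute%n",
                             windowedClientId.key(), count));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "spike-detector");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}
```

Note that the windowed counts are state, held and scaled by the streaming layer itself.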
And yes, it’s also well suited to solving many of the typical “enterprise” problems you probably face in your ETL (extract, transform, load) pipelines. Real-time data integration in distributed streams. Fancy that.
Encapsulation
If you’re old enough to remember the three core principles of object-oriented programming, like I am, you know the importance of encapsulation: having a single class contain all the logic to access an object. No other class should touch an object’s data directly, bypassing the access logic of its class. If the object’s definition changes, you only have to change one class. Again, makes total sense.
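The textbook illustration, with invented names:

```java
// All access to the balance goes through this class. If the internal
// representation changes (say, to BigDecimal), only this class changes.
public class BankAccount {
    private long balanceCents; // hidden: no other class can touch it directly

    public void deposit(long amountCents) {
        if (amountCents <= 0) {
            throw new IllegalArgumentException("Deposit must be positive");
        }
        balanceCents += amountCents;
    }

    public long balanceCents() {
        return balanceCents;
    }
}
```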
Along comes the “Command Query Responsibility Segregation” (CQRS) pattern. In short, in microservice architectures it’s not uncommon to use different microservices for searching and querying objects than for updating them. Temporary inconsistency between different instances of the same object is traded off for scalability and performance. Now that I think of it, the cloud-native microservices school is obliterating many unassailable principles all at once. And that’s a good thing.
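A minimal sketch of the split, with invented names: the command side validates and records changes, while the query side serves reads from its own denormalized copy, updated asynchronously, hence the temporary inconsistency.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Command side: handles writes, publishes events, never serves queries.
class OrderCommandService {
    void placeOrder(String orderId, String customerId) {
        // validate, persist to the write store, then publish an
        // OrderPlaced event for the query side to pick up (not shown)
    }
}

// Query side: a read-optimized view rebuilt from events. It may briefly
// lag behind the command side; that staleness buys scalable reads.
class OrderQueryService {
    private final Map<String, List<String>> ordersByCustomer = new ConcurrentHashMap<>();

    void onOrderPlaced(String orderId, String customerId) {
        ordersByCustomer.computeIfAbsent(customerId, c -> new ArrayList<>()).add(orderId);
    }

    List<String> ordersOf(String customerId) {
        return ordersByCustomer.getOrDefault(customerId, List.of());
    }
}
```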
Bottom line
Once technological, skill, and financial constraints start shifting, new paradigms become viable and may even become dominant in a short period of time. I’ve seen my fair share of styles being practiced over the years. What I’ve learned is that there’s no such thing as a single best style.
If your current way of working fits you just fine, then don’t get distracted by the hype in our industry. After all, changing your core principles can be costly, and the benefits may be disappointing. Proven solutions, such as the WSO2 Enterprise Service Bus (ESB), are here to stay.
At the same time, now that ‘Big IT’ has paved the way for a new, more agile way of working, don’t simply stick to your old patterns without reconsidering them. You really might be better off riding the ‘micro’ wave. It’s quickly becoming the new normal. With our cloud-native WSO2 Enterprise Integrator and the WSO2 API Microgateway, we can help you either way.
So, next time don’t ask the question “What would Google do?”, but instead ask, “What would Google do if they were us?”