
Digital Transformation at LAC 2017 – a Retrospective

Hans Bot
Senior Solution Architect

This November marked the 19th edition of the Dutch National Architecture Conference LAC – and the first sponsored by Yenlo. This year's conference carried the theme "Digital Transformation". Three topics clearly sparked the most interest during and in between the sessions.

First, many attendees were more than just curious to learn how to minimize architecture efforts in an agile process – how to achieve more with less sweat. Sorry to disappoint you, but an agile process is not a veil to disguise laziness; it is about team effectiveness. And that will most likely still involve hard labor.

Secondly, the question of how to support an organization in its digital transformation was top of mind for many architects. In theory, it all seems sensible and doable. But where to start? What to do differently tomorrow? And how to build momentum? After all, existing architectures and practices often stand in the way of any transformation – digital or otherwise. A new balance of agility and control is needed to guide organizations successfully through the transforming digital universe.

Finally, APIs were the talk of the town. And rightly so: API management is widely regarded as the key technology fueling digital transformation.

Encouragingly, the message Yenlo brought to this year's conference really resonated with the LAC audience. Consistently, we conveyed our three mantras:

  • API first
  • Single point of control
  • Start swiftly, move freely

API first

APIs are popular among architects. It sure feels like everybody is buying into the new paradigm. In our observation, enthusiasm grows with experience: architects who have worked with mature API-oriented systems tend to be more positive than architects who are still at the beginning of API adoption. The more you practice, the more you understand that APIs are in fact much more than just another interface model.

A well-designed service portfolio serves as your business proposition to front-end developers. In fact, it is the language for interacting with your information systems, and designing an interaction language is core business for many architects. The good thing is that API management technology introduces a service proxy that allows the interaction language to be independent of the internal system interfaces. It's like the good old canonical data model, only this time managed per API (hence much more manageable) and limited to the data exposed through an API (hence less megalomaniacal). Many architects also subscribe to our view of designing APIs from the outside in, as an integrated portfolio of services, rather than a more or less random collection of system primitives. APIs, in other words, are a valuable enterprise asset, and should be designed as such. Systems implementing the services should be architected around APIs, not the other way around.
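To make the service-proxy idea concrete, here is a minimal Python sketch of decoupling the published interaction language from the internal system interface. All field names and the mapping are invented for illustration; a real gateway would do this declaratively in its mediation layer.

```python
# Internal system record, shaped by the back-end's own (legacy) data model
internal_customer = {
    "CUST_ID": "00042",
    "NM_FIRST": "Ada",
    "NM_LAST": "Lovelace",
    "TEL_NR": "+31101234567",
}

def to_api_representation(record: dict) -> dict:
    """Map the internal interface onto the outside-in API design.

    Because the mapping lives in the proxy layer, internal renames or
    refactorings never leak into the published API contract.
    """
    return {
        "id": record["CUST_ID"].lstrip("0"),
        "name": f'{record["NM_FIRST"]} {record["NM_LAST"]}',
        "phone": record["TEL_NR"],
    }

print(to_api_representation(internal_customer))
# {'id': '42', 'name': 'Ada Lovelace', 'phone': '+31101234567'}
```

The external representation is designed first, as part of the portfolio; the mapping function is the only place that knows both vocabularies.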

By the way, this is much the same message the Domain-Driven Design movement has been preaching for many years now. If you're looking for inspiration on how to properly design your API portfolio, DDD theory should definitely be your first stop. This promises a viable way for architects to be more effective, and to focus on the efforts that render real results.

API first also promises a novel approach to co-development. With API management technology, it is easier than ever to test your interface design before you start implementing it. Effectively, you can simulate the integration, based on prototypes, to test the design of the interface. This yields early feedback on the design, and less rework on the implementation. If you are pursuing a co-development strategy, API management will surely help you succeed.
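As an illustration of this prototyping style, the sketch below serves canned responses for a draft API using only Python's standard library, so front-end developers can integrate against the agreed design before any back-end exists. The endpoint and payload are invented examples, not a specific product's mocking feature.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses representing the agreed interface design
MOCK_RESPONSES = {
    "/orders/1": {"orderId": 1, "status": "shipped", "total": 49.95},
}

class MockApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = MOCK_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def log_message(self, *args):  # keep console output quiet
        pass

def make_mock_server(port: int = 0) -> HTTPServer:
    """Bind the mock API to a port (0 = pick any free port)."""
    return HTTPServer(("localhost", port), MockApiHandler)

# To run standalone: make_mock_server(8080).serve_forever()
```

Once both sides agree the prototype behaves as intended, the real implementation replaces the mock behind the same contract.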

Photo credits: Joost Lommers.

Single Point of Control

The recognition that APIs must be the single point of access to really become the universal connector everyone envisions is perhaps still a bit less manifest today. Yet this may well be the key to a successful digital transformation. Were APIs to become just another integration pathway – next to ETL, EDI, SOA, file-based data synchronization, and even shared databases – adding to the integration spaghetti instead of replacing it, the agile future we dream of today will indeed remain just a dream. Instead, no system or application should be able to cross this barrier other than over the API bridge. No more exceptions, no more discrimination. By the way, the traditional separation between "internal" and "external" systems is fading anyway, so policies based on this classification probably need reconsideration too. And this is, perhaps, the unruly part of an API first strategy: if you still have a legacy problem to fix, if you still have technical debt, an API first strategy will push you to clean up your systems and unclutter your architecture.

So, the toughest part of API adoption is probably implementing APIs as the exclusive gateway in a new perimeter. But the benefits are significant. Having a single point of access is key to effective enforcement of policies – if the gateway can be bypassed, how effective are your policies anyway? This is important to understand. Suppose you want to shield a back-end system from an overload of concurrent requests through rate limitation. Obviously, the only way to do so is by tunnelling all requests through a single point of control.
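A token bucket is the classic way to implement such a rate limit at the point of control. The sketch below shows the core mechanics; the rate and capacity are illustrative numbers, not a real gateway configuration.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilling at `rate_per_sec` tokens/s."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # typically 10: the burst passes, the rest is throttled
```

Because every request crosses the same control point, one such limiter is enough to protect the back-end; with a bypass route, it protects nothing.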

On top of that, by virtue of this single point of control, you can generate valuable management information in real time. In fact, you create instant situational awareness. We are only beginning to discover the many benefits this brings. For one, detailed information on the usage of your services helps you build a more intimate relationship with your consumers. After all, you have valuable insight into the development efforts as well as the run-time success of the developers. You can build a two-way line of communication with them, and grab the opportunity to sync your efforts with their needs. That's cool. But there is even more.

At Yenlo, our DevOps teams hate unplanned maintenance work – especially during off-hours. That's why we embrace a NoOps strategy and continuously look for ways to automate operational work. We automate our deployments, and run them unattended during the night. We've developed scripts to auto-scale our clusters, so absorbing peak loads is no longer a worry. And now, we're using real-time data from the API gateways to take this practice a step further. We process the API event stream to look for suspicious behavior and, upon detection, respond with appropriate counter-measures. If a developer suddenly registers dozens of applications and subscribes them to all our APIs, he might see his account blocked automatically – possibly preventing an attack. Of course, when he offers a reasonable explanation, it's easy to undo this manually. But we'd rather not take the risk.
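The account-blocking rule can be sketched as a simple check over the event stream. The event fields, the threshold, and the returned set of accounts to block are all hypothetical illustrations, not the actual gateway internals.

```python
from collections import defaultdict

# Hypothetical threshold: more registrations than this per window is suspicious
REGISTRATION_THRESHOLD = 10

def detect_suspicious_developers(events: list) -> set:
    """Return developer ids whose app registrations exceed the threshold."""
    counts = defaultdict(int)
    for event in events:
        if event["type"] == "app.registered":
            counts[event["developer"]] += 1
    return {dev for dev, n in counts.items() if n > REGISTRATION_THRESHOLD}

events = (
    [{"type": "app.registered", "developer": "dev-1"}] * 12
    + [{"type": "app.registered", "developer": "dev-2"}] * 2
)
print(detect_suspicious_developers(events))  # {'dev-1'}
```

In production such a rule would run continuously over a windowed stream and trigger the block action through the gateway's management API rather than return a set.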

Likewise, a sudden flood of messages from a single source may see its IP address automatically blacklisted. An application that goes rogue on, say, Android devices may see its throttling limit automatically downgraded to a single request per second. And so on. Our strategy here is zero-repeat: every time our DevOps team is distracted by operational hassle, we develop a script or procedure to prevent the same problem from ever occurring again. With our API management solution, we feel empowered to pursue this strategy. We're now investigating the power of machine learning to automatically detect suspicious event patterns and take our defense to the next level. And we think you should too.
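The flood-blacklisting rule follows the same zero-repeat pattern: count requests per source in a sliding window and blacklist sources that exceed a limit. All names, thresholds, and the in-memory blacklist are invented for the sketch.

```python
from collections import deque

class FloodDetector:
    """Blacklist sources exceeding `max_requests` within `window_seconds`."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}        # ip -> deque of recent request timestamps
        self.blacklist = set()

    def record(self, ip: str, timestamp: float) -> bool:
        """Record a request; return False if the source is (now) blacklisted."""
        if ip in self.blacklist:
            return False
        q = self.history.setdefault(ip, deque())
        q.append(timestamp)
        # Drop timestamps that fell out of the sliding window
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        if len(q) > self.max_requests:
            self.blacklist.add(ip)
            return False
        return True

detector = FloodDetector(max_requests=100, window_seconds=1.0)
for i in range(150):
    detector.record("203.0.113.7", timestamp=i * 0.005)  # 150 requests in 0.75 s
print(detector.blacklist)  # {'203.0.113.7'}
```

The throttle-downgrade rule for a rogue application would look much the same, except the counter-measure lowers the app's rate limit instead of adding it to a blacklist.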

Start swiftly, move freely

Our zero-repeat strategy is just one example of learning while doing. In fact, it is part of a larger strategy, which we call "start swiftly, move freely". Starting as soon as possible with a minimum viable product is common sense these days. However, this all too often limits DevOps teams to baby steps: just build on top of whatever is available, and work around the limitations. This way of working leaves a lot of legacy permanently untouched. Moreover, by continuing to build new things on existing legacy platforms, the problem of getting rid of the legacy only grows. It's an accumulation of technical debt, effectively putting a mortgage on the future.

What to do?

When you think about the problem and its root cause, much of it stems from an ingrained inability to move freely. Punitive license policies and proprietary technologies succeed in locking you in to the vendors you once selected. And these vendors are not necessarily offering the best products or conditions to tackle your current problems. Fortunately, there is a way out. Open source technology lets you escape from lock-in. With open standards, you are free to mix and match different technologies. Together, this means that you can take many of your technology decisions at a tactical rather than a strategic level. There is no longer a need to use a single ESB for all your integration efforts. Or a single message broker. Or a single API gateway. There is little harm in having your DevOps teams decide for themselves what technology works best for them. Let them experiment, and embrace their successes.

Starting swiftly implies the ability to decide swiftly: decisions to adopt a new technology, to decommission a prior one, or to extend an existing one. With open source technologies, you're free to scale up and to scope up swiftly, as and when the need arises. You don't have to invest years in advance on the basis of a predicted future need. And that's why we favor open source technologies.

Closing thoughts

Like every year, the LAC gathering proved to be the prime networking event for catching up with digital architects across the entire spectrum of industries, technologies and science throughout the Netherlands and Belgium. Thought-provoking speakers energized the attendees, and will most likely inspire them throughout the coming year. During the interactive sessions of the second day we exchanged a lot of knowledge and experience across our community. That will keep us all running until next year's conference. We're looking forward to meeting you all again at the 20th anniversary LAC in 2018.
