Ever had that feeling while writing a piece of software that you could write it as an API so that it can be re-used by ANYONE? I have. Especially with enterprise applications, the temptation to write an API for every class/function can be quite strong: not only would we like to offer our software with the UI, but also as a set of APIs (the more RESTful the better), because then anyone can use it – internal and external users. Yay!!! The application would be modularised to the nth degree, because that’s what you want, isn’t it? Loose coupling and all the rest of the good stuff!
But hold on just a second there. While the goal in itself is noble and commendable, you might be overestimating the benefit. The obvious risk is that you are engineering for a potential use case that may or may not happen. You are making a lot of decisions and assumptions right now about a very uncertain future, with very little understanding of the actual needs of that future.
As an example, I have recently been building a library for use with the new product we are building at work. The main purpose of this library is to expose the Microsoft Azure Cloud Storage SDK in a more developer-friendly way. It encapsulates all the grunt work of setting up Queue and Blob storage on the Azure cloud, so the developer only has to add a reference to this dll in their ASP.NET MVC web project, inject settings such as the queue name at run time, and call the relevant methods via cleanly defined interfaces. One other reason for wanting to use Azure Queue storage was that I wanted to use Azure Web Jobs to process some tasks in the background, e.g. data uploads, which means my main application can be freed up to do its own thing. The first version of this infrastructure helper library integrated fairly well with the main application: I was able to enqueue jobs for the Web Job to pick up from the queue, process them and put the result in an output queue, which I could then poll to get the results of the operation.
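To make the shape of such a wrapper concrete, here is a minimal sketch of the kind of interface a library like this might expose. The names `QueueSettings`, `IJobQueue` and `AzureJobQueue` are my own illustrations, not the actual library’s API; the Azure calls are from the classic `Microsoft.WindowsAzure.Storage` SDK.

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Settings injected at run time by the consuming application.
public class QueueSettings
{
    public string ConnectionString { get; set; }
    public string QueueName { get; set; }
}

// The cleanly defined interface the consumer codes against.
public interface IJobQueue
{
    Task EnqueueAsync(string messageBody);
    Task<string> DequeueAsync();
}

// One possible implementation over the Azure Storage SDK.
public class AzureJobQueue : IJobQueue
{
    private readonly CloudQueue _queue;

    public AzureJobQueue(QueueSettings settings)
    {
        var account = CloudStorageAccount.Parse(settings.ConnectionString);
        var client = account.CreateCloudQueueClient();
        _queue = client.GetQueueReference(settings.QueueName);
        _queue.CreateIfNotExists(); // the "grunt work" the library hides
    }

    public Task EnqueueAsync(string messageBody) =>
        _queue.AddMessageAsync(new CloudQueueMessage(messageBody));

    public async Task<string> DequeueAsync()
    {
        var message = await _queue.GetMessageAsync();
        if (message == null) return null;       // queue was empty
        await _queue.DeleteMessageAsync(message);
        return message.AsString;
    }
}
```

The consuming application only ever sees `IJobQueue`, which is what makes it easy for a Web Job on one side and the MVC application on the other to share the same queue without either knowing the storage details.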
Then, about 2 weeks ago, someone in the team had the brainwave to make this library available as an API (preferably a RESTful one) so that other applications don’t have to resort to adding dll references; they can just use an HttpClient derivative to send GET and POST requests. The obvious benefit is that when the dll gets a bug fix, you won’t have to update all the referencing applications with the latest version of the dll. Oh, and while I am at it, it wouldn’t hurt to expose these API endpoints to authorised external users/systems as well, which means any authorised application can use my APIs to enqueue background jobs!! We developers can’t pass up a good design challenge, so I decided to take on the task of putting a WebAPI front on my storage library. My goal was to make it as RESTful as possible which, trust me, sounds much easier than it really is. I also thought, “hmmm, wouldn’t it be nice if I created a custom client as well to go with my API, so that the consumer only interacts with my client without having to hand-craft their own API calls?” I got to work.
At the end of the 2-week sprint, I had a design that I felt worked very well. All the API endpoints were RESTful (funny, because I was damned tired!! Hint: REST joke! :)), i.e. every endpoint received everything it needed to process a request without having to maintain any state. All the operations were the usual HTTP GET or POST methods. I had used attribute routing in ASP.NET MVC 5, which allowed me to design the route templates so that they mimicked the resource/sub-resource structure of the Azure Storage services and of the RESTful style. As a by-product of designing the API, I also had to refactor my core library to make it more API friendly – you know, honouring Uncle Bob’s SOLID principles, which I really hold dear! This is one good thing that API design does: it forces you to simplify your core code quite a bit. Anyway, I was pretty chuffed that I had achieved something brilliant. There was some more work to be done, but I felt accomplished rest-ing (excuse the pun!) on the fact that I could continue working on this story in the following sprint and really tie it all up properly. I was on a high that Friday afternoon (the day when our 2-weekly sprints finish).
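As an illustration of that routing style, an attribute-routed controller mimicking a queue/messages resource hierarchy might look roughly like this. The controller, route templates and the `IJobQueueService` dependency are hypothetical stand-ins, not the real project’s code; the attributes are the standard ASP.NET Web API 2 ones.

```csharp
using System.Threading.Tasks;
using System.Web.Http;

// Hypothetical abstraction over the storage library, keyed by queue name.
public interface IJobQueueService
{
    Task EnqueueAsync(string queueName, string body);
    Task<string> DequeueAsync(string queueName);
}

[RoutePrefix("api/queues")]
public class QueueMessagesController : ApiController
{
    private readonly IJobQueueService _queues;

    public QueueMessagesController(IJobQueueService queues)
    {
        _queues = queues;
    }

    // POST api/queues/{queueName}/messages – enqueue a background job.
    // Everything needed to process the request travels with it: no server state.
    [Route("{queueName}/messages")]
    [HttpPost]
    public async Task<IHttpActionResult> Enqueue(string queueName, [FromBody] string body)
    {
        await _queues.EnqueueAsync(queueName, body);
        return Ok();
    }

    // GET api/queues/{queueName}/messages – poll for a result.
    [Route("{queueName}/messages")]
    [HttpGet]
    public async Task<IHttpActionResult> Dequeue(string queueName)
    {
        var message = await _queues.DequeueAsync(queueName);
        return message == null ? (IHttpActionResult)NotFound() : Ok(message);
    }
}
```

The `{queueName}/messages` templates are what give the URI space its resource/sub-resource shape, mirroring how Azure Storage itself addresses queues and their messages.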
That afternooon, as my colleague and I were doing a little retrospective on this story, it dawned on me: I HAD OVER-ENGINEERED IT!!! Bear in mind that Microsoft’s Azure SDK is already a front for their Azure Cloud Storage REST APIs. I had wrapped their SDK in a library that encapsulated the grunt work, then put another API in front of that library, and then put a client in front of that API for the consuming application to use (as you can see from the block diagram below).

But why? What’s the real use case for creating a RESTful API for external access? Sadly this epiphany came a little too late!
We were exposing an infrastructure service as opposed to a business service. Even if these external systems were able to enqueue jobs on our Azure infrastructure (setting aside, for the time being, that they would be doing this on our dime – we are paying for the Azure subscription, not them), the jobs would just sit there and consume resources unless the external systems told us how they needed to be processed, and then we would have to go and write a Web Job for each of them. This takes us far off on a tangent from what we are supposed to be doing: building business applications/services that deliver business value, whereas this is all about infrastructure. If an external user wants to use one of our business APIs to perform a background operation, they can do that via that API’s endpoint; the API might then choose to queue up a job or process it in memory, but that is transparent to the user. We don’t need to expose a generic “catch all” infrastructure service; that is way outside our remit. It also fragments the external system’s application logic, because they would be consuming the API endpoints whilst we built their Web Jobs. Cross-contamination of concerns!
Following this brainstorm, it was amply clear to me that I had spent too much time on something that would have delivered very little real value, and not the right value to begin with. The only logical thing left was to scrap the whole project right there and then, which I did: I deleted all the API projects I had created. And as I mentioned earlier, the one good thing that remained in the aftermath was that my storage library design was leaner, more testable and more maintainable, honouring the SOLID principles. A quick project reference update in the main application and the library still worked like magic, ready to be deployed! I will let the REAL use cases drive how I deploy and consume this library in the future, but for now it’s a project reference in Visual Studio.

Moral of the story: always do this brainstorming at the start of a potentially over-engineered project, because if you are not sure who this API is going to benefit and how, chances are You Ain’t Gonna Need It. Whatever you do, please… Do Not Over-Engineer Your Software!