When we first set out on what we thought would be a small venture, the sky was bright and clear. But then I realized the fog was rolling in, getting closer, and we had to change course.
Could storms draw something out of us that calm seas don’t?
Should we sail straight into the clouds, or should we fight the current?
Was it already too late for the radical maneuver – straight up to the sky?
At the beginning of the journey
It all began, as it usually does, as a small project. Time went by, but things did not roll out as planned, and every obstacle nudged us closer to the cloud-native approach.
And yes, I still believe it was the right decision.
The year was 2018, and I got a new work assignment: to start a new project for a large heating, ventilation, and air conditioning (HVAC) company in Europe. At first, it was a very small team with a small goal: we needed to implement a new module for a system that had existed for years. The client had a rock-solid reputation for building HVAC systems and had been installing its equipment and software around the globe.
But they lacked system integration. Everything was packaged as a suite of desktop applications with self-contained databases and files, and everything had to be synchronized through cumbersome import/export processes, one per module. Every team worked on a separate project, and they seldom communicated. There was no central place to store the data, no monitoring of what was going on in the system, and little collaboration between the separate areas of the business.
We were about to start this new project, which was meant to serve as a proof of concept (POC): putting data into the cloud and building a web app. We planned for this web app to be installed on the laptops of engineers who travel around and configure the systems on installation sites.
That was the idea.
But all of this was about to change…
Clean architecture as downwind sailing in early development
First, we decided on Microsoft .NET Core as our main development framework, since we had already used it to build solutions quickly and port them to any platform in the cloud, and we were all very familiar with the Microsoft stack. Note that at the time there was no Azure DevOps, there were no Azure cloud patterns, and microservices had not yet gained their momentum.
We used Clean architecture principles to build our projects and concentrated on a canonical three-layered design. We enforced code quality by writing unit tests, adopted static code analysis, and used third-party collaboration and documentation tools to help us out. We also had a background in implementing various solutions, so we introduced the important design patterns we knew as we progressed.
Our initial application diagram based on the Clean architecture
From today's perspective, it may look oversimplified and not quite suited to our case, but we had to start with something. We never stopped bringing new ideas and improvements into the implementation code. No matter how well the architecture is designed, it is always a good idea to return to the existing code, look at it from a different angle, and try to make it better.
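To make the layering a bit more tangible, here is a minimal C# sketch of the dependency direction we followed, with the domain at the center and the infrastructure at the edge. The type names (Device, IDeviceRepository, and so on) are simplified placeholders for illustration, not the actual project code.

```csharp
using System;
using System.Threading.Tasks;

// Domain layer: entities and abstractions, with no external dependencies.
public class Device
{
    public Guid Id { get; set; }
    public string SerialNumber { get; set; }
}

public interface IDeviceRepository
{
    Task<Device> GetByIdAsync(Guid id);
}

// Application layer: use cases depend only on domain abstractions.
public class GetDeviceHandler
{
    private readonly IDeviceRepository _repository;

    public GetDeviceHandler(IDeviceRepository repository) => _repository = repository;

    public Task<Device> HandleAsync(Guid id) => _repository.GetByIdAsync(id);
}

// Infrastructure layer: concrete data access, wired up through dependency injection.
public class SqlDeviceRepository : IDeviceRepository
{
    public Task<Device> GetByIdAsync(Guid id)
    {
        // The real implementation would query the database (e.g., via Entity Framework Core).
        return Task.FromResult<Device>(null);
    }
}
```

The point is that the application layer depends only on the domain abstraction, so the data access technology can be swapped without touching the use cases.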
Addressing cross-cutting concerns
We also needed to build a reliable data structure to make this system work, and it soon became clear that everything should be integrated if we were going to do it over the cloud. Cross-cutting concerns like authentication, authorization, logging, and auditing soon became too crucial to be treated just locally, applied to a single running machine.
We decided to go with the OpenID Connect and OAuth standards for the security concerns because we wanted well-adopted, community-accepted solutions for everything we were building. We agreed on Identity Server 4 and quickly added single sign-on capability to our solution. It is a very well-designed system for JWT token authentication: flexible, extensible to other security providers, and a good fit for us since it is built on ASP.NET.
Authentication of clients via Identity Server 4
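For illustration, this is roughly how an ASP.NET Core service can validate the JWT access tokens issued by Identity Server 4; the authority URL and audience below are placeholders, not our real configuration.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication("Bearer")
            .AddJwtBearer("Bearer", options =>
            {
                options.Authority = "https://identity.example.com"; // Identity Server 4 instance
                options.Audience = "platform-api";                  // API resource this service exposes
            });

        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseAuthentication(); // validate the incoming bearer token
        app.UseAuthorization();  // enforce [Authorize] attributes and policies
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```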
Later, we implemented our own authorization system as an extension of the identity-based security solution, because neither Azure AD nor other third-party offerings were enough to address our needs. Since we modeled attribute-based access control (ABAC) together with role-based access control (RBAC), we had to create a separate service that provides the protected context, along with the data models and a REST API for accessing its functionality.
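As a rough illustration of how RBAC and ABAC can be combined in ASP.NET Core's policy-based authorization model, here is a hypothetical requirement and handler; the role, claim, and policy names are invented for the example and do not reflect our actual protected context.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

// The requirement itself carries no data; the handler evaluates it per request.
public class SameBranchRequirement : IAuthorizationRequirement { }

public class SameBranchHandler : AuthorizationHandler<SameBranchRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, SameBranchRequirement requirement)
    {
        // RBAC part: the user must hold an appropriate role.
        bool hasRole = context.User.IsInRole("Engineer");

        // ABAC part: an attribute carried as a claim must match the resource's branch.
        string branchClaim = context.User.FindFirst("branch_id")?.Value;
        string resourceBranch = context.Resource as string; // branch id passed at the call site

        if (hasRole && branchClaim != null && branchClaim == resourceBranch)
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}

// Registration (in Startup):
// services.AddSingleton<IAuthorizationHandler, SameBranchHandler>();
// services.AddAuthorization(options =>
//     options.AddPolicy("EditDevices", policy => policy.AddRequirements(new SameBranchRequirement())));
```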
For auditing, we used serverless computing in the form of an Azure Function, which handles requests from the services and converts them into database entries. At first, the services were tightly coupled to the function to provide it with data, but we soon realized that this would neither scale nor be reliable, so we had to wait for a better time to optimize it.
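A simplified sketch of such an HTTP-triggered audit function might look like the following; the function name and payload handling are illustrative only, and the real implementation converts the incoming events into database entries.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class AuditFunction
{
    [FunctionName("WriteAuditEntry")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // Read the audit event posted by a service.
        string payload = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Audit event received: {Payload}", payload);

        // Here the payload would be validated and written to the audit database.
        return new OkResult();
    }
}
```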
When we needed to collaborate with third-party providers, we used dedicated services, serverless Azure Functions, or Azure Logic Apps when the processing workflow involved multiple steps and needed to be graphically depicted and configured. But as with other functionality, we began to feel something was missing. The solution we were architecting would not fit the business needs unless we shifted to the next level of distributed architecture.
Serverless computing used for addressing cross-cutting concerns
Client web applications
Simultaneously, we were developing web client applications built on the single-page application (SPA) paradigm, implemented in Angular 4 and TypeScript and connected to the backend via the REST API. We were lucky that everyone involved was part of the same team, since we shared knowledge and a sense of code quality. The client application parts containing standard functionality were packaged as NPM libraries so they could be reused from domain-specific projects. Angular projects were internally structured into submodules, all following the modular approach to building SPAs. Soon we introduced SignalR as our implementation of real-time communication between the clients and servers via WebSockets.
Modular approach in building SPAs in Angular
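On the server side, that real-time channel boils down to an ASP.NET Core SignalR hub the Angular clients connect to; the hub and method names below are placeholders, not our actual contract.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class NotificationHub : Hub
{
    // Invoked by the backend to push a message to every connected client;
    // the clients listen for the "notificationReceived" event.
    public Task BroadcastNotification(string message)
    {
        return Clients.All.SendAsync("notificationReceived", message);
    }
}

// Registration (in Startup): app.UseEndpoints(endpoints =>
//     endpoints.MapHub<NotificationHub>("/hubs/notifications"));
```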
And the number of backend services and client applications continued to grow…
Picking the right Cloud platform
Up to that point, we had not really concentrated on the business requirements, since they were evolving with us or were sometimes not clearly defined. Hence, quality requirements became our pillar in determining what this would soon become: a cloud platform. Our initial project evolved into a fully fledged distributed platform. We addressed authentication, notifications, user management, module handling, and auditing, all with separate business services that communicated with the clients over HTTP.
After presenting this platform to the stakeholders and convincing them of the need to move toward a distributed architecture of services and applications, use Microsoft Azure as the target cloud platform, and introduce a DevOps process, it all began to clear up. We started to learn and use many of the PaaS resources available on Azure, and we got used to not just planning, implementing, and testing the code but also thinking about the infrastructure and QoS factors like scalability, security, availability, and resiliency. It was fun to dive into PowerShell to track application logs, configure services through the portal, create ARM templates, provision resources, and use Azure Monitor to watch the services in action.
Defining the DevOps process
That was the time when Azure DevOps emerged as the successor to VSTS, and we were early adopters. We soon created our first CI/CD processes wrapped around DevOps projects bound to our Git repositories. Build pipelines use task groups that run on CI triggers tied to the GitFlow-based, source-controlled code. Release pipelines followed, picking up the build artifacts and deploying them to different stages. We initially started with many projects, one configured per Git repository, but it gradually became tough to maintain this structure. We later switched to a single project for the whole platform and made our pipelines share their task groups, variables, and artifacts.
Our first Azure DevOps CI/CD process workflow
We started with Azure App Service slots, but we soon realized that we simply had too many services, and our App Service plan reached its limit of hosted services. Currently, we have test, QA, i18n, and preview stages in our Dev/Test Azure subscription, each corresponding to its usage role (development team, QA team, localization team, sales). This has proven to be a good deployment structure for development and testing, except that everything is statically provisioned on Azure App Services. That will change when we start with containerization.
The number of services keeps growing, and we support it with the corresponding DevOps infrastructure as code. We have created ARM templates for the statically provisioned resources, and some resources we create dynamically in code. We also run daily maintenance tasks that check service health and database consistency and monitor messaging.
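A health endpoint of the kind those maintenance tasks probe can be exposed with the built-in ASP.NET Core health checks; this is only a minimal sketch, and any additional checks (database, messaging) are assumed to be plugged in via extra packages.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Extra checks (e.g., a database ping) can be chained onto this builder.
        services.AddHealthChecks();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        // The maintenance task simply polls GET /health and inspects the reported status.
        app.UseEndpoints(endpoints => endpoints.MapHealthChecks("/health"));
    }
}
```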
Modular architecture
New projects emerged, each addressing its own business needs, and we adopted a modular architecture, with projects built from several services, applications, and serverless code. It was clear that we could not keep providing just a service-oriented architecture (SOA) and that we needed something “bigger”.
As each service needed to know the user management context (which lives in a separate service) and had to be authenticated and authorized, services started referencing other services, which we packed into NuGet packages. Although providing shared libraries is not a bad thing in general, there is no separate deployment, or even development: every time a change was introduced into a referenced library, the services that used it broke. We could not even continue working on new features without producing further inconsistencies, and with existing projects inheriting from a base library, it was hard to press on. It was easier on some projects because the same team had developed them, but it was still hard to keep everything in step.
Both applications and services introduce new challenges
There was also the single point of failure concern. Since all services and applications communicated directly with each other, a failure of any one component broke the system, and there was nothing we could do to remedy the situation. It was also challenging to locate an issue in the right place, since each service contained a full log of operations regardless of which service actually provided that part of the functionality (for example, when a security problem occurred, the error was spread across all the services that used the security functionality).
But the real turning point came when we needed to address multitenancy. Since providing data related to different partners (subsidiaries, branch offices, or companies) meant that the data had to be vertically partitioned, it was evident that the communication between the services (and applications) had to be optimized and put into rigorous order. It was essential not only to provide the data, but to provide it precisely to whoever needs it, exactly when they need it.
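To give a feel for what that vertical partitioning implies at the data-access level, here is a hypothetical EF Core sketch that scopes every query to the caller's partner; the entity, the PartnerId property, and the way the tenant is resolved are assumptions made for the example, not our actual model.

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class WorkOrder
{
    public Guid Id { get; set; }
    public Guid PartnerId { get; set; } // the tenant: a subsidiary, branch office, or company
    public string Description { get; set; }
}

public class PlatformDbContext : DbContext
{
    private readonly Guid _currentPartnerId;

    public PlatformDbContext(DbContextOptions<PlatformDbContext> options, Guid currentPartnerId)
        : base(options)
    {
        _currentPartnerId = currentPartnerId; // resolved per request, e.g., from the caller's token
    }

    public DbSet<WorkOrder> WorkOrders => Set<WorkOrder>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every query against WorkOrders is automatically restricted to the current partner.
        modelBuilder.Entity<WorkOrder>()
            .HasQueryFilter(o => o.PartnerId == _currentPartnerId);
    }
}
```

With a filter like this in place, a service can no longer accidentally hand out another partner's data, which is exactly the kind of rigor the communication between services needed as well.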
So, microservices came our way.
That will be a new destination on this journey and the new chapter in this unique story yet to be told.