Background

Systems evolution is an eternal challenge for banks, retailers and ecommerce players; often the change is incremental, but occasionally it is punctuated by inflection points where technologies leap to a new generation. The pace of such change is accelerating, and oeCloud.io represents EdgeVerve’s perspective on how a next-generation platform should be delivered.

There are some key drivers underpinning this point of view: end-to-end digital experience, high availability and resilience, reduced cost of operation, scalability and flexibility at a granular level, commodity technologies and hardware, open SDK and APIs by default, non-disruptive upgradability, implementation independence and speed of inclusion for new technologies.

However, many of today's product lines have a very rigid architecture that demands a heavy TCD (Total Cost of Development) and TCO (Total Cost of Ownership) and requires a long lead time before they operate in production. This state severely restricts the growth of the customer base and requires heavy investment in people and infrastructure to enable growth.

Therefore the strategic direction should be to convert many of the product lines to be more flexible, dynamically changeable and PaaS-enabled, bringing in new, innovative revenue models based on consumption patterns.

To achieve this goal, there is clearly a need for an architectural blueprint that aids in developing, deploying and operating product lines with very low TCO, TCD and time to market. oeCloud.io was conceptualized with this objective in mind.

What is oeCloud.io?

The oeCloud.io Framework is an architectural blueprint for building enterprise systems, leveraging open source frameworks and powered by automation tools.

It provides framework enablers and tool-sets for end-to-end agile development of enterprise applications.

Current Situation

An architectural scan of the current product lines reveals the following common pain points:

  • Frontend app evolution is dependent on server-side updates/upgrades. This is a major issue with mobile apps, where the lifecycle of the app is not controlled by the server.

  • Multiple technical ways to consume the same service, which weakens the product's integration story with the ecosystem.

  • No holistic view of the applicable business rules per functionality, no consistency.

  • Higher TCO (for the customer) & Higher TCD (for product owners).

  • Duplicate investment in UI technologies, inconsistent UX, higher TCO.

  • Multi-layered service consumption results in performance overhead and execution inefficiencies (UX, TCO).

Architecture Principles

oeCloud.io is built on open source technologies, within a standards-based framework. oeCloud.io components are fine-grained microservices with well-defined APIs that are integrated using key principles of distributed systems such as:

  • Model Driven Architecture

  • Everything Personalizable

  • Eventual consistency

  • Event-driven

  • Small footprint – Self-bootstrapped

  • Cloud Readiness

Model Driven Architecture

Model-Driven Architecture (MDA) is a design approach that expresses domain concepts as specifications, called “models”, which can be implemented independently of the underlying platform.

The main aim of MDA is to separate design from architecture. Because the concepts and technologies used to realize designs and those used to realize architectures change at their own pace, decoupling them allows system developers to choose the best and most fitting options in both domains. The design addresses the functional (use case) requirements, while the architecture provides the infrastructure through which non-functional requirements like scalability, reliability and performance are realized.

For this reason, MDA changes the classic code-centric development process to a model-centric one. The developer can thus focus on modelling the semantics of the software system without being concerned with the details of the underlying platforms.

Everything Personalizable

The personalization concept deals with the ability to change the properties and behavior of a model and of model instances at run time. It provides the basis for an application that needs to change dynamically.

The ability to make changes on the fly is important in the API economy where connected customers expect an “always on” service level.

oeCloud.io’s approach to personalisation and the level of flexibility offered is unique in the market. Personalisation is possible on multiple levels, for example:

  1. Model Personalisation

    a. Personalised models

    b. Personalised APIs

    c. Personalised user interfaces

    d. A scope is attached to the model definition, allowing different scopes to be defined for models

  2. Service Personalisation

    a. Expression based service request and response personalisation

    b. Personalisation scope is attached to personalisation expression

  3. Data Personalisation

    a. Personalisation scope is stored along with data

  4. Datasource Personalisation

    a. Datasource can be personalised

Personalisations are stored in the database (not in code). This makes it easier to perform application upgrades and to back up and restore configurations if necessary. This personalisation capability is essential for companies that want the agility to launch individualised, tailored product offerings to their customers based on unique insights gained from customer behaviour analysis.

A personalization function is a function which, when applied to any object, transforms the object's properties and behavior. The transformation can also be attached to a condition which is evaluated at run time; if the condition is true, the transformation is applied to the object. A single personalization service is used to define the transformation rules and conditions, which are applied to any model, service or data in a transparent way.
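
The sketch below illustrates this idea in plain JavaScript; the rule shape, the applyPersonalization helper and the device context property are illustrative assumptions rather than the actual oeCloud.io personalization API.

    // Illustrative only: a personalization rule pairs a run-time condition with a
    // transformation that is applied to an object (a model definition, a service
    // response, a data record, ...) only when the condition holds.
    var mobileRule = {
      condition: function (ctx) {
        return ctx.device === 'mobile';            // evaluated at run time
      },
      transform: function (obj) {
        obj.fields = ['accountNumber', 'balance']; // e.g. trim the payload for mobile
        return obj;
      }
    };

    function applyPersonalization(rule, obj, ctx) {
      return rule.condition(ctx) ? rule.transform(obj) : obj;
    }

    // The same generic mechanism can be applied transparently to any model, service or data.
    var response = { fields: ['accountNumber', 'accountType', 'balance'] };
    console.log(applyPersonalization(mobileRule, response, { device: 'mobile' }));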

Eventual consistency

Eventual consistency is a characteristic of distributed computing systems such that the value for a specific data item will, given enough time without updates, be consistent across all nodes.

Building reliable distributed systems at a worldwide scale demands trade-offs between consistency and availability.
In distributed computing environments, the CAP theorem postulates that it is impossible to simultaneously guarantee consistency, availability and partition tolerance (CAP) and that administrators have to select the two out of three that are most important to them. 

oeCloud.io embraces the concept of eventual consistency and advocates the use of MongoDB for distributed data persistence.
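
As a minimal illustration of this trade-off (the connection string, database and collection names are assumptions, using the callback-style Node.js MongoDB driver):

    var MongoClient = require('mongodb').MongoClient;

    // Connect to a replica set; writes go to the primary, reads may be served by secondaries.
    MongoClient.connect('mongodb://host1,host2,host3/bankdb?replicaSet=rs0', function (err, db) {
      if (err) throw err;

      // w: 1 - the write is acknowledged by the primary only; secondaries converge asynchronously.
      db.collection('accounts').insertOne(
        { accountNumber: '000000000001', balance: 100 }, { w: 1 }, function (err) {
          if (err) throw err;

          // Reading from a secondary favours availability; the data is eventually consistent
          // and may briefly be stale (or absent) until replication catches up.
          db.collection('accounts', { readPreference: 'secondaryPreferred' })
            .findOne({ accountNumber: '000000000001' }, function (err, doc) {
              if (err) throw err;
              console.log(doc);
              db.close();
            });
        });
    });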

Event-driven

Most computer systems are built on a command-and-control scheme: one method calls another method and instructs it to perform some action or to retrieve some required information. But the real world often works differently: it is full of events (e.g. the alarm goes off; the phone rings; the “gas low” warning light in the car comes on).

As computer systems become more and more modelled on real-world events, they become highly interconnected and have to publish and receive an increasing number of events.

oeCloud.io will adopt this principle, exhibiting the following key characteristics (a minimal sketch follows the list below):

  • Broadcast Communications. Participating systems broadcast events to any interested party. More than one party can listen to the event and process it.

  • Timeliness. Systems publish events as they occur instead of storing them locally and waiting for the processing cycle, such as a nightly batch cycle.

  • Asynchrony. The publishing system does not wait for the receiving system(s) to process the event(s).
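
A minimal in-process sketch of these characteristics using Node.js's built-in EventEmitter is shown below; in oeCloud.io the same pattern applies to model events and, between systems, to a message broker rather than an in-process emitter.

    var EventEmitter = require('events').EventEmitter;
    var bus = new EventEmitter();

    // Broadcast: any number of interested parties can subscribe to the same event.
    bus.on('account.created', function (evt) {
      // Asynchrony: the listener defers its own work so the publisher is not blocked.
      setImmediate(function () { console.log('notify customer of', evt.accountNumber); });
    });
    bus.on('account.created', function (evt) {
      setImmediate(function () { console.log('update analytics for', evt.accountNumber); });
    });

    // Timeliness: the event is published as it occurs, not stored for a nightly batch.
    bus.emit('account.created', { accountNumber: '000000000001' });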

Small footprint – Self-bootstrapped

Portability is quickly becoming a major concern in cloud computing, driving the need to divide applications into distributed objects. Breaking applications up this way offers the ability to place them on different physical and virtual machines, in the cloud or on-premises. This flexibility offers advantages for workload management and provides the ability to easily build fault-tolerant systems.

oeCloud.io will adopt containerization technology (Docker) to divide applications into small containers. With this, developers can ensure that their applications scale, are resilient, and can be clustered, scheduled and orchestrated together with other containers.

Cloud Readiness

For applications to meet the demands of a cloud computing environment, the code needs to be designed around some basic principles:

  • Atomicity - An atomic piece of code has a specific and clearly defined purpose.

  • Statelessness - A stateless object does not hold information or context between calls on that object. In other words, each call on that object stands alone and does not rely on prior calls as part of an ongoing conversation. A caller could call a method on one instance of a stateless object, and then make a call on a different instance of the same object, and not be able to tell the difference.

  • Idempotence - If code is atomic and stateless, it may take on other attributes, like idempotence. If a method is idempotent, you can execute it repeatedly and get the same outcome without adversely affecting anything else, like state.

  • Parallelism - Building code that can run on multicore CPUs. Although oeCloud.io is based on NodeJS, a single-threaded, single-process platform, the throughput of the services built on it can be increased by running NodeJS in cluster mode behind a web proxy like nginx (see the sketch after this list).
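
A minimal sketch of such a cluster-mode setup is shown below; the port number is an assumption, and in a real deployment nginx (or HAProxy) would sit in front of the workers.

    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
      // Fork one stateless worker per CPU core.
      os.cpus().forEach(function () { cluster.fork(); });
      cluster.on('exit', function () { cluster.fork(); });   // replace crashed workers
    } else {
      http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(3000);
    }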

Architecture Realization – Service Layer

oeCloud.io – a Loosely Coupled, Highly Extensible Architecture

oeCloud.io supports a loosely coupled architecture that brings flexibility and extensibility to the rigid core of enterprise systems. oeCloud.io is designed cloud-first: it is multi-tenant, deployed via containers and elastic in its response to increased customer traffic demands.

Image

Model Driven Architecture

Model driven architecture is adopted with clear separation of platform-dependent and platform-independent concerns to improve reusability, adaptability, maintainability, portability and interoperability.

Image

a) A model can be considered as representing a functionally cohesive domain concept (e.g. Account) which is separated from its persistence concern (e.g. Oracle DB).

b) A model can simply be represented by a JSON definition, e.g.:

  {
    "name": "Account",
    "properties": {
      "accountNumber": {
        "type": "string",
        "required": true,
        "is": 12,
        "unique": true
      },
      "accountType": {
        "type": "string",
        "required": true
      },
      "openDate": {
        "type": "date",
        "required": true
      }
    }
  }

c) Model properties (e.g. accountNumber, accountType) can be declaratively specified (e.g. type as string, required as false).

d) Models can have local methods and remote methods.

e) Remote methods can be configured for a given model using prototypical inheritance (i.e. mixins in JavaScript).

f) Models also have operation hooks which emit events like “before save” and “after save” that can be subscribed to in order to perform actions (see the sketch after this list).

g) Models are accessible through REST APIs.
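
As an illustration of points f) and g), the sketch below uses the LoopBack model APIs that oeCloud.io builds on; the Account model, its properties and the countByType method are assumptions carried over from the earlier JSON example.

    // common/models/account.js - a minimal sketch using LoopBack model APIs.
    module.exports = function (Account) {
      // Operation hook: runs before every create/update of an Account instance.
      Account.observe('before save', function (ctx, next) {
        if (ctx.instance && !ctx.instance.openDate) {
          ctx.instance.openDate = new Date();   // default the open date
        }
        next();
      });

      // Remote method: automatically exposed over REST under the model's API root,
      // e.g. GET /Accounts/count-by-type?accountType=SAVINGS
      Account.countByType = function (accountType, cb) {
        Account.count({ accountType: accountType }, cb);
      };
      Account.remoteMethod('countByType', {
        accepts: { arg: 'accountType', type: 'string' },
        returns: { arg: 'count', type: 'number' },
        http: { path: '/count-by-type', verb: 'get' }
      });
    };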

Model Composition

Model composition is an aggregation of services that collectively represents a new service (aka a compound service or composite).

In simple words, model composition is the ability to treat multiple models together as a single entity. This allows the user to perform operations on multiple models in the same way as on a single model, and to combine unrelated models and operate on them as if they were one.

There are two types of model composition: implicit composite models and explicit composite models.

Implicit Composite: LoopBack provides a way to relate one or more models using relations. oeCloud.io uses these relations during get/post operations to retrieve/save data from/to the models. For example, you can get or post data for a parent model and its children in a single Web API call, as sketched below.
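
For example, assuming a Customer model related to Account through a hasMany relation named "accounts" (the names are illustrative), the standard LoopBack include filter retrieves the parent and its children in one call:

    // Node API; the equivalent REST call is GET /Customers?filter={"include":"accounts"}
    Customer.find({ include: 'accounts' }, function (err, customers) {
      if (err) throw err;
      customers.forEach(function (customer) {
        console.log(customer.name, customer.accounts().length + ' account(s)');
      });
    });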

Explicit Composite: If models are not related and you wish to get or post data for those unrelated models using a single operation, you need to construct an explicit composite model that declares which other models the newly constructed composite consists of.

Model Personalization

oeCloud.io Framework Models

Models in Loopback

oeCloud.io has adopted LoopBack, a highly extensible, open-source Node.js framework (http://loopback.io/), to realize model-driven architecture.

The following are the core LoopBack concepts:

API Driven Dynamic Flexibility

Models are automatically created and deployed as dynamic, end-to-end REST APIs which can run on premises and/or in the cloud.

These APIs form the interaction points for various user interfaces. The APIs respond contextually based on the various aspects of the calling system, providing the ability to create different responses for different channels.

Image

The above picture represents a run-time request which the API serves. Every such request is enhanced with additional properties that help identify the request uniquely. We call them “context contributors”. For example, we can define context contributors such as:

  • “device”, having values like car dashboard, ATM, mobile device, tablet, wearable, etc.

  • Or geographic location, with values of the actual latitude/longitude or country

  • Or a life event of the logged-in user, like a birthday or wedding day

The options to contextualize based on different parameters are limitless. Each of the context contributors is intelligently injected into every request; the contributor picks up this information from various banking systems, social sites or any subscribed 3rd-party services.

Based on the injected context contributors, the scaffolded API can respond differently, enabling adaptive experiences based on the channel through which the request comes, as sketched below.
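
A conceptual sketch of such a contributor is shown below as Express-style middleware; the callContext property and the header names are illustrative assumptions, not the exact oeCloud.io implementation.

    // Illustrative middleware: enrich every incoming request with context contributors
    // before it reaches the model APIs, so the same API can respond per device,
    // location, life event, etc.
    module.exports = function contextContributor() {
      return function (req, res, next) {
        req.callContext = req.callContext || {};
        req.callContext.device = req.headers['x-device-type'] || 'browser';
        req.callContext.location = req.headers['x-geo-country'];
        // ...further contributors could be looked up from banking systems,
        // social sites or subscribed 3rd-party services.
        next();
      };
    };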

Data Source abstraction using data source juggler

This layer of abstraction provides the ability for a given model to connect to multiple data sources. Currently there is support for MongoDB, Oracle, Postgres, MySQL and also any REST- or SOAP-based end points.

The layer contains a data source juggler component which intelligently switches the connection implementation based on the data source to be connected. This decouples the models from the data sources, providing the ability to create models without worrying about the capabilities and restrictions of the underlying database or service. This capability allows architects and business analysts to model business entities that represent the actual business scenarios. Traditionally, entities are restricted by the schema of already existing databases, and it can be a costly exercise to inject any change.

oeCloud.io has adopted LoopBack DataSource Juggler, an ORM that provides a common set of interfaces for interacting with databases, REST APIs and other data sources (https://www.npmjs.com/package/loopback-datasource-juggler).
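
As a minimal illustration (connection settings are assumed and the MongoDB connector package must be installed), the same model definition can be attached to different data sources through the juggler:

    var DataSource = require('loopback-datasource-juggler').DataSource;

    // Two data sources with different connectors behind one common interface.
    var mongoDs  = new DataSource('mongodb', { host: 'localhost', port: 27017, database: 'bank' });
    var memoryDs = new DataSource('memory');

    // The model definition itself is independent of where it is persisted.
    var Account = mongoDs.define('Account', {
      accountNumber: { type: String, required: true },
      accountType:   { type: String, required: true },
      openDate:      { type: Date }
    });

    // Re-pointing the model at another data source needs no change to the model.
    Account.attachTo(memoryDs);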

Integration abstraction using visual flow integration

This layer provides the ability to listen to any model events and initiate a flow of activities (or nodes) defined as “pipelines”. Each node can be modelled to perform an action.

This provides a powerful abstraction layer to integrate with 3rd-party systems and core banking APIs/service end points and to perform data transformations without needing to disturb the model code. The flows can be designed as both synchronous and asynchronous processes and hence can be used to support direct online transactions and offline transactions.

The application model APIs can continue to serve user requests even when the core systems are down or not responsive. An intelligent syncing mechanism can ensure that the data in the offline database stays in sync with the online core services.

This layer therefore acts as an enabler to integrate with any core services and 3rd-party services (e.g. OCF services) and to compose new, meaningful model data which can be used to service business scenarios.

oeCloud.io has also adopted Node-RED, a tool for wiring together APIs and online services in new and interesting ways (http://nodered.org/).

Encapsulating existing functionality

Model personalization enables creating variants of the same model while retaining the same API signature. A model can therefore intelligently point to the oeCloud.io local data store or to a REST/SOAP service, based on the context and the feature for which it is being accessed.

Model composition is another feature that provides the capability to stitch together functionality from different systems so that it is represented as a single API service.

These patterns can be used to design a heterogeneously operated system landscape while still achieving the designed business scenarios.

Architecture Realization – UI Layer

Dynamically Personalizable User Interface

As described above, the API layer adaptively responds to requests coming from different channels and contexts.

Additionally the user interface supports dynamic screens based on the context of the user journey.

oeCloud.io User interface consists of:

Adherence to Google Material Design standards ensures that the library is aligned to the cognitive behaviour of users on mobile and wearable devices, providing an excellent base for designing rich, intense user experiences.

However, using the user interface library is not mandatory. The UI can also be custom built in any technology and data-bound to the exposed APIs. oeCloud.io does not create any hard coupling between the APIs and the UI implementation. It supports fully custom-built HTML5 screens along with partially meta-driven and fully meta-driven screen development, for example:
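
The sketch below shows a fully custom screen data-binding to the exposed model APIs over HTTP; the endpoint path and the element id are assumptions based on the earlier Account model and LoopBack's default REST root.

    // Custom UI code binding to the auto-generated Account REST API.
    var filter = encodeURIComponent(JSON.stringify({ where: { accountType: 'SAVINGS' } }));

    fetch('/api/Accounts?filter=' + filter)
      .then(function (response) { return response.json(); })
      .then(function (accounts) {
        var list = document.getElementById('account-list');
        accounts.forEach(function (account) {
          var item = document.createElement('li');
          item.textContent = account.accountNumber + ' (' + account.accountType + ')';
          list.appendChild(item);
        });
      });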

Architecture Realization – Deployment

Resilience & Containerization

oeCloud.io will provide a cloud PaaS software platform (oeCloud.io CEP) that offers the following features and acts as a strong foundation for building resilient applications:

Containerization

oeCloud.io will ship containers as deployable units. This task will be integrated into the continuous delivery and deployment pipeline. This is core to implementing DevOps.

The containers will be hardened based on industry best practice to ensure security. EdgeVerve has an IS (Information Security) group that defines these best practices and establishes security gates before the containers can be marked for production release. Most of these gates will be automated in the DevOps pipeline. Docker is the default container of choice; however, other container technologies can be considered.

Service Scheduling

The containers will be automatically deployed onto a cluster using service scheduling. oeCloud.io CEP leverages the open-source Docker Swarm toolkit to orchestrate the services.

We have performed an evaluation of various cloud PaaS open source frameworks and concluded that Docker Swarm is the best option.

Fault Tolerance (Should withstand failure)

Fault tolerance is taken care of at multiple levels. The Swarm cluster can have multiple master servers which can take over should the primary master go down. The applications are scheduled by the Swarm master, whose scheduling capabilities ensure that each application runs on the node best suited for it; e.g. MongoDB will run on the Swarm nodes which have SSDs and are backed by a highly available file system. The file system could be any resilient storage provided by cloud providers like Amazon, Azure, etc., or storage provided by an NFS server.

The application/service specification is written in such a way that it is resilient on its own; e.g. a MongoDB service is a set of at least 3 evmongo containers running on 3 different Swarm nodes with SSD storage. Should a container go down, the Swarm scheduler takes care of starting it again; should a Swarm node go down, the Swarm scheduler will bring up the container on another node. The MongoDB replica sets ensure that elections happen in time and a new primary is elected so that the service is always available.

We do plan to write a Health Monitoring plugin for Consul which would be able to check the health of the services and take corrective actions as required.

Note: Similar features and capabilities exist on Mesos and Cloud Foundry and can be leveraged based on the technology of choice for PaaS

Load Balancing (Should not crumble due to traffic)

oeCloud.io CEP infrastructure uses HAProxy, an open-source, reliable, high-performance TCP/HTTP load balancer, and Consul, a tool for discovering and configuring services.

As soon as a service comes up, it is registered with a Consul discovery backend. A daemon process modifies the HAProxy configuration to ensure that the new service is exposed through the load balancer and is accessible to the external world.

HAProxy is a time-tested load balancer; the published numbers on HAProxy's website show that it can support up to 40,000 requests per second.

Note: Similar features and capabilities exist on Mesos and Cloud Foundry and can be leveraged based on the technology of choice for PaaS

Elastic (Can grow or shrink based on demand)

Applications developed using oeCloud.io are primarily stateless, which enables them to scale out easily.

Docker inherently supports elasticity by allowing service roles to be scaled using the docker-compose tool or Docker Swarm APIs. oeCloud.io CEP will provide a service to scale a particular service up and down based on metrics such as load on the HTTP service, the number of messages in the queue, etc.

Note: Similar features and capabilities exist on Mesos and Cloud Foundry and can be leveraged based on the technology of choice for PaaS

Multi-tenancy and Isolation (Can have org-wise dedicated clusters with isolation ensured)

At present, all the applications which run inside oeCloud.io CEP run inside a software-defined network, which ensures that services from one application cannot access services from another application. If a developer wants to consume a service written by another developer running on the same instance of oeCloud.io CEP, they have to access it exactly the way anyone from the outside world would access it; there are no shortcuts provided inside oeCloud.io CEP to short-circuit the route. Volume-level isolation can be achieved for the various tenants in a multi-tenant deployment.

Note: Similar features and capabilities exist on Mesos and Cloud Foundry and can be leveraged based on the technology of choice for PaaS

oeCloud.io CEP Architecture

The architecture of oeCloud.io CEP is depicted below.

Image

  • The application developer provides a docker-compose.yaml file which specifies the dependencies and the services the application wants to expose to the external world.

  • At a high level, the yaml file describes the deployment architecture, e.g. the number of nodes in the MongoDB replica set, the number of NodeJS nodes, which service can talk to which, which service to bring up first, which ports to expose and map, etc.

  • The docker-compose tool is used to provide this configuration to the Swarm cluster, which in turn schedules these containers on the various Swarm nodes.

  • There is a registrator running on each Docker daemon in the Swarm cluster, which listens for any new containers that come up and have an exposed service.

  • The registrator registers these containers as a service with the Consul cluster, which runs outside the Swarm cluster.

  • There is a consul-template daemon (again outside the Swarm cluster) which generates a new configuration for HAProxy and sends it a SIGHUP.

  • HAProxy reloads its configuration and the new service is now available for consumption through HAProxy.

Architecture Realization – DevOps

Docker containerization is the foundation for continuous delivery. Docker allows us to easily package up an application in a container that can then be moved from one environment to another as a self-contained package.

This has many of the benefits of traditional virtual machines without the cost of very large files that are difficult to move and update.

For a developer, these containers are easy to work with because they are light and they have several features such as layering. Layering enables updating the Docker container only with the changes performed.

Thus, from an operations perspective, these containers are easy to consume and run, and have a light footprint. These characteristics form the basis for the development of distributed applications or microservices.

The diagram below represents the CI and CD pipeline flow:

Image

  1. oeCloud.io application developers can start with an oeCloud-based application by following the steps mentioned here, and clone their fork to their development machine for coding.
  2. GitLab CI triggers on each developer code commit or master merge request. Multiple shared runners are made available by default in the oeCloud.io CEP infrastructure (remote Docker Swarm), which can execute CI requests in parallel. Dedicated GitLab CI runners can be configured for a project or a fork if needed (please refer to the oeCloud.io CEP guide to configure a dedicated GitLab CI runner).
  3. After the code build, acceptance tests execute to validate coding standards, coding styles and unit test cases. On successful execution of the acceptance test cases, a Docker image (an Alpine-OS-based image with the NodeJS runtime and the code base, along with dependent npm and bower packages installed) is built and pushed to the oeCloud.io private Docker registry. An independent image with the naming convention «GIT UserName»-«appname» (e.g. Pradeep-refapp) is created for each developer fork on fork commit, and one image per app (e.g. refapp) is created on master merge. On failure, CI terminates with a notification to the developer.
  4. Continuous delivery includes pulling the latest application Docker image (on either fork commit or master merge), deploying it into Docker Swarm and executing post-deployment test cases. On successful execution of the test cases and deployment, for a fork commit the developer can access the application (with their changes) at a URL following the convention https://«username»-«appname».«domainname» (e.g. Pradeep-refapp.evfapp.dev). On master merge, the developer or QA can access the application (the latest master code base including all developer changes) at a URL following the convention https://«appname».«domainname» (e.g. refapp.evfapp.dev). On failure, CD terminates by reverting to the previous stable build and notifies both developer and QA.
  5. An approver periodically approves a stable version of the application for public availability. After approval, the latest stable version is pushed to the public Docker registry and deployed into the controlled public cloud accessible by the sales team and customers. On request, access can be granted for customers to pull the images directly from the public Docker registry.

Architecture Realization – oeCloud.io Workflow Engine

Many times, business applications built using oeCloud.io need to run some business logic in a workflow manner. A workflow may or may not involve human intervention.

The following diagram depicts the oe-workflow support provided by oeCloud.io.

image

oe-workflow is an oeCloud.io-based app which provides its own workflow engine, as well as support for connecting to external workflow engines such as Activiti (and other engines in future). oe-workflow exposes a set of generic models which can be used to interact with the workflow engine (internal or external, based on configuration).

The models which are exposed are:

  • WorkflowDefinition: This model maintains details about all the workflows that have been published. Each workflow definition can have multiple process definitions.

  • WorkflowInstance: This model maintains details of all the workflows that are deployed. Each WorkflowInstance can have multiple process instances.

  • ProcessDefinition: This model maintains details about all the process definitions deployed and generated through a WorkflowDefinition.

  • ProcessInstance: This model maintains details of the process instances created by a WorkflowInstance.

  • Task: This model is used to manage user tasks for a process instance executing a workflow.

How to configure workflow engine

The OE Workflow Engine (core-engine) will always be enabled.

To enable Activiti, create a new boot script and enable Activiti for workflow support using the code below:

    // Example boot script (e.g. server/boot/enable-activiti.js)
    module.exports = function (app) {
      // Replace the placeholders with the actual Activiti host and port.
      var url = "http://{ACTIVITI_HOST}:{ACTIVITI_PORT}/activiti-rest/service/";
      var options = {}; // engine-specific options, if any

      app.models.Activiti_Manager.enable(url, options, function cb(err, res) {
        if (err) {
          console.error(err);
          return;
        }
        console.log(res);
      });
    };

Attaching workflow to models

The lifecycle of any entity in an oeCloud.io-based application can be controlled via a workflow process.

The following API endpoints are available for this purpose:

  • /Activiti_Manager/attachWorkflow: Attach a model operation to the Activiti workflow engine.

  • /Activiti_Manager/endAttachWfRequest: Complete a workflow request given to the Activiti engine.

  • /WorkflowManager/attachWorkflow: Attach a model operation to the EV workflow engine.

  • /WorkflowManager/endAttachWfRequest: Complete a workflow request given to the EV workflow engine.

A workflow can be enabled by posting the JSON below to the attachWorkflow API of the WorkflowManager/Activiti_Manager models.

    {
      "modelName": "Model Name",
      "workflowBody": "Workflow Definition Body",
      "operation": "Operation Name",
      "wfDependent": "Workflow Dependency Option"
    }

Custom observer hooks will then be enabled for the model whose name was supplied as modelName in the POST request.