Sunday, March 23, 2014

Decomposing system into microservices

In my previous post on microservices, I made a point about modelling behaviour vs features.

"Often, developers take an approach, where they endeavour to capture an entire feature in a micro service. That might very well be a mistake. The right approach is to model behaviour and not feature within a micro service."

This is a rather crude and concise way of putting across the point I wanted to make. In this post, we shall expand on and refine the thought behind it, and look at a generic approach that helps developers understand what constitutes a microservice.

How are microservices organized?

Microservices are organized around business capabilities.

TOGAF describes a capability as
"An ability that an organization, person, or system possesses. Capabilities are typically expressed in general and high-level terms and typically require a combination of organization, people, processes, and technology to achieve. For example, marketing, customer contact, or outbound telemarketing."

We can expect a bunch of microservices to help realize a single business capability.

What constitutes a single micro service? When envisioning a microservice, how does a developer go about it?

Let us dub all the cool things that are required to support a microservice, such as the plumbing for monitoring, dependency health checks, etc., as the "infrastructure libraries". Every microservice will have those by default.

What we are concerned with is how to carve out the business logic that goes into a single microservice. To understand this, our narrative must take a small detour.

What are architectural styles?

We must first understand what an architectural style is. An architectural style is an abstraction a level above an architecture. In simple terms, one can think of an analogy: what patterns are to class design, architectural styles are to architecture.

An architecture is usually defined by combining multiple architectural styles.

The developer would want to mix the microservices architectural style with other architectural styles, such as the RESTful, event-sourced, or pipes-and-filters styles.

The many architectures of a system

In large systems, it is common to have the system decomposed into subsystems, and each subsystem can have its own architecture. What is required is that each subsystem's architecture complies with the overall constraints imposed at the system level, and that the subsystem interacts well with its collaborating subsystems.

We are concerned with systems where the common theme is the microservices architectural style.

Take the example of a large online shopping system. We might have a subsystem that takes care of the shopping cart, and another subsystem that takes care of CRM activities. We can envision a system where the shopping cart subsystem is authored as a set of RESTful microservices, whereas the CRM subsystem is authored as a set of microservices adhering to the pipes-and-filters style.
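
To make the contrast concrete, here is a minimal sketch (mine, with hypothetical names, not taken from any real system) of the pipes-and-filters idea in Scala: each filter is a small function, and the pipeline is just their composition.

  // A minimal, hypothetical sketch of the pipes-and-filters style in Scala.
  // Each "filter" is a small, self-contained transformation; the "pipe" is
  // plain function composition.
  object CrmPipelineSketch {
    case class CustomerEvent(customerId: String, channel: String, payload: String)

    // Individual filters: each does one narrow thing.
    val normalizeChannel: CustomerEvent => CustomerEvent =
      e => e.copy(channel = e.channel.trim.toLowerCase)

    val dropEmptyPayloads: CustomerEvent => Option[CustomerEvent] =
      e => if (e.payload.nonEmpty) Some(e) else None

    val tagPriority: CustomerEvent => CustomerEvent =
      e => if (e.channel == "phone") e.copy(payload = "[priority] " + e.payload) else e

    // The pipeline: filters wired together with ordinary composition.
    def pipeline(event: CustomerEvent): Option[CustomerEvent] =
      dropEmptyPayloads(normalizeChannel(event)).map(tagPriority)

    def main(args: Array[String]): Unit = {
      val result = pipeline(CustomerEvent("c-42", " Phone ", "please call back"))
      println(result) // Some(CustomerEvent(c-42,phone,[priority] please call back))
    }
  }

A RESTful subsystem would instead be organized around resources and HTTP verbs; the point is simply that each subsystem can pick the style that fits it best.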

The approach

Consider the following approach to the problem of modelling microservices in a more scientific manner:


  1. First, we must settle on which architectural paradigm we shall follow for the business capability under consideration.
    The architectural style in use lends the vocabulary that is used to describe what constitutes a single microservice.
    For example, if we are creating "resource oriented microservices", i.e. mixing the REST architectural style with the microservices architectural style, it makes a lot of sense to think in terms of resources.
  2. If we are dealing with legacy systems, think behaviour-first rather than code-first.
    Developers tend to get attached to the code and attempt to preserve it when breaking it down into microservices. This leads to a thought process of "How much of this code can I carve out into a microservice?".
    Remember that one of the benefits of microservices is that the code can be thrown away and rewritten rather than refactored. What is important is that the service is well behaved.
  3. For each microservice (a minimal sketch of this flow appears after this list):
    1. Write down the single responsibility of the service in one or two lines in plain English (or whichever is your preferred language). If more than a couple of lines are required, then revisit and try to break the responsibility down further.
      Draw on the vocabulary of the chosen architectural style to ensure coherent thought and clear communication.
    2. Write test cases to reflect the behaviour.
    3. Start implementing the microservice. If the service becomes too large, revisit the described responsibility, break it down further, and repeat these three steps until a microservice of the desired size is obtained.
  4. Ensure that each microservice is independently deployable. This can be achieved by
    1. Ensuring that our microservices are loosely coupled and well behaved.
    2. Ensuring that each microservice manages its own data. Considering the size of microservices, this can become quite challenging. It might often require a paradigm shift in the way data is managed.
      To achieve such data independence, the developer may rely on the following approaches:
      1. Polyglot persistence
      2. Accepting data duplication, i.e. some data might be persisted by multiple services.
      3. Accepting eventual consistency within the system where it makes sense.
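
As a concrete illustration of step 3, here is a minimal Scala sketch of my own (the responsibility, the service, and the numbers are all hypothetical): the single responsibility written down in one line, the behaviour captured as assertions, and a small implementation.

  // Responsibility (one line): "Given the items in a cart, compute the total
  // price, applying a flat 10% discount on totals over 100."
  object CartPricingServiceSketch {
    case class LineItem(sku: String, unitPrice: BigDecimal, quantity: Int)

    def total(items: Seq[LineItem]): BigDecimal = {
      val gross = items.map(i => i.unitPrice * i.quantity).sum
      if (gross > BigDecimal(100)) gross * BigDecimal("0.9") else gross
    }

    // Behaviour expressed as plain assertions (a real service would use a
    // proper test framework).
    def main(args: Array[String]): Unit = {
      assert(total(Nil) == BigDecimal(0))
      assert(total(Seq(LineItem("a", BigDecimal(40), 2))) == BigDecimal(80))
      assert(total(Seq(LineItem("a", BigDecimal(60), 2))) == BigDecimal(108))
      println("behaviour verified")
    }
  }
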
Decomposing a system into microservices can at times seem like an art, even though we have endeavoured to present a more scientific approach here. Regardless, the only way to truly understand how to do it is to actually do it!


Thursday, March 6, 2014

ScalaTips: Finding implicits applied from within eclipse

Scala is considered a language that is easy to learn and hard to master. It can be a tad tricky to understand code written by a peer, even after acquiring intermediate proficiency.

The ScalaTips series of posts will focus on tricks of the trade that make our lives as developers simpler. It will NOT focus on material for learning the language itself.

Scala is a language well suited to IDE-based development, as is evident from the blossoming ecosystem of tools and plugins.

In this post we will take a quick look at how the Scala IDE for Eclipse assists in figuring out where implicit conversions have been applied.

Implicit conversions are a particularly tricky bit when reading and understanding code. They allow automatic conversion from one type to another, so a piece of logic gets invoked without any visual indication of it being called.

Implicit conversions can be brought into scope by import statements. Since Scala allows imports to be embedded practically anywhere in the code (at package level, within a class, object, method, or code block), we can have a number of disparate implicit conversions for the same type at work within a single eye span.
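
As a tiny, contrived example (the names are mine), here is an implicit conversion that only becomes active once its import is in scope:

  import scala.language.implicitConversions

  object TimeConversions {
    case class Seconds(value: Int)
    // The conversion itself: Int => Seconds.
    implicit def intToSeconds(n: Int): Seconds = Seconds(n)
  }

  object ImplicitImportExample {
    def pause(duration: TimeConversions.Seconds): String = s"pausing for ${duration.value}s"

    def main(args: Array[String]): Unit = {
      import TimeConversions._   // the conversion is only in scope after this import
      println(pause(30))         // 30 is silently converted to Seconds(30)
    }
  }
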

IDE to rescue!

Most modern IDEs provide some visual indicator when an implicit conversion is applied.

For example, by default the Scala IDE for Eclipse will underline an expression whose resultant value has an implicit conversion applied to it.




In this code, "httpRequest.getHeaders(Names.CONNECTION)" is underlined. The getHeaders method returns a list of strings; the returned object is of type java.util.List<String>.

An implicit conversion is being applied here to wrap the list in a decorator, "AsScala", which can then be used to convert the collection into a suitable collection from Scala's own collection library: scala.collection.mutable.Buffer[String].
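
Written out explicitly, the conversion the IDE is pointing at looks roughly like this; the sketch below uses a plain java.util.ArrayList instead of the HTTP request from the screenshot.

  import scala.collection.JavaConverters._   // brings the AsScala decorator into scope
  import scala.collection.mutable.Buffer

  object HeadersExample {
    def main(args: Array[String]): Unit = {
      val javaHeaders = new java.util.ArrayList[String]()
      javaHeaders.add("keep-alive")
      javaHeaders.add("close")

      // What the underlined code relies on implicitly, written out explicitly:
      val scalaHeaders: Buffer[String] = javaHeaders.asScala
      println(scalaHeaders.mkString(", "))
    }
  }
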

There is also an indicator on the Eclipse marker bar. Hovering over the marker reveals the details of the implicit conversion.



Eclipse even lets us navigate to where the implicit conversion is defined: just place the cursor on the expression to which the implicit conversion is applied and press Alt+F3.

Sunday, March 2, 2014

Microservices architecture

Microservices architecture is the "in thing" these days and is generating plenty of chatter. Twitter is doing it, so is Tumblr, and so are a number of other online services organizations.

It is right now at a stage where it is more a buzzword than a concrete concept. Technical folks are figuring out how microservices architecture would work for their particular domain or product.

In general, what benefits do people derive from microservices?
  1. Modifiability
  2. Ease in rapidly deploying code to production (multiple times a day)
  3. Reduced testing effort
  4. Faster feedback
  5. More freedom to experiment
Rather than providing a formal specification of the architectural style, let us jot down some constraints that apply to an architecture conforming to this style.

Size

  1. Services are very small. To quantify "very small": typically code that fits on a single screen.
  2. Small enough to rewrite rather than refactor.

When we talk about size, we are talking about the business logic embedded in the microservice.

The rest of the working code that supports the business use case should be consumed as part of a framework. For example, if the service needs to perform some database operations, then we are going to depend on a framework that facilitates that communication.

The key here is to understand that the frameworks are generic. As a philosophy, I prefer open source.

Single Responsibility Principle

  1. Each microservice does just one thing and does it very well.
  2. Microservices are loosely coupled.
  3. Microservices represent behaviour, not features.

The process of establishing and realizing an architecture is inherently bound to the process of decomposing a problem domain into small, manageable units. The single responsibility principle is a sound one regardless of which architectural style we are dealing with.

Fred George makes a point in his talk where he advises that a microservice should not even be aware of any other microservice. He talks about a rapids - rivers - ponds concept, which is a pretty interesting way of decoupling the services. Have a look at the talk here.

Often, developers take an approach, where they endeavour to capture an entire feature in a micro service. That might very well be a mistake. The right approach is to model behaviour and not feature within a micro service.

A feature can be of any size, and therefore may be confined to a single microservice or spread across multiple services.

Deployment

  1. Automated provisioning.
  2. Containerless.

Supporting a microservices architecture without a robust continuous delivery pipeline is not possible.

Microservices should be as lightweight as possible, and it should be possible to manage the life cycle of each microservice independently. To do away with any excess baggage, it is advisable not to deploy the services in heavy application servers.
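
For illustration, here is a minimal sketch of such a containerless service in Scala, using nothing but the HTTP server bundled with the JDK; the endpoint and payload are hypothetical.

  import java.net.InetSocketAddress
  import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}

  // A containerless microservice: no application server, just a plain process
  // embedding its own HTTP listener.
  object GreetingService {
    def main(args: Array[String]): Unit = {
      val server = HttpServer.create(new InetSocketAddress(8080), 0)

      server.createContext("/greeting", new HttpHandler {
        override def handle(exchange: HttpExchange): Unit = {
          val body = """{"message": "hello"}""".getBytes("UTF-8")
          exchange.getResponseHeaders.add("Content-Type", "application/json")
          exchange.sendResponseHeaders(200, body.length)
          val out = exchange.getResponseBody
          out.write(body)
          out.close()
        }
      })

      server.start()
      println("greeting service listening on :8080")
    }
  }
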

Versioning

At its very base, microservices architecture is all about litheness + loose coupling. Each microservice should have its own life cycle, independent of the others.

In practice, we might have a service being upgraded and modified much faster than its consumers.

API versioning can help us achieve the velocity of change we need within this architectural paradigm.
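
A minimal sketch of the idea (the routes and payloads are hypothetical): both versions of a contract stay deployed side by side, so a fast-moving service does not force its slower consumers to upgrade in lockstep.

  // Routing requests to version-specific handlers so old consumers keep
  // working while new ones adopt the v2 contract.
  object VersionedApi {
    type Handler = Map[String, String] => String

    val v1Orders: Handler = params => s"order ${params("id")} (v1 payload)"
    val v2Orders: Handler = params => s"""{"orderId": "${params("id")}", "schema": "v2"}"""

    // Both versions stay deployed until every consumer has migrated.
    val routes: Map[String, Handler] = Map(
      "/v1/orders" -> v1Orders,
      "/v2/orders" -> v2Orders
    )

    def main(args: Array[String]): Unit = {
      println(routes("/v1/orders")(Map("id" -> "42")))
      println(routes("/v2/orders")(Map("id" -> "42")))
    }
  }
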

Well behaved services

Well behaved services adhere to published contracts when participating in conversations:

  1. Obeying business rules under all circumstances
  2. Reporting errors consistently
  3. Not consuming excessive resources, and obeying all non-functional requirements
To ensure good behaviour, it is important to monitor the services' behaviour. The number of services in the deployment is expected to be higher when compared to other architectural styles, so robust automation that allows introspection of the services is naturally required.

The following practices are recommended (a minimal sketch of the last two follows this list):
  1. Watchdog processes for in-app monitoring
  2. Publishing current status and health information on a well-known URL
  3. Publishing the health status of all the dependencies
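
Here is a minimal sketch of the last two practices, with hypothetical dependency names: the service aggregates its own status with the status of each of its dependencies into a single report, which a well-known health URL would then publish (the HTTP part is omitted for brevity).

  // A sketch of the health-reporting idea. Dependency names and checks are
  // hypothetical; a real service would perform genuine connectivity checks.
  object HealthReportSketch {
    case class DependencyHealth(name: String, healthy: Boolean)
    case class ServiceHealth(service: String, healthy: Boolean, dependencies: Seq[DependencyHealth])

    def checkDatabase(): DependencyHealth   = DependencyHealth("orders-db", healthy = true)
    def checkDownstream(): DependencyHealth = DependencyHealth("payment-service", healthy = true)

    def currentHealth(): ServiceHealth = {
      val deps = Seq(checkDatabase(), checkDownstream())
      ServiceHealth("cart-service", healthy = deps.forall(_.healthy), dependencies = deps)
    }

    // This is the payload a /health URL would publish.
    def main(args: Array[String]): Unit =
      println(currentHealth())
  }
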



Thursday, February 20, 2014

Scala major version incompatibilities

Scala major version incompatibilities broke some of the most basic expectations I had of a language and its ecosystem. Coming from the Java world, backward compatibility had become a fact of life, not something to be given a second thought.

If I have built a class using JDK 5, I do not have to worry about bytecode compatibility when running with or consuming it from a piece of code I am compiling with JDK 7.

In Scala, on the other hand, code compiled with Scala 2.8 cannot be consumed from Scala 2.9 or 2.10. It has to be compiled all over again.

This is a major hurdle for the Scala platform in becoming enterprise ready.

Here is a quote from David Pollak, the creator of Lift.

As traits change, the classes that depend on those traits “break” unless they are recompiled.  This means that as Scala grows and evolves, all the libraries that sit on top of Scala must be recompiled against the latest version of Scala.  This has presented a significant challenge as Scala 2.8 has evolved.  Each of the layers necessary to compile libraries (e.g., Scala -> ScalaCheck -> ScalaTest -> Specs -> Lift) must be compiled against the same version of Scala.  Because of this issue, it’s been challenging to keep the growing number of Scala libraries up to date with Scala 2.8.

What did he do? He started a community initiative called "Fresh Scala". He talks about the impact of the version fragility problem and why he started the community here.

Since then, the Scala folks at Typesafe have taken notice and have been investing in solving these problems. For example, they have introduced a cross-compilation feature within SBT builds. Well, it is a start!
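
For the curious, cross building in an SBT build looks roughly like this (the versions below are only an example); prefixing a task with + runs it against every listed Scala version.

  // build.sbt (illustrative versions only)
  name := "my-library"

  scalaVersion := "2.10.3"

  // Build and publish against several Scala versions.
  crossScalaVersions := Seq("2.9.3", "2.10.3")

  // Then, from the sbt shell:
  //   > +compile
  //   > +publish
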

Major version incompatibility is a factor large enough to shape the Scala community's efforts and the ecosystem as a whole.

Thursday, December 26, 2013

Increased relevance of Functional programming

If we think of the computer as a finite state machine (FSM), an imperative program specifies commands to update the machine state. Each statement causes a change in the state, and the state changes a million times per second.

Even if there is a meaning to a program beyond the hardware details, it is often obscured such that only a handful of experts can truly comprehend it.

As the study of languages has progressed, different paradigms have emerged which aim to allow programs to be structured in a more comprehensible manner, where abstract notions are more easily represented and understood.

Each paradigm has brought to the table not just a way of organizing code, but a way for developers to organize their thoughts on how to tackle a specific problem; be it structured programming, object oriented programming, data flow programming, or any other paradigm.

I found the following statement about functional programming very cogent.
"Functional programming aims to give each program a straightforward mathematical meaning. It simplifies our mental image of execution, for there are no state changes. Execution is the reduction of an expression to its value, replacing equals by equals. Most function definitions can be understood within elementary mathematics."
Functional programming and functional programming languages have been around for a long time. However, until recently their adoption has been limited, largely marginalized to academic and niche domains.
This has been due to the fact that functional programming languages have traditionally been slower than the other prevalent solutions.

However, this is changing. The change is largely dictated by changes in the direction hardware industry has gone.

Rather than focusing on creating increasingly powerful and fast single-core processors, the focus has shifted to multi-core solutions. With advances in networking and storage, the cost of having distributed systems has gone down.

If we talk about functional programming at a depth that just scratches the surface, then the following two properties are the most important:
  • Functional programming encourages writing functions that have no side effects.
  • Immutability is a cornerstone of the functional programming paradigm: immutable references and values, together with solid support for persistent data structures (see the sketch after this list).
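
A tiny Scala illustration of both properties (the sketch is mine, not from any particular library):

  object PurityAndImmutability {
    // A pure function: its result depends only on its inputs, and it changes
    // no state. Calling it from many threads at once is safe by construction.
    def discounted(prices: List[BigDecimal], rate: BigDecimal): List[BigDecimal] =
      prices.map(p => p * (BigDecimal(1) - rate))

    def main(args: Array[String]): Unit = {
      val original = List(BigDecimal(100), BigDecimal(250))

      // Immutability: "adding" an element yields a new list; the original is
      // untouched and the tail is shared (a persistent data structure).
      val extended = BigDecimal(40) :: original

      println(discounted(original, BigDecimal("0.1")))   // List(90.0, 225.0)
      println(original)                                   // List(100, 250) -- unchanged
      println(extended)                                   // List(40, 100, 250)
    }
  }
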
Just looking at these two facts, one can immediately start to see how the changes in the hardware industry are synergetic with functional programming.

A deeper analysis of how these trends have helped functional programming languages emerge stronger is outside the purview of this post. We will get to that eventually.

Functional programming languages have been hovering on the horizon for a long time; maybe their time to truly shine has finally come.