Orchestration Proper

An expression I think is an absolute misnomer is “Cloud Orchestration”, as often employed by Cloud Resource Management platform vendors.

Orchestration, in my opinion, is configuration-driven process execution which is flexible enough to control anything that has an (electronic) interface of some description, as well as to drop off into manual/human activities where necessary. Orchestration isn’t, in my opinion, limited, as it is in most Cloud Resource Management platforms, to a small set of hard-coded processes for spinning up virtual machines, configuring a bit of virtualised network or provisioning some storage on a predefined datastore.
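To make the "configuration-driven process execution" idea concrete, here is a minimal sketch (not any real product) in which the process is data rather than code, and each step, automated or manual, is dispatched to a pluggable handler. All handler and step names here are illustrative assumptions:

```python
# A minimal sketch of configuration-driven process execution: the process
# definition is plain configuration, and each step is dispatched to a
# pluggable handler. Handler and step names are illustrative.

def provision_vm(params):
    return f"vm created with {params['cpus']} cpus"

def manual_task(params):
    # A real engine would park the process here and wait for a human
    # referral; this sketch simply records that a manual step is pending.
    return f"awaiting sign-off: {params['instructions']}"

HANDLERS = {"provision_vm": provision_vm, "manual_task": manual_task}

def run(process):
    """Execute a process definition (plain configuration) step by step."""
    return [HANDLERS[step["action"]](step["params"]) for step in process["steps"]]

process = {
    "name": "new-customer-environment",
    "steps": [
        {"action": "provision_vm", "params": {"cpus": 4}},
        {"action": "manual_task", "params": {"instructions": "verify firewall rules"}},
    ],
}
```

The point of the sketch is that adding a new step type, whether it fronts an API, a database or a human, is a new handler entry and a line of configuration, not a change to the engine.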

The immutable nature of processes in Cloud Resource Management platforms, which can only control their own little universes, makes it at best very difficult and at worst impossible to incorporate anything new, whether manual or automated, beyond the out-of-the-box functionality of the platform. Thankfully, the majority of Cloud Resource Management platforms offer a reasonable API to call upon, although sometimes it is necessary to dip into their databases to configure something on the underlying platform that the Cloud Resource Management platform works so hard to abstract from you in the first place.

Some of these Cloud Resource Management platforms also include the capability to extend the platform through other mechanisms, which results in tightly coupled custom extensions that need to go through the arduous process of being migrated to the out-of-the-box capability when (and indeed if) it becomes available.

Extending beyond the capability of the Cloud Resource Management platform is especially important in the service provider world, where stiff competition and the need to differentiate mean it is often necessary to be an early adopter of technological revolutions such as solid-state next-generation storage platforms, converged compute/network solutions and software-defined networking (SDN) – we’ll save the service provider conundrum for another post.

The answer is to introduce a higher-level orchestration engine which can be used to do the real “heavy lifting” across a disparate collection of end-points. Once you’ve decided you need one, the next question is: build or buy? A quick disclaimer is needed here: as an ex-developer from some moons ago, my view may be somewhat biased.

The buy option typically means either a broad enterprise grade integration platform, or a run-book automation capability quite often attached to a heavy-weight infrastructure management or ITSM suite. Although they do exist, there really aren’t many lightweight commercial orchestration engines, fewer still that claim to operate in the cloud/infrastructure space.

Any orchestration engines built by vendors that operate in the cloud/infrastructure automation space, whether lightweight or heavyweight, often include a reasonable library of standard adaptors, as long as there is an acceptance of their biases: most work very well with the vendor’s own infrastructure products, then those vendors with whom they have built strategic relationships, then industry leaders in spaces where the vendor has no real choice – usually in that order. Conversely, emerging technologies, competing products and products which aren’t industry leaders have comparatively poor support, which sadly lags behind or in most cases is completely non-existent. Extensibility mechanisms are often in place, but this invariably means highly specialised and often non-portable development, as well as coupling to the vendor’s release cycle.
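The adaptor problem described above is easiest to see in code. The sketch below (all class and method names are illustrative assumptions, not any vendor's API) shows the shape of a thin adaptor layer: the engine talks to an interface, and supporting a new or emerging technology is one small class rather than a wait on the vendor's release cycle:

```python
# A sketch of a thin adaptor layer: the orchestration engine depends only on
# an interface, so new end-points are plug-in classes. Names are illustrative.
from abc import ABC, abstractmethod

class StorageAdaptor(ABC):
    @abstractmethod
    def provision(self, size_gb: int) -> str: ...

class LegacySanAdaptor(StorageAdaptor):
    def provision(self, size_gb: int) -> str:
        return f"LUN of {size_gb}GB carved on legacy SAN"

class NextGenFlashAdaptor(StorageAdaptor):
    # An emerging solid-state platform a commercial engine may not yet
    # support; adding it here costs one small class, not a vendor upgrade.
    def provision(self, size_gb: int) -> str:
        return f"{size_gb}GB volume on flash array"

def provision_storage(adaptor: StorageAdaptor, size_gb: int) -> str:
    # The engine never knows which concrete platform it is driving.
    return adaptor.provision(size_gb)
```
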

A further curious point I’ve noticed is that the orchestration engines in the infrastructure management/ITSM space typically each implement their own process description language, requiring yet more non-transferable specialist skills, rather than using a standardised language such as BPMN. I totally understand BPMN has its flaws, but is this really necessary beyond creating unnecessary lock-in? The vendors may not agree, but I for one can’t wait for the day that vendors are punished for not complying with well-established standards.

On the build side of the fence there are two predominant options. One is to choose from a number of embeddable open source process runtime components, such as jBPM or Activiti, which can quite easily provide a solid foundation for an in-house platform. These components typically tend to be standards-based, employing BPMN as their process description language of choice, and most seem to be written in Java. Some even include a reasonable UI which can be used as a base for a proper management UI. Being open source, there is even the option to modify the runtime component itself. Adaptors can be realised relatively easily, and it is often the case that a mere 20-30% of the platform’s API is actually needed to build one, as the rest is not required for the use-cases being satisfied.
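The "20-30% of the API" observation can be sketched as follows. The platform API below is entirely hypothetical, standing in for a large vendor API surface; the adaptor wraps only the handful of calls the orchestration use-cases actually require:

```python
# Illustrating the "20-30% of the API" point: the full platform API exposes
# many operations, but the adaptor wraps only the few the orchestrations need.

class FullPlatformApi:
    """Stand-in for a large vendor API surface (dozens of calls in reality)."""
    def create_vm(self, name): return {"name": name, "state": "running"}
    def delete_vm(self, name): return {"name": name, "state": "deleted"}
    def get_vm(self, name): return {"name": name, "state": "running"}
    def resize_vm(self, name, cpus): ...
    def snapshot_vm(self, name): ...
    def migrate_vm(self, name, host): ...  # ...and many more, unused here

class VmAdaptor:
    """The orchestration engine only ever needs create, status and delete."""
    def __init__(self, api: FullPlatformApi):
        self.api = api
    def create(self, name): return self.api.create_vm(name)["state"]
    def status(self, name): return self.api.get_vm(name)["state"]
    def delete(self, name): return self.api.delete_vm(name)["state"]
```

Keeping the adaptor surface this small is also what makes it cheap to swap the underlying platform later.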

Given the above, there seems to be little point in choosing the other option, which is engineering an engine completely from the ground up, as this would be unnecessary effort in most cases. As is very typical, it would only be worth looking at if there are totally unique requirements which warrant such development, accompanied by a very good reason not to modify an open source component to achieve their realisation – another subject for a later post.

It is also necessary to be very clear about what is really required from the orchestration platform. Ask questions such as: How are orchestrations triggered? How important are manual tasks and referral mechanisms? Would in-flight process visualisation provide any value? Is managing jeopardy important? What sort of reporting would be truly useful for continual process improvement? Is rapid construction of new orchestrations important? Would reuse of common automation steps across a number of orchestrations provide any real value?
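To make one of those questions concrete, here is a sketch of what "managing jeopardy" might look like: flagging a running step that has consumed most of its expected duration, so an operator can intervene before the overall orchestration misses its SLA. The threshold and field names are illustrative assumptions:

```python
# A sketch of jeopardy management: flag a step once it has consumed a given
# ratio of its expected duration. Threshold and names are illustrative.
from datetime import datetime, timedelta

def in_jeopardy(step_started: datetime, expected: timedelta,
                now: datetime, warn_ratio: float = 0.8) -> bool:
    """True once a running step has used warn_ratio of its expected time."""
    return (now - step_started) >= expected * warn_ratio

started = datetime(2014, 1, 1, 9, 0)
expected = timedelta(minutes=30)
in_jeopardy(started, expected, datetime(2014, 1, 1, 9, 10))  # still healthy
in_jeopardy(started, expected, datetime(2014, 1, 1, 9, 25))  # flagged
```
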

The answers to these questions vary significantly from use-case to use-case, and will invariably have a substantial influence on the choice of what to buy, or on how the backlog of features is prioritised if the approach is to build.

