Thinking About the People in Process Automation

I had a very interesting discussion recently that got me thinking about how to execute the move from largely manual, workflow-engine-managed processes to largely or fully automated ones. This topic raises a number of interesting questions about where the human element fits into process design and the evolution towards full automation.

Automated processes, in most and hopefully all cases, comprise sequences of fine-grained tasks. This level of granularity enables reuse of these tasks across different processes, as well as across other functions of the enterprise’s architecture, to deliver a service-orientated architecture. Fully automated processes are, from a purist’s perspective, designed for the utmost elegance and maximum efficiency at the expense of readability and ease of manual fall-back. This is because only expert technicians should ever have to read them, and fall-back should be a rare event.
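As a minimal illustration of that granularity (plain Python rather than any particular engine or orchestration framework, with entirely hypothetical task and process names), the same fine-grained tasks can be composed into more than one automated process:

```python
# A sketch only: fine-grained tasks defined once and reused across two
# different automated processes. All task and process names are hypothetical.

def validate_order(ctx: dict) -> dict:
    ctx["validated"] = True          # fine-grained: does one thing only
    return ctx

def reserve_stock(ctx: dict) -> dict:
    ctx["stock_reserved"] = True
    return ctx

def raise_invoice(ctx: dict) -> dict:
    ctx["invoiced"] = True
    return ctx

# Two processes reuse the same task library rather than duplicating logic.
NEW_ORDER_PROCESS = [validate_order, reserve_stock, raise_invoice]
AMEND_ORDER_PROCESS = [validate_order, raise_invoice]

def run(process, ctx):
    for task in process:
        ctx = task(ctx)
    return ctx

print(run(NEW_ORDER_PROCESS, {"order_id": 42}))
print(run(AMEND_ORDER_PROCESS, {"order_id": 43}))
```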

Organisations, however, are not designed such that every atomic task can be allocated to a team dedicated to that task and that task alone. Organisations are designed such that individuals with skills in a related set of areas are put together into teams with the duty to fulfil a broad set of related activities. Organisational design therefore has a major influence on process design, especially for largely manual processes, in that a task in the process is usually an invocation of a team to perform a set of activities falling under its remit, dependent on the previous task (and therefore team) and a prerequisite for the next task (and therefore team). In some cases additional inefficiency is intentionally introduced, for reasons such as ensuring a team is not repeatedly called upon in the same process, and making the process easier to read and understand by the non-technicians who have to execute it. The motivation for the latter is to counteract the fact that such processes are more prone to issues due to the human element. Whilst a theoretical ideal, it simply isn’t practical for a team to receive a task and complete it, only to then receive the next task in the sequence in the same process instance – even suggesting this to a team will likely result in a tirade explaining why it’s operationally unviable.

I don’t think that any process is really established without a view to automating as much as is reasonably possible as soon as it is economically sensible to do so. Whilst many processes have this target, many take quite some time to receive even their first automation. It is important, however, that one eye very much remains on this target and that a view is quickly established as to when and how this is actually going to happen.

I also don’t feel it is unreasonable to design processes that cannot be automated immediately but will be automated in very short order: they are built to accept an operationally unviable model in a short-term tactical phase, with a view to near-immediate evolution without changing shape. Short order here must absolutely mean short order for this to be in any way justifiable.

The other theoretical ideal, of rapidly iterating on processes to automate them, is very dependent on integrations being delivered on a regular basis, and fits nicely with the approach of designing a process for automation in the first place. In reality, though, integrations are seldom delivered with such regularity, taking the rapid out of rapid iteration.

So what does this all mean? Long running complex processes are often largely manual and tend to stay that way for a while. How should these processes be designed and then be transitioned to automated processes?

Such processes should be built to maximise readability and simplicity when invoking teams within an organisation, to counteract the errors that the human element may bring. Automating such processes without changing their shape will lead, from a purist’s standpoint, to inefficiency (and a lack of elegance) compared to processes designed to be automated right off the bat. To make these processes efficient (and hence elegant) from this standpoint means revolution rather than evolution. Revolution means a not insignificant re-engineering effort which may not actually provide any benefit other than to appease a finger-wagging architect; if there is genuine value in this re-engineering effort then the business case will be clear.

It is also necessary to think carefully about those who actually execute an activity in a process when introducing automation. For example, if there is a task in a process which involves a team selecting a banana, peeling the banana and then eating the banana, the work instruction underpinning this task is always the same when the team is invoked. This makes decision making very simple, with the team having to blindly execute the same work instruction for every work item they receive. If a subsequent update to the process automates part of this task then, at best, when the task is made shorter by automating activities at its beginning or end, the team will for a period of time need to think about what they have to do, selecting the appropriate work instruction to execute depending on which process version the process instance represents. Until all instances of the older version of the process are flushed, this decision making can only lead to errors.
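To make that decision point concrete, here is a minimal sketch in plain Python rather than any workflow engine’s real API; the version numbers, task name and instruction text are all hypothetical:

```python
# A sketch only: once part of the task is automated, the team must check which
# process version a work item belongs to before they know which work
# instruction to follow. All names are hypothetical.

WORK_INSTRUCTIONS = {
    # Version 1: nothing automated - the team performs all three activities.
    ("handle_banana", 1): ["select banana", "peel banana", "eat banana"],
    # Version 2: selection automated upstream - the team starts later.
    ("handle_banana", 2): ["peel banana", "eat banana"],
}

def instruction_for(work_item: dict) -> list[str]:
    """Select the work instruction based on task and process version."""
    return WORK_INSTRUCTIONS[(work_item["task"], work_item["version"])]

# Until old instances are flushed, both versions arrive in the same work queue.
queue = [
    {"task": "handle_banana", "version": 1},
    {"task": "handle_banana", "version": 2},
]
for item in queue:
    print(f"v{item['version']}: {instruction_for(item)}")
```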

Worse still, if one of the middle activities is automated, causing the team to be invoked twice, the team must ascertain and execute one of several work instructions depending on which process version an instance belongs to and, in the case of the newer version of the process, which fragment they are being invoked for. The decision making becomes simpler once all instances of the older version of the process are flushed, but a decision still needs to be made, which can lead to issues in process execution. To counteract this, teams are often logically split into two sub-teams, each responsible for one of the new tasks, so that the one-team-for-one-task model remains – this brings about new inefficiencies because demand can no longer be serviced by one large pool of resource but only by two smaller ones.
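Extending the sketch above (again with entirely hypothetical versions, fragment ids and instructions), the work instruction now depends on both the process version and the fragment being invoked:

```python
# A sketch of the harder case: the middle activity ("peel banana") is automated
# in version 3, so the team is invoked twice per instance - once per fragment.
# Versions, fragment ids and instructions are all hypothetical.

WORK_INSTRUCTIONS = {
    ("handle_banana", 1, None): ["select banana", "peel banana", "eat banana"],
    ("handle_banana", 3, "select"): ["select banana"],  # first fragment
    ("handle_banana", 3, "eat"): ["eat banana"],        # second fragment
}

def instruction_for(work_item: dict) -> list[str]:
    """The team now needs the process version AND the fragment to decide."""
    key = (work_item["task"], work_item["version"], work_item.get("fragment"))
    return WORK_INSTRUCTIONS[key]

print(instruction_for({"task": "handle_banana", "version": 1}))
print(instruction_for({"task": "handle_banana", "version": 3, "fragment": "eat"}))
```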

Rapid iteration on a process implementation can only serve to worsen this problem, as the work instructions are constantly in flux, with the additional decision making and organisational change just introducing further propensity for error. A better idea may be to execute this as iterations which minimise fragmentation of a team’s work (and often their morale too, when they are rebuked for the errors made), seeking to reduce the team’s work by having them start later or finish earlier. Although this comes at the cost of slowing down the iterations, it maintains better quality.
