Today John and I sat down together to do some final testing of his decision table implementation. As so often happens, the problem was with the data (data is the root of all evil), not the software.
During our obligatory lunch pizza I mentioned to him that I think programming with and for workflows differs from "normal" programming.
As a "normal" programmer, if I change code, I can compile it, test it, and run it, and if it does what it should, that's fine. In other words, I am pretty free to change all aspects of my system. Granted, I need to make sure that if I change the API, I also change the implementations that use it, but the whole process is pretty much independent of a time component.
When you deal with workflows, the picture changes profoundly. Why? Because everything you do needs to be done in such a way that long-running processes still work.
At first that sounds pretty trivial, even logical. But if you work on a project that has processes running for a year or longer, the impact of this requirement is quite massive. It means that whatever you do (to improve the system, to fix a bug), part of the system will still need to run with the previous, maybe buggy, logic. It means you cannot just change things; you need a way to know what you are allowed to change, and what not.
It would be nice if a system understood that a "normal" programmer is in a different state of mind, and provided mechanisms to protect her. For instance, once you launch a flow, all definitions could be versioned, so that the programmer is free to change them later without affecting the running system.
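To make the idea concrete, here is a minimal Python sketch of such a mechanism. This is my own illustration, not any real engine's API: the engine deep-copies a definition at launch time, so the "current" definition stays freely editable while running instances keep the snapshot they were started with.

```python
import copy

class Engine:
    """Toy engine: definitions are snapshotted at launch, so later
    edits never affect flows that are already running."""

    def __init__(self):
        self.definitions = {}  # name -> current, freely editable definition
        self.instances = {}    # instance id -> frozen copy taken at launch

    def register(self, name, definition):
        self.definitions[name] = definition

    def launch(self, name):
        instance_id = len(self.instances)
        # deep copy: the instance keeps the definition as it is right now
        self.instances[instance_id] = copy.deepcopy(self.definitions[name])
        return instance_id

engine = Engine()
engine.register("review", {"steps": ["draft", "approve"]})
pid = engine.launch("review")

# the programmer changes the definition after launch...
engine.definitions["review"]["steps"].append("archive")

# ...but the running instance is unaffected:
print(engine.instances[pid]["steps"])  # ['draft', 'approve']
```

The deep copy is the whole trick: without it, the instance and the registry would share one mutable object, and every later edit would leak into running flows.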
Alternatively, instead of versioning, a tool could warn me that there are still running processes using a definition I have just changed.
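Such a warning tool could be as simple as a lookup over the engine's list of running processes. A hypothetical sketch (the data layout is invented for illustration, not taken from any real engine):

```python
# Hypothetical snapshot of the engine's running processes.
running = [
    {"pid": 1, "definition": "review"},
    {"pid": 2, "definition": "billing"},
]

def warn_if_in_use(changed_definition):
    """Before deploying a change, list running processes that still
    reference the definition being changed."""
    affected = [p["pid"] for p in running
                if p["definition"] == changed_definition]
    if affected:
        print(f"warning: {len(affected)} running process(es) still use "
              f"'{changed_definition}': {affected}")
    return affected

warn_if_in_use("review")  # warning: 1 running process(es) ... [1]
```

A pre-deploy check like this does not make changes safe, but it at least tells the programmer when she is about to pull the rug out from under a live process.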
In the context of OpenWFE, workflow definitions are versioned: each workflow you launch has a copy of its definition stored with the engine. The same is not true, however, for (sub-)processes defined to be loaded at the startup of the engine (a library). These are not versioned; they are valid as long as the engine runs (and they can be overwritten at will, since in OpenWFE's Scheme-like semantics a function/method/flow-definition is the value of a variable, so you can change that value at runtime). Thus, if you change them and restart the engine, running workflows use the new version of the library, not the one that was present when the workflow was launched. So there is a difference between a sub-process defined in a "normal" workflow definition and one defined in a library, even when they are syntactically equivalent.
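The library behavior boils down to late binding: the sub-process is looked up by name each time it is called, so rebinding the name changes what already-running flows do. A tiny Python sketch of that semantics (my own illustration, not OpenWFE code):

```python
# A "library": each sub-process is just the current value of a variable.
library = {"notify": lambda who: f"email {who}"}

def run_step(name, arg):
    # Late binding: the definition is resolved at call time,
    # not at launch time.
    return library[name](arg)

print(run_step("notify", "alice"))           # email alice

# "Overwriting" the library definition at runtime...
library["notify"] = lambda who: f"sms {who}"

# ...changes the behavior of every flow that calls it from now on:
print(run_step("notify", "alice"))           # sms alice
```

Contrast this with a definition copied at launch: there, the lookup happens once and the result is frozen; here, every call sees whatever the variable holds right now. That is exactly the difference between the two kinds of sub-process described above.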
This difference is exactly what you need to take into account when you start working with workflow engines. A workflow programmer needs to be in a "workflow state of mind", aware that there might be running processes that depend on previous versions of whatever he is changing.