Wednesday 30 January 2008

SOA Governance: WS-CDL meets Policy

Governance seems to be a pretty hot topic right now. Everyone seems to be focusing on it: IBM, Oracle and Microsoft have portals dedicated to it and offer solutions, and I noticed that Mark Little, of Redhat and JBoss fame, has been blogging on it recently ("Tooling and Governance"). All very interesting, in particular Mark Little's blog.

The OMG has been firing up on the topic of governance too. Dr Said Tabet, co-founder of RuleML, gave a talk recently at the OMG conference entitled "Regulatory Compliance: Key to the success of BPM and SOA". This talk was interesting for a number of reasons. Back in 2004 Dr Tabet, Prof Chakravarthy, Dr Brown and I presented a position paper at a W3C workshop on policy, "A generalized RuleML-based Declarative Policy specification language for Web Services". In this paper, written before WS-CDL came into being, we explored the idea of policy attached to WSDL and how the two relate. Our thinking has since sharpened, and the link between WS-CDL and RuleML is more meaningful, more valuable and more appropriate; more generally, it is the link between a global description of an SOA solution and policy statements attached to that description.

All of this is great stuff, but I feel a little bemused that few if any (none that I could find) have really gone back to basics and understood, and certainly explained, what governance means. Instead we are left with market-speak such as "SOA governance is an extension of IT governance".

What we need to do first of all is understand just what we might mean by governance. Then we might understand what it is we need, rather than join the bandwagon and talk about SOA governance either in a purely technical way or in an abstract way that is not related to requirements and solutions.

It is surprisingly difficult to arrive at a consensus definition of what governance means. Online dictionary definitions do little to shed light on the matter. Having searched the web, the one I liked most, and which conveniently fits what I want to talk about, is "The systems and processes in place for ensuring proper accountability and openness in the conduct of an organization's business." I was glad to see that the problem of understanding what is meant by governance is not confined to me alone. One of the early movers in the space, Actional, thought so too.

I like this definition because, as a serial entrepreneur, I have been subjected to governance as it pertains to the running of a company, and this pretty much describes one of the key responsibilities of a board of directors.

A system of processes would imply that there are documented procedures that one should adhere to. Accountability might be seen as a measure against those documented processes, and openness might be seen as a means of conveying the levels of compliance (an interesting word, given the talk Dr Tabet gave) with those processes.

How can we apply this to an SOA solution? Obviously in an organisation of people records are kept and managers manage according to some rules. Should rules be broken, documentation is provided to show where, how and why they were broken, and if needed corrective action can be instigated. As people we are good at doing this sort of thing, because we are flexible and can deal with missing information much better than computer systems can.

Computer systems like clarity, and ambiguity is not something that they deal with well at all. So we need some way of describing unambiguously the procedures and the rules against which we can govern. Without unambiguous procedures and rules we have no hope of governing anything computationally at all.

Governance, as it may be applied to an SOA solution, must to my mind start from some understanding of what the solution is supposed to do. It matters not a jot if we govern a solution that fails to meet its requirements, because the governance will be abstract and have no grounding in the business to which it is being applied.

If a business fails to meet its basic requirements, regardless of what they are, the board of directors would be failing in their duty of governance if they did not take action to steer the company in a direction in which it did meet its requirements, however that is achieved.

It seems to me that it would be a good idea to be able to demonstrate that an SOA solution meets its requirements.


Requirements applied to any system (SOA is irrelevant here) are often divided into functional and non-functional. Functional requirements tend to give rise to data models (e.g. a bid in an auction and all of its attributes) and to behavior (e.g. the auction system accepts bids and notifies bidders, and accepts items for auction and notifies sellers). The former is often captured in a model using UML or XML Schema and so on. The latter is often captured as a set of functional signatures (e.g. newBid(Bid), notify(LeadingBid), notify(LosingBid)) and sequence diagrams showing how things plug together.
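To make that concrete, here is a minimal sketch in Java of what such signatures might look like; the type names are hypothetical and purely illustrative:

```java
// Hypothetical data-model types derived from the static functional requirements.
record Bid(String bidder, String item, long amountInCents) {}
record LeadingBid(Bid bid) {}
record LosingBid(Bid bid) {}

// The functional signatures for the auction behaviour described above.
interface AuctionService {
    void newBid(Bid bid);            // the auction system accepts bids
    void notify(LeadingBid leading); // ...and notifies the leading bidder
    void notify(LosingBid losing);   // ...and notifies bidders who have been outbid
}
```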

Non-functional requirements are constraints over the functional requirements. They might relate to the technology used (e.g. use JBossESB, WS-Transaction, WS-RM and WS-Security) or they might relate to performance (e.g. a business transaction must always complete within 2 minutes). In all cases we can think of these as policy statements about a solution. Those that relate to performance are dynamic policy statements, and those that do not are static policy statements.
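As a sketch of the distinction (the class names are mine, not from any standard), static and dynamic policy statements might be modelled like this:

```java
import java.time.Duration;

// A policy statement is a constraint over the functional requirements.
sealed interface Policy permits StaticPolicy, DynamicPolicy {}

// Static policies are fixed facts about the solution, checkable before it runs.
record StaticPolicy(String subject, String requirement) implements Policy {}

// Dynamic policies constrain runtime behaviour and must be checked per transaction.
record DynamicPolicy(String subject, Duration limit) implements Policy {}

class ExamplePolicies {
    static final Policy USE_WS_SECURITY =
        new StaticPolicy("message transport", "use WS-Security");
    static final Policy TRANSACTION_TIME =
        new DynamicPolicy("business transaction", Duration.ofMinutes(2));
}
```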

Good governance of an SOA solution is governance that can show, for any transaction across it, that the transaction complies with the requirements. That is, it does what it is supposed to do. More specifically, we can state that for any transaction spanning multiple services in an SOA solution, all messages flowing between services are valid, meaning that they are valid against some model of the data and that they occur in the correct order (e.g. payment follows ordering). Furthermore, the flow of messages completes at the user within the agreed SLA for that user (e.g. it all took less than 2 minutes).
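In outline, a monitor embodying this definition might look like the following sketch; the expected order would come from some global description of the solution, and all names here are illustrative:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// A per-transaction governance monitor: checks that messages arrive in the
// prescribed order and that the whole flow completes within the agreed SLA.
class TransactionMonitor {
    private final List<String> expectedOrder; // e.g. ["order", "payment", "delivery"]
    private final Duration sla;                // e.g. two minutes end to end
    private final Instant started = Instant.now();
    private int next = 0;

    TransactionMonitor(List<String> expectedOrder, Duration sla) {
        this.expectedOrder = expectedOrder;
        this.sla = sla;
    }

    // Called for every message observed flowing between services.
    void observe(String messageType) {
        if (next >= expectedOrder.size() || !expectedOrder.get(next).equals(messageType)) {
            throw new IllegalStateException("out-of-order message: " + messageType);
        }
        next++;
        if (Duration.between(started, Instant.now()).compareTo(sla) > 0) {
            throw new IllegalStateException("SLA breached for this transaction");
        }
    }
}
```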


Figure 1: Example of a sequence diagram

Today's governance solutions lack a description of what is supposed to happen. They have no notion of correct order. This is probably why most solutions are siloed and concentrate on individual services in an SOA solution rather than looking at the big picture.

If you imagine a set of services that make up an SOA solution, each may have constraints upon it. The auction system may have a constraint that bidders are notified within 30 seconds of being outbid. Payment processing must complete within 1 minute of the auction finishing, and so on. The entire process, however, is rarely described, and so the effect that the individual policy statements on each service have across the entire solution is lost. There is nowhere to say it, and no description of the entire system against which to attach it.

One might suggest that BAM (Business Activity Monitoring) solves such problems. But it does not, because it has no view of the underlying requirements of the system and so cannot determine correctness of behavior.

If we have a description of an SOA solution in terms of how services interact (the messages they exchange and the conditions under which exchanges occur), and that description is unambiguous, then we can start to see how governance can be achieved. Such a description is the basis for the procedures against which we measure governance. Governance is always measured; otherwise we cannot say whether we have good or bad governance. If such a description existed we could attach policy statements to it, some static and some dynamic. What this gives us is an overall context for governance, enabling us to say that one transaction was correct and, importantly, that another was not, and to examine why not.



Figure 2: Example of a description of an auction system (top level only)

This is why WS-CDL as a description is so powerful. It can be validated against sequence diagrams to ensure it models the functional requirements. It can be used to drive construction and, with policy attachments, can ensure that the right technologies are employed. At runtime it can be used to ensure that messages are correctly sequenced and are flowing within the allowable tolerances set by the attached dynamic policies.



Figure 3: Example of policy attachment (to the bidding process)

Attaching policies to WS-CDL provides an overall governance framework in which we can be sure that exceptions are better identified and the solution as a whole is steered more effectively, achieving good governance across the board. Without it, or at least without an overall description against which policies are attached, we are simply navigating without a compass or stars to guide us. Governance then becomes a matter of guesswork rather than something concrete that we can measure.

Friday 25 January 2008

Is the Pi4SOA toolsuite supportive of RM-ODP?

I have been looking at RM-ODP of late, at John Koisch's recommendation. I must admit I find it very interesting indeed. Here are some extracts that I want to blog about:

RM-ODP defines five viewpoints. A viewpoint (on a system) is an abstraction that yields a specification of the whole system related to a particular set of concerns. The five viewpoints defined by RM-ODP have been chosen to be both simple and complete, covering all the domains of architectural design. They are the enterprise, information, computational, engineering and technology viewpoints.

RM-ODP also provides a framework for assessing system conformance. The basic characteristics of heterogeneity and evolution imply that different parts of a distributed system can be purchased separately, from different vendors. It is therefore very important that the behaviours of the different parts of a system are clearly defined, and that it is possible to assign responsibility for any failure to meet the system's specifications. RM-ODP goes further and defines where these conformance points would sit.

What is interesting here is to match up what we are trying to do at the Pi4 Technologies Foundation with WS-CDL-based tooling. Of RM-ODP's five viewpoints, the one of interest here is the "computational viewpoint". It is defined as:

"the computational viewpoint, which is concerned with the functional decomposition of the system into a set of objects that interact at interfaces - enabling system distribution; "

The key phrase is "a set of objects that interact at interfaces", because this is exactly what WS-CDL can express, and it does so in an architecturally neutral and service-neutral way from a global perspective. Thus WS-CDL would appear to be a really good fit as a language to support the "computational viewpoint".

The other area of interest lies in the notion of conformance and, essentially, service substitutability: the ability to "purchase separately" different parts of the system whilst maintaining overall behavioral correctness, so that the system as a whole continues to do what it is supposed to do from a computational viewpoint. Again, this is very much where the pi4soa tool suite and the work of the Pi4 Technologies Foundation are heading.

The ability to test a computational viewpoint against higher-order viewpoints is the basis of ensuring that the computational viewpoint is correct. In the pi4soa tool suite this is done using example messages from the information viewpoint (as information types) and scenarios (sequence diagrams) that represent flows. The model is checked for conformance, which in this case means that the model meets the requirements used to check it. Once this is done we can ensure that the behavior of the participating services is correct, by monitoring them if they already exist and by generating their state behaviors if they do not.
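As a rough illustration of the idea (this is not the pi4soa API, just a sketch of the principle), design-time conformance amounts to replaying each scenario against the exchanges the model permits:

```java
import java.util.List;

// Design-time conformance: does a scenario (a sequence diagram) replay cleanly
// against the ordered exchanges that the choreography model permits?
class ConformanceChecker {
    // One expected exchange in the model: who sends what to whom.
    record Exchange(String fromRole, String toRole, String messageType) {}

    static boolean conforms(List<Exchange> model, List<Exchange> scenario) {
        if (scenario.size() != model.size()) {
            return false; // missing or extra exchanges
        }
        for (int i = 0; i < scenario.size(); i++) {
            if (!scenario.get(i).equals(model.get(i))) {
                return false; // wrong exchange, or the right one in the wrong order
            }
        }
        return true;
    }
}
```

A real model is richer than a single linear sequence, of course, but the principle is the same: each scenario must be a permitted path through the model.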

In short, the pi4soa tool suite provides a sound basis for a computational viewpoint language and much in the way of automated conformance checking, both at design time and at runtime. Thus the tool suite supports RM-ODP.

Comments please .....

Sunday 20 January 2008

A Methodology for SOA

Methodologies

A methodology is a set of measurable and repeatable steps by which some objective can be met. In the case of software systems it needs to reflect an understanding of the roles that people play in the delivery of a system, and of the artefacts they produce, which can be used to measure the system against its requirements, whatever those may be. The measurement of artefacts is essential to tracking progress towards the common goal: delivering a solution that meets a specific set of requirements. Immediately we can see that, in an ideal world, if we can measure the artefacts against a set of requirements we can determine whether the system does what it is supposed to do.

The methodology we describe here does not deal with the wider issues of scope and requirements capture; these are best left to existing approaches such as TOGAF. Rather, we concentrate on the delivery of suitable artefacts, how they are used by the different roles, and how they can be used to measure the system at various stages from design to operation and to guide its construction.

Roles

The roles played out in the delivery of a software system start with the business stake-holder and involve business analysts, architects, implementers, project managers and operations managers. These roles are a reflection of tried and tested methodologies used in construction projects. In a construction project there is also a business stake-holder, namely the person or persons who expressed a need. There is an architect who captures those needs and writes them down as a set of drawings, there are structural engineers who add derived requirements for load-bearing issues, there are builders who construct, there are project managers who manage the construction and report to the business stake-holders, and there are the people who maintain the building day to day. The roles in a methodology for software construction are analogous, the aim being that what is delivered measurably meets the requirements.

To carry the analogy further we can list those roles played out in construction projects and list the equivalent roles in software construction.



In explaining what the equivalent software construction roles do, we also describe the relationships they have with each other.

The business stake-holder, the operations manager and the business analyst work together to document the requirements. The business stake-holders articulate the business requirements for meeting the key business goals, and the operations manager articulates the requirements from a day-to-day operational perspective. The business analyst's role is to elicit requirements from the business stake-holder and the operations manager and to record them in a way that enables the business stake-holder and operations manager to agree to them and the enterprise architect to model them.

The enterprise architect liaises with the business analyst to model the requirements and fashion them into two artefacts: a dynamic model that describes the flow between services or applications, and a static model that describes the data requirements that the dynamic model uses.

The technical architect liaises with the enterprise architect (often they are the same person, but this is not always the case) to ensure that the technical constraints (e.g. which technologies can be employed, what the expected throughput might be and what the user response times should be) can be met.

The implementers liaise with both the technical and enterprise architects to construct the necessary pieces that make up the system, be they services, applications and so on.

The project manager has to manage the way in which all of the roles above meet the requirements agreed with the business stake-holders, and to ensure that the system is delivered on time and on budget.

The business stake-holder and the operations manager also liaise with the enterprise architect to determine how the system is to demonstrate its acceptability to the users. This may be done in part by showing that the system implements the documented requirements and in part by user acceptance testing, the latter being better suited to issues such as usability. In the case of usability nothing can substitute for user testing, so no automated mechanism can be employed. Non-functional requirements, on the other hand, may be checked in the context of some dynamic model of the system to determine consistency, and so to show where the system as a whole may fail to meet one or more of these requirements, or under what circumstances such failure will occur.

Artefacts

We have already alluded to one artefact, namely the documented requirements. Whilst we do not focus on how these are gathered, we do need to focus on what is recorded. By so doing we can use them to measure other artefacts as we drive towards the delivery of a system. Using the requirements as a benchmark for success will ensure that the delivered system does what it is supposed to do, and so meets the overall business goals of a project.

Requirements
Requirements are of two types and, as we shall see, come in three forms. There are functional requirements, such as “the system receives orders and payments and emits products for delivery based on the orders and payments” and “orders occur before payments, which occur before delivery”, and non-functional requirements, such as “the time taken between an order being placed and the goods delivered is not more than 3 days” and “the system needs to be built using JBoss ESB and Java”.

The non-functional requirements can be captured as a set of constraints or rules that the system needs to obey in order to deliver a good service. An example is the end-to-end time from a customer placing an order to the order being satisfied. A more granular example is the time it takes for a customer to place an order and receive a response that the order has been accepted. In either case these can be represented as rules using a language such as RuleML, although other rule and constraint languages may well suffice.

The functional requirements can be decomposed into static and dynamic functional requirements.

Static functional requirements deal with data: what is needed to describe a product, a customer, an order and so on. The dynamic functional requirements describe how such data, as artefacts in their own right, are exchanged between different parts of a solution. For example, how a product might be sent to a shipper as well as to a customer, and how a customer sends an order to an order system, which might give rise to a product being shipped to the customer.

The dynamic functional requirements build upon the static functional requirements to lay out responsibilities and to order the exchanges of artefacts between the different areas of responsibility. For example, an order system is responsible for fulfilling orders, a warehouse system is responsible for stocking and shipping products to a shipper or a customer, and a payment system is responsible for dealing with payments and updating the books and records. Each service (e.g. the order system, the warehouse system and the payments system) has specific responsibilities but needs to co-operate with the others to deliver a business goal, such as selling products for money. The dynamic functional requirements describe, as examples or scenarios, who does what (their responsibility), whom they exchange information with (the message exchanges) and when they do so (the ordering of those message exchanges).

It is the responsibility of the business analysts to gather and document these requirements. In so doing they need to deliver them as concrete artefacts, and to do so they need tool support; although it should be possible to do all of this with pen and paper, tools make their lives easier and more productive.

A tool for capturing the non-functional requirements should provide an editor that enables rules and constraints to be written in English and then turned into RuleML. This would enable RuleML-based consistency checking to determine whether any of the constraints or rules are inconsistent, and to provide feedback as to where such inconsistencies lie. For example, one might have a constraint that says the end-to-end time from the point a user buys a product to receiving that product is less than or equal to 2 days, while another says the point at which the order is acknowledged is less than or equal to 3 days after purchase. A consistency checker can easily pick up such errors, in which a wider constraint is breached by a narrower one.
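A consistency checker for such timing rules is straightforward in principle. The sketch below (hypothetical names again) flags exactly the error described, where a narrower constraint is allowed more time than the wider constraint that contains it:

```java
import java.time.Duration;

// A timing constraint: the elapsed time between two observable events
// must not exceed the given limit.
record TimingConstraint(String fromEvent, String toEvent, Duration limit) {}

class ConstraintChecker {
    // If 'narrower' covers a sub-interval of 'wider', its limit must not
    // exceed the wider limit, or the wider constraint can never be met.
    static boolean consistent(TimingConstraint wider, TimingConstraint narrower) {
        return narrower.limit().compareTo(wider.limit()) <= 0;
    }

    public static void main(String[] args) {
        var endToEnd = new TimingConstraint("buyProduct", "receiveProduct", Duration.ofDays(2));
        var acknowledge = new TimingConstraint("buyProduct", "acknowledgeOrder", Duration.ofDays(3));
        System.out.println(consistent(endToEnd, acknowledge)); // prints false: inconsistent
    }
}
```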

Likewise the static functional requirements need to be supported with an editor to create forms that can be used to create example messages, perhaps saving the messages in different formats including XML. The XML messages then become artefacts in the same way as the non-functional RuleML artefacts.

And finally the dynamic functional requirements need tooling support to enable the business analysts to describe the flows between responsible services as sequence diagrams that can refer to the static functional requirement and non-functional requirement artefacts.

In short the base-artefacts from requirements gathering would be as follows:

  • a set of RuleML rules and constraints

  • a set of example XML messages

  • a set of sequence diagrams over the messages, rules and constraints


Providing easy-to-use tools that support these artefacts enables the business analyst to gather requirements rapidly in a form that is easy to review with the business stake-holders, can be measured for consistency, and can be passed to the architects to draw up suitable blueprints that meet the requirements.

Models
When the business analyst hands over the artefacts they have created, the role of the architect is to create static models of the data and dynamic models of the behaviour that meet all of the requirements. Collectively we may consider these models the blueprint for the system. The artefacts the business analyst hands over serve not only as input to the static and dynamic models but also as a means of testing their validity, and so of measuring them against the requirements.

A static model needs to describe the data that the system uses as input and emits as output. Such a model needs to constrain the permissible values as well as the permissible combinations of data elements. There are many tools available which do this today; some are based on UML and some on XML Schema. The ones we like can do both, and this provides a reference model for the data that can be tested against the documented static functional requirements to ensure that the model captures everything that has been previously agreed.
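At this point the example XML messages handed over by the business analyst can be checked mechanically against the static model. A minimal sketch using the standard Java XML validation API, assuming hypothetical file names order.xsd and example-order.xml:

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

// Validate an example message from the requirements against the static model.
class StaticModelCheck {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema staticModel = factory.newSchema(new File("order.xsd"));
        Validator validator = staticModel.newValidator();
        validator.validate(new StreamSource(new File("example-order.xml")));
        System.out.println("example message conforms to the static model");
    }
}
```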

The artefacts from modelling are as follows:

  • a data model that represents all of the documented requirements for data

  • a dynamic model that represents all of the documented dynamic requirements


Validating models against requirements is not a one-time operation. It certainly ensures that the delivered system represents all of the necessary data, but the model also acts as a governance mechanism when the system runs, as data can be validated on the fly against it, and it can be used to control and guide further modifications of the system as it evolves.

The dynamic model needs to describe what data is input and when, and what the observable consequences are. For example, a dynamic model might need to describe that an order is input and then a check is made against the inventory to ensure the goods are available; if they are available then payment can be taken, and if payment is accepted then the goods are scheduled for delivery. Of course we might need to model this as a repeating process, in which a customer may order several goods in one go, or order again after each check against the inventory, and so on. Clearly any dynamic model needs to be able to describe repeating processes, things that happen in sequence and things that are conditional.
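A toy abstract syntax makes the point. The names below are mine and purely illustrative, but any dynamic model language needs at least these three shapes, and WS-CDL provides equivalents of each:

```java
import java.util.List;

// A toy abstract syntax for dynamic models: message exchanges combined in
// sequence, conditionally, or repeatedly. All names are illustrative only.
sealed interface Flow permits Interaction, Sequence, Conditional, Repeat {}
record Interaction(String fromRole, String toRole, String messageType) implements Flow {}
record Sequence(List<Flow> steps) implements Flow {}
record Conditional(String condition, Flow then, Flow otherwise) implements Flow {}
record Repeat(String whileCondition, Flow body) implements Flow {}

class OrderingModel {
    // The ordering example from the text: order, stock check, then either
    // payment and delivery scheduling or an out-of-stock notification,
    // repeated while the customer keeps ordering.
    static final Flow MODEL = new Repeat("customer still ordering",
        new Sequence(List.of(
            new Interaction("customer", "orderSystem", "order"),
            new Interaction("orderSystem", "inventory", "stockCheck"),
            new Conditional("goods available",
                new Sequence(List.of(
                    new Interaction("customer", "paymentSystem", "payment"),
                    new Interaction("orderSystem", "warehouse", "scheduleDelivery"))),
                new Interaction("orderSystem", "customer", "outOfStock")))));
}
```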

Validating dynamic models requires us to check our sequence diagrams against the dynamic model, to ensure that the model captures the described dynamic functional requirements of the system and so behaves according to them. As with the static data model, this is a process that can be repeated to ensure that the system behaves correctly as it evolves and when it is operational. The key aspect of such a dynamic model is that it is unambiguous and can be understood by applications, which may verify models against input data from sequence diagrams as well as against messages that flow through a running system. This provides behavioural governance of a system throughout its life-cycle.

WS-CDL provides a suitable language for describing dynamic models. It makes no assumptions about the underlying technology that might be used to build systems, it is unambiguous, and it is neutral in its description, so that it can describe how disparate services (e.g. legacy and new services) need to work together in order to deliver some business goal.

Services
The dynamic model only describes the order of the messages (data) that are exchanged between different services; it would describe the exchange of an order followed by the exchange of a payment and the subsequent credit check, and so on. It does not describe how credit checking is performed, nor how an order is processed. Rather, it describes when and what messages are exchanged.

To implement, or construct, the services we can use the unambiguous dynamic model to determine what the interfaces of each service should be. You can think of these as the functional contract that each service presents. We can also use the dynamic model to lay out the behaviour of the services, and so the order in which the functions can be invoked. Collectively this provides a behavioural contract to which the services must adhere to ensure that they do the right thing at the right time with respect to the dynamic model. Using the dynamic model in this way ensures structural and behavioural conformance to the model, and so to the requirements against which the model has been validated.
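For example, the behavioural contract for the ordering service above might reduce to a small state machine. This is a hand sketch of the kind of thing that could be derived from the dynamic model; the operation and state names are illustrative:

```java
// A behavioural contract derived from the dynamic model: the order in which
// a service's operations may legally be invoked.
class OrderServiceContract {
    private enum State { AWAITING_ORDER, AWAITING_PAYMENT, AWAITING_DISPATCH, DONE }

    private State state = State.AWAITING_ORDER;

    // Each operation is only legal in the state the dynamic model prescribes.
    void receiveOrder()   { advance(State.AWAITING_ORDER, State.AWAITING_PAYMENT); }
    void receivePayment() { advance(State.AWAITING_PAYMENT, State.AWAITING_DISPATCH); }
    void dispatchGoods()  { advance(State.AWAITING_DISPATCH, State.DONE); }

    private void advance(State expected, State next) {
        if (state != expected) {
            throw new IllegalStateException("operation invoked out of order in state " + state);
        }
        state = next;
    }
}
```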

Contractually the business logic need only receive messages, process them and ensure that the messages that need to be sent are available. The service contracts ensure that messages are received and sent as appropriate.

Validation of messages against the static model whilst the system is operational can be added as configuration rather than in code, and validation against the dynamic model can be added similarly. This ensures continual governance of running systems against models which have been validated against requirements, and so shows that the system itself conforms to the requirements. This level of governance is akin to the continual inspection that architects, business stake-holders and other parties may apply to construction projects.

Summary of the Methodology

The artefacts, the roles involved in their production, and the way in which the artefacts are used are summarised in the two tables below:

Table 1: Artefacts and their uses

  • RuleML rules and constraints: checked for consistency at design time; attached to the dynamic model as policies; used at runtime to monitor non-functional compliance

  • example XML messages: used to validate the static data model; used as templates for runtime message validation

  • sequence diagrams: used to validate the dynamic model against the documented dynamic functional requirements

  • static data model: guides construction; used to validate messages on the fly in the running system

  • dynamic model (WS-CDL): determines service interfaces and behavioural contracts; used to monitor message ordering at runtime

Table 2: Artefacts and roles

  • RuleML rules and constraints: produced by the business analyst; used by the architects and the operations manager

  • example XML messages: produced by the business analyst; used by the enterprise architect

  • sequence diagrams: produced by the business analyst; used by the enterprise architect

  • static data model: produced by the enterprise architect; used by the implementers and the operations manager

  • dynamic model (WS-CDL): produced by the enterprise architect; used by the technical architect, the implementers and the operations manager

Wednesday 9 January 2008

CDL meets JBoss ....

Everyone who knows me knows that I am a keen and avid supporter of WS-CDL in all of its guises. So I will not be blogging about its virtues today. Rather, the more interesting topic is that of Redhat and JBoss and their use of WS-CDL to promote what might be called "open processes". I shall deal with what's going on first and what "open processes" might mean second.

For several months Hattrick Software (and the Pi4 Technologies Foundation) have been engaged with Redhat's JBoss team, through Mark Little, to build the necessary integration between the current open source pi4soa project and JBossESB. The aim is to provide an out-of-the-box, easy-to-use and easy-to-understand way of creating large-scale SOA solutions that may incorporate existing services as well as new services yet to be constructed. The WS-CDL part of this, through the Pi4 Technologies SOA tool suite, supports a methodology - the one I have started blogging about and no doubt will continue to do so. This combination of technologies will reduce the time it takes to construct large-scale SOA solutions and ensure that they are correct with respect to the requirements used as input to design and build them.

I have been working with Mark Little on this project and we are all very excited because, possibly for the first time, we can see a road to delivering such systems that links requirements formally to design, and so validates design against requirements before a line of code is written. It extends this traceability from requirements all the way through to construction and operation of such SOA systems, ensuring that we can govern them against the requirements, be they functional or non-functional. To our knowledge this has never been done before. Our collective aim is to move SOA solution building away from being an art and towards being an engineering discipline.

"Open Processes" takes advantage of this discipline and ensures that the SOA blueprint that is the design model of a system becomes a key artefact in construction and operation of systems. By so doing we free the code from the plumbing and enable migration from and to differing platforms. It would make it easy to take an existing system and then port it to a new platform simply through a few button presses and clicks.

Imagine you have an existing SOA system built on a platform from the International Big Mega company. If you could retro-fit the SOA blueprint to the existing system you would bring it under the control of this key artefact. You can do this by constructing the SOA blueprint and monitoring the existing system against that blueprint. When you are happy that it has captured everything, you can switch generation and deployment to a new platform and so free the process from any vendor lock-in. Equally you can do it based on new requirements, applying the same technique to existing services in your current platform, thus ensuring that the new requirements are met and that the existing system continues to function even if you reuse some of the existing services. It brings service re-use to a new level and ensures processes (or systems) are really open.

We have had some success already, moving an existing system from Axis to JBoss. It took less than an hour to do, but then again we already had a SOA blueprint - which in and of itself took less than a week to construct.

We look forward to making this available. Not all of these features will be present first time around, but many will. We intend to provide an open source version and a licensable enterprise version so that you folks out there can try before you commit. When it is made available later this quarter we look forward to your feedback.