We’ve been talking about validating SOA Governance approaches for three years now, but surprisingly, we have found that very few enterprise IT shops of any serious scale are actually using them to their full potential at this point. I had lunch yesterday with one of our wily gurus on this topic, Ken Ahrens, and he aptly observed that the practice of SOA Governance just hasn’t kept up with the grand expectations we had for it. Why?
It may be that these companies haven’t seen any tactical value in that registry, which they picked up along with the rest of their SOA shopping spree. They are more concerned with the integration of their existing and new technology assets. They found that while they could put all of the Service descriptions and locations in one place, it didn’t add much incremental ROI if the developers in that department already knew about the Services they were planning to use.
You see, a lot of people like the concept of Governance and having Policies, but on its own that isn’t necessarily practical for the way businesses construct and leverage applications. That’s because SOA Governance without Validation doesn’t provide any assurance that you are actually meeting the business requirements set forth in the Policies. In essence, it is like posting a speed limit but not having the radar gun to ensure that drivers are obeying the rules of the road.
Think about the gap between what the BUSINESS is trying to do, and the actual IMPLEMENTATION of the technology that makes it happen. The further you are from the actual implementation logic, the more difficult true validation becomes:

As you can see here, the more layers of abstraction you have, the less likely it is that all of those layers will work together when you hook them up, and the harder it becomes to validate behaviors at other layers that may affect the one you are testing.
It can be a long road indeed to dig that deep. You might have a great idea of what you actually want to do, but that idea may not be grounded in a way that all of the teams are addressing. This is becoming even harder in today’s economy, where you have a lot of partnerships, acquisitions and outsourcing, and therefore very little control over the day-to-day activity of the development teams you are relying upon.
Being able to connect a UDDI registry, where you store everything in one place, with a strong Validation strategy can be a big advantage in this environment. But to make it work, that process has to be continuous. There are several types of Continuous Validation:
1. Checking continuously at Build Time: Every time you check in a new service, you automatically validate that it works before it becomes available as an asset (see the sketch after this list).
2. Continuously checking on a Scheduled Basis: We call this the “belt & suspenders” approach of making sure your applications are still working as described, since you may not know whether a service was changed by another party, or brought down by a performance issue or dependency problem.
3. Leveraging UDDI so that your BPM and Integration tools, and their associated tests, can find the appropriate services for the workflow they are validating. With UDDI v3, it's even easier to read your endpoints, and that lookup can be integrated with many different tools.
4. Reporting on Usage and Value: Figure out which services are popular, and are becoming key components of the SOA architecture. For organizations looking to "trim the fat", this gives the SOA team the knowledge of where to focus testing, capacity planning, and future integration and development labor.
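To make the first and third ideas above a little more concrete, here is a minimal sketch of a build-time check in Java. The endpoint, operation name, namespace and expected value are all hypothetical placeholders (in practice the endpoint would come from a UDDI v3 inquiry against your registry rather than being hard-coded); the point is simply that a new service gets exercised with a known request before it is published as an asset, and the build fails if the response is a fault or doesn’t contain what the Policy expects.

```java
import javax.xml.soap.*;
import java.net.URL;

public class BuildTimeServiceCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; in practice this would come from a UDDI v3
        // inquiry against the registry rather than being hard-coded here.
        URL endpoint = new URL("http://services.example.com/CustomerLookup");

        // Build a canonical request message for one known, representative case.
        MessageFactory mf = MessageFactory.newInstance();
        SOAPMessage request = mf.createMessage();
        SOAPBody body = request.getSOAPBody();
        SOAPElement op = body.addChildElement("getCustomer", "cust",
                "http://example.com/customer");   // hypothetical operation and namespace
        op.addChildElement("customerId", "cust").addTextNode("10001");
        request.saveChanges();

        // Call the service and fail the build on a SOAP Fault or a missing field.
        SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response = conn.call(request, endpoint);
        conn.close();

        if (response.getSOAPBody().hasFault()) {
            throw new AssertionError("Service returned a fault: "
                    + response.getSOAPBody().getFault().getFaultString());
        }
        if (!response.getSOAPBody().getTextContent().contains("10001")) {
            throw new AssertionError("Response did not echo the expected customer id");
        }
        System.out.println("Build-time check passed for " + endpoint);
    }
}
```

The same check, wired into a scheduled job rather than the build, covers the “belt & suspenders” case in the second item.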
All of the above are examples of how the Validation practice can add teeth to a SOA Governance effort. Ideally we catch issues at Change Time and validate each service as it changes, but we also need the additional safety of checking structural, behavioral and performance factors at runtime, and of reporting on the success of each effort to maximize efficiency. It’s not just about checking the WS-* protocols to make sure they are structurally correct.
Take for instance the idea of pure Web Services compliance. If you were to open up a registry and scan all of the services out there for WS-I and some flavor of WS-Security compliance, you would inevitably find only about 1% of the services compliant. It turns out the tools and processes that generate Web Services make a lot of custom decisions in constructing the WSDL and SOAP messages in order to be more robust and compatible with their own platform choices, not to mention the underlying databases and apps they are talking to.
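For illustration only, here is a rough sketch of one such scan. It checks nothing more than whether a WSDL’s SOAP bindings look like document/literal, which is a crude heuristic and nothing like a full WS-I analyzer, and the WSDL URLs are hypothetical stand-ins for whatever your registry returns.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class WsdlStyleScan {
    private static final String SOAP_WSDL_NS = "http://schemas.xmlsoap.org/wsdl/soap/";

    public static void main(String[] args) throws Exception {
        // Hypothetical WSDL locations; in practice these would come from the registry.
        String[] wsdlUrls = {
            "http://services.example.com/CustomerLookup?wsdl",
            "http://services.example.com/OrderStatus?wsdl"
        };

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);

        for (String url : wsdlUrls) {
            Document wsdl = dbf.newDocumentBuilder().parse(url);

            // Note any soap:binding declared as rpc style (not document/literal).
            NodeList bindings = wsdl.getElementsByTagNameNS(SOAP_WSDL_NS, "binding");
            for (int i = 0; i < bindings.getLength(); i++) {
                String style = ((Element) bindings.item(i)).getAttribute("style");
                if ("rpc".equals(style)) {
                    System.out.println(url + ": rpc-style binding, not document/literal");
                }
            }
            // Flag any soap:body whose use is something other than "literal".
            NodeList bodies = wsdl.getElementsByTagNameNS(SOAP_WSDL_NS, "body");
            for (int i = 0; i < bodies.getLength(); i++) {
                String use = ((Element) bodies.item(i)).getAttribute("use");
                if (!use.isEmpty() && !"literal".equals(use)) {
                    System.out.println(url + ": soap:body use=\"" + use + "\", not literal");
                }
            }
        }
    }
}
```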
We see so many non-standard decisions every day in the field. So we need to stop thinking of “compliance” testing as the only way to achieve Validation for SOA Governance. We need to test and validate a very heterogeneous SOA world to get the compelling value out of that SOA Registry that will make the CXO sit up and take notice.
For instance, we were at a major telecommunications company that is using Systinet, which has a lot of great features. They were storing Services and Policies in there, but they weren’t really relying on that registry for everything they needed, because those services couldn’t be meaningfully Validated at change time and runtime.
An example is the service that returns "true" for compliance checks, but doesn't actually do the work it was supposed to perform. Depending solely upon the web service response (it came back true and didn't SOAP Fault, so it must be OK, right?) can lead to a false positive. We can perform deeper validation to ensure that the "true" response is backed up by an actual system of record or transaction layer that was exercised. The same principle applies if you are using CentraSite, BEA ALER, or other leading Registry/Repositories to manage those services.
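Here is a minimal sketch of that kind of deeper check in Java: after the service reports success, go behind it and confirm the system of record actually changed. The JDBC URL, credentials, table and column names are all hypothetical; the point is that the test asserts against the back end, not just against the SOAP response.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SystemOfRecordCheck {
    public static void main(String[] args) throws Exception {
        String orderId = "ORD-10001";   // hypothetical id used in the service request

        // Step 1 (not shown): invoke the order-submission service and receive "true".

        // Step 2: verify the order actually landed in the back-end database.
        try (Connection db = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:orders", "validator", "secret")) {  // hypothetical
            PreparedStatement stmt = db.prepareStatement(
                    "SELECT status FROM orders WHERE order_id = ?");
            stmt.setString(1, orderId);
            ResultSet rs = stmt.executeQuery();

            if (!rs.next()) {
                throw new AssertionError("Service said true, but no order row was written");
            }
            if (!"SUBMITTED".equals(rs.getString("status"))) {
                throw new AssertionError("Order exists but is in state " + rs.getString("status"));
            }
            System.out.println("Back-end state confirms the service response for " + orderId);
        }
    }
}
```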
Validating Non-SOA-ready elements too?
Outside of the registry, most companies connect several tools together that aren’t WS-I compliant, and they can talk quite effectively. This isn’t a new concept; in fact it has been around since CORBA, and it is still how most integration happens today. Now we need to automate the Validation to account for all of these flavors of service integration, and when and if those technologies are Service-enabled, pull them into the SOA Governance platform as a system of record. Of course, we offer a solution for doing this kind of UDDI validation in LISA.
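As a tiny sketch of what validating one of those non-WS-I flavors might look like: a plain HTTP POST of an XML payload to a hypothetical legacy endpoint, checking the status code and one field in the reply. This isn’t LISA or any particular product, just an illustration that the same assert-on-behavior discipline applies outside the registry.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class LegacyHttpCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical legacy integration endpoint that speaks plain XML over HTTP,
        // not WS-I-compliant SOAP.
        URL url = new URL("http://legacy.example.com/inventory/check");
        String payload = "<inventoryRequest><sku>ABC-123</sku></inventoryRequest>";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        if (conn.getResponseCode() != 200) {
            throw new AssertionError("Legacy endpoint returned HTTP " + conn.getResponseCode());
        }
        String body = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A").next();
        if (!body.contains("<inStock>")) {
            throw new AssertionError("Reply did not contain an <inStock> element");
        }
        System.out.println("Legacy integration check passed");
    }
}
```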
So, if you already have a SOA registry and you are frustrated that you can’t get meaningful results out of it, don’t give up on it; there is lots of ROI in there! Take the next step: don’t just test for compliance, but actually validate that those services meet your defined SOA policy. If you can define a policy, then define an accompanying test that actually validates and executes that policy, and use it at change time and runtime.
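To close with one small example of what “define an accompanying test” could mean in practice: suppose a Policy says every production service endpoint must be served over HTTPS and respond within two seconds. A rough sketch of that policy expressed as an executable check might look like this (the endpoint and threshold are hypothetical, and would normally be read from the registry/repository alongside the service entry):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ResponseTimePolicyCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical policy parameters for one service entry.
        URL endpoint = new URL("https://services.example.com/CustomerLookup?wsdl");
        long maxMillis = 2000;

        if (!"https".equals(endpoint.getProtocol())) {
            throw new AssertionError("Policy violation: endpoint is not served over HTTPS");
        }

        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.getResponseCode();                  // forces the request to complete
        long elapsed = System.currentTimeMillis() - start;

        if (elapsed > maxMillis) {
            throw new AssertionError("Policy violation: responded in " + elapsed
                    + " ms, policy allows " + maxMillis + " ms");
        }
        System.out.println("Policy satisfied: " + elapsed + " ms over HTTPS");
    }
}
```

Run checks like that at change time and on a schedule, and your Policies stop being documentation and start being enforced.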