An SOA odyssey

Tuesday, December 20, 2005

Localizing Services

Since our service-oriented infrastructure is meant to be extended for use by partner countries, we architected it so that each service consumer's culture settings can be taken into account.

As discussed in my post on Messaging Standards, this meant incorporating into our standard SOAP header the Culture and UICulture elements, defined as follows:

  • Culture (Optional): The culture setting for the client specifying the formats of date, time, currency, and numeric data. Follows RFC 1766, derived from ISO 639-1 and ISO 3166, e.g. US English = "en-US"

  • UICulture (Optional): The culture setting for the client specifying the language to use. Follows RFC 1766, derived from ISO 639-1 and ISO 3166, e.g. US English = "en-US"

    In order to set the culture and to return messages in the appropriate language, we then had to modify several areas of our code.

    From the service consumer's perspective a call to a service operation can be made through a Service Agent using our Service Agent Framework. The layer supertype in that framework is our ServiceAgentBase class from which all service agents are defined.

    When a method on one of the derived service agent classes is called, the implementation delegates much of the work to a protected method of the ServiceAgentBase such as ExecuteOperation. This method is then responsible for building the SOAP request, choosing a transport, and then sending the request and receiving the response.

    In order to build the SOAP message the method uses a set of schema classes we generated using XSD.exe and then modified. For example, we have a class simply called Compassion.Schemas.Common.SoapHeader that encapsulates our header schema. In that class we have the following:

    public class SoapHeader
    {
        // Default to the calling thread's culture settings so the header
        // always carries a usable value
        private string _culture = Thread.CurrentThread.CurrentCulture.Name;
        private string _uiculture = Thread.CurrentThread.CurrentUICulture.Name;
    }

    As a result, clients who instantiate service agents can first set the culture on the current thread (typically the culture is already set, of course) and thereby transmit the culture through the SOAP envelope.

    Thread.CurrentThread.CurrentCulture = new CultureInfo("de-DE");
    Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");

    // Create the update request
    ConstituentUpdateRequest req = CreateRequest();

    // Create the service agent
    ConstituentServiceAgent agent = new ConstituentServiceAgent();

    // Send the request and get the response (method name illustrative)
    ConstituentUpdateResponse resp = agent.UpdateConstituent(req);

    In this way, consumers that use a service agent automatically have their culture taken into account when service operations are processed.
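To make the mechanism concrete, here is a minimal sketch (class and method names are illustrative, not our actual framework code) of the pattern the service agent follows: copy the calling thread's culture settings into the outgoing header so they travel with the SOAP envelope.

```csharp
using System;
using System.Globalization;
using System.Threading;

// Illustrative stand-in for our header schema class
public class SoapHeader
{
    public string Culture;
    public string UICulture;
}

public class CultureStamper
{
    // Copy the calling thread's culture settings into the outgoing
    // header so they travel with the SOAP request
    public static SoapHeader StampCulture()
    {
        SoapHeader header = new SoapHeader();
        header.Culture = Thread.CurrentThread.CurrentCulture.Name;
        header.UICulture = Thread.CurrentThread.CurrentUICulture.Name;
        return header;
    }
}
```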

    WebService Transport
    When the SOAP message reaches the server, EDAF intercepts the request and processes it through one of its transports. Thus far we've used the WebService transport exclusively, and so we modified the standard code to be aware of our SOAP headers.

    Within the WebServiceInterfaceAdapter.cs file we added code to process our SOAP headers if they match our namespace. If so, the code unpacks the header, builds a CIHeader object, and places it in the EDAF Context object like so:

    // Create the header if it doesn't exist yet
    if (h == null) { h = new csc.CIHeader(); }

    switch (sun.Element.LocalName)
    {
        case "Culture":
            h.Culture = sun.Element.InnerText;
            break;
        case "UICulture":
            h.UICulture = sun.Element.InnerText;
            break;
    }

    // Add the header to the context

    One of the interesting aspects of this is that if the WebServiceInterfaceAdapter does not find the culture settings in the SOAP header, it will try to read them from the underlying transport itself, in this case HTTP, like so:

    if (h.UICulture == null)
    {
        // See if you can find the culture in the HTTP header
        if (HttpContext.Current.Request.UserLanguages != null &&
            HttpContext.Current.Request.UserLanguages.Length > 0)
        {
            // Take the first one in the collection
            h.UICulture = HttpContext.Current.Request.UserLanguages[0];
        }
    }

    if (h.Culture == null)
    {
        // Fall back to the same language setting
        h.Culture = h.UICulture;
    }

    At this point the culture is now captured on the service provider side.

    After the request passes through the adapter and any handlers that have been configured, it makes its way to the business action where the actual work is done. All of our business actions derive from our BusinessActionBase class which, as you might expect, contains the code to read the culture information and use the .NET ResourceManager so that the derived class picks up the correct strings.

    The supertype does this by simply examining the CIHeader object in the Context if it exists and then pulling the values out.

    // Try to pull it from the CIHeader
    if (Header != null)
    {
        if ((Header.Culture != null) && (Header.Culture.Length > 0))
            culture = Header.Culture;
        if ((Header.UICulture != null) && (Header.UICulture.Length > 0))
            uiCulture = Header.UICulture;
    }

    // Try to pull it from the app settings
    if (culture == string.Empty)
        culture = ConfigurationSettings.AppSettings["Culture"];
    if (uiCulture == string.Empty)
        uiCulture = ConfigurationSettings.AppSettings["UICulture"];

    // Call SetCulture
    SetCulture(culture, uiCulture);

    You'll also notice that this code includes a last-ditch effort to pull the culture settings from the application's configuration file. This allows the implementer of the service to set a default culture other than the neutral culture.

    The last line of code above calls the SetCulture method to actually set the culture. The key part of SetCulture sets the current thread's culture to the codes passed in and instantiates the ResourceManager, which is then exposed via a protected property.

    Thread.CurrentThread.CurrentCulture = new CultureInfo(culture);

    Thread.CurrentThread.CurrentUICulture = new CultureInfo(uiCulture);

    // Resource base name shown here is illustrative
    m_objResourceManager = new ResourceManager(
        "Compassion.Resources.Strings", Assembly.GetExecutingAssembly());

    The end result is that derived business actions use the exposed resource manager to return exception and other messages to the client in their own language. For example, when an exception is raised the business action would use code like the following to return an exception code and message to the caller.

    exception.Message =
        ResourceManager.GetString("COUNTRY_CODE_NOT_FOUND"); // key illustrative

    Business Rules in the SOI

    One of the architectural topics that caused our team more than a bit of thinking was our approach to implementing rules.

    The original idea for handling rules as noted in the high-level architecture diagram below (to the left of the Business Action) was to implement a common rules engine that would be responsible for the storage and execution of rules. This approach would allow for one-stop shopping for Process and Entity Services as well as a common repository for easily changing rules.

    Initially three different options were discussed:

    • BizTalk Rules Engine (BRE). Since BizTalk is a major player in the Compassion SOA and includes a rules engine, our first thought was to leverage this engine to store and execute rules.

      While the engine is versatile and can be called from inside .NET assemblies (such as the business actions we’ll be developing), the impediments to using the engine were twofold: 1) the BRE does not include an easy-to-use interface for managing the rules. As a result, other organizations have created their own interfaces using tools like Microsoft Excel to manage the rules. Without doing something like this, rules could not be managed by business users; 2) using the BRE requires a BizTalk installation with the appropriate licensing. This model may not be desirable if services are deployed in a partner country, since it would force the adoption of BizTalk at those locations.

      Based on a previous conference call with Microsoft, it is also not anticipated that Microsoft will ship the BRE as a separate product in the near future.

    • NxBRE. A second approach that was considered was adopting the open-source rules engine for the .NET platform called NxBRE, a port of the JxBRE engine available on SourceForge. This engine uses rules defined in RuleML and allows rules to be defined using Visio stencils. One of our consultants performed an evaluation of NxBRE to determine how it could be invoked from within business actions.

      After evaluating this option it was decided that although NxBRE overcomes the licensing restrictions implicit in the use of the BizTalk BRE, it still does not provide a way for business users to easily change the rules. Further, as with the BizTalk BRE, NxBRE is an inference engine that works off the concepts of Facts, Queries, and Implications. This structure would work well for simple data validation but would be unwieldy for representing data modification rules, since a large number of facts would have to be loaded into the engine and the return data from the engine would be complex (for example, to represent rules governing how an entity’s attributes are set based on all of the data associated with the entity).

    • Ultimus 7.1. Version 7.1 of the BPM suite includes a rules engine called Ultimus Director that can be used from within Ultimus workflows to represent rules. It is designed with end users in mind and so can be used by business users to change and update rules without developer intervention. However, the engine is not meant to be invoked from outside Ultimus and it is assumed that licensing restrictions would apply. As a result, this option does not appear to be a general solution.

    In order to clarify Compassion’s position regarding the implementation of business rules in the SOA architecture, a conference call was held with Gartner analyst Jim Sinur. He provided the following clarifications and best practices.

    1. The need for and use of BREs should be determined based on business rule volatility

    • That tipping point is usually in the range of 25-35% of the rules to be processed
    • Compassion should rank business rules by volatility: those that change frequently, those that hardly ever change, and those in the middle; in other words, perform a volatility analysis
    • BREs may be cost effective for managing the rules that change frequently
    • Compassion may have few business rules that change frequently and a BRE would be overkill

    2. BREs are expensive and there are currently no solid candidates in a .NET architecture

    • Virgin Atlantic has created a custom spreadsheet interface to BizTalk for volatile business rules; this allows business analysts to change these rules without developer assistance
    • Microsoft may add a similar interface to BizTalk, however, this is just something Compassion should watch and not count on

    3. Create guidelines for business rule placement

    • "Flow-Related" business rules should be placed in BizTalk or Ultimus
    • Business rules should be parameterized whenever possible - especially the more volatile rules
    • Avoid placing business rules within custom business framework services - especially any "flow-related" rules
    • Create and use a mechanism for business rule management
    • Report on what's there
    • Keep pulse of how often the rules are changed
    • Change the strategy if you discover a high degree of change
    • Create rules so they can be tailored to meet global requirements

    As a result of this analysis and the Gartner discussion, the decision was made not to pursue adopting a BRE as a single repository for all rules within our service-oriented infrastructure. The decision was based primarily on two factors. First, although a volatility analysis has not been done, it is highly unlikely that Compassion meets the 25-35% threshold where implementing a BRE would be cost effective. Second, within our technology scope there doesn’t appear to be a BRE that could be employed without creating a custom UI and/or incurring licensing costs. As a result, the direction we’re following is to employ the native rules engines within BizTalk and Ultimus (Process Services) and then implement custom rule development within Entity and Infrastructure Services.

    In following this approach the following rules matrix was developed in order to provide guidance as to how and where rules would be implemented.

    Process Rules
    Rules related to the flow of a system to system process or system to human process, for example, a rule that changes the routing of a form within Ultimus based on partner country

    Implemented in the BizTalk BRE or Ultimus Director when available

    Data Integrity Rules
    Rules related to how data is ultimately stored in a data store such as Compass or an XML document including FK relationships and link tables, for example, a rule that dictates that when an email is updated the shared email addresses are also updated

    Typically implemented in stored procedures that persist the data and invoked from business actions. If the data is persisted in XML documents, then XSDs can be used to implement these rules

    Data Modification Rules
    Rules related to how data can be inserted, updated, or deleted. Typically these rules are dependent on the state of the persisted data in a data store, for example, a rule that specifies that in the US only two email addresses are stored

    Implemented directly in business actions that perform inserts, updates, or deletes.

    Data Validation Rules
    Rules related to the format of data, for example, the requirement that an email address conform to a regular expression

    Implemented directly in business actions that perform validation typically prior to inserts or updates
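As a concrete illustration of the last category, a data validation rule inside a business action often reduces to a private helper like the following sketch. The class name, method name, and pattern are examples, not our production rule code.

```csharp
using System;
using System.Text.RegularExpressions;

// Illustrative sketch of a data validation rule as implemented directly
// in a business action: a small helper run before inserts or updates.
public class EmailValidationRule
{
    // A deliberately simple pattern; production rules can externalize
    // the pattern as a parameter so partner countries can vary it
    private static readonly Regex _emailPattern =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$");

    public static bool IsValidEmail(string email)
    {
        return email != null && _emailPattern.IsMatch(email);
    }
}
```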

    In keeping with the best practices outlined by Gartner, the Data Modification and Data Validation rules include support for externalization. Specifically, this support includes:

  • The ability for a business action to be notified as to which rules to process based on a configurable value (such as the partner country) and on which client is invoking the service

  • The ability for the business action to use specific parameters for a rule based on a configurable value (such as the partner country)

    We implemented this support using a custom EDAF handler as shown in the following diagram.

    As indicated in the diagram the rules handler is invoked as a part of the service implementation pipeline for specific business actions. The handler passes configuration information indicating the local value along with the service action name to the rules store. The rules store returns two data structures, one that is a collection of rule identifiers (GUIDs) that should be processed for the business action, and a second that includes a collection of country specific values to use when processing the rules. Both data structures are placed into the EDAF Context. The Rules Handler caches the data structures once retrieved for performance reasons.

    The business action then uses the data structures in the Context to determine which rules to process and what values to use when processing. The semantics of the rules themselves are encoded in C# or VB .NET code, typically in private helper methods that make them easy to maintain. The collection of rule ids are used only to indicate which rules to process and so the presence of the id in the context triggers the execution of the rule.

    Because the two collections are not related, multiple rules can use the same local value in their processing.

    When rules are retrieved the handler queries against these two tables. Given the LocalValue, ServiceActionName, and the client making the request (from information encapsulated in the SOAP header), a set of active RuleIDs is returned. If the client doesn’t appear in the BusinessActionClientRules table, the Active flag from the BusinessActionRules table applies. Notice that the syntax of the rule is not included in the table, only the id and whether it is active for the business action and client. This data is then placed in a hashtable within the EDAF Context object by the Rules Handler.

    Inside a business action, particularly the BusinessActionBase class, we've written code to check the hashtable for the RuleID before executing the rule.
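That check can be sketched as follows; the context key "ActiveRules" and the class shape are assumptions for illustration, not our actual BusinessActionBase implementation:

```csharp
using System;
using System.Collections;

// Illustrative sketch: the rules handler places a hashtable of active
// rule ids in the Context, and the business action only executes a rule
// when its id is present.
public class RuleGate
{
    private Hashtable _context = new Hashtable();

    // Simulates the Rules Handler populating the Context
    public void LoadActiveRules(Guid[] ruleIds)
    {
        Hashtable active = new Hashtable();
        foreach (Guid id in ruleIds)
            active[id] = true;
        _context["ActiveRules"] = active; // key name is an assumption
    }

    // Returns true when the rule should execute for this request
    public bool ShouldProcessRule(Guid ruleId)
    {
        Hashtable active = (Hashtable)_context["ActiveRules"];
        return active != null && active.ContainsKey(ruleId);
    }
}
```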

    We've also built an ASP.NET user interface to manage the rule data.

    Overall, this solution has worked quite nicely; in production we use these rules both to provide specific values to use when evaluating a rule and to turn rules on and off for specific clients.

    Thursday, December 15, 2005

    Using NLB

    One of the design requirements for our service-oriented infrastructure was to support a scale out architecture. For our needs we chose to use Microsoft's Network Load Balancing (NLB) system.

    Early on we created a reference architecture in order to test the integration between our various components including EDAF services, BizTalk 2004, legacy ASP code, and Ultimus BPM 7.1. In our test environment we set up two clusters, one for the EDAF services and one for the Ultimus services.

    In the reference architecture simple tests of both the EDAF services cluster and the Ultimus cluster were performed. In the case of the former, 200 requests were sent through one of our orchestrations, which ended up invoking our constituent service via the service agent framework we developed. In the case of the latter, a test harness spawned 50 requests at a time that created Ultimus incidents through our facade service. In both cases the tests (using performance monitor) revealed that all requests from a particular client were being serviced by one machine in the cluster: the server that had the higher priority set in the NLB configuration on the Host Parameters tab.

    NLB settings were then reconfigured from single affinity to "no affinity" with load balanced at 50% in the port rules within the NLB UI. The tests were then re-executed with no difference in the results.

    We then consulted the details of the Microsoft NLB algorithm. The important paragraph of that document is:

    "When inspecting an arriving packet, all hosts simultaneously perform a mapping to quickly determine which host should handle the packet. The mapping uses a randomization function that calculates a host priority based on their IP address, port, and other information. The corresponding host forwards the packet up the network stack to TCP/IP, and the other cluster hosts discard it. The mapping remains unchanged unless the membership of cluster hosts changes, ensuring that a given client's IP address and port will always map to the same cluster host. However, the particular cluster host to which the client's IP address and port map cannot be predetermined since the randomization function takes into account the current and past cluster membership to minimize remappings."

    In other words, when “no affinity” is set, the host that services an incoming request is determined by the IP address and client port number in a deterministic fashion based on a randomization algorithm and the number of servers in the cluster.
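The behavior we observed can be illustrated with a toy stand-in for the mapping function. This is not the actual NLB algorithm, only a function with the same property: the mapping is deterministic, so the same client IP and port always yield the same host index.

```csharp
using System;

// Illustrative stand-in for the NLB "no affinity" mapping: a
// deterministic function of client IP and port, modulo the host count.
public class NlbMapping
{
    public static int ChooseHost(string clientIp, int clientPort, int hostCount)
    {
        int hash = (clientIp + ":" + clientPort).GetHashCode();
        // Normalize to a non-negative host index
        return Math.Abs(hash % hostCount);
    }
}
```

Because the client port is part of the input, a client that reuses one connection (and therefore one port) is pinned to a single host, which is exactly what we saw in the tests above.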

    As a result we looked into how client port numbers are generated on client machines.

    It turned out that within our Service Agent framework we have a ServiceAgentBase class from which all service agents are derived. Within this class requests are executed using the SoapHttpWebClient class. Internal to this class, the following code appears in the constructor.

    public SoapHttpWebClient(string requestUri)
    {
        _requestUri = requestUri;
        _httpWebRequest = (HttpWebRequest)HttpWebRequest.Create(requestUri);
        _httpWebRequest.Method = "POST";
        _httpWebRequest.ContentType = "text/xml;charset=\"utf-8\"";
        _httpWebRequest.Accept = "text/xml";
    }

    Tests using a throw away application and a network monitor utility revealed that requests were load balanced when the System.Net.Sockets.TcpClient was used to generate the requests but not when HttpWebRequest was used as in the constructor code above. This was the case since using TcpClient generates a new connection and therefore a unique port number for each request.

    However, by adding the following line of code to the constructor above…

    _httpWebRequest.KeepAlive = false;

    the HttpWebRequest class creates a new connection for each request, which allows the TCP stack on the machine to assign a unique port number to each one. Making this change to SoapHttpWebClient entails a slight performance penalty on the client machine since connections cannot be reused. As a result, we added a configuration setting, <KeepAlives>, to the configuration section for the service agent so that clients have the option of using this setting (with the default set to false).
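Interpreting the <KeepAlives> setting might look like the following sketch; reading the raw value from the service agent's configuration section is elided here, and the class name is illustrative:

```csharp
using System;

// Illustrative sketch of honoring a <KeepAlives> configuration setting;
// the raw setting string would come from the service agent's config section.
public class KeepAliveSetting
{
    // Default to false so each request opens a fresh connection and
    // therefore gets a fresh client port, unless explicitly enabled
    public static bool Parse(string rawSetting)
    {
        return rawSetting != null && rawSetting.Trim().ToLower() == "true";
    }
}
```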

    Inter Action Communication

    One of the interesting issues we tackled in this release was that of allowing EDAF business actions to communicate their state information.

    Our original design made the assumption that we would have controller-type business actions that would invoke other business actions using the in-process dispatching adapter in EDAF. Each of the invoked business actions would, when necessary, create exceptions and return them to the controller, which would then add them to its exception collection that is eventually sent back to the caller as a series of XML elements.

    While this design allows the business actions to be independent and the controller to roll back the work of all the invoked actions, it does not allow business actions that have dependencies to communicate when called by the controller. In other words, the communication model here is essentially from the controller to each invoked action, whereas pairs of invoked actions have no visibility into each other.

    Fortunately, EDAF allows for this by supporting the concept of a Context object. The mechanism is to add items to the EDAF Context object (this.Context) before invoking the action. We were already using this technique to send flags like a ValidateOnly flag that instructs each action to validate the request but not process it.

    However, for this release we added the capability for actions to be aware of the exceptions that have been created within the entire request. We use a base class called BusinessActionBase from which all of our actions are derived. To this class we added a protected method called DoesExceptionExist that accepts one of our custom exception codes and looks in the context item with a specific key. If the item doesn’t exist it returns false.

    if (this.DoesExceptionExist("COUNTRY_CODE_NOT_FOUND")==false)
    { …
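A minimal sketch of how DoesExceptionExist might be implemented follows. The context key "Exceptions" and the public visibility are assumptions for illustration; in our base class these are protected members.

```csharp
using System;
using System.Collections;

// Illustrative sketch: earlier actions record exception codes in an
// ArrayList stored in the Context, and dependent actions check for them.
public class BusinessActionBase
{
    // Public here for illustration; protected in the real base class
    public Hashtable Context = new Hashtable();

    // Returns true when the given exception code was recorded by an
    // earlier action in the same request
    public bool DoesExceptionExist(string exceptionCode)
    {
        ArrayList exceptions = Context["Exceptions"] as ArrayList;
        if (exceptions == null)
            return false; // no list in the context means no exceptions yet
        return exceptions.Contains(exceptionCode);
    }
}
```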

    When invoking an action that has dependencies developers can therefore add an ArrayList populated with exceptions (also provided by the base class) to the context before submitting the request.

    We added this capability because one of our actions needed to determine whether the COUNTRY_CODE_NOT_FOUND exception was previously thrown. So when action A invokes action B it does the following:

    // Added to context so exceptions can be searched
    // (the key name "Exceptions" is illustrative)
    this.Context.Add("Exceptions", exceptions);

    EDAF and our base class handle the rest.


    Today we deployed the second release of our SOA project, code-named "Sweet". We have four deliverables remaining, each named after a type of pickle: "Dill", "Half-Sour", "Kosher", and "Blue." It's a long story that came out of the requirements gathering process and too much caffeine at an offsite meeting.

    This release introduced a new EDAF-based service built on the .NET Framework v1.1 to handle payment information and an augmentation of our service that manages constituents. We also introduced a new Ultimus 7.1 workflow to handle reconciling constituents and one updated and four new BizTalk 2004 orchestrations. Overall we deployed code to five application servers (two web servers, BizTalk, Ultimus, and our EDAF services machine) and four SQL Server database servers.

    From a schedule standpoint the work was done primarily by three in-house developers and three consultants over a span of a little over two months, with the assistance of a QA manager. In that time the team modified or wrote 49 assemblies, 26 stored procedures, four ASP.NET sites and one ASP site, and one Ultimus workflow - some very nice work contributed across the board.

    Probably the biggest deployment challenge in this release involved introducing new BizTalk orchestrations while needing to keep existing production instances running. We had one core orchestration in our first release that was processing each request (which takes on average 48 hours to complete with human interaction in Ultimus). In the Sweet release that orchestration became merely a component of a larger process as we incorporated the processing of new transaction types. So we had to create a new version of that orchestration that returned a response document, among other things.

    So when we deployed, we suspended over 100 running orchestrations before deploying the new assemblies. Once we deployed, we had to add a binding policy in the GAC to one of our schema assemblies that had changed, and then we were able to resume the existing orchestrations (as an aside, I had originally thought that deploying a new schema assembly would be problematic based on other things I had read, but it turned out to be quite painless). As users in Ultimus completed their work and notified BizTalk via an HTTP receive, the orchestrations completed under the old version. New requests received via our EDAF service facade were then processed by the new version of the orchestration. Eventually, we'll be able to shut down the first version of the orchestration once all the existing Ultimus incidents have been resolved.

    We also learned that with the addition of new orchestrations and a new service we need to automate the installation of all components using MSI packages. We used BizTalk scripts, which worked well, but ensuring that the GAC was updated correctly on our services machines, along with the configuration files, made the deployment to five application and four database servers a two-and-a-half-hour process that included a couple of missed configuration settings.

    Architecturally the biggest challenge was in putting in place an "uber" orchestration that acts as the controller for all requests that are processed through our infrastructure to go along with a tracking database. This design will allow us to plug in new transaction types as we automate subsequent business processes.

    Other interesting aspects included building a web service operation that attempts to match sponsors to unvalidated information submitted to the process (a so-called "Professed" request, as opposed to a "Validated" one where the user has provided credentials), called from an orchestration responsible for "reconciling" sponsor data. We also did some slick .NET coding to allow the Ultimus user interface to act as a consumer of our services, using our Service Agent framework to retrieve and create sponsors.

    Overall since our first release in September the service-oriented infrastructure has handled over 400,000 service requests and has been remarkably stable. I've been impressed with the EDAF infrastructure on which the services are based and BizTalk has easily handled the load we've thrown at it.