An SOA odyssey

Thursday, February 23, 2006

Dill is Near

Just a quick update on our project.

We're currently performing integration testing in preparation for our next release in early March, code-named "Dill". That release will feature the deployment of three new services containing 13 new service operations implemented as EDAF business actions. Of those, one service and two operations are infrastructure-related and provide a service facade for Compassion's internal Constituent Communications Queue (CCQ) system, while the other two services and 11 operations are Entity Services. Behind those services are two new business processes implemented in BizTalk and three new Ultimus exceptions.

One of the interesting things I noticed the other day is that since our first release went into production last fall, our services have handled over 625,000 requests, 25,000 of which are business transactions submitted from our web site. Of those transactions, 77% follow the happy path and update our enterprise database automatically.

Previously, 100% of those requests were handled manually, so the combination of web services for entity manipulation, BizTalk for business process automation, and Ultimus for human workflow has proven an effective tool for automating a portion of Compassion's business. That in turn frees our internal employees to perform more strategic activities and ultimately helps to release children from poverty. To me anyway, that's an appropriate use of technology.

BizTalk Pipelines in the SOA

Here's an interesting problem our team ran into that we weren't aware of.

In our architecture, all messages that flow in and out of BizTalk carry custom SOAP headers based on WS-Addressing, as I discussed in Message Standards.

What that means is that we implemented a custom pipeline component in C# that parses our SOAP header and promotes the various fields we use to track messages, such as MessageId and MessageStreamId. This pipeline component is invoked when messages are received from outside sources such as our human workflow tool, Ultimus BPM Studio, which posts to an HTTP receive location.
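For readers unfamiliar with BizTalk pipeline components, here is a rough sketch of the shape of such a component. The IComponent interface and the Execute signature are real; the class name and body are illustrative only, and a production component also implements IBaseComponent, IComponentUI, and IPersistPropertyBag:

using System.Xml;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Hypothetical skeleton of a header-promoting pipeline component
public class SOAPHeaderPromoter : IComponent
{
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage inmsg)
    {
        XmlDocument xmlDoc = new XmlDocument();

        // 1. Load the incoming document here (see below for the right
        //    way to get at the message stream)

        // 2. Parse the SOAP header and promote tracked fields such as
        //    MessageId and MessageStreamId into the message context

        return inmsg;
    }
}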

In the Execute method in our original code we used the following line to load the incoming document:

xmlDoc.Load(inmsg.BodyPart.Data);

This code works fine as long as tracking is turned on in BizTalk. Once we turned off tracking in development, the properties were no longer promoted and we received errors like this:

There was a failure executing the receive pipeline:
"Compassion.BizTalk.Pipelines.Common.SOAPHeaderReceivePipeline"
Source: "Compassion.BizTalk.Pipelines.Utilities"
Receive Location: "/UltimusResponse/BTSHTTPReceive.dll"
Reason: Pipeline component exception - Not implemented

Through a call to Microsoft (and subsequently through reading things like this) we discovered that we should be using the GetOriginalDataStream method to grab the message. BodyPart.Data clones the incoming stream, and the HTTP adapter delivers the network stream, which does not support cloning. Turning on tracking causes BizTalk to create a wrapper around the original message, and hence with tracking on you can use BodyPart.Data.

So our new code looks as follows:

Stream s = inmsg.BodyPart.GetOriginalDataStream();
xmlDoc.Load(s);
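
As an aside, a defensive variant we could have used inside Execute (a sketch only, assuming the Microsoft.BizTalk.Streaming assembly is referenced) is to wrap a non-seekable stream such as the network stream in a ReadOnlySeekableStream so the body can be read more than once:

// Assumes: using System.IO; using Microsoft.BizTalk.Streaming;
Stream original = inmsg.BodyPart.GetOriginalDataStream();
if (!original.CanSeek)
{
    // ReadOnlySeekableStream buffers the underlying network stream
    // so that it supports seeking and can be read more than once
    original = new ReadOnlySeekableStream(original);
}
original.Position = 0;
xmlDoc.Load(original);

Either way, once we have a readable stream the document loads reliably regardless of the tracking setting.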

Once we have the incoming message we can run a series of try/catch blocks as follows to grab the elements we need from the SOAP header and promote them.

// strip off the envelope and return
// just what is within the Body
System.IO.MemoryStream ms = new System.IO.MemoryStream(
    Encoding.UTF8.GetBytes(xmlDoc.SelectSingleNode(
        "//*[local-name()='Body']").InnerXml));
inmsg.BodyPart.Data = ms;

try
{
    strMessageId = xmlDoc.SelectSingleNode(
        "//*[local-name()='Header']/*[local-name()='MessageID']")
        .InnerText;
    // remove the "uuid:" prefix if present
    strMessageId = strMessageId.Replace("uuid:", "");
    inmsg.Context.Promote(
        "MessageId",
        "http://schemas.microsoft.com/BizTalk/2003/SOAPHeader",
        strMessageId);
}
catch {}
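
That covers the MessageId block, and the same pattern repeats for each tracked field. Purely as an illustration, the block for the MessageStreamId field mentioned earlier might look like this (the header element name is an assumption on my part):

try
{
    // Assumed element name; adjust to match the actual header schema
    string strMessageStreamId = xmlDoc.SelectSingleNode(
        "//*[local-name()='Header']/*[local-name()='MessageStreamId']")
        .InnerText;
    inmsg.Context.Promote(
        "MessageStreamId",
        "http://schemas.microsoft.com/BizTalk/2003/SOAPHeader",
        strMessageStreamId);
}
catch {}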

Tuesday, February 14, 2006

VSTS Whys and Hows Part II

In the first article in this series I described some of the problems inherent in modern software development and suggested a few ways in which those issues might be addressed by tools like the soon-to-be-released Visual Studio 2005 Team System (VSTS). In other words, the first article addressed the “whys” of VSTS. In this article I’ll tread further down that path by providing an overview of VSTS from a functional perspective so that you can see how it plans to address those problems.

More than the Sum of Its Parts
From a high level, VSTS is composed of five products:

  • Visual Studio Team Foundation. This is a server product or platform that exposes features shared by the various clients (as well as Office applications such as Microsoft Excel and Microsoft Project). While some of the features exposed by the clients can be used independently, it is Team Foundation that pulls them all together and enables the collaboration and communication so desperately needed by software development teams.


  • Visual Studio Team Foundation Client. A client tool geared towards project managers and other non-developers. The Team Foundation Client exposes a subset of the shared features in Team Foundation through the concept of the Team Explorer discussed below. While the Team Foundation Client is based on the Visual Studio IDE it does not include the language features and other tools that developers and architects would use. The Team Foundation Client is the basis for integration in the other products mentioned here.


  • Visual Studio Team Architect. The client tool used by software architects to design service-oriented solutions and validate their designs against the actual hardware and software environment. Team Architect also includes a modeling tool called the Class Designer that is tightly integrated with code.


  • Visual Studio Team Developer. The traditional Visual Studio IDE augmented with a set of features that enable developers to more easily comply with policies and collaborate in a team environment. Although many of the features found in Team Developer such as unit testing and static code analysis can be used without access to Team Foundation in the Visual Studio IDE, the metrics or results that the tools output cannot then be shared with the team.


  • Visual Studio Team Test. A client tool for those devoted to testing that enables them to design, manage, and execute manual, unit, and load tests.

    Each of these products is designed to provide specific features that aid in the process of software development. It is their combined power when adopted by the entire team, however, that allows VSTS to address the core issues teams face: communication, predictability, integration, and process. In the remainder of this article I’ll drill down on each of the products and walk through its key features so you’ll understand how each fits into the bigger picture.


    Visual Studio Team Foundation
    As we saw in the first article, many of the problems encountered by software development teams arise from a lack of communication within the team and a lack of integration among the tools used by the team. In VSTS, Team Foundation provides the server platform that enables both communication and integration. In that way Team Foundation can be thought of as the organizing principle in VSTS. And because of its centrality, it is worth having a look at its major features.

  • Change Management. A brand new (not simply a new version of Visual SourceSafe) source code control infrastructure based on SQL Server 2005 for versioning code and other project deliverables. Built from the ground up for secure and high-speed access, this is a robust version control system that supports advanced features such as branching, change sets, a novel feature called shelving, as well as check-in policies. To give one example of how this feature addresses the problem of integration discussed in the previous article, consider VSTS’s check-in policies. Essentially, when the Project Manager creates the project in VSTS he can associate policies with the check-in of source code. These policies validate that a developer’s changes comply with whatever the organizational requirements are before the set of changes can be checked-in. For example, a check-in policy might be activated so that a developer must run static code analysis or a unit test on his code before checking it in. If the developer fails to do this he’ll receive a policy warning in the Team Developer IDE. If he chooses to ignore the policy the project administrator can optionally be notified through email.


  • Work Item Tracking. A centralized repository stored in Team Foundation’s SQL Server 2005 operational data store and used to track tasks, defects, requirements, enhancements and - through the use of custom attributes - anything else required by the project. If Team Foundation is the organizing principle in VSTS, then Work Item Tracking is the central concept within Team Foundation. Requirements, unit tests, test cases, and source code can all be associated with work items to provide the basis for a rich set of metrics collected by the client tools. And as with other aspects of Team Foundation, a set of default Work Item types ships with the product but can be extended by your organization or third parties. Project managers and other team members can interact with work items through Visual Studio Team Foundation Client or even Microsoft Excel and Microsoft Project. Of course, correlating this information with the detailed data collected by the various tools makes for a powerful combination in terms of reporting on the health, status, and direction of the project.


  • Reporting. A set of over thirty predefined reports built with SQL Server Reporting Services and used to report on the health and status of the project. These reports can be made available both on the Team Project Portal and directly within the applications used by the team members through the Team Explorer. The data behind these reports is collected behind the scenes by the tools in which the team members work as they perform their tasks and then stored in a central SQL Server Analysis Services repository from which the reports are created. For example, data about test execution and results is saved when a tester executes his test cases, and data concerning code churn is saved when developers check out and check in code files. This approach has the obvious advantage of freeing Project Managers and other team members from the hassle of data collection and ensures that the data is always fresh. At the same time the data collection process is not intrusive and works within the natural workflow of the team.


  • Team Project Portal. To help solve the problems related to collaboration and communication Team Foundation automatically creates a web site or Team Project Portal for each project using a Windows SharePoint Services (WSS) team site. From this portal project stakeholders can get a quick view of the project’s progress both freeing up the time project managers would spend creating status reports by hand and enabling stakeholders to provide input and make course corrections. The portal’s document library is also pre-populated with document templates, for example discovery and requirements specification templates, and sample files for use by team members.


  • Project Management. We saw in the first article that teams could benefit from tools that help guide them through the process of software development. In order to provide process guidance, VSTS includes the concept of methodology templates, which can be thought of as a schema for a software project. These templates help to structure the process used by the team by exposing a set of documents, project roles, work item types, a web portal template, policies, and what are called process tasks from within all of the VSTS tools. For teams that have no process, a Project Manager can choose one of the predefined methodology templates that will ship with VSTS, such as a new version of the Microsoft Solutions Framework called MSF Agile.


  • Build. One of the required activities in a software development project is actually building the software. To that end a core feature of VSTS exposes a common process for building executables based on MSBuild, the build tool Microsoft uses internally. This system is designed to deliver a “build lab in a box” that makes it easier for teams to implement the best practice of daily builds. The tool allows automated builds that integrate different types of tests to be created and executed on a separate build server. As the tests execute, data about the builds is collected in the operational and analytical data stores, for example to tie the build number to work items and to automatically create work items for build failures.
    Taken as a whole these features provide a solid platform on which teams can define and implement their software development process.


    Team Foundation Client
    The second of the products that make up VSTS is the Team Foundation Client. This can be thought of as the tool that Project Managers and other generally non-technical team members will use. While it runs within the Visual Studio shell, it does not include the overhead of language compilers and modeling features that PMs and others neither need nor want.

    The services of Team Foundation are exposed in the Team Foundation Client through the Team Explorer. This tool window is accessible within all of the VSTS clients and provides the foundational point of access to VSTS.

    The Team Explorer is a tool window that can be hidden, similar to the Server Explorer that developers are already familiar with from previous versions of Visual Studio .NET. Through this window team members can easily find work products and data associated with the project.

    More importantly, the Team Explorer also acts as a launch point into the set of tools that ship with VSTS. By simply navigating to the appropriate node in the Explorer, the context of the application changes, allowing the team member to create and run reports, query on tasks assigned to them, create custom queries, and view project builds and test results, among other activities. No longer is there a need to manually switch between various tools, thereby increasing productivity.

    Team Architect
    In the previous article I discussed the scenario where a Solution Architect creates a design only to find out later that it is precluded by the hardware or software configuration of the deployment environment. To address these sorts of issues the Team Architect product includes, in addition to the Visio UML modeling tools present in the prior Visual Studio Enterprise Architect edition, a suite of graphical modeling tools. These tools are designed to allow architects to visually design systems based on service-oriented architectures (SOA) and to validate those designs against the actual environments in which they’ll run.
    Specifically, the suite of tools includes:

  • Application Designer. The Application Designer provides a design surface on which the architect can diagram a set of applications that exchange messages. In other words, the Application Designer can be used to create a model of connected applications that are dropped on the design surface and connected via Web Services Description Language (WSDL) contracts. Off-the-shelf application types include web sites, Windows applications, web services, databases, and BizTalk services.


  • Logical Data Center Designer. The Logical Data Center Designer is a designer that an Infrastructure Architect can use to create a logical model of a part of the data center. As such it allows for the creation of “logical servers” that specify the application hosting environment on those servers. For example, a logical server can specify the communications protocols allowed (perhaps only HTTPS) and the types of services that are enabled (IIS, FTP), as well as the communication paths between logical servers, which can be grouped into what are called “zones”. Constraints, including security requirements, can then be specified on the logical server. The architect can either create the servers himself or use the tool to read the configuration information from physical machines.


  • System Designer. The System Designer provides a design surface used by the architect to create what is referred to as a configured system. A configured system is composed of one or more applications defined in the Application Designer for a particular deployment. The System Designer provides a higher-level view so that architects can visualize how systems will communicate with external systems or with other internal (nested) systems. From there the architect can drill down to the applications if necessary in order to define them.


  • Deployment Designer. The Deployment Designer is used by architects to create a deployment for a configured system defined in the System Designer or for an application defined in the Application Designer. Here is where the application and system models are mapped onto the data center models through a process called binding. All the information (metadata) collected by the designers is made available to a constraint engine, so validation and consistency checks can be made. If the configuration of an application violates constraints on a logical server, the Task List within Team Architect alerts the architect to the issue, thereby allowing the problem to be corrected before actually attempting a deployment. The Deployment Designer also generates a deployment report that can be used to communicate between IT and Operations, as well as a deployment script that includes configuration settings and files to be deployed.


  • Class Designer. Although not directly connected to the other designers, the Class Designer provides the architect with a superset of the capabilities of traditional UML static structure or class diagrams. The Class Designer represents the .NET type system with full fidelity and therefore allows architects to be very precise in the design of their classes, structures, enumerations, and interfaces in a way that UML cannot. Indeed, as types are designed on the surface of the Class Designer the code for those types is written in the project. Conversely, changes to the code will be reflected in the model, since the diagram and the code are simply two views of the same types. And because design at this level isn’t restricted to architects, the Class Designer is also available in Team Developer. Obviously, this more connected approach can help to solve some of the communication problems inherent in software development.


    Team Developer
    In many teams today, developers are hindered from reaching high levels of productivity by having to cobble together multiple tools in order to adopt development best practices and by having to repeatedly context-switch between tools in order to interact with requirements and tasks. Multiplied across all the developers on a team, this drives down the predictability of the entire project. To address problems like these Team Developer includes the Team Explorer mentioned previously. This tool window immediately gives developers the ability to see which Work Items (tasks) they are responsible for and, through its integration with Team Foundation, can tie the actual work a developer is doing to those work items, thereby automatically and continuously updating the project’s status. As you would expect, Team Developer integrates with Team Foundation by providing the interface to Team Foundation’s change management (source code control) system and its check-in policies discussed previously.

    Team Developer also includes the following tools.

  • Unit Testing and Code Coverage Analysis. One of the best practices that Team Developer enables is unit testing and, by extension, the practice adopted by many agile teams known as Test Driven Development, or TDD. Team Developer does this through its inclusion of an attribute-based unit testing framework, much like the popular NUnit, that is directly accessible within the IDE (a minimal sketch of such a test appears after this list). Using this framework developers can create unit tests and execute them during the development process while behind the scenes the results are automatically recorded in Team Foundation’s analytical data store. Team Developer further couples unit testing with the related testing best practice of code coverage analysis: when a team member executes a unit test, not only are the test results tabulated, but optionally the percentage of code executed during the test for each source code module (files, namespaces, classes, methods) can be viewed as well as stored by Team Foundation for later use.


  • Static and Dynamic Code Analyzers. A second best practice that teams often have a harder time getting their developers to adopt is code reviews. When done well, code reviews with other developers and architects provide a structured, constructive way to gain feedback that proactively enables developers to make changes that improve code quality and maintainability. To address this need Team Developer includes both static and dynamic code analyzers that can be thought of as an automated method of performing code reviews. For example, the static code analysis tool comes preconfigured with a set of warnings that can identify known issues in .NET development involving performance, security, general design, and naming guidelines, akin to the FxCop tool in use by many in the .NET developer community.


  • Code Profiler. Developers can also increase their productivity by proactively addressing performance problems using the profiling tools Team Developer includes. With these tools developers can sample performance and then instrument their code to collect detailed statistics that enable drill-down on potential problem areas. For example, a developer can use his unit tests to generate a Performance Session in which the code is sampled to identify potential performance bottlenecks in terms of both throughput and resource usage. From there the developer can instruct the tool to insert timing probes into the desired subset of code to gather more detailed statistics. Finally, a detailed performance report is generated so that the developer can decide where to begin further investigation.
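
    To make the unit testing bullet above concrete, here is a minimal sketch of a test written against the VSTS unit testing framework. The attributes and the Assert class live in the Microsoft.VisualStudio.TestTools.UnitTesting namespace, while the Calculator class is simply a hypothetical class under test:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test
public class Calculator
{
    public int Add(int x, int y) { return x + y; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void AddReturnsSumOfOperands()
    {
        Calculator calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}

    Run from within Team Developer, the results of a test like this (and optionally its code coverage data) are recorded in Team Foundation’s data store as described above.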


    Team Test
    VSTS takes testing a step further than simple unit testing by making tests available in the Team Test product, geared specifically toward testers. This product enables testers to execute the unit tests developers create in a systematic fashion to ensure that code checked in by multiple developers passes previously constructed tests. This is an example of another testing best practice, referred to as regression testing.

    Team Test goes even further, making testers first-class citizens of the project by highlighting the importance of testing at the team level through a test authoring and execution environment. That environment includes additional test types such as Web Tests, Load Tests (similar to the Application Center Test tool that ships with Visual Studio .NET 2003), Manual Tests, and Ordered Tests, as well as test case management, all integrated into Team Foundation.

Thursday, February 02, 2006

VSTS Whys and Hows

    As I’m sure many readers are aware by now, Visual Studio developers are about a month away from the second part of the Visual Studio 2005 release, one that includes a slew of new features to make a developer’s job easier: Visual Studio 2005 Team System.

    In the first of this two-part series I’ll describe a few of the specific problems you and your team probably face that served as the impetus for the creation of Visual Studio 2005 Team System (VSTS). In the next article I’ll get down to how VSTS is designed to address these issues. Taken together, the goal of these two articles is not simply to give you a laundry list of VSTS features, but rather to place them in the context of the problems they were designed to address.

    Software Development is Hard
    I don’t want to sound like a grumpy old man who sits in front of the barbershop longing for yesterday while muttering about failures of the modern world, but software development is hard and getting harder.

    Gone are the simpler days of monolithic applications running on a single platform free from deployment issues. Gone are the heady days of client/server computing and the “enterprise data model” accessible through Visual Basic or PowerBuilder apps free from data sharing issues. And even gone are the days of simple data-driven ASP or ASP.NET web applications accessing our data behind our firewall. No, today in our ever more connected world, our solutions require access from multiple platforms, devices (heterogeneity), and networks, data sharing across the enterprise and with business partners (service orientation), and scalability on commodity hardware. All of this in an environment where the driving factors are less time and fewer resources.

    This trend towards increasing complexity has driven many organizations to consider adopting the more loosely-coupled architectural pattern of service orientation, or a Service Oriented Architecture (SOA), made practical by the implementation and maturation of web services. And as with any architectural shift, SOA and web service applications bring a new set of design and implementation challenges that developers will need to address.

    I know that this comes as no surprise to many of you reading this article - you who have managed, designed, built, and tested complex enterprise solutions requiring heterogeneity, service orientation, and scalability. But I also know that in doing so many of you are looking for tools to make your jobs easier.

    So why is software development difficult? In short I see four primary reasons teams struggle to build quality software.

    Communication Breakdowns
    The reality is that many IT teams today are both geographically and functionally distributed. This distribution creates gaps in communication that provide opportunities for issues to be dropped, misunderstandings to arise, and information flow to become slow and haphazard.

    Fortunately, as you’ll see in the next article, with VSTS the very technology that makes geographical distribution possible can in many cases also be applied to open up new channels of communication. And of course the benefits of tools enabling better communication aren’t restricted to geographically disparate teams. Even in small and local teams, crucial information is often dispersed and not easy to find, or not captured in the first place. Tools that provide a centralized location for all of this information are in many ways the first step towards controlling the project.

    At the same time, as with much of the modern world, the IT industry has tended towards increasing specialization. Although in a small organization you may have to be a jack of all trades, in many organizations silos have developed where expertise in project management is restricted to one group while development resources come from another and solution architectures from a third. Because some of these groups share certain attributes and not others, there are differing amounts of conceptual space between them, as illustrated in Figure 1, where Developers and Testers are more closely related than Solution Architects and Project Managers. Over time this specialization has the effect of producing conflicting best practices, architectures, and ultimately conflicting visions for how software should be developed.



    Figure 1: The Proximity of IT Silos

    As an example of addressing the first need, consider the case where a Solution Architect is designing a system all alone, safely ensconced in his architectural silo made of ivory. When his architecture is complete he hands it off to the developers who merrily code away. Job done. Not so fast. When it comes time to discuss how this solution will actually be deployed in production, the Solution Architect is shocked and dismayed to discover that Operations doesn’t support the server or communications configuration he was assuming for his “perfectly architected” solution. And of course Operations derives a certain guilty pleasure in crushing the ivory tower. His solution no longer looks so perfect, and he must then rework the architecture or spend political capital trying to get the Operations policies changed, costing both time and development resources.

    What our architect needed in this scenario was a tool that helped him map his solution architecture to the existing hardware and software environment including the Operations policy so that he could take it into consideration during his work. In the second article in this series you’ll see how VSTS addresses this problem through a suite of design tools.

    Tools that enable information flow between these groups will go a long way towards solving communication breakdowns.

    Lack of Predictability
    The Danish physicist Niels Bohr once said that “prediction is very difficult, especially about the future”. I don’t think he had software development in mind, but I do think the observation applies. This is borne out intuitively by the concern many in the IT industry have about being able to predict the success of projects. Their concern is not unfounded. As Steve McConnell writes in his 2004 book Professional Software Development:


    “Roughly 25 percent of all projects fail outright, and the typical project is 100 percent over budget at the point it’s cancelled. Fifty percent of projects are delivered late, over budget, or with less functionality than desired.”

    Why do these projects end up over budget, late, feature-poor, or cancelled? In most cases it is because their teams couldn’t accurately predict or control the software development lifecycle. And at the heart of enabling better prediction in your projects lie metrics and repeatable practices.

    The simple fact is that teams manage projects according to the metrics they are able to collect. If the metrics collected are the right ones, the Project Manager and other stakeholders can quickly get a feel for where the project is headed, better estimate its completion, and actually drive it towards completion by analyzing risk and making course corrections when necessary. In addition, a key point that organizations often undervalue is that good (meaning real and not anecdotal) metrics on previous projects, rather than personal memory, subjective intuition, or seat-of-the-pants guesswork, provide the best input into making estimates on future projects.

    I would suggest that what project teams could use are tools that help them address the area of metrics in at least the following ways:


  • By suggesting predefined metrics taken from best practices that have been proven to provide insight into projects, such as metrics related to progress towards the schedule, the stability of the plan, code quality, and the effectiveness of testing.



  • By offering a single place in which data to produce metrics is stored in order to eliminate the separate silos in which that data is stored today.



  • By supporting an automatic way of collecting data in a timely fashion that is integrated into the natural workflow of the team.


    The second aspect of more accurately predicting the success of software projects lies in the adoption of repeatable or “best” practices. While we all know that one size does not fit all, I also know that in almost any endeavor applying a structured or methodical approach reduces the variability in the outcomes.

    Unfortunately, many teams have no set of repeatable or structured practices that they can hang their hats on and use to focus their efforts. For these teams, for example, a developer is simply tasked with implementing a feature and then left to his own devices. While some developers may naturally, or through intentional analysis, apply some set of structured principles to their development, many will not, and the result is that the productivity of those developers, and hence the predictability of both the software quality and the schedule, are almost impossible to get a handle on. This is largely borne out by the oft-repeated finding, noted by McConnell in Professional Software Development, that developers differ in their productivity on the order of 10 to 1. In other words, some developers are 10 times more productive than others. And just as the adoption of uniform approaches to training and conditioning has shrunk the variation in times between competitors in athletic competitions such as track and field, I think at least part of the difference in the productivity of software developers can be overcome by applying repeatable and proven practices to the software development process.

    Tools can help in this regard by integrating structured practices into the normal workflow of developers, for example, by including a unit testing framework.

    Lack of Integration
    The primary cause of the problem is that over time many organizations have accumulated a collection of software development lifecycle (SDLC) tools through purchase, acquisition or merger, or even custom development. Unfortunately, in most cases these tools weren’t designed to be used together and are not integrated with the IDEs in use in the organization. As a result, this lack of integration can manifest itself in myriad ways but often bubbles to the surface as follows:


  • The data collected by the tools they use in their SDLC exists in separate silos and is not easily integrated or related to data from other tools.



  • The tools they use don’t provide a platform and device independent way to implement extensibility and custom tools.



  • Using “best of breed” tools forces their team members to context-switch between tools in order to perform functions of their process.



  • The tools they use don’t know about the policies or constraints of their process. In other words, their tools cannot interact in any meaningful way with the processes they do have.


    Obviously, the size of the overall integration problem is more often than not proportional to the size of the organization you’re in. However, even small organizations are not immune from integration issues, as even a single relied-upon tool that isn’t integrated can manifest all of the above issues.

    One approach to this problem, and the one taken by VSTS as you’ll see in the next article, is to integrate the lifecycle tools directly into the software that team members use in their day-to-day work. For example, rather than requiring a developer to go to a web site to see which features he’s been assigned, why not make those features visible from inside his IDE? And further, why not have the tool associate the feature with the actual source code and unit tests he’s working on so that metrics can be automatically collected on the implementation of the task?

    A second way in which lack of integration rears its ugly head is when tools don’t work together to surface or expose policies adopted by the organization. Typically these policies or guidelines that attempt to regulate the software development process exist only in three-ring binders or, if you’re fortunate, on an Intranet. Unfortunately, in both cases out of sight is out of mind, and so the policies end up being enforced across the organization only sporadically if at all.

    What is needed is for tools to enforce policies dynamically during the regular workflow of team members. One of the bets Microsoft is making with VSTS is that once development tools automate process guidance, most of the overhead associated with the process, as well as the majority of the resistance to compliance, will evaporate.

    It is this kind of deep integration, both in terms of your team members’ workflow and their tools, that can improve the productivity of your teams and the quality of the software you produce.

    Lack of Support for the Process
    Trying to coordinate the people, geographies, roles, and tools involved in projects while at the same time tracking and making predictions about the interaction of these aspects would be a challenge for anyone. So how do teams solve these sorts of problems and raise their level of success?

    The standard answer to that question has been to adopt a software development methodology such as the Rational Unified Process (RUP) or Extreme Programming (XP), or a customized version of one of these or other standard approaches. While this approach can be and often is effective, most developers don’t have access to tools that surface or provide visibility into the process so that they can more easily follow it. And because of the lack of tool support, the perceived complexity of some of these methodologies, and simple inertia, some teams simply haven’t adopted any methodology or process at all. Unfortunately, these teams must instead fly by the seat of their pants much of the time. What is needed are tools that provide the overarching structure for projects in a way that lets teams flexibly integrate their own process or use a predefined process that guides them on their way.

    Because many teams (primarily those in smaller organizations that don’t have the resources or expertise to implement a rigorous process) have not adopted a software development process, there is ample opportunity for tools to provide guidance and therefore improve the quality and timeliness of the software these teams produce.

    Of course, the majority of larger organizations have already adopted a methodology and so their challenge is in making their process visible through the tools they use.

    What is needed is for tools to allow for the customization of boilerplate processes or the building of custom processes that integrate with the tools.

    Summing it Up
    In this article I’ve discussed a number of common problems teams face when developing solutions in the increasingly complex and constrained world of software development and hinted at how VSTS will address them.

    To solve these kinds of problems, what is needed is not just another upgrade of the integrated development environment, but instead an “integrated services environment” that encompasses the entire extended software development team.

    At its core this integrated services environment, in order to be successful, needs to accomplish three things:


  • Reduce the complexity of delivering modern service-oriented solutions



  • Be tightly integrated and facilitate better team collaboration and communication



  • Enable customization and extensibility by organizations and ISVs


  • Exactly how VSTS is constructed to make this happen will have to wait for the next article.