Team Foundation Server

Deployment Plan Template

Developer Tools Deployment Planning Services

Table of Contents

1 Getting the Most from Your Deployment Plan

1.1 Give Microsoft Feedback

2 Executive Summary

2.1 Existing Best Practices

2.2 Key Areas for Improvement

2.2.1 Current State – Urgent Issues

2.2.2 Current State – Additional Issues

2.3 Maturity Level

3 Roadmap

4 Detailed Findings

5 Architecture

6 Resources

7 Conclusion


Team Foundation Server Deployment Plan Template, Version 3, 10.2013

1 Getting the Most from Your Deployment Plan

Our recommendations for optimizing the deployment of Team Foundation Server (TFS) in your environment are detailed within this document. Please take your time to review the findings and ask any follow-up questions necessary. Depending on the capabilities of your IT team, you may choose to keep the deployment in house or contract with an outside consultant. In either case, this plan should be given to the party responsible for the work and used as an implementation guide.

1.1 Give Microsoft Feedback

This Planning Service has been provided as part of your Microsoft Software Assurance benefits. Please use the link below to tell Microsoft about your experience with the engagement and any improvements you would like to see made to it. The results of the survey will only be viewed by the Planning Services team at Microsoft.

2 Executive Summary

At the request of <Customer Name>, <Partner name> conducted the Team Foundation Server Deployment Planning Service with the following objectives:

  • Document existing Application Lifecycle Management (ALM) topology
  • Create a baseline measurement of the current development capability
  • Surface existing best practices
  • Uncover opportunities for improvement
  • Identify the most impactful areas to the business
  • Document the ideal end-state for a Team Foundation Server (TFS) 201x deployment
  • Generate and present a roadmap to <implement> <upgrade> Team Foundation Server 201x

The Application Lifecycle Management (ALM) model was used as a framework to develop a vision and sustainable approach by which <Customer Name> can prioritize IT investments that fuel business growth. The engagement focused on understanding existing development processes and recommending process improvements. Technology and people/knowledge requirements were then identified to support the process.

The following issues with the current development capability were articulated at the start of the assessment.

  • Quality issues
  • No visibility into project status
  • Projects delivered late

The following business priorities were articulated at the start of the assessment.

  • Improve quality
  • Improve customer satisfaction

2.1 Existing Best Practices

Our interviews surfaced the following best practices in use by teams at <Customer Name> today:

  • The Functional Testing/QA group is clearly an organizational asset: they have taken a regression bed that originally required six (6) to eight (8) weeks of work for ten resources and automated it down to a four (4) day run. This provides an immediate, measurable benefit to development as a whole, mainly because of the weight that is put on this group to ensure quality. This group is implementing practices that will continue to move the organization toward a TDD (Test Driven Development) approach
  • There are many emerging capabilities relative to requirements management. The ongoing progress resulting from the implementation of formal Requirements Management at the User Interface level will serve the organization well as a starting point for future development and implementation of process in this space. Additionally, in sharing the product roadmap with the user community, the stage is being set to stave off the reactive demand for priority fixes through simple expectation setting.
  • The commitment to and movement of the production environment to an Active/Active model - this will immediately address the challenges with deploying new builds and the subsequent impact of bringing the system down during installation
  • The build group is successfully employing (build) automation to minimize the impact of manual error in the core build process.
  • <Customer Name> is a highly customer-focused organization and, despite the challenges resulting from a lack of process discipline, continues to provide a high level of support to its clients (especially those with custom configurations). Understandably, the organizational culture also reflects a strong focus on delivery dates

We recommend that these practices continue to be employed, and that they be continuously evaluated and improved in order to promote process optimization.

2.2 Key Areas for Improvement

2.2.1 Current State – Urgent Issues

During our onsite interviews, we identified the following areas as critical to improving the development capability, given their impact on the business:

  • Development
      • Code Writing/Unit Testing - Test driven development implemented via a unit testing framework, including data generation
      • “End Game” Process and Code Reviews - Code reviews are always a good practice, but for <Customer Name> this is critical to late-cycle changes
      • Developer On-boarding - We uncovered a lack of clear coding guidelines and training materials for new developers
  • Quality
      • Defect/Bug Tracking - One unified and integrated process for defect management from its inception to re-release into production
      • Metrics Capture and Improvement - Development of a metrics strategy and ongoing refinement to improve predictability and quality
      • Project Management of Non-Charter Projects - Introduction of a disciplined triage process with one point of accountability driving resolution
      • Feedback - Continuous, easy feedback from all parties involved in the software process (including product owners, user acceptance testers, end users, etc.)
  • Release Management
      • Code Promotion - Defined branching and merging process to ensure continuous integration
      • Priority Triage - Elimination of randomization in the priority process and implementation of a predictable and measurable approach
      • Build Management - Ongoing refinement of the automated build process and increased automation to minimize risk of human error
  • Operations Management
      • Deployment Environment - Definition and automation of deployments which leverage the features of the .NET framework to reduce the complexity of brute-force deployment of .NET apps
      • Service Level Management - Creation and management of a 24/7 environment to minimize accessibility outages
      • Production Monitoring - Being able to monitor and easily reproduce production defects, bugs, exceptions, etc.

2.2.2 Current State – Additional Issues

During the course of the assessment, we were able to identify additional issues that we believe are having a material impact on Application Lifecycle Management within <Customer Name>. These include:

  • Defect Management Process Automation - The current tool’s lack of integration is creating an unnecessary level of complexity and duplication of work within the processes associated with bug tracking and defect management/resolution. Additionally, the minimal automated traceability and validation associated with fixes for compliance/reporting purposes further complicate any impact analysis or compliance considerations
  • Lack of Predictability and Ownership of Builds - The absence of a governing body to control scope and drive the implementation of builds has created an uncontrollable work queue as well as resourcing challenges. Equally, this prevents a true understanding of capacity for effective sourcing, as well as any defence against scope creep
  • The need for agile methods - The current SCRUM efforts will ultimately promote higher quality and delivery frequency if implemented properly; existing behaviours such as borrowing resources during sprints will not. A level of commitment must be established for this, and all parties involved must understand the process
  • Low Developer Morale - The inward spiral created by the current PTF process is having a demoralizing effect on the developers, who are constantly being pulled to resolve one issue after another. Worse, this process seems to have no end and continues to promote a sense of malaise among the developers

2.3 Maturity Level

During the interview process, <Partner name> established a maturity level for the development process. The Rangers ALM Assessment Tool was used to measure and create the maturity model.

The key elements of the Basic level are shown below. In the Basic level, most of the practices are manual and untraceable. This state has a large amount of waste, poor value flow, and little transparency.

Basic (Current State)
  • The Development team has adopted home-grown ways of performing practices
  • Practices are performed in an ad-hoc, informal manner
  • Practices are undocumented
  • Little to no transparency exists across teams
  • Inconsistency in some of the key roles being performed (QA)

The Standardized level is the desired state for software teams to achieve with TFS and improved practices. At that level, better workflow and transparency start to emerge: the teams begin to see value flow, reduced waste in manual tasks such as email tracking, and clearer definition of done, priorities, and requirements.

Standardized (Desired State)
  • ALM best practices (Agile, Lean, etc.) begin to be adopted
  • Tools used are starting to become connected and integrated
  • Team capacity planning starts to emerge
  • Requirements and workflow are entered into the TFS system
  • Builds and deployment cycles start to become automated
  • Reporting data becomes available

The Advanced and Dynamic State can be achieved once the foundation and solid practices are in place. We have seen that after the key practices and tools are in place, it is a natural progression to move toward a more mature level to improve the quality and cycle time.

In the Advanced and Dynamic states, the team begins to fully utilize the ALM tools and Agile consensus patterns (Transparency, Reduction of Waste, and Flow of Value). Automation and quick feedback loops are the norm. Clear requirements and business direction are piped into a prioritized backlog. Teams and stakeholders understand expectations of delivery and priorities. Trusted forecasting on complex initiatives is completed.

Advanced (Achievable in 6 – 18 mo.)
  • Use of practices and tools across teams
  • Requirements are well defined and delivered to the Backlog
  • Agile practices are used to break down the complex work into deliverable iterations
  • Testing is done during all cycles (Unit, Functional and UAT)
  • Automation of builds and tests
  • Fast feedback loops in all phases
  • Bug tracking and traceability
  • Architectural practices and tools are being used
  • Documentation is formally maintained
  • Fully integrated portfolio management tools & process
  • Ability to track requirements and use impact analysis reports for change requests
  • Using Helpdesk quality metrics on turnaround time, cost of maintenance, and identification of errors
Dynamic (The North Star State)
  • Continuous delivery practices in place
  • Test Driven Development (TDD) and Acceptance Test Driven Development (ATDD) practices are established
  • Optimizing cycle-time to learn from customers
  • Systems are architected with continuous deployment in mind, supporting patterns such as dark launching to decouple deployment from release
  • All new requirements describe how the value of the feature will be measured

In the standardized state, all of the ALM disciplines are covered equally. While this is the desired goal, the reality is that there will be areas that advance more than others.


[Figure: Equal maturity progression across all areas – progression of maturity in 1 – 18 months]

[Figure: A more realistic view of maturity progression]

Growth in just one area is not a healthy progression. An evolution in each area is a much healthier growth rate and direction that provides a solid foundation to improve upon.

The eventual roll-out of process and tools to address the opportunities mentioned in this assessment should follow a tight, metrics-driven approach to improvement. While we recognize that change takes time and requires commitment, it is important to create both a strategic and a tactical plan for addressing these challenges, as well as to answer the questions of where we want to be in three, six, and twelve months and how we will quantify our success.

The implementation approach defined later in this document is built on the four key themes identified earlier (Development, Quality, Operations, and Release Management). We believe that by improving capability in these areas, we can address the current challenges experienced at <Customer Name>.

Key Areas for Improvement

Our interviews revealed multiple areas for improvement. These were rated by impact to the business (High, Medium, or Low) across the maturity levels, as shown in the Impact Map.

The x-axis defines the maturity level of the service area. The categories are:

  • Basic - processes are implemented in an ad-hoc, undocumented and potentially inconsistent manner.
  • Standard - a process has been defined and is generally followed. Tools are used in some cases to assist, but may not be integrated and used throughout the organization.
  • Advanced - usage of tools to drive the process is in wide use and usage guidelines are documented and understood.
  • Dynamic - the organization is bringing new and innovative methodologies to the practice area and may be setting industry standards.

The y-axis defines the relative gain that would be obtained from improving the practice.

MATURITY
IMPACT / Basic / Standard / Advanced / Dynamic
High / Urgent / Improve / Enhance / Maintain
Medium / Improve / Improve / Enhance / Maintain
Low / Enhance / Enhance / Maintain / Maintain

High impact, Basic maturity (Urgent):
  • Code Writing/Unit Testing
  • Code Reviews
  • Quality Metrics
  • Collaborative Development
  • Version Control Repository
  • Release Management
  • Environment Management

High impact, Standard maturity (Improve):
  • Deployment
  • Test Planning
  • Build Management

Medium impact, Basic maturity (Improve):
  • Elicitation
  • Requirements Analysis
  • Traceability
  • Code Analysis
  • Project Monitoring and Control
  • Stakeholder Communications

Medium impact, Standard maturity (Improve):
  • Test Types

Low impact, Basic maturity (Enhance):
  • Analysis & Design
  • Database Modeling

Low impact, Standard maturity (Enhance):
  • Architecture Framework
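
The Impact Map is, in effect, a lookup from impact and maturity to a recommended action. As an illustration only (the names below are ours, not part of any TFS tooling), it can be encoded directly:

```python
# Recommended action per practice area, transcribed from the Impact Map above.
ACTION_MAP = {
    "High":   {"Basic": "Urgent",  "Standard": "Improve", "Advanced": "Enhance",  "Dynamic": "Maintain"},
    "Medium": {"Basic": "Improve", "Standard": "Improve", "Advanced": "Enhance",  "Dynamic": "Maintain"},
    "Low":    {"Basic": "Enhance", "Standard": "Enhance", "Advanced": "Maintain", "Dynamic": "Maintain"},
}

def recommended_action(impact: str, maturity: str) -> str:
    """Look up the recommended action for a practice area."""
    return ACTION_MAP[impact][maturity]

# Example: Code Reviews were rated High impact at Basic maturity.
print(recommended_action("High", "Basic"))  # Urgent
```

Keeping the ratings in one place like this makes it easy to re-derive the action list as practice areas mature between assessments.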

3 Roadmap

Based on our observations and discussions, we recommend that the following iterative roadmap be implemented in order to better align the development capability with the business, and enable development efforts to drive increased value to <Customer Name>.

Please note that the areas for improvement mentioned in the prior section which are marked as urgent may not be addressed immediately. In some cases, the foundation for improving a particular service area will not be in place in the first or second iteration.

Iterations
 / Iteration 1 / Iteration 2 / Iteration 3
Dates / ##/##/#### - ##/##/#### / ##/##/#### - ##/##/#### / ##/##/#### - ##/##/####
Initiative Title / Plan & Prepare / Install TFS 201x / Verify Installation and Onboard Teams
Capabilities / / /

Additional initiatives:
Initiative Title / Developer Onboarding Training
Capabilities /
Initiative Title / (Optional) Capture Process Improvement Metrics
Capabilities /

The date ranges for each iteration are placeholders. Planning and estimation sessions will be necessary to determine the achievable dates.

When embarking on an effort to optimize the development capability, we recommend that:

  • Strong leadership sponsorship is secured
  • Overall goals are clearly communicated to all stakeholders
  • Clear metrics and milestones are established and agreed to
Iteration Details
Iteration 1: From: ##/##/#### To: ##/##/####
Plan & Prepare
Iteration Goals: / By implementing Team Foundation Server for source control and code promotion, we will unify the source code from both .NET and ColdFusion development into one source control solution. By leveraging the branching and merging capabilities of Team Foundation Server, we will simplify the current code promotion process, enabling a consistent promotion process with a clear understanding of which stage code is in and direct ties to the testing environments.
Iteration Activities: /
  • Review the Planning Checklist
  • Review the System Requirements
  • Verify hardware availability
  • Verify Process Template and considerations for customization
  • Review reporting requirements
  • Verify branching strategy
  • Verify build strategy
  • Verify resource availability

Iteration Cross References:
Impacted Capabilities:
Iteration 2: From: ##/##/#### To: ##/##/####
<upgrade> <install> Team Foundation Server 201x
Iteration Goals: / Following the guidance as laid out in the DPS TFS Deployment Assessment, <upgrade> <install> and configure TFS.
Iteration Activities: /
  • Set up Team Foundation Server
  • Develop Operational Guidance to ensure system stability
  • Develop Security Access Plan
  • Develop Branching and Merging Design
  • Migrate Source Code
  • Train Development Teams and Release Management Teams
  • Migrate Cruise Control Build Solution to Team Build

Iteration Cross References:
Impacted Capabilities:
Iteration 3: From: ##/##/#### To: ##/##/####
Verify Installation and Onboard Teams
Iteration Goals: / Leverage automated build and test capabilities to provide a consistent quality measure, including code analysis and automated unit testing. During the previous iteration, the build solution was moved to TFS; this phase makes further investment to enable a much richer testing harness. At the completion of this initiative, code quality will be measurable by simply building the solution.
Iteration Activities: /
  • Verify installation and configuration
  • Install client tools and verify connectivity to TFS
  • (Optional – if applicable) Verify remote connectivity
  • Verify end-to-end work flow by building a solution
  • (Optional) Train users (development, test, PM, etc.) on how to connect to and use TFS 201x.

Iteration Cross References:
Impacted Capabilities:
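
The "verify connectivity to TFS" activity above can be partially scripted. The sketch below assumes the TFS installation defaults of port 8080 and the /tfs virtual directory (both configurable at install time); the server name is a placeholder, not taken from this document.

```python
# Minimal reachability check for a TFS instance. Assumes the default port
# 8080 and /tfs virtual directory; "tfsserver" is a placeholder host name.
from urllib.parse import quote
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def collection_url(server: str, collection: str = "DefaultCollection",
                   port: int = 8080) -> str:
    """Build a default-style TFS team project collection URL."""
    return f"http://{server}:{port}/tfs/{quote(collection)}"

def can_reach(url: str, timeout: float = 5.0) -> bool:
    """True if the endpoint answers at all; an HTTP error status (e.g. a
    401 authentication challenge) still proves the server is reachable."""
    try:
        with urlopen(url, timeout=timeout):
            return True
    except HTTPError:
        return True   # server responded, just not with 200
    except URLError:
        return False  # DNS failure, connection refused, timeout

print(collection_url("tfsserver"))  # http://tfsserver:8080/tfs/DefaultCollection
```

A script like this is no substitute for verifying the end-to-end workflow from the client tools, but it gives a fast first check when onboarding remote teams.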
Developer Onboarding Training
Iteration Goals: / Create or acquire developer onboarding training materials to reduce the time and resources necessary to bring new developers into the team.
Iteration Activities: / Develop sample applications that outline the key "How-To" points of the application architecture.
Refresh coding standards and provide training to all development staff on their use.
Iteration Cross References:
Impacted Capabilities:
(Optional) Capture Process Improvement Metrics
Iteration Goals: / Understanding which components of the process represent the best opportunity for improvement is critical to investing in the right changes. The following are the recommended items to collect:
  • Number of known issues that become priority fixes - You want to confirm that the prioritization process is learning from the past and is not based on gut feel. By collecting this metric, one can measure the effectiveness of the process and the number of categorized fixes that were missed, and can use this information during future prioritization.
  • Bugs by major system component - If a specific area of the code is buggy, it is valuable to analyze that code base, including its complexity, to determine whether rewriting the entire section would improve overall stability.
Iteration Activities: / Use current systems to track these metrics as well as possible.
Iteration Cross References:
Impacted Capabilities:
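
To make the two recommended metrics concrete, the sketch below computes them from a defect-tracking export. The field names and sample records are hypothetical; substitute whatever your current defect-tracking system can actually export.

```python
# Illustrative computation of the two process-improvement metrics.
# Records and field names are hypothetical sample data.
from collections import Counter

bugs = [
    {"id": 101, "component": "Billing",   "was_known_issue": True,  "became_priority_fix": True},
    {"id": 102, "component": "Billing",   "was_known_issue": False, "became_priority_fix": False},
    {"id": 103, "component": "Reporting", "was_known_issue": True,  "became_priority_fix": False},
    {"id": 104, "component": "Billing",   "was_known_issue": True,  "became_priority_fix": True},
]

# Metric 1: known issues that later became priority fixes.
known_to_priority = sum(
    1 for b in bugs if b["was_known_issue"] and b["became_priority_fix"]
)

# Metric 2: bugs by major system component.
by_component = Counter(b["component"] for b in bugs)

print(known_to_priority)           # 2
print(by_component.most_common())  # [('Billing', 3), ('Reporting', 1)]
```

Even a rough export re-run at each iteration boundary is enough to show whether prioritization is improving and which components deserve a complexity review.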

4 Detailed Findings