Monday, December 26, 2022

Improve New Feature Delivery Rate with Value Stream Mapping

I constantly see a desire to deliver enhancements and additional business capabilities to end users at a faster rate. What I don't see is a methodical, data-driven approach to achieving that faster delivery rate. I typically use a tactic called value stream mapping to improve clients' speed to market. The tactic seems obvious to me, but it isn't used as widely as I think it should be.

I'm going to define and illustrate value stream mapping for you in hopes that you see the value and understand the tactic well enough to apply it to your existing production processes and procedures. The concept applies to application features, infrastructure features, DevOps automation capabilities, and just about any type of information technology process I can think of. In fact, it applies to any business process I can think of, including those that aren't IT-related.

Example Application Delivery Value Stream

This is an example from the delivery process of a highly customized commercial off-the-shelf (COTS) application. The vendor delivered customizations frequently, supplying one to two updates per week on average, so updates also needed to be tested and deployed frequently. Each update required significant manual labor to test and deploy. The time and effort involved were costly and needed tuning, so we elected to do a value stream analysis.

Below are the components of the value stream, along with how much manual time was spent testing and deploying each. Note that given the length of the outage required for deployments, significant coordination with the testing team and business users was necessary, which often extended the lag between receiving updates from the vendor and getting those updates into the hands of end users.


We decided to automate the deployment process; the procedure given to us by the vendor was entirely manual. While deployment accounted for only 32 clock-hours of the total, decreasing that time to 4 hours allowed much greater flexibility in scheduling updates in the test environment as well as in production. Test environment updates could now be done off-hours without putting the testing team out of service, and production updates could be deployed off-hours without requiring a weekend.
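To give a sense of what "automating the deployment" can look like in practice, here is a minimal sketch of a scripted deployment driver. The step scripts and paths are hypothetical placeholders, not the vendor's actual procedure; the point is that every formerly manual step becomes a repeatable, logged command that can run off-hours.

```python
# Sketch of a scripted deployment driver. The commands and paths below are
# hypothetical placeholders for a vendor's manual procedure; each manual
# step becomes a repeatable, logged command.

import subprocess
import sys
from datetime import datetime

DEPLOY_STEPS = [
    ["./stop_application.sh"],
    ["./backup_database.sh", "--target", "/backups/pre_update"],
    ["./apply_vendor_update.sh", "--package", "update.zip"],
    ["./run_smoke_tests.sh"],
    ["./start_application.sh"],
]

def run_step(cmd: list[str]) -> None:
    print(f"{datetime.now().isoformat()} running: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Fail fast so a human can intervene before the outage window closes.
        sys.exit(f"Step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    for step in DEPLOY_STEPS:
        run_step(step)
    print("Deployment complete.")
```

Once the steps live in a script like this, scheduling an off-hours update is a matter of running one command rather than coordinating a multi-hour manual session.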


Lessons Learned

Automating testing would be the logical next tuning step. That said, automating tests for a COTS application is easier said than done. This particular application did not lend itself to easy UI testing in an automated fashion. As we didn't have access to product source code, testing service APIs wasn't an option either.

The value stream tuning effort illustrated here works for custom applications just as well as it did for this COTS example. The value stream tactic applies the tuning principle of optimizing the largest targets (those that take the most time) first. This is the same principle we use when tuning CPU or memory consumption in applications. In fact, the principle can be applied to non-IT processes as well, such as budgets.
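To illustrate the "largest targets first" principle, here is a minimal sketch of how value stream stages might be tabulated and ranked by elapsed time. The stage names and hours are made-up placeholders, not the actual figures from this engagement.

```python
# Minimal value-stream tabulation sketch. Stage names and hours are
# illustrative placeholders only.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    clock_hours: float   # elapsed (calendar) time
    labor_hours: float   # hands-on manual effort

stages = [
    Stage("Receive and review vendor update", 4, 2),
    Stage("Deploy update to test environment", 32, 8),
    Stage("Manual regression testing", 40, 24),
    Stage("Schedule and deploy to production", 32, 8),
]

total_clock = sum(s.clock_hours for s in stages)
print(f"Total lead time: {total_clock} clock hours")

# Rank stages largest-first: the biggest contributors are the first
# candidates for automation or other tuning.
for s in sorted(stages, key=lambda s: s.clock_hours, reverse=True):
    share = 100 * s.clock_hours / total_clock
    print(f"{s.name}: {s.clock_hours} h ({share:.0f}% of lead time)")
```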

Value stream analysis should be an ongoing effort repeated periodically. Over time, your deployment process changes. When that happens, the value stream will also change.

As with other types of tuning efforts, it's important to identify a specific target. It's always possible to keep improving; having a target lets you know when the tuning effort is "done" and effort can be directed elsewhere.

Value stream analysis often reveals business procedures and processes that are not optimal. In another example from the field, I've seen DNS entries and firewall rules that organizational procedures force to be created manually, adding significant lead time. It's important to track these activities in the value stream as well. You need accurate information to make effective tuning decisions.

Thanks for taking the time to read my article. I'm always interested in comments and suggestions. 




Sunday, December 18, 2022

The Journey toward Continuous Delivery and Deployment: How to Start

I work with teams who are nowhere close to achieving continuous delivery, let alone continuous deployment. Those teams are falling further and further behind teams at other companies on similar journeys. Often, they don't have automated testing. If they do, the tests are often unit tests rather than integration tests. Often, they don't have application infrastructure code and can't easily create additional environments. Often, they work in long-lived feature branches, with team members working on disparate versions of the code. Consequently, the speed of new feature delivery to end users is abysmal. Hope is not lost. The journey is difficult but possible. Here are some initial steps to take.

Establish a basic integration test suite if you don't already have one. There is not going to be continuous anything without automated integration testing. With automated integration testing, you can have confidence that changes, whether they are bug fixes or feature enhancements, don't accidentally introduce new defects. Legacy code bases often aren't written to be easily testable at a unit level. Concentrate on integration testing the application at its consumption points. For web UIs, that means automated functional testing of the UI, or at least of the REST web service resources. For APIs consumed by other applications, it means integration testing the service endpoints.
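As a concrete starting point, a first integration test against a REST consumption point might look something like the sketch below, using pytest and the requests library. The base URL and endpoints are hypothetical placeholders.

```python
# Minimal integration-test sketch using pytest and requests.
# The base URL and endpoint paths are hypothetical placeholders.

import requests

BASE_URL = "https://test.example.com/api"

def test_health_endpoint_returns_ok():
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200

def test_order_lookup_returns_expected_fields():
    # Exercise the service the way a real consumer would: through its
    # public endpoint, not its internal classes.
    response = requests.get(f"{BASE_URL}/orders/12345", timeout=10)
    assert response.status_code == 200
    body = response.json()
    assert "orderId" in body
    assert "status" in body
```

Even a handful of tests like these, run against a deployed environment, gives far more confidence than none at all.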

Unit testing is always important, and I never discourage it. Integration testing is more important as it tests end-user functionality, not just small sections of code. If you have no automated tests, start with integration testing first. You can implement unit tests for new features along the way.

Ideally, the environment for integration testing should be established by infrastructure code before the tests and eliminated after. That said, I'm concentrating on the initial steps in this post and don't want to deviate.
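For reference, the shape of that idea is a session-scoped test fixture that provisions the environment before the suite runs and tears it down afterward. The provisioning calls in this sketch are hypothetical stand-ins for whatever infrastructure tooling you use.

```python
# Sketch of a session-scoped pytest fixture that stands up the test
# environment before the integration tests run and removes it afterward.
# provision_environment/destroy_environment are hypothetical stand-ins
# for your infrastructure-as-code tooling.

import pytest
import requests

def provision_environment() -> str:
    # e.g., invoke your infrastructure tooling and return the new
    # environment's base URL (placeholder value here)
    return "https://ephemeral-test.example.com/api"

def destroy_environment() -> None:
    # e.g., tear the ephemeral environment back down
    pass

@pytest.fixture(scope="session")
def test_environment():
    base_url = provision_environment()
    yield base_url          # tests run while the environment exists
    destroy_environment()   # teardown after the whole suite

def test_health_endpoint(test_environment):
    response = requests.get(f"{test_environment}/health", timeout=10)
    assert response.status_code == 200
```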

Management support is needed to fund automated testing. That funding covers both labor and tooling, and the initial setup has an upfront cost. Remember that there is a broad range between 0% and 100% coverage. The higher the percentage, the better; that said, higher percentages have diminishing returns. Don't let perfection be the enemy of progress.

Establish a continuous integration (CI) pipeline for the main/master source code branch if you don't already have one. Continuous integration is a firm requirement for continuous delivery. The CI pipeline should run automatically on check-in of code changes, executing all available unit and integration tests so that defects are identified immediately after changes are checked in. The objective is to identify defects as early in the development process as possible.
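Tooling varies (Jenkins, GitHub Actions, GitLab CI, and so on), but the job the pipeline runs on each check-in can be as simple as the sketch below: run the unit suite, then the integration suite, and fail the build on the first non-zero exit code. The commands are illustrative; most CI tools express the same idea in their own pipeline syntax.

```python
# Sketch of a CI job entry point: run unit tests, then integration tests,
# failing the build on the first error. The test commands are illustrative.

import subprocess
import sys

TEST_SUITES = [
    ["pytest", "tests/unit", "-q"],
    ["pytest", "tests/integration", "-q"],
]

def main() -> int:
    for suite in TEST_SUITES:
        result = subprocess.run(suite)
        if result.returncode != 0:
            print(f"Build failed in: {' '.join(suite)}")
            return result.returncode
    print("Build passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```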

If continuous integration reveals an error, fixing the error is the highest priority. Many pundits would say that this type of breakage should be treated as an "all hands on deck" emergency that enlists every member of the team. As a practical matter, the developer who checked in the change that broke CI is usually the best person to fix it. They know what they did and are closest to the problem. The developer's tech lead can follow up at some point to identify any additional resources needed to fix the issue.

Adopt trunk-based development and eliminate long-lived feature branches. Trunk-based code management is a requirement for continuous anything. Trunk-based development ensures that developers are working from the newest, most current code base, which minimizes the chances of "merge hell" and keeps all team members up to date.

Depending on your CI/CD tooling, it might be easier to use a short-lived feature branch and initiate CI on merge. These feature branches must be short-lived or you're not really adopting trunk-based development. As long as the branch gets deleted after the merge, this isn't a bad tactic.

Only start changes that can be completed in four hours or less; two hours is better. Break the change into smaller pieces if it's longer than that. This reduces merge issues later and reduces the chance that another team member introduces a conflicting change. It also provides an incentive to keep changes small and make only one change at a time, in keeping with the objective of delivering new features to users faster. All good.

Eliminate the practice of reviewing pull requests. Allow automated testing to catch defects. If a junior developer checks in code that isn't optimal, a more senior member of the team can refactor that change later and, hopefully, take the opportunity to educate the junior team member on the issue. Either way, if it didn't break CI, not much damage was done.

Encouraging automated test coverage will change developer behavior. If developers "know" they will be expected to produce automated tests for changes they make, they will code to make testing easier. It's enlightened self-interest. That also makes the code base cleaner and increases its quality.

Continuous delivery is an ongoing, never-ending process. The tactics I list here are just a start.