Some thoughts on reducing the likelihood of failure with technical programs.
Use an iterative-incremental lifecycle.
Using a serial lifecycle (aka waterfall or SDLC) increases the likelihood of failure.
But what if the problem is trivial? Once you’re in “program” territory, it’s highly unlikely that the effort is trivial enough to risk waterfall.
Note: Using a “hybrid waterfall” is still unnecessarily risking failure.
Using an iterative-incremental lifecycle increases the likelihood of success.
The best way to validate that you are building the right thing is through concrete feedback. Even the best analysis up front is unlikely to be more effective and is almost always slower. The best way to avoid integration hell is to integrate and test early. Even the best specifications up front are unlikely to anticipate what actually happens when something is integrated for real.
But what if the problem is trivial? 1. Once you’re in “program” territory, it’s highly unlikely that the effort will be trivial; 2. Even if it does end up being trivial, at worst iterative-incremental adds a little more overhead than necessary. “A little more overhead than necessary” doesn’t cause program failure; building the wrong thing or hitting integration hell does.
See also:
- Don’t Know What I Want, But I Know How to Get It — Help your organization focus on successful outcomes (jpattonassociates.com)
- What is Agile? by Henrik Kniberg
Using set-based concurrent engineering increases the likelihood of success even further.
It is still possible with an iterative-incremental approach to choose a dead-end, or at least sub-optimal, option to iterate on. This can cause an expensive correction later on. Instead, you can explore multiple options in parallel and defer choosing until the “last responsible moment”. The options you eventually discard are not wasted if you capture the knowledge learned to inform future technical programs.
This is known as set-based concurrent engineering.
I think of this as buying insurance to protect your schedule. There should always be higher-likelihood options you can almost guarantee will work (even if not with optimal characteristics) as well as riskier options with potentially more optimal characteristics but a lower likelihood of success.
Leverage the “technical” part of technical program management.
Technical acumen matters in technical program management mainly because technology can constrain both product strategy and ways of working.
Technical choices can constrain product and program strategy. Some effects that can be caused by a poorly designed technical architecture:
- Dependencies between product capabilities that should be decoupled;
- Stable product capabilities that need to be highly reliable depending on volatile technical services that support experimental product capabilities.
Key technical choices can significantly increase or reduce the coordination effort required to deliver a technical program.
Old-school engineering heuristics that every technical program manager should know.
“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” (aka Gall’s Law)
John Gall, Systemantics: How Systems Work and Especially How They Fail (1977)
Related to Gall’s Law is YAGNI:
“Always implement things when you actually need them, never when you just foresee that you need them.”
You Aren’t Gonna Need It (aka YAGNI), see also Yagni (martinfowler.com)
I don’t know where this originally comes from, but: “limit the number of miracles required”. Most of your technical choices should be boring (aka reliable). Limit how many new, interesting (aka unproven) technologies you include if you want to actually deliver.
Modularity enables parallel development and back-up options (aka Set-Based Concurrent Engineering).
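As a minimal sketch (all names here are illustrative, not from any specific program), a stable module boundary lets one team build a boring, reliable option while another explores a riskier one in parallel, with the rest of the system depending only on the shared interface so the choice can be deferred to the last responsible moment:

```python
from abc import ABC, abstractmethod


class SearchIndex(ABC):
    """Stable interface that the rest of the program depends on."""

    @abstractmethod
    def lookup(self, query: str) -> list[str]:
        ...


class BoringIndex(SearchIndex):
    """Higher-likelihood option: a simple substring scan, almost
    guaranteed to work, even if its characteristics aren't optimal."""

    def __init__(self) -> None:
        self._docs: dict[str, str] = {}

    def add(self, key: str, text: str) -> None:
        self._docs[key] = text

    def lookup(self, query: str) -> list[str]:
        return [key for key, text in self._docs.items() if query in text]


class ExperimentalIndex(SearchIndex):
    """Riskier option with potentially better characteristics,
    developed in parallel behind the same interface."""

    def lookup(self, query: str) -> list[str]:
        raise NotImplementedError("still being explored")


def make_index(use_experimental: bool) -> SearchIndex:
    # The choice between options is a one-line swap here; nothing else
    # in the program needs to change because everything depends only
    # on the SearchIndex interface.
    return ExperimentalIndex() if use_experimental else BoringIndex()
```

If the experimental option fails, the boring one ships on schedule; if it succeeds, swapping it in is cheap because the boundary held.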
Flex scope, not time… and definitely not quality.
4 variables is more useful than the “iron triangle”… and focus on flexing scope.
“Scope is the most valuable and predictable variable to control. Most deliveries specify much more in terms of features and external quality than are actually needed to deliver the needed outcomes. It is easier to coordinate knowing that someone will deliver at least a minimal version by a particular date, with the possibility of more, than to not know if anything will be delivered.”
Four Variables: Cost, Time, Quality, Scope | by Jason Yip | Medium
Don’t treat program management like a side job.
Large technical programs sometimes require enough coordination effort to justify dedicated positions.
Once you’re in “program” territory, the level of coordination required to effectively manage delivery tends to be enough to justify a dedicated position with specialised expertise.
I’ve seen this being done by people with titles like “Delivery Lead” (consulting firms), “Program Manager”, or “Technical Program Manager”. Sometimes, it’s an assignment for a “Product Manager”, “Engineering Manager”, or “Staff Engineer”.
The key thing is that during delivery of the technical program, the person needs to treat it like a full-time job, not a side job.