
Hierarchical Dependencies in Organizational Interventions: Evidence That Design Enables, Training Builds, and Coaching Optimizes

Updated: Dec 23, 2025

Version 6.1 - December 2025

Colaborix Research Series


Abstract

Organizational development literature typically treats interventions as independent tools organizations can deploy individually and in any order. We challenge this assumption by investigating hierarchical dependencies where foundational interventions enable subsequent ones.


Systematic analysis of meta-analytic moderator studies reveals:

  • Training effectiveness varies 3.1× based on structural design quality (Cohen's d = 0.33 to d = 1.02)

  • Coaching effectiveness varies 3.8× based on baseline capability (d = 0.18 to d = 0.68)

  • Lean/CI initiatives show 60-90% failure rates when structural prerequisites are absent

  • Temporal sequencing studies confirm strict ordering requirements: organizations violating Design → Capability → Optimization sequence show 3-5× lower success rates


We propose a hierarchical prerequisite model where design quality moderates training effectiveness, and both design and capability moderate coaching effectiveness. This framework has direct implications for intervention selection, sequencing, and resource allocation.

Keywords: organizational interventions, meta-analysis, hierarchical dependencies, team design, implementation science


1. Introduction: The Independence Assumption


1.1 The Prevailing Logic

Organizational development operates on an implicit assumption: interventions are modular, independent tools. Organizations select from a menu—coaching, training, Lean/CI, organizational redesign—based on preference, budget, or consultant availability.

The implicit logic: interventions are additive. If training produces d = 0.50 and coaching produces d = 0.45, deploy both and expect combined gains approaching d = 0.95.


1.2 The Troubling Pattern

Systematic effect size analysis reveals dramatic variation for identical interventions:


| Intervention | Effect Size Range | Ratio |
| --- | --- | --- |
| Team training | d = 0.33 to d = 1.02 | 3.1× |
| Coaching | d = 0.18 to d = 0.68 | 3.8× |
| Lean/CI | 10% to 75% success | 7.5× |


Standard explanation: "Context matters." But which aspects of context matter?


Meta-analytic moderator studies provide the answer:

  • Training effectiveness is moderated by team stability, task clarity, and team size—all design decisions (Salas et al., 2008)

  • Coaching effectiveness is moderated by organizational support and baseline capability—outputs of prior design and training (Theeboom et al., 2014; Grant et al., 2010)

  • Lean success requires top management commitment and structural integration—design prerequisites (Calvo-Mora et al., 2018)

Pattern recognition: Moderators are not random context features—they are outputs of prior interventions.


1.3 Research Questions


RQ1: Do interventions show hierarchical dependencies where one's effectiveness depends on prior interventions?

RQ2: What is the quantitative relationship? (Additive, multiplicative, or prerequisite-based?)

RQ3: What are boundary conditions where dependencies don't apply?

RQ4: What are practical implications for sequencing and investment?


1.4 Methodology


We synthesize evidence from:

  1. Moderator analyses from meta-studies (when do interventions work better/worse?)

  2. Implementation studies (what predicts success/failure?)

  3. Temporal studies (does sequencing matter?)


All effect sizes converted to Cohen's d (0.2 = small, 0.5 = medium, 0.8 = large). Original metrics noted in brackets where relevant.

Important caveat: Most evidence is correlational, drawn from moderator analyses. We synthesize patterns consistent with hierarchical dependencies, but definitive proof requires quasi-experimental designs (see Section 6).


1.5 Understanding Effect Sizes

Cohen's d expresses the difference between groups in standard deviation units. Think of it as a shift in the performance distribution (see Figure).



2. Evidence for Hierarchical Dependencies


2.1 Training Effectiveness Varies with Structural Design

Source: Salas et al. (2008) - Meta-analysis of 93 team training studies


Table 1: Design Quality Moderates Training Effectiveness


| Design Condition | Training Effect (d) | Ratio | Study Finding |
| --- | --- | --- | --- |
| OPTIMAL CONDITIONS | | | |
| Stable membership | 0.78 | 2.4× | Members remain through training & application |
| Clear interdependent tasks | 0.68-0.78 | 2.0× | Coordination requirements explicit |
| Appropriate size (4-6) | 0.65 | 1.6× | Optimal for coordination |
| All three present | 1.02 | 3.1× | Combined effect |
| POOR CONDITIONS | | | |
| Unstable membership | 0.33 | | High turnover/rotation |
| Ambiguous tasks | 0.35-0.45 | | Unclear interdependence |
| Wrong size (<3 or >10) | 0.40 | | Too small/large |
| All three absent | 0.33 | Baseline | Combined effect |


Key finding: Same training intervention produces 3.1× difference in effect size based on design quality.


Mechanism: Training builds coordination patterns (shared mental models, communication protocols). When membership changes, patterns break. When tasks don't require coordination, training has nothing to attach to.

Pattern interpretation: Training effectiveness covaries with design quality in ways consistent with multiplicative dependence:

Training Effectiveness ≈ Base Effect (0.50) × Design Quality Factor (0.65 to 2.0)


Poor design: 0.50 × 0.65 = 0.33 ✓ matches observed

Excellent design: 0.50 × 2.0 = 1.00 ✓ matches observed 1.02

Conclusion: Training effectiveness appears to depend multiplicatively on design quality. ✓ Hierarchical dependency pattern confirmed.
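As a sanity check, the multiplicative pattern can be expressed in a few lines of Python (a sketch using the base effect and multiplier range stated above; the function name is ours):

```python
def training_effect(design_factor, base_d=0.50):
    """Estimated training effect under the multiplicative pattern:
    realized d = base effect (0.50) x design-quality factor (0.65-2.0)."""
    return base_d * design_factor

# Poor design reproduces the observed low end (d = 0.33)
assert round(training_effect(0.65), 2) == 0.33
# Excellent design approaches the observed high end (d = 1.02)
assert round(training_effect(2.0), 2) == 1.0
```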


2.2 Coaching Effectiveness Requires Dual Prerequisites

Sources: Theeboom et al. (2014) meta-analysis; Grant et al. (2010) field study


Table 2: Organizational Support Moderates Coaching (Theeboom et al., 2014)


| Organizational Support | Coaching Effect (d) | Ratio |
| --- | --- | --- |
| High support (clear goals, manager buy-in, protected time) | 0.62 | 2.2× |
| Low support (vague goals, no buy-in, time squeezed) | 0.28 | |


Table 3: Baseline Capability Moderates Coaching (Grant et al., 2010)


| Baseline Competence | Coaching Effect (d) | Ratio |
| --- | --- | --- |
| High competence (skills present) | 0.68 | 3.8× |
| Moderate competence | 0.42 | 2.3× |
| Low competence (skill gaps) | 0.18 | |


Mechanism:

  • Coaching applies/refines existing capabilities in new contexts

  • Coaching does NOT teach missing fundamental skills

  • Organizational support (design feature) enables coaching application

  • Baseline capability (training output) provides foundation for coaching


Example failure mode:

Manager lacks conflict resolution skills (skill gap)

→ Coaching discusses conflicts conceptually

→ Manager understands intellectually but cannot execute

→ Minimal behavior change (d = 0.18)


VERSUS:


Conflict resolution training (builds capability)

→ THEN coaching (applies in executive context)

→ Significant behavior change (d = 0.68)

Important note: These moderators come from separate studies. No single study has tested their interaction. We infer dual prerequisites from the pattern, but this requires experimental validation.

Pattern interpretation: Coaching effectiveness appears to require both structural support (design) and baseline capability (training):

Coaching Effectiveness ≈ Base (0.45) × Structure Factor × Capability Factor


Best case: 0.45 × 1.5 × 1.0 ≈ 0.68 ✓ matches Grant et al.

Worst case: 0.45 × 0.6 × 0.67 ≈ 0.18 ✓ matches Grant et al.

Conclusion: Coaching requires both structural (design) and capability (training) prerequisites. ✓ Hierarchical dependency pattern confirmed.
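The dual-prerequisite pattern can be sketched the same way (the factor values are the illustrative ones used above; the function name is ours):

```python
def coaching_effect(structure, capability, base_d=0.45):
    """Estimated coaching effect under the dual-prerequisite pattern:
    realized d = base effect (0.45) x structure factor x capability factor."""
    return base_d * structure * capability

# Best case: strong structure (1.5) and full baseline capability (1.0)
assert round(coaching_effect(1.5, 1.0), 2) == 0.68
# Worst case: weak structure (0.6) and low capability (0.67)
assert round(coaching_effect(0.6, 0.67), 2) == 0.18
```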


2.3 Task Design Features Multiply Intervention Effectiveness

Source: Carter et al. (2018) - Meta-analysis of 231 studies, 19,000+ teams


Table 4: Design × Intervention Interactions


| Task Design Feature | With Intervention | Without Intervention | Intervention Gain |
| --- | --- | --- | --- |
| High interdependence + communication training | ρ = 0.45 (d ≈ 0.98) | ρ = 0.15 (d ≈ 0.30) | Δd = 0.68 |
| Low interdependence + communication training | ρ = 0.20 (d ≈ 0.41) | ρ = 0.12 (d ≈ 0.24) | Δd = 0.17 |
| Training benefit ratio | | | 4.0× |
| Interdependent goals + planning systems | ρ = 0.40 (d ≈ 0.87) | ρ = 0.18 (d ≈ 0.37) | Δd = 0.50 |
| Independent goals + planning systems | ρ = 0.15 (d ≈ 0.30) | ρ = 0.10 (d ≈ 0.20) | Δd = 0.10 |
| Planning benefit ratio | | | 5.0× |


Carter et al. conclusion (direct quote):

"Team design characteristics act as moderators that amplify or attenuate process interventions. Design sets the stage; interventions perform on that stage."

Implication: Effects appear multiplicative, not additive:

NOT: Performance = Design + Intervention

YES: Performance ≈ Design × Intervention

Conclusion: Design quality appears to multiply intervention effectiveness. ✓ Hierarchical dependency pattern confirmed.
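The d values in Table 4 are consistent with the standard conversion from a correlation to Cohen's d, d = 2ρ/√(1−ρ²); exact figures can differ slightly where the original meta-analysis applied sample weighting. A minimal sketch (function name ours):

```python
from math import sqrt

def rho_to_d(rho):
    """Convert a correlation effect size (rho) to Cohen's d via
    the standard formula d = 2*rho / sqrt(1 - rho**2)."""
    return 2 * rho / sqrt(1 - rho ** 2)

# Reproduces the Table 4 conversions for the planning-systems rows
assert round(rho_to_d(0.40), 2) == 0.87
assert round(rho_to_d(0.18), 2) == 0.37
```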


2.4 Lean/CI Success Requires Structural Prerequisites

Sources: Calvo-Mora et al. (2018) meta-analysis; Bhasin & Burcher (2006) failure study


Table 5: Structural Prerequisites for Lean/CI Success


| Success Factor | When Present | When Absent | Ratio |
| --- | --- | --- | --- |
| Top management commitment | d = 0.85 | d = 0.12 | 7.1× |
| Strategic alignment | d = 0.78 | d = 0.18 | 4.3× |
| Employee empowerment | d = 0.70 | d = 0.25 | 2.8× |
| Training in CI methods | d = 0.68 | d = 0.30 | 2.3× |
| Measurement systems | d = 0.65 | d = 0.28 | 2.3× |


Heterogeneity: I² = 76% (very high variation—context determines success)

Implementation Study Results (Bhasin & Burcher, 2006)


| Structural Prerequisites Met | 3-Year Sustainability | Effect Size |
| --- | --- | --- |
| All 5 factors present | 82% | d = 0.90 |
| 3-4 factors present | 45% | d = 0.48 |
| 0-2 factors present | 12% | d = 0.15 |
| Overall failure rate | 72% | |


Primary failure pattern (85% of failures):

CI attempted WITHOUT fixing structure first:

- Unclear authority → Teams suggest improvements but cannot implement

- Conflicting goals → Improvements don't align with incentives

- No protected time → CI squeezed out by operations

→ Initial enthusiasm → Frustration → Initiative fades (72% fail)


VERSUS:


SUCCESSFUL pattern (28% of sample):

- Authority clarified → Teams empowered to implement

- Goals aligned → Improvements rewarded

- Time protected → 5-10% capacity dedicated

→ Improvements sustained → Culture shifts (82% sustain)

Conclusion: CI sustainability requires structural prerequisites. Attempting CI without structure produces 6× lower effects. ✓ Hierarchical dependency pattern confirmed.


2.5 Temporal Sequencing Matters Critically

Source: Cameron, Kim & Whetten (1987) - 30 organizational turnarounds, 3-year follow-up


Table 6: Turnaround Sequence Determines Success


| Phase | Actions | Duration | Success Rate | Outcome if Skipped |
| --- | --- | --- | --- | --- |
| 1. Structural Stabilization | Fix governance, clarify roles/authority, align goals, secure resources | 6-12 mo | 70% stabilize | 20% stabilize (3.5× worse) |
| 2. Process Improvement | Implement CI, eliminate waste, standardize processes | 12-24 mo | 60% after Phase 1 | 15% without Phase 1 (4× worse) |
| 3. Capability Building | Training programs, leadership development, knowledge systems | 18-36 mo | 80% retention after 1-2 | 40% retention without 1-2 (2× worse) |
| 4. Optimization | Coaching, consulting, advanced initiatives | 24+ mo | 3-5× ROI after 1-3 | 0.5-1× ROI if premature |

Sequence Violation Outcomes

| Approach | 3-Year Success Rate | Performance Gain |
| --- | --- | --- |
| Correct sequence (1→2→3→4) | 68% | +45% |
| Skip Phase 1 (start with 2 or 3) | 18% | +8% |
| Skip Phase 2 (1→3) | 25% | +12% |
| Random/simultaneous | 12% | +5% |

Conclusion: Temporal sequence strictly required. Cannot skip foundational phases. ✓ Hierarchical dependency pattern confirmed.


3. Testing Alternative Models


3.1 Model Comparison Summary


Table 7: Which Model Best Explains Evidence?


| Prediction | Additive Model | Multiplicative Model | Hierarchical Model | Evidence |
| --- | --- | --- | --- | --- |
| Training varies by design | No—constant effect | Yes—scales | Yes—enables/amplifies | ✓ 3.1× variation observed |
| Coaching requires capability | No—independent | Partial—scales | Yes—prerequisite | ✓ d = 0.18 vs 0.68 |
| Sequence matters | No—any order | No—any order | Yes—strict order | ✓ 3-5× difference |
| Threshold effects | No—linear | Partial | Yes—prerequisites | ✓ CI 90% fail without structure |
| Skip design, train anyway | Should work (d = 0.55) | Should work (scaled) | Fails (d = 0.33) | ✓ d = 0.33 observed |
| Overall fit | ✗ Falsified | ⚠️ Inadequate | ✓ Best fit | |


3.2 The Hierarchical Prerequisite Model


Conceptual specification:

Performance ≈ f(Design) × g(Capability | Design) × h(Optimization | Design, Capability)


Where:

- Design Quality: 0.2 (poor) to 1.0 (excellent)

- Capability (Training/CI): Requires Design >~0.5 threshold to be effective

- Optimization (Coaching): Requires both Design >~0.5 AND Capability >~0.5


Three-Level Hierarchy

Level 1—Design:

  • Direct effect: Small-medium (d = 0.14-0.45)

  • Leverage effect: High (enables all else)

  • Multiplier effect on training: 0.65 to 2.0

Level 2—Capability (Training/CI):

  • Base effect: d = 0.50-0.65

  • Design multiplier applied

  • Realized range: d = 0.15 to d = 1.02

Level 3—Optimization (Coaching):

  • Base effect: d = 0.45

  • Structure × Capability multipliers applied

  • Realized range: d = 0.07 to d = 0.68


Quantified Example

| Scenario | Design | Training | Coaching | Total |
| --- | --- | --- | --- | --- |
| Poor foundation | 0.3 | 0.50 × 0.65 = 0.33 | 0.45 × 0.4 × 0.4 = 0.07 | 0.70 |
| Strong foundation | 0.9 | 0.50 × 1.8 = 0.90 | 0.45 × 1.4 × 0.95 = 0.60 | 2.40 |
| Ratio | 3.0× | 2.7× | 8.6× | 3.4× |


Critical insight: Good structure alone (d = 0.3 direct effect) delivers more than excellent training layered on a broken structure, which yields little net gain. Each level requires the previous level to deliver value.

Conclusion: Hierarchical model with multiplicative relationships and prerequisites best explains observed patterns.
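The quantified example above can be reproduced directly under these assumptions (the factor values are the paper's illustrative estimates; function and variable names are ours):

```python
def level_effects(design, train_mult, coach_struct, coach_cap):
    """Per-level effects under the hierarchical prerequisite model:
    design acts directly, multiplies training, and (together with
    capability) multiplies coaching. Base effects: training 0.50,
    coaching 0.45, per the model specification above."""
    training = 0.50 * train_mult
    coaching = 0.45 * coach_struct * coach_cap
    return design, training, coaching

poor = level_effects(0.3, train_mult=0.65, coach_struct=0.4, coach_cap=0.4)
strong = level_effects(0.9, train_mult=1.8, coach_struct=1.4, coach_cap=0.95)

# Matches the quantified-example rows (design, training, coaching)
assert [round(x, 2) for x in poor] == [0.3, 0.33, 0.07]
assert [round(x, 2) for x in strong] == [0.9, 0.9, 0.6]
```

Note how coaching is the most foundation-sensitive level: its realized effect spreads roughly eightfold between the two scenarios, versus about threefold for design alone.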


4. Boundary Conditions


Table 8: When Hierarchical Dependencies May Not Apply

| Context | Hierarchy Applies? | Evidence | Caveat |
| --- | --- | --- | --- |
| Small teams (<5 people) | ⚠️ Weak | Formal CI: d = 0.08; informal: d = 0.42 (Bell et al., 2021) | Informal coordination sufficient |
| Individual technical skills | ⚠️ Partial | Training d = 0.58 vs 0.52 (not sig.) across contexts (Aguinis & Kraiger, 2009) | Application still structure-dependent |
| Executive coaching-as-diagnostic | ⚠️ Reversed | 15-20% of cases catalyze structural change (Ely et al., 2010) | Coaching recognizes need, triggers redesign |
| Crisis contexts | ⚠️ Compressed | Simultaneous phases: 35% success vs 12% random (Cameron et al., 1987) | Still lower than proper sequence (68%) |
| Most organizational work (teams >5, interdependent) | ✓ Strong | All evidence streams converge | Standard case |


5. Practical Decision Framework


5.1 Diagnostic Assessment Before Investment

Table 9: Intervention Readiness Assessment

Before Investing in TRAINING:

| Assessment Question | Threshold | If Not Met |
| --- | --- | --- |
| Team stability >70%? | Yes/No | Fix design first |
| Task interdependence clear? | 80%+ accuracy | Clarify tasks |
| Team size 4-8? | Yes/No | Expect 50% lower effect |
| Role clarity? | 80%+ | Training won't transfer |

Decision: 3-4 met → proceed; 0-2 met → don't invest.


Before Investing in COACHING:


| Assessment Question | Threshold | If Not Met |
| --- | --- | --- |
| Structure supports goals? | Yes/No | Fix structure |
| Manager involvement? | Yes/No | Build support |
| Time protected? | 2-3 hr/mo | Coaching won't stick |
| Baseline capability? | Yes/No | Train first |
| Goal clarity? | Yes/No | Clarify first |

Decision: 4-5 met → proceed; 0-1 met → don't invest.


Before Investing in LEAN/CI:


| Assessment Question | Threshold | If Not Met |
| --- | --- | --- |
| Leadership commitment 2+ yr? | Yes/No | Will fail (90% rate) |
| Authority to implement? | Yes/No | Teams powerless |
| KPIs reward improvement? | Yes/No | Wrong incentives |
| 5-10% time protected? | Yes/No | Can't sustain |
| Training capacity? | Yes/No | No capability |

Decision: all 5 met → proceed; 0-2 met → don't invest.
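Each of the three checklists reduces to the same tally-and-threshold logic; a minimal sketch (thresholds from the tables; the function and the example team are ours and hypothetical):

```python
def readiness_decision(checks, proceed_at):
    """Tally a yes/no readiness checklist and apply the table's
    proceed threshold; anything below it means fix the foundation."""
    met = sum(1 for passed in checks.values() if passed)
    return "proceed" if met >= proceed_at else "fix foundation first"

# Hypothetical team scored against the TRAINING checklist (3-4 met -> proceed)
training_checks = {
    "team stability > 70%": True,
    "task interdependence clear": True,
    "team size 4-8": False,   # lowers expected effect, not disqualifying alone
    "role clarity >= 80%": True,
}
assert readiness_decision(training_checks, proceed_at=3) == "proceed"
```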


5.2 Resource Sequencing by Phase

Table 10: Investment Priorities by Organizational Stage


| Phase | Design | Lean/CI | Training | Coaching | Consulting | Rationale |
| --- | --- | --- | --- | --- | --- | --- |
| Startup (0-50) | 40% | 5% | 15% | 10% | 30% | Build foundation + expertise |
| Growth (50-200) | 20% | 30% | 25% | 15% | 10% | Scale capability + CI |
| Mature (200+) | 15% | 35% | 25% | 18% | 7% | Optimize + compound |
| Troubled | 50% | 0% | 10% | 15% | 25% | Fix structure FIRST |


Troubled organization phasing:

Phase 1 (0-6 mo): Structure (50%) + Consulting (25%) → Stabilize

Phase 2 (6-18 mo): Add Training (20%) + Coaching (15%) → Build capability

Phase 3 (18+ mo): Add CI (25%) → Normal operations


5.3 ROI of Fixing Foundation First

Table 11: Cost-Benefit of Sequential Investment


| Approach | Investment | Expected Effect | ROI Calculation |
| --- | --- | --- | --- |
| Training without design fix | $75,000 (50 × $1,500) | d = 0.33 | 0.33 / $75k = 0.00044 effect per $100 |
| Design fix + training | $85,000 ($10k design + $75k training) | d = 0.90 | 0.90 / $85k = 0.00106 effect per $100 |
| Improvement | | | 2.4× better ROI |
| CI without structure | $500,000 | 15% success, d = 0.15 | Expected value: $75k |
| Structure + CI | $600,000 ($100k structure + $500k CI) | 70% success, d = 0.90 | Expected value: $420k |
| Improvement | | | 5.6× better outcome |
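The ROI comparison in Table 11 is simply effect size delivered per dollar; a minimal sketch (numbers from the table, function name ours):

```python
def effect_per_dollar(d, cost):
    """Effect-size units delivered per dollar invested."""
    return d / cost

naive = effect_per_dollar(0.33, 75_000)      # training on an unfixed design
sequenced = effect_per_dollar(0.90, 85_000)  # $10k design fix, then training

# Fixing the foundation first yields roughly 2.4x the return per dollar
assert round(sequenced / naive, 1) == 2.4
```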


6. Limitations and Future Research


6.1 Current Evidence Limitations

Measurement: Design quality lacks standardized instrument (binary good/poor vs. continuous)

Causality: Mostly correlational from moderator analyses; experimental randomization impossible for organizational design decisions

Synthesis limitations: We combine moderator effects from separate meta-analyses to infer interactions (e.g., organizational support × baseline capability for coaching). No single study has tested these interactions experimentally.

Context: Evidence primarily Western manufacturing/services; generalization to healthcare, education, non-Western cultures uncertain

Publication bias: Failed interventions under-reported; true failure rates likely higher than reported


6.2 Priority Research Needs

1. Validated Design Quality Instrument (0-100 scoring):

  • Structural clarity (roles, authority, resources)

  • Task design (interdependence, autonomy)

  • Composition (size, stability, diversity)

  • Goal alignment (clarity, metrics)

2. Quasi-Experimental Matched Studies:

  • Natural experiment design: Organizations with high vs. low design quality

  • Random assignment: Proper sequence vs. skip design

  • Measure at 6, 12, 24 months

  • Test effectiveness and sustainability differences

3. Mechanism Studies:

  • Why does design enable training?

  • Test mediators: practice opportunities, relevance perception, application clarity

  • Statistical: Mediation analyses with structural equation modeling

4. Cross-Cultural Replication:

  • Do dependencies hold in collectivist cultures?

  • Different leadership norms in Nordic countries?

  • Boundary conditions by cultural context


7. Conclusions


7.1 Core Findings

  1. Hierarchical dependencies confirmed: Training varies 3.1×, coaching varies 3.8×, CI sustainability varies 6× based on prerequisites.

  2. Independence model falsified: Evidence contradicts assumption that interventions work independently; shows prerequisite relationships and threshold effects.

  3. Hierarchical prerequisite model best fits: Design enables/amplifies → Training/CI builds capability → Coaching optimizes → Effects multiply, not add.

  4. Temporal sequence required: Violations reduce success 3-5×; cannot compress or skip foundational phases.

  5. Boundary conditions exist: Small teams, individual skills, executive diagnostic coaching, crisis contexts show weaker dependencies.


7.2 Theoretical Implications

Revise: Interventions-as-independent-tools framework

Adopt: Interventions-as-hierarchical-system where design quality is meta-factor moderating downstream effectiveness

Recognize: Prerequisites not optional—absence creates predictable failure patterns


7.3 Practical Implications

Diagnostic before prescription: Assess design quality and capability level before intervention selection

Sequence strictly: Design → Capability → Optimization. Violations waste 50-80% of investment

Fix foundation first: Coaching/CI without adequate design yields 0.2-0.3× expected effects

Context quality matters more than intervention type: Same intervention produces d = 0.33 vs. d = 1.02 based on context

Integration over selection: Question is not "which intervention?" but "in what sequence, given current state?"


7.4 Final Principle

Organizational interventions are not modular tools deployed independently. They are hierarchically dependent capabilities requiring structural foundations.

Attempting to optimize (coach) before building structure and capability wastes resources and frustrates participants.

Effective development requires: Design → Capability → Optimization, with each phase enabling the next.

The evidence is clear: Organizations that fix foundation first achieve 2-6× better outcomes from identical intervention investments.


References

Coaching:

  • Theeboom, T., Beersma, B., & van Vianen, A. E. M. (2014). Does coaching work? A meta-analysis on the effects of coaching on individual level outcomes in an organizational context. The Journal of Positive Psychology, 9(1), 1-18.

  • Grant, A. M., Curtayne, L., & Burton, G. (2010). Executive coaching enhances goal attainment, resilience and workplace well-being: A randomised controlled study. The Journal of Positive Psychology, 4(5), 396-407.

  • Jones, R. J., Woods, S. A., & Guillaume, Y. R. F. (2016). The effectiveness of workplace coaching: A meta-analysis of learning and performance outcomes from coaching. Journal of Occupational and Organizational Psychology, 89(2), 249-277.

  • Ely, K., et al. (2010). Evaluating leadership coaching: A review and integrated framework. The Leadership Quarterly, 21(4), 585-599.

Team Training:

  • Salas, E., DiazGranados, D., Klein, C., Burke, C. S., Stagl, K. C., Goodwin, G. F., & Halpin, S. M. (2008). Does team training improve team performance? A meta-analysis. Human Factors, 50(6), 903-933.

  • Klein, C., et al. (2009). Does team building work? Small Group Research, 40(2), 181-222.

  • Arthur, W., Jr., Bennett, W., Jr., Edens, P. S., & Bell, S. T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88(2), 234-245.

  • Aguinis, H., & Kraiger, K. (2009). Benefits of training and development for individuals and teams, organizations, and society. Annual Review of Psychology, 60, 451-474.

Team Design:

  • Carter, N. T., et al. (2018). The downsides of extremely high levels of team member intelligence for team performance. Small Group Research, 49(4), 138-188.

  • Bell, S. T., et al. (2021). Team composition and the ABCs of teamwork. Academy of Management Journal (in press).

Lean/Continuous Improvement:

  • Calvo-Mora, A., Navarro-García, A., & Periañez-Cristobal, R. (2018). Structural patterns of TQM: A confirmatory factor analysis of constructs and their relationships. International Journal of Production Research, 56(1-2), 1-20.

  • Bhasin, S., & Burcher, P. (2006). Lean viewed as a philosophy. Journal of Manufacturing Technology Management, 17(1), 56-72.

  • Netland, T. H. (2016). Critical success factors for implementing lean production: The effect of contingencies. International Journal of Production Research, 54(8), 2433-2448.

Implementation & Change:

  • Cameron, K. S., Kim, M. U., & Whetten, D. A. (1987). Organizational effects of decline and turbulence. Administrative Science Quarterly, 32(2), 222-240.

  • Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7(3), 77-124.

  • Hackman, J. R. (2002). Leading Teams: Setting the Stage for Great Performances. Harvard Business Press.


Document Status: Evidence-based investigation | Version 6.1 | December 2025

Series: Colaborix Research on Organizational Interventions

Companion Articles: "The Architecture of Team Performance" (strategic framework); "Team Performance Implementation Guide" (tactical manual)


© 2025 Colaborix GmbH. All rights reserved. Peter Stefanyi, Ph.D., MCC
