
Colaborix Flexi Team System
Designing Human–AI Production Systems Across Work Types

Peter Stefanyi, Ph.D., MCC, Colaborix GmbH

February 2026


0) Executive Abstract

This manual defines the Colaborix Flexi Team System: a structural operating model enabling small teams to continuously reconfigure roles, meeting formats, decision rights, and AI usage based on the dominant work type. The system integrates:

  • 4 Work Types as “work physics” (independent, sequential, reciprocal, complex/mixed)

  • Team performance architecture: structural design gates capability development, which gates optimization

  • Toyota/TPS-inspired visibility translated to AI usage as a process control principle (hypothesis grounded in TPS logic; not presented as a meta-analytic claim)

  • A practical weekly cadence (SOP) that embeds PDCA/Kolb loops

The manual’s purpose is to preserve conceptual integrity, enable scale, and provide consulting-grade structure without becoming tool-specific or policy-specific.


1) The Problem (Structural, Not Cultural)

1.1 Common symptom cluster (observed pattern; hypothesis, requires validation)

Organizations report:

  • “AI pilots are exciting, then stall”

  • “Collaboration push increases meetings but not outcomes”

  • “Lean/Agile works in one unit and collapses in another”


These symptoms are often misattributed to culture, readiness, or mindset.


1.2 Core diagnosis (Colaborix position)

Structural mismatch is the dominant hidden variable:

  • organizations apply the same interventions (teamwork, AI training, collaboration rituals) to fundamentally different types of work

  • they attempt Level 2/3 interventions (training, coaching, optimization) while Level 1 structure is ambiguous or wrong


1.3 Boundary claim (logic-tight)

We do not claim “visibility is the single root cause” universally. We claim:

  • work-type mismatch is a primary structural failure mode

  • invisible AI influence is a high-leverage failure amplifier (TPS-derived control principle), especially in reciprocal and complex work (hypothesis; strong plausibility; requires measurement program).


2) The 4 Work Types (Formal Definition)

Work types are not job roles or departments. They describe the coordination economics required to produce value.

2.1 Type 1 — Independent (Pooled) Work

  • Output is additive; individuals can work autonomously

  • Optimization lever: individual skill + tools

  • Failure mode: forced collaboration creates coordination drag


2.2 Type 2 — Sequential Work

  • Output constrained by bottleneck step; throughput is the control variable

  • Optimization lever: identify constraint; improve at constraint; stabilize handoffs

  • Failure mode: deploying AI everywhere adds reconciliation work; improving non-bottlenecks doesn’t improve flow


2.3 Type 3 — Reciprocal Work

  • Output quality emerges from interaction and integration (non-linear)

  • Optimization lever: coordination routines, stable small team, shared understanding, integration protocol

  • Failure mode: everyone “works independently then merges later”; hidden AI use fragments shared cognition


2.4 Type 4 — Complex / Mixed Work

  • Composition of Types 1–3 plus ambiguity and shifting constraints

  • Rule: Decompose before optimization

  • Failure mode: “treat as high-collaboration work” (category error)


2.5 Work-type decision test (operational)

A team classifies work by asking (a minimal decision-test sketch follows the list):

  • Is output additive (Independent)?

  • Is throughput limited by one step (Sequential)?

  • Does quality require integration cycles (Reciprocal)?

  • Are multiple types present with shifting constraints (Complex)?
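
The decision test can be sketched in code. The check order (complex first, so that decomposition precedes optimization), the field names, and the function name are illustrative assumptions, not part of the Colaborix specification.

    # Minimal sketch of the Section 2.5 decision test (illustrative only;
    # names and check order are assumptions, not Colaborix APIs).
    from dataclasses import dataclass

    @dataclass
    class WorkSignals:
        output_is_additive: bool              # value adds up from individual contributions
        single_step_limits_throughput: bool   # one bottleneck step constrains flow
        quality_needs_integration: bool       # quality emerges from integration cycles
        multiple_types_shifting: bool         # several types present, constraints shift

    def classify_work_type(s: WorkSignals) -> str:
        # Complex/mixed is checked first because it requires decomposition
        # before any optimization logic is applied (Section 2.4).
        if s.multiple_types_shifting:
            return "complex"
        if s.quality_needs_integration:
            return "reciprocal"
        if s.single_step_limits_throughput:
            return "sequential"
        if s.output_is_additive:
            return "independent"
        return "unclear"  # hand back to the Integrator for decomposition

In practice the team answers the same questions during the weekly cycle; the sketch only makes the decision order explicit.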


3) Flexi Team Architecture

Key insight 1: Team structure drives performance and is the precondition for all other interventions.

Key insight 2: Work types are the external drivers; team structure, roles, and rules must be matched to them so that the work at hand is addressed efficiently and effectively, which is what produces high productivity.

Key insight 3: The Toyota principle of visibility is proposed as a remedy for AI adoption failures caused by AI making aspects of work invisible to team members and decision makers.


3.1 System definition


A Flexi Team is not a stable role structure. It is a stable decision protocol (sketched in code after this list) that repeatedly:

  1. classifies work type

  2. configures structure to match work physics

  3. executes in appropriate sync/async mode

  4. captures learning and updates standard work
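
Under the assumption that each numbered step is carried out by the team (not by software), the protocol can be sketched as a simple repeating cycle. Every name below is an illustrative placeholder.

    # Minimal sketch of the Flexi Team decision protocol as a repeating cycle.
    # The step functions are deliberately trivial stand-ins for team activities.

    def classify(work):                   # step 1: classify work type
        return work.get("dominant_type", "unclear")

    def configure(work_type):             # step 2: match structure to work physics
        # Simplified visibility default; see Section 6.4 for the full matrix.
        shared_ai = work_type in ("reciprocal", "complex")
        return {"work_type": work_type,
                "ai_visibility": "shared" if shared_ai else "private"}

    def execute(structure):               # step 3: run the week in sync/async mode
        return {"frictions": [], "results": []}

    def capture(standard_work, outcome):  # step 4: update standard work
        return standard_work + [outcome]

    def weekly_cycle(work, standard_work):
        work_type = classify(work)
        structure = configure(work_type)
        outcome = execute(structure)
        return capture(standard_work, {"structure": structure, **outcome})

    standard_work = weekly_cycle({"dominant_type": "reciprocal"}, [])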


3.2 Small-team scope (default)

  • Designed for 4–8 cross-functional members

  • Above that size, the model becomes a pattern that requires monitoring (see Section 9)


3.3 Structural hierarchy principle (gating)

From the “Architecture of Team Performance” model:

  • Level 1 structural design enables Level 2 capability development, which in turn enables Level 3 optimization

  • The Flexi Team System is explicitly a Level 1 structural operating model that creates the prerequisites for capability/CI.


4) CARDS Roles (Structural Interpretation)

We reuse the four role archetypes as decision functions rather than personality labels (keeping IP coherence with the CARDS/CATMI ecosystem).


4.1 Role functions (team version)

  • 🟦 Integrator: work-type classification, decomposition, structure selection, timeline coherence

  • 🔵 Expert: feasibility, constraints, quality/safety/compliance veto (bounded scope), standards & SOP integrity

  • 🟠 Achiever: priority, stop/continue calls, KPI definition, ROI logic

  • 🟢 Builder: experimentation, adoption, workflow iteration, learning capture


4.2 Role coverage rule

All four roles must be present as functions during the structural cycle. One person may hold multiple roles, but not in the same decision moment.



4.3 Scaled/organizational mapping (pattern version)

In manufacturing-like settings:

  • Integrator often maps to a process owner/manager function

  • Expert maps to IT/industrial engineering/maintenance/quality

  • Achiever maps to process engineer/line owner/operations leadership

  • Builder maps to team lead/supervisor (results owner)

Note: This is mapping logic, not a normative claim.


5) Decision Cadence and Authority


5.1 Cadence SOP (default)


Weekly structural cycle (30–45 min):

  1. classify dominant work type

  2. configure structure & AI usage norms for the week

  3. define execution mode (in/out; success criteria)

  4. capture friction + AI insight + structural improvement candidate


Daily/biweekly standups run execution; they do not re-litigate structure unless an escalation trigger fires. Each weekly cycle produces a short decision-log entry; a minimal sketch follows.
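
A minimal sketch of such a decision-log entry, assuming one record per week. The field names mirror steps 1–4 above and the MVI artifacts in Section 8.1 and are illustrative only.

    # Minimal sketch of a weekly decision-log entry (illustrative field names).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class WeeklyDecisionLog:
        week: str                        # e.g. "2026-W07"
        work_type: str                   # independent | sequential | reciprocal | complex
        structure: str                   # configuration chosen for the week
        ai_visibility_rule: str          # e.g. "shared AI for decision-influencing output"
        success_criteria: List[str] = field(default_factory=list)
        frictions_captured: List[str] = field(default_factory=list)
        escalated: bool = False          # set when the Section 5.3 protocol was used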


5.2 Decision rights (normal mode)

  • Integrator decides work type and structure selection

  • Achiever decides priority and stop/go

  • Expert has veto only on feasibility/safety/quality/compliance

  • Builder decides how to run experiments within the defined structure


5.3 Escalation protocol (rare)

Trigger conditions:

  • work type disagreement persists

  • Expert veto blocks progress

  • scope creep across work types

Procedure:

  1. freeze execution

  2. 15-minute escalation huddle

  3. sponsor/senior Achiever decides

  4. decision locked for the week

  5. Builder captures why escalation was needed (learning artifact)


6) Toyota Visibility Translated to AI (Process Control Principle)


6.1 TPS visibility principle (stated as principle)

Visibility means: the state of work is observable at the point of decision (TPS-style control logic).


6.2 AI introduces a new invisibility risk (hypothesis)

AI can influence decisions without leaving a trace (e.g., private prompting yields recommendations that are inserted into group decisions). This creates “hidden state” analogous to hidden inventory or hidden defects in TPS terms.


6.3 Colaborix AI visibility rule (operational)

If AI output affects collective work, AI use must be collective or visible at the moment the decision is made. If AI output affects only individual work, AI may remain private.


6.4 Work type × AI visibility matrix (operational default)

  • Independent → private AI allowed

  • Sequential → AI visible at handoffs; AI mainly at constraints

  • Reciprocal → shared AI mandatory for decision-influencing output

  • Complex → AI used visibly for decomposition before optimization (the four defaults are sketched as a lookup after this list)
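
These defaults can be held as a simple lookup so the rule is checkable at decision points. The strings below are the manual's defaults; the encoding, names, and helper functions are illustrative assumptions.

    # Minimal sketch: Section 6.4 defaults as a lookup, plus the Section 6.3 rule.
    AI_VISIBILITY_DEFAULTS = {
        "independent": "private AI allowed",
        "sequential":  "AI visible at handoffs; AI mainly at constraints",
        "reciprocal":  "shared AI mandatory for decision-influencing output",
        "complex":     "AI used visibly for decomposition before optimization",
    }

    def visibility_default(work_type: str) -> str:
        # An unknown work type means the classification step was skipped.
        return AI_VISIBILITY_DEFAULTS.get(work_type, "classify work type first")

    def ai_use_must_be_visible(affects_collective_work: bool) -> bool:
        # Section 6.3: collective impact, not the tool, is what triggers visibility.
        return affects_collective_work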


6.5 Compliance separation (logic clean)

“Approved vs non-approved AI tools” is company governance. Flexi Team enforces visibility norms and structural fit, not vendor policy.


7) Failure Modes and Diagnostics


7.1 Core failure modes (signals)

  • Over-collaboration → too many meetings for independent work

  • AI noise → output volume increases, value decreases (often wrong placement of AI in sequential work)

  • Endless debate → unclear authority/role boundaries

  • Dependency risk → blind trust; lack of baselines (linked to the Colaborix cognitive decline typology)

  • Learning stall → repeated same issues without updating standard work (CI breakdown)


7.2 Diagnostic questions (fast audit)

  • Which work type dominates this initiative?

  • Are we optimizing before decomposing?

  • Is AI influence visible at decision points?

  • Are decisions re-litigated mid-cycle?

  • Are failures logged as structural feedback?


8) Implementation Pattern (Non-tool-specific)


8.1 Minimum viable implementation (MVI)

  • Weekly structural cycle + explicit role coverage

  • Work-type classification artifact (one shared page)

  • Decision log (what structure, what AI visibility rule applied, what success criteria)

  • One “friction captured” per week


8.2 What not to implement early

  • tool stack mandates

  • large governance bureaucracy

  • universal AI policies inside training cohorts


9) Scaling Beyond One Team


9.1 Scaling boundary

When coordination spans teams, the Integrator becomes a process function, not a person.


9.2 Monitoring requirements (lightweight)

  • role function coverage across teams (are Achiever/Expert functions missing?)

  • distribution of work types across portfolio

  • AI visibility compliance in reciprocal work

  • throughput + defect signals in sequential work (a minimal aggregation sketch follows this list)
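
Assuming each team keeps weekly decision-log entries such as the WeeklyDecisionLog sketch in Section 5.1, the monitoring signals can be simple aggregations. Field names and the compliance heuristic below are illustrative.

    # Minimal sketch of lightweight portfolio monitoring over decision logs.
    from collections import Counter
    from typing import Iterable

    def work_type_distribution(logs: Iterable) -> Counter:
        # Distribution of dominant work types across the portfolio.
        return Counter(entry.work_type for entry in logs)

    def reciprocal_visibility_compliance(logs: Iterable) -> float:
        # Share of reciprocal-work weeks that recorded a shared-AI visibility rule.
        reciprocal = [e for e in logs if e.work_type == "reciprocal"]
        if not reciprocal:
            return 1.0
        compliant = [e for e in reciprocal if "shared" in e.ai_visibility_rule.lower()]
        return len(compliant) / len(reciprocal)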


9.3 Governance stance

Control via PDCA/CI loops, not hierarchy.


10) Research Grounding and Claims Discipline


10.1 Evidence-grounded statements

  • Structure is foundational; downstream interventions are gated by design quality

  • Work types require different optimization logics; applying a single approach to all of them causes failure

  • Social integration affects AI outcomes; collective use reduces risk (presented in the Colaborix “cognitive decline” piece as a grounded hypothesis and practical guidance)


10.2 Principle-based statements (not meta-analytic claims)

  • TPS visibility translated to AI is a process control principle and a high-plausibility hypothesis; Colaborix should validate via measurement (e.g., decision traceability, error propagation, rework).

  • One system, one variable (work type), one rule (AI visibility), one cadence (Kolb/PDCA)
