
Process Documentation

WARNING: This is a wiki used as a scratch-book. This version of the development process is being worked on actively and is considered a draft. The authoritative documentation for Kolab Groupware lives at


The purpose of this document is to describe the processes related to product development at Kolab Systems. This includes the following three parts:

  1. Pre-Production (Product Management and Architecture & Design)
  2. Production (Development)
  3. Post-Production (QA, Release Management and Service)

This Kolab instance of Phabricator hosts the following types of projects:

  • Software Projects: most projects of this type are related to one or more source code management repositories. Maniphest tasks are entered and associated with these software projects. Example: Copenhagen
  • Role Projects: projects of this type provide workboards and a workflow for a group of users with a specific role (the members of the project). Example: Product Owners
  • Sprint Projects: these projects provide workboards and a workflow for the development team working in a timeboxed sprint. Example: Sprint Process 201516
NOTE: See also the clarification on the terminology Product, Project And Teams and their use in Phabricator.


The Pre-Production process is an interaction between Product Owners, Architecture & Design and consumers of the Kolab Groupware product (stakeholders).
Stakeholders can be:

  • Community members
  • Kolab employees
  • Paying customers
  • Partners of Kolab Systems
  • Subject Matter Experts
  • Any other person or group with interests in Kolab
  • A mix of all of the above
NOTE: See also the clarification of Community Collaboration about the interaction of community members in Phabricator.

Product Owners collect input from all stakeholders and compile the Product Vision from that input. (This process will be described and clarified elsewhere.)

The Product Vision is the guideline for all development work to be done in the product. Following that, the Product Owners compile a list of (very high-level) descriptions of areas in which improvement and development can happen in the product. These descriptions are entered into Phabricator as Maniphest tasks of the type Epic and associated with the Product Owners workboard (Incoming).

For each Epic, a set of requirements is defined. These are often formulated as user-stories and entered into Phabricator as Maniphest tasks of the type Story.

Architecture & Design defines which development activities need to be executed to realize each Story and (in cooperation with Quality Assurance) which tests are needed to cover the work. These activities are entered into Phabricator as Maniphest tasks of the type Task.

The workflow of the Product Owners workboard helps the Product Owners facilitate the conversation with Architecture & Design and Quality Assurance. It consists of Incoming, In_Triage, Ready and Done:

  • Incoming: Epics that have only just been created, with no further action taken yet, are in this state.
  • In_Triage: this state holds Epics in the process of clarification and prioritization (triage). This is where Stories are written and separated into smaller, workable Tasks. The actual software projects are determined and associated, technical details are added by Architecture & Design, and test-cases are defined and approved by Quality Assurance.
  • Ready: tasks in this state are ready to be associated with a sprint.
  • Done: tasks in this state have been through a sprint, have been built and packaged, and the resulting packages are available to #release_management.
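As a conceptual illustration (not anything Phabricator itself enforces), the forward movement through these workboard columns can be sketched as a simple ordered state machine. The state names come from the list above; the code is purely illustrative:

```python
# Illustrative model of the Product Owners workboard columns.
# Phabricator does not enforce these transitions; this only sketches
# the intended forward flow of an Epic through triage.

WORKFLOW = ["Incoming", "In_Triage", "Ready", "Done"]

def can_advance(current: str, target: str) -> bool:
    """An item normally moves forward one column at a time."""
    return WORKFLOW.index(target) == WORKFLOW.index(current) + 1

print(can_advance("Incoming", "In_Triage"))  # True
print(can_advance("In_Triage", "Done"))      # False (skips "Ready")
```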

The Architecture & Design team evaluates the tasks technically during triage, clarifies the requirements with the Product Owners and separates them into workable sub-tasks. Every task must hold at least one test-case for a verifiable user-story. These test-cases follow the task all the way through the process and are a vital part of the deliverable measurement and documentation of the task and, in the end, of the release. To ensure testability, Quality Assurance evaluates the test-cases. If a test-case is not useful, Quality Assurance will require the task to be re-evaluated.
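The acceptance rule above ("every task must hold at least one test-case") can be sketched as a trivial check. The task structure here is invented for illustration and does not reflect Phabricator's actual data model:

```python
# Hypothetical task records, invented for illustration; this only
# models the triage rule that a task without at least one test-case
# must be re-evaluated.

def is_acceptable(task: dict) -> bool:
    """A task is workable only if it carries at least one test-case."""
    return len(task.get("test_cases", [])) >= 1

print(is_acceptable({"id": "T1", "test_cases": ["valid login succeeds"]}))  # True
print(is_acceptable({"id": "T2"}))                                          # False
```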

This evaluation is normally part of the backlog grooming done by the sprint participants; in the usual situation, however, the sprint participants are involved with only a single software project.

The evaluation is best done by a team in a single room. The Kolab development teams, however, are not in a single location, so backlog grooming has to happen online. Such grooming is done in preparation for sprints, and is therefore focused on the upcoming sprint backlogs; it does not take place as a general activity across the ~112 software project backlogs normally referred to as the "product backlog" in Scrum terminology.

Initially, focus is on the following perceived challenges:

  • The Product Owners must maintain collaboration with the stakeholders.
  • The Product Owners must ensure that the views of Architecture & Design, Quality Assurance and, where needed, others are included in each task during triage.
  • Every task must contain a test-case or a verifiable user-story before the Product Owners can accept it as "Ready".
  • During triage, it is determined for each task whether:
    1. the feature is a small task and needs no review (it is ready to be included in a sprint),
    2. the feature is an epic (it involves many components, complexity, etc.), or
    3. the user-stories are problem-descriptions rather than just a desired solution, and are complete and understandable for developers and Quality Assurance.

The Product Owners prioritize the tasks and associate the tasks at the top of the prioritized list with the next sprint by adding the next sprint to each task using the hashtags #sprint_desktop_next or #sprint_server_next.

This lists the tasks in a sprint backlog by priority. It is then up to the sprint participants (currently still occupied with the current sprint while the backlog for the next sprint is being filled) to ensure the backlog for the next sprint is appropriately groomed.
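A hedged sketch of this selection step: sort the triaged ("Ready") tasks by priority and take the top of the list for the next sprint. The task data and the capacity limit are invented for illustration; in practice the association is made by tagging tasks in Phabricator:

```python
# Invented task data for illustration; in Phabricator the association
# is made by adding the #sprint_..._next tag to the task.
tasks = [
    {"id": "T101", "priority": 90, "status": "Ready"},
    {"id": "T102", "priority": 50, "status": "Ready"},
    {"id": "T103", "priority": 80, "status": "In_Triage"},  # not triaged yet
    {"id": "T104", "priority": 70, "status": "Ready"},
]

def next_sprint_backlog(tasks, capacity=2):
    """Pick the highest-priority tasks that have finished triage."""
    ready = [t for t in tasks if t["status"] == "Ready"]
    ready.sort(key=lambda t: t["priority"], reverse=True)
    return [t["id"] for t in ready[:capacity]]

print(next_sprint_backlog(tasks))  # ['T101', 'T104']
```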


Production starts with sprint planning. The team gets together online and looks at the sprint backlog (the backlog for #sprint_desktop_next or #sprint_server_next).

NOTE: These two sprint links are dynamic -- they refer to the actual next sprint tags, such as: #sprint_desktop_next and #sprint_server_next.

The Process Managers facilitate an evaluation of the tasks in the #sprint_xxx_next backlog in order of priority. The effort needed is discussed. This evaluation results in the assignment of story points, and the tasks are assigned to a developer (if possible). Although a task can be assigned to a specific developer, the sprint backlog and all the tasks therein are the responsibility of the complete sprint team.

"Storypoints" is a method of describing the measure of the effort needed to resolve a given task. Rather than estimating the amount of time expected to be needed, a storypoint measure is taking the complexity, the collected knowledge and the effort needed for other tasks in the stack. During the sprint preparation, the team select a reference task from the stack (usually the simplest) and give that e.g. 3 points. Points are then given to the rest of the tasks in the stack following the Fibonacci sequence (1,2,3,5,8,13,21..) and relating these to the reference task.

NOTE: An amount of effort in every sprint should be reserved for interruptions.

During the sprint, the Process Managers facilitate the daily standup call. In this call, team members inform the rest of the team about the status of their work and any impediments they encounter ("What did I do yesterday?" and "What do I plan to do today?"). It is the job of the Process Managers to work on and, if possible, eliminate any impediment.

The #sprint projects are configured with the sprint workflow, consisting of the states Backlog, Doing, Review and Done:

  • Backlog holds tasks that are selected for this sprint but are not being worked on yet.
  • Doing holds tasks that are actively being worked on by a developer (the assignee).
  • Review holds tasks where coding is done but the test-cases still need to be run [..and code review?].
  • Done holds tasks where the resulting code has been committed and the included test-cases have been run successfully.

As developers work on tasks, the tasks are moved across the workflow steps on the sprint workboard.

At the end of a sprint, the Process Managers facilitate the sprint retrospective. The development team gets together online and shows as much of the working software written during the sprint as possible by running through the included test-cases. The status of the sprint documentation (burndown/burnup charts) is reviewed, and the velocity (the number of story points completed in the sprint) is decided for the coming sprint.
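The bookkeeping mentioned above can be sketched with invented numbers: velocity is the sum of story points completed in the sprint, and a burndown series tracks the points remaining per sprint day:

```python
# Invented sprint data for illustration.
total_points = 18
done_per_day = [0, 3, 5, 2, 0, 8]   # story points completed each sprint day

velocity = sum(done_per_day)         # points done in this sprint

# Burndown: points still open after each sprint day.
burndown = []
remaining = total_points
for done in done_per_day:
    remaining -= done
    burndown.append(remaining)

print(velocity)   # 18
print(burndown)   # [18, 15, 10, 8, 8, 0]
```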

NOTE: See also Sprint-plannings, Backlog-grooming And Sprint-retrospectives for a definition of those three kinds of meetings.


When tasks exit a sprint in the status Done, there are one or more commits connected to each task which resolve the issue described in the task (and have successfully run through the included test-cases). These tasks are forwarded to Quality Assurance by adding Quality Assurance to the task (if it is not already there).

The relevant components are packaged, and the packages are made available to Quality Assurance. If the packages pass Quality Assurance [simple workflow], the tasks are given the Release Managers tag, and the Release Managers label the packages as releasable. They are, however, not released until all stakeholders [needs to be determined] agree to the release. The Product Owners facilitate this decision.

#support_engineers is added to the tasks to give them access to the test-cases. This makes it possible for #support_engineers to create and improve HowTo and FAQ documentation.

NOTE: See also the Glossary for definitions.
Last Edited: Sep 22 2015, 3:39 PM