Flow Time Breakdown
Flow Time Breakdown allows you to track the average and median time tickets spend in each stage of a workflow, as well as compare these times by ticket type.
This chart provides an overview of the performance of each step in the process and allows you to see how processing times vary depending on the types of tickets processed.
The horizontal axis represents the different stages of the workflow, and the vertical axis shows the time spent in each stage. The chart displays three key pieces of information for each step:
The average time spent by all tickets at each stage (curve).
The median time spent by tickets at each stage (curve).
The average flow time for each ticket type in each stage (bars per ticket type).
Operation and Usefulness
This chart is very useful for:
Analyzing performance by stage: By viewing the average and median time for each stage of the flow, users can see in which stages tickets spend the most time and whether this varies widely between tickets.
Identifying deviations between the mean and the median: Gaps between the average time and the median can signal the presence of tickets with very long or very short processing times, which can indicate anomalies or inefficiencies in certain steps.
Comparing ticket types: The columns representing the different ticket types let you see whether certain ticket types take longer than others to go through a given step, allowing a detailed, comparative analysis of the types of tasks or requests processed.
How to read a Flow Time Breakdown?
Flow Time Breakdown combines lines and bars to represent different information about processing times at each stage of the flow.
Horizontal axis (x): This axis represents the different stages of the flow. Each point on the horizontal axis corresponds to a specific step in the process.
Vertical axis (y): This axis represents the time tickets spend in each stage, typically measured in hours, days, or another relevant unit of time.
Curves:
Average time (curve 1): The first curve shows the average time tickets spend in each step. This curve allows you to see if a particular step slows down the process overall compared to the others.
Median (curve 2): The second curve represents the median time spent at each step. The median is a good indicator of central processing time, less influenced by extreme values (very slow or very fast tickets).
Columns by ticket type:
For each step, the chart displays one or more columns, each corresponding to a ticket type. These columns represent the average flow time for that ticket type in the given step.
These columns allow you to compare the performance of different ticket types at each stage. If a ticket type has a significantly taller column than the others, it means that ticket type takes longer in this step.
Patterns to observe:
Differences between mean and median:
If the average time curve is significantly higher than the median, this means that there are tickets with abnormally long processing times in this stage. This may be due to exceptional cases or bottlenecks.
An average close to the median indicates that the distribution of processing times is more homogeneous, with less variability.
Analysis by ticket type:
The columns by ticket type allow you to compare the performance of different ticket types at each stage. If a ticket type consistently takes longer in a specific step, this may indicate idiosyncrasies related to that ticket type that are slowing down the process at that step.
If the columns are broadly similar for each step, this means that processing times are relatively consistent across different ticket types.
Points of improvement:
A large spread between the mean and median or very high columns for a certain ticket type in a specific stage may indicate a need for improvement in that stage. It may be necessary to optimize this step for certain ticket types or to investigate the reasons for unusually slow tickets.
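As an illustration only (this is not the Wiveez implementation), here is a minimal Python sketch of how the three series of the chart can be derived from raw records of the time each ticket spent in each stage; the stage names, ticket types, durations and the 1.5x mean/median threshold are hypothetical.

# Minimal sketch: compute the two curves (mean and median per stage) and the
# per-ticket-type bars of a Flow Time Breakdown from (stage, type, hours) records.
from statistics import mean, median
from collections import defaultdict

records = [  # hypothetical sample: time (hours) spent by each ticket in each stage
    ("Review", "Standard", 4.0), ("Review", "Standard", 6.5), ("Review", "Emergency", 12.0),
    ("Technical Validation", "Standard", 8.0), ("Technical Validation", "Standard", 7.5),
    ("Technical Validation", "Emergency", 30.0),
]

by_stage = defaultdict(list)
by_stage_and_type = defaultdict(list)
for stage, ticket_type, hours in records:
    by_stage[stage].append(hours)
    by_stage_and_type[(stage, ticket_type)].append(hours)

for stage, times in by_stage.items():
    avg, med = mean(times), median(times)
    flag = "  <- large mean/median gap, look for outliers" if avg > 1.5 * med else ""
    print(f"{stage}: mean = {avg:.1f} h, median = {med:.1f} h{flag}")

for (stage, ticket_type), times in by_stage_and_type.items():
    print(f"{stage} / {ticket_type}: mean = {mean(times):.1f} h")  # one bar per ticket type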
Example:
Let's imagine a Flow Time Breakdown in Wiveez for an IT request management flow:
Close average and median times: If the average and median time curves are close at each stage, this means that the majority of tickets are processed within relatively constant times, without large disparities in processing times.
Large difference at one step: Let's assume that the "Technical Validation" step has a marked difference between the average time and the median. This may indicate that some tickets are stuck in this step much longer than others, which could signal a bottleneck in this part of the process.
Difference by ticket type: If the ticket type columns show that emergency tickets consistently take longer than others in the "Review" stage, this could indicate that this ticket type requires additional resources or a more complex processing, thus slowing down the entire process.
Usefulness in Wiveez
Flow Time Breakdown in Wiveez is essential for:
Understanding performance by stage: Users can identify the stages where tickets spend the most time and assess whether those stages require adjustments or optimizations.
Analyzing gaps between ticket types: The columns by ticket type show how different ticket types are handled at each stage and help identify specific areas for improvement for certain ticket types.
Identifying inefficiencies: Deviations between the average time and the median reveal anomalies or inefficiencies in ticket processing, allowing users to focus their efforts on the most problematic steps.
Chart
Filters
Quartiles: Analyzing the Distribution of Your Flow's Performance
Quartiles divide the data into four equal parts, allowing us to understand how it is distributed.
They are particularly useful for getting an overview of the performance distribution in a process.
Definition of Quartiles
Q1 (First quartile): 25% of data is below this value.
Q2 (Median or second quartile): 50% of the data is lower than this value.
Q3 (Third quartile): 75% of data is below this value.
The interquartile range (IQR) can also be used to detect outliers.
The IQR is used to define the lower and upper limits within which the flow distribution can be considered predictable. These limits are the famous whiskers of a box plot. Values falling outside them should be treated as outliers and analyzed.
The IQR is defined as Q3 − Q1.
A common rule is to consider any value below Q1 − 1.5 × IQR or above Q3 + 1.5 × IQR as an outlier.
Example of calculating Quartiles
Sort data in ascending order: 17; 18; 19; 19; 20; 20; 21; 22; 23; 24
Calculate the First Quartile – Q1, the value below which 25% of the measured cycle times fall – Result: Q1 = 19
Calculate the Second Quartile – Q2, i.e. the Median, the value below which 50% of the measured cycle times fall – Result: Q2 = 20
Odd list: when the number of measurements is odd, take the middle value.
Even list: when the number of measurements is even, as in our example, add the two central values and divide by 2. The result is the Median.
Calculate the Third Quartile – Q3, the value below which 75% of the measured cycle times fall – Result: Q3 = 22
Identify the Whiskers, i.e. the lowest and highest values measured – Result: low whisker = 17, high whisker = 24
Calculate the inter-quartile (IQR)
IQR = Q3 – Q1 = 22 – 19 = 3
Low limit = Q1 − 1.5 × IQR = 19 − 1.5 × 3 = 14.5
High limit = Q3 + 1.5 × IQR = 22 + 1.5 × 3 = 26.5
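For reference, the same calculation can be reproduced in a few lines of Python. This is a minimal sketch using the "median of the halves" method that the example above follows; library functions such as numpy.percentile use interpolation rules that can return slightly different quartile values.

# Minimal sketch: quartiles, IQR and outlier limits for the sample above.
from statistics import median

data = sorted([17, 18, 19, 19, 20, 20, 21, 22, 23, 24])
half = len(data) // 2
lower, upper = data[:half], data[-half:]  # drop the middle value when the count is odd

q1, q2, q3 = median(lower), median(data), median(upper)
iqr = q3 - q1
low_limit = q1 - 1.5 * iqr
high_limit = q3 + 1.5 * iqr

print(q1, q2, q3)               # 19 20 22
print(iqr)                      # 3
print(low_limit, high_limit)    # 14.5 26.5
outliers = [x for x in data if x < low_limit or x > high_limit]
print(outliers)                 # [] -> every measurement falls inside the limits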
Utility
Quartiles are often used to visualize the distribution of data and identify points where the majority of values fall. This allows you to see where the central values are (using the median) and identify gaps or outliers.
Let's take the example of a development team that delivers tickets every two weeks. By analyzing the number of tickets delivered over multiple periods, quartiles give us insight into the distribution of deliveries. This helps understand how many tickets are delivered in the bottom 25%, middle 50%, and top 25%.
UCL/LCL: Keeping your Process under Control
Control limits (UCL and LCL) are statistical thresholds used in control charts. They allow a process to be monitored to detect anomalies and determine whether it is stable.
UCL (Upper Control Limit): Upper control limit.
LCL (Lower Control Limit): Lower control limit. These limits are typically set at 3 standard deviations above and below the mean, meaning that 99.73% of the data should fall within this range in an “in control” process.
Utility
Control limits are ideal for detecting anomalies in a process. If any data falls outside of these limits, it may indicate a problem that requires investigation (such as an unexpected change in performance).
UCL/LCL: How to Calculate Them?
Control limits (UCL/LCL) are used in control charts to monitor a process. They are based on the mean and the standard deviation, and define a range of normal variation. They are calculated by applying the 3-sigma rule, i.e. three standard deviations above and below the mean.
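As a minimal sketch (reusing the ten sample measurements from the quartile example), the 3-sigma rule can be applied as follows; the use of the population standard deviation here is an assumption, and many control-chart implementations estimate sigma from moving ranges instead.

# Minimal sketch: UCL/LCL as mean +/- 3 standard deviations.
from statistics import mean, pstdev

throughput = [17, 18, 19, 19, 20, 20, 21, 22, 23, 24]  # tickets delivered per iteration

avg = mean(throughput)
sigma = pstdev(throughput)       # population standard deviation
ucl = avg + 3 * sigma
lcl = avg - 3 * sigma

print(f"mean = {avg:.1f}, sigma = {sigma:.2f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
out_of_control = [x for x in throughput if x < lcl or x > ucl]
print(out_of_control)            # [] -> no point falls outside the control limits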
UNPL/LNPL: Understanding Natural Variability
Unlike UCL and LCL, natural process limits (UNPL and LNPL) are not based solely on statistical calculations, but rather on a thorough understanding of the process and accepted tolerances.
UNPL (Upper Natural Process Limit): Natural upper limit.
LNPL (Lower Natural Process Limit): Natural lower limit. These limits reflect the natural range of variation of the process, defined by accepted specifications or tolerances, not by statistical deviations alone.
Utility
Natural limits are particularly useful when you have a good empirical understanding of the process or when specific tolerances must be met (for example, industry standards or customer requirements). They help avoid overreacting to small variations while ensuring that the process operates within the defined acceptable range.
Example
1 – Data collection and distribution analysis
In general, the first step is to identify whether the process follows a normal distribution or not. Natural boundaries can sometimes be calculated differently depending on the distribution of the data.
2 – Determine acceptable tolerances (or specifications)
UNPL/LNPL can be based on accepted tolerances or performance specifications defined by business needs, customer requirements or internal standards. If these tolerances exist (for example, a team expects to deliver between 15 and 25 tickets per iteration), they can directly guide the calculations.
If specifications or tolerances are already established, natural limits can be set accordingly. For example, if the team considers that by delivering less than 16 tickets or more than 26, it is deviating from its normal behavior, then:
LNPL = 16
UNPL = 26
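When tolerances are defined this way, checking the flow against them is straightforward. Here is a minimal sketch; the delivery figures reuse the sample series from the quartile example and are purely illustrative.

# Minimal sketch: flag iterations that fall outside tolerance-based natural limits.
LNPL, UNPL = 16, 26  # tolerances chosen by the team
delivered = [17, 18, 19, 19, 20, 20, 21, 22, 23, 24]  # tickets delivered per iteration

outside = [x for x in delivered if x < LNPL or x > UNPL]
print(outside if outside else "all iterations are within the natural limits")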
3 – Calculate natural limits based on observed variability
If you do not have pre-established tolerances, you can calculate UNPL/LNPL based on historical process data.
3.1 – Calculate the median or an adjusted average
The median is often used instead of the mean if the process has outliers or asymmetric variations. This gives a central measurement less influenced by extreme values.
In our case, the median of tickets delivered over 10 periods is:
Median = 20
If you use the average instead, it is: mean (x̄) = 20.3
3.2 – Calculate the natural range of variability
The natural variability of the process is often estimated from the interquartile range (IQR), which measures the dispersion between the lowest 25% of data and the highest 25%.
For our example, the quartiles are as follows:
Q1 (1st quartile) = 19: 25% of tickets delivered are less than or equal to 19.
Q3 (3rd quartile) = 22: 75% of tickets delivered are less than or equal to 22.
The interquartile range (IQR) is: IQR = Q3−Q1 = 22−19 = 3
3.3 – Calculate natural limits
A common method is to extend the quartiles by a multiple of the IQR, typically 1.5 × IQR, to define the acceptable range of variation.
Thus, the natural limits can be calculated as follows:
UNPL = Q3 + 1.5 × IQR = 22 + 1.5 × 3 = 22 + 4.5 = 26.5
LNPL = Q1 − 1.5 × IQR = 19 − 1.5 × 3 = 19 − 4.5 = 14.5
3.4 – Adjust limits based on observations and specifications
Once the initial limits are calculated, you can adjust them based on empirical observations or process goals.
For example, if your team finds that delivering fewer than 16 tickets or more than 26 tickets is unacceptable, you could set the UNPL/LNPL respectively to those values, even if the calculations indicate a slight variation.
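Putting steps 3.1 to 3.4 together, a minimal Python sketch could look like this; the data and the 16/26 tolerances are the illustrative values used above, and the final adjustment simply tightens the statistical limits to the team's tolerances.

# Minimal sketch of steps 3.1 to 3.4: median, IQR-based natural limits, then adjustment.
from statistics import median

delivered = sorted([17, 18, 19, 19, 20, 20, 21, 22, 23, 24])
half = len(delivered) // 2
q1 = median(delivered[:half])            # 19
q3 = median(delivered[-half:])           # 22
iqr = q3 - q1                            # 3

lnpl = q1 - 1.5 * iqr                    # 14.5
unpl = q3 + 1.5 * iqr                    # 26.5

team_lnpl, team_unpl = 16, 26            # tolerances agreed by the team (step 3.4)
lnpl, unpl = max(lnpl, team_lnpl), min(unpl, team_unpl)
print(lnpl, unpl)                        # 16 26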
This example shows that the UNPL and LNPL complement the quartile analysis well, by adding the limits that must not be exceeded for the system to remain stable.
Predictability Analysis
Predictability analysis measures the quality of the flow and the level of confidence you can have when using it for projections.
The analysis is based on the Thin-Tailed / Fat-Tailed principle.
The concept of Thin-Tailed and Fat-Tailed distributions is often used in risk analysis, statistics, and finance to understand and model the impact of rare and extreme events.
Thin-Tailed and Fat-Tailed: Principle and Usefulness
Principle:
In data modeling, particularly in finance and risk management, we often talk about probability distributions to describe how events or values are distributed. Two types of distributions are particularly important: Thin-Tailed and Fat-Tailed distributions.
Thin-Tailed: A thin-tailed distribution is characterized by a relatively low probability that extreme events (or large deviations from the mean) will occur. In other words, extreme values (very far from the average) are rare. A typical example would be the normal (or Gaussian) distribution, where most of the data concentrates around the mean and extremes are very unlikely.
Fat-Tailed: In contrast, a fat-tailed distribution has a higher probability of extreme events. This means that rare (but very impactful) events are more frequent than would be expected with a thin-tailed distribution. Fat-tailed distributions are used to model phenomena where extreme events have a disproportionate impact, such as stock market crashes or economic crises.
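As an illustrative heuristic only (this is not the algorithm Wiveez uses), one simple way to judge whether a set of cycle times looks thin-tailed or fat-tailed is to compare its excess kurtosis with that of a normal distribution, which is 0; the sample values are hypothetical.

# Minimal sketch: rough thin-tailed vs fat-tailed classification via excess kurtosis.
from statistics import mean

def excess_kurtosis(values):
    m = mean(values)
    n = len(values)
    var = sum((x - m) ** 2 for x in values) / n
    fourth = sum((x - m) ** 4 for x in values) / n
    return fourth / var ** 2 - 3

cycle_times = [2, 3, 3, 4, 4, 4, 5, 5, 6, 45]  # hypothetical cycle times in days
k = excess_kurtosis(cycle_times)
print("fat-tailed" if k > 0 else "thin-tailed", round(k, 2))  # prints: fat-tailed 4.99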
Detailed ticket analysis
Wiveez allows the user to analyze the performance of each flow in detail by displaying the list of tickets associated with a column and the Flow Metrics details of each ticket.
Analyze with our AI Alice
Wiveez provides you with its AI, named Alice, to help you analyze graphs.
Click on the Alice icon to start analyzing your graph;
A page is displayed containing an analysis of the health of your graph and tips for improvement;
You can save this analysis in a PDF file;
You can copy/paste the analysis into another type of document.
As long as the chart filters have not been modified and no data refresh has been initiated, your analysis remains accessible.