...

  1. Identify overall performance: The histogram shows how flow values are distributed. For example, if a large number of events or transactions fall within a high-throughput range, the system is performing efficiently.

  2. Detect bottlenecks: A histogram showing a significant concentration of low flow values can indicate bottlenecks or performance issues in certain parts of the system.

  3. Analyze performance variability: The histogram lets you see whether the observed flow rates are stable or variable. A narrow, concentrated distribution indicates relatively constant performance, while a widely spread distribution signals high variability in throughput (see the sketch below).
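As an illustration, here is a minimal sketch of the kind of histogram described above. The weekly throughput values are invented for the example; in Wiveez the chart is built from your own flow data.

```python
# Minimal sketch: a throughput histogram built from weekly completed-ticket
# counts. The values below are invented for illustration; in Wiveez the
# chart is built from your own flow data.
import matplotlib.pyplot as plt

weekly_throughput = [12, 15, 14, 3, 16, 15, 13, 4, 17, 14, 15, 16]  # hypothetical

plt.hist(weekly_throughput, bins=range(0, 21, 2), edgecolor="black")
plt.xlabel("Tickets completed per week (throughput)")
plt.ylabel("Number of weeks")
plt.title("Throughput distribution")
plt.show()

# Reading it: the tight cluster around 12-17 suggests stable performance,
# while the isolated low values (3 and 4) are the kind of concentration of
# low flow values that hints at a bottleneck.
```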

Chart

...

Filters

...

Quartiles: Analyze the Distribution of the Performance of Your Flow

Quartiles divide the data into four equal parts, allowing us to understand how it is distributed.

They are particularly useful for getting an overview of the performance distribution in a process.

Definition of Quartiles

  • Q1 (First quartile): 25% of the data falls below this value.

  • Q2 (Median or second quartile): 50% of the data falls below this value.

  • Q3 (Third quartile): 75% of the data falls below this value.

...

A common rule is to treat any value below Q1 − 1.5 × IQR or above Q3 + 1.5 × IQR as an outlier.

Example of calculating Quartiles

  1. Sort data in ascending order: 17; 18; 19; 19; 20; 20; 21; 22; 23; 24

  2. Calculate the first quartile (Q1), the value below which 25% of the measured cycle times fall – Result: Q1 = 19

  3. Calculate the second quartile (Q2), i.e. the median, below which 50% of the measured cycle times fall – Result: Q2 = 20

    1. Odd list: when the number of measurements is odd, take the middle value.

    2. Even list: when the number of measurements is even, as in our example, add the two central values and divide by 2; the result is the median.

  4. Calculate the third quartile (Q3), the value below which 75% of the measured cycle times fall – Result: Q3 = 22

  5. Identify the whiskers, i.e. the lowest and highest values measured within the limits – Result (the full calculation is reproduced in the sketch after this list):

    1. Calculate the interquartile range (IQR)

      1. IQR = Q3 − Q1 = 22 − 19 = 3

      2. Low limit = Q1 − 1.5 × IQR = 19 − 1.5 × 3 = 14.5

      3. High limit = Q3 + 1.5 × IQR = 22 + 1.5 × 3 = 26.5
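The calculation above can be reproduced with a short script. This is a minimal sketch using the "median of halves" method, which matches the results of this example; note that library functions such as numpy.percentile interpolate differently and can return slightly different quartiles (for instance Q3 = 21.75 on this data).

```python
# Minimal sketch reproducing the worked example with the "median of halves"
# method, which matches the results on this page.

def median(sorted_values):
    """Middle value of a sorted list, or the mean of the two middle values."""
    n = len(sorted_values)
    mid = n // 2
    if n % 2 == 1:          # odd list: take the middle value
        return sorted_values[mid]
    return (sorted_values[mid - 1] + sorted_values[mid]) / 2  # even list

data = sorted([17, 18, 19, 19, 20, 20, 21, 22, 23, 24])
half = len(data) // 2

q1 = median(data[:half])     # lower half -> 19
q2 = median(data)            # whole list -> 20
q3 = median(data[-half:])    # upper half -> 22

iqr = q3 - q1                # 22 - 19 = 3
low_limit = q1 - 1.5 * iqr   # 19 - 4.5 = 14.5
high_limit = q3 + 1.5 * iqr  # 22 + 4.5 = 26.5

print(q1, q2, q3, iqr, low_limit, high_limit)
# -> 19 20.0 22 3 14.5 26.5
```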

Utility

Quartiles are often used to visualize the distribution of data and identify points where the majority of values fall. This allows you to see where the central values are (using the median) and identify gaps or outliers.

Let's take the example of a development team that delivers tickets every two weeks. By analyzing the number of tickets delivered over multiple periods, quartiles give us insight into the distribution of deliveries. This helps understand how many tickets are delivered in the bottom 25%, middle 50%, and top 25%.

UCL/LCL: Keeping your Process under Control

Control limits (UCL and LCL) are statistical thresholds used in control charts. They allow a process to be monitored to detect anomalies and determine whether it is stable.

...

Control limits are ideal for detecting anomalies in a process. If any data falls outside of these limits, it may indicate a problem that requires investigation (such as an unexpected change in performance).

UCL/LCL: How to Calculate Them

Control limits (UCL/LCL) are used in control charts to monitor a process. They are based on the mean and the standard deviation, and define a range of normal variation. They are calculated by applying the 3-sigma rule: three standard deviations above and below the mean.
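Here is a minimal sketch of the 3-sigma rule just described; the measurement values are invented for illustration.

```python
# Minimal sketch of the 3-sigma rule described above; the measurements
# are invented for illustration.
from statistics import mean, pstdev

measurements = [18, 21, 19, 22, 20, 17, 23, 20, 19, 21]  # hypothetical

avg = mean(measurements)          # process mean
sigma = pstdev(measurements)      # standard deviation

ucl = avg + 3 * sigma             # Upper Control Limit
lcl = avg - 3 * sigma             # Lower Control Limit
print(f"mean={avg:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")

# Any point outside [LCL, UCL] is an anomaly worth investigating.
anomalies = [x for x in measurements if not lcl <= x <= ucl]
```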

UNPL/LNPL: Understanding Natural Variability

Unlike UCL and LCL, natural process limits (UNPL and LNPL) are not based solely on statistical calculations, but rather on a thorough understanding of the process and accepted tolerances.

  • UNPL (Upper Natural Process Limit): the natural upper limit.

  • LNPL (Lower Natural Process Limit): the natural lower limit.

These limits reflect the natural range of variation of the process, defined by accepted specifications or tolerances, not by statistical deviations alone.

Utility

Natural limits are particularly useful when you have a good empirical understanding of the process or when specific tolerances must be met (for example, industry standards or customer requirements). They help avoid overreacting to small variations while ensuring that the process operates within the defined acceptable range.

Example

1 – Data collection and distribution analysis

  • In general, the first step is to determine whether or not the process follows a normal distribution, since natural limits may be calculated differently depending on how the data is distributed.

2 – Determine acceptable tolerances (or specifications)

  • UNPL/LNPL can be based on accepted tolerances or performance specifications defined by business needs, customer requirements or internal standards. If these tolerances exist (for example, a team expects to deliver between 15 and 25 tickets per iteration), they can directly guide the calculations.

  • If specifications or tolerances are already established, natural limits can be set accordingly. For example, if the team considers that delivering fewer than 16 tickets or more than 26 deviates from its normal behavior, then:

    • LNPL = 16

    • UNPL = 26

3 – Calculate natural limits based on observed variability

  • If you do not have pre-established tolerances, you can calculate UNPL/LNPL from historical process data (one possible approach is sketched below).
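The calculation details are elided here. As one possible approach, the following sketch uses the moving-range method from XmR control charts; the 2.66 constant is standard for that method, but this is an assumption, not necessarily the formula Wiveez applies.

```python
# Hedged sketch: the page does not spell out this calculation. One common
# approach (the XmR control chart's moving-range method) is shown here;
# treat it as an assumption, not necessarily the formula Wiveez applies.
from statistics import mean

tickets_per_iteration = [18, 22, 20, 17, 24, 19, 21, 20, 23, 18]  # hypothetical

avg = mean(tickets_per_iteration)

# Average absolute difference between successive measurements (moving range).
moving_ranges = [abs(b - a) for a, b in
                 zip(tickets_per_iteration, tickets_per_iteration[1:])]
mr_bar = mean(moving_ranges)

# 2.66 is the standard XmR constant that converts the average moving range
# into 3-sigma-equivalent natural process limits.
unpl = avg + 2.66 * mr_bar
lnpl = avg - 2.66 * mr_bar
print(f"UNPL={unpl:.1f}, LNPL={lnpl:.1f}")

# With pre-established tolerances you would skip the calculation and set
# the limits directly, e.g. lnpl, unpl = 16, 26 as in the example above.
```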

...

This example shows that UNPL and LNPL complement the quartile analysis well: they add the limits that must not be exceeded for the system to remain stable.

Predictability Analysis

Predictability analysis measures the quality of the flow and the level of confidence you can place in it when making projections.

...

The concept of Thin-Tailed and Fat-Tailed distributions is often used in risk analysis, statistics, and finance to understand and model the impact of rare and extreme events.

Thin-Tailed and Fat-Tailed: Principle and Usefulness

Principle:

In data modeling, particularly in finance and risk management, we often talk about probability distributions to describe how events or values are distributed. Two types of distributions are particularly important: Thin-Tailed and Fat-Tailed distributions.

  • Thin-Tailed: A thin-tailed distribution is characterized by a relatively low probability that extreme events (large deviations from the mean) will occur. In other words, extreme values, very far from the average, are rare. A typical example is the normal (or Gaussian) distribution, where most of the data is concentrated around the mean and extremes are very unlikely.

  • Fat-Tailed: In contrast, a fat-tailed distribution has a higher probability of extreme events. Rare (but very impactful) events are more frequent than a thin-tailed distribution would predict. Fat-tailed distributions are used to model phenomena where extreme events have a disproportionate impact, such as stock market crashes or economic crises (the sketch below compares the two).
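Here is a minimal sketch of the difference: the probability of exceeding the same cutoff under a thin-tailed normal distribution versus a fat-tailed Student's t distribution with 2 degrees of freedom (a classic fat-tailed example; the choice of distributions and cutoff is purely illustrative).

```python
# Minimal sketch: probability of exceeding the same cutoff (x = 3) under a
# thin-tailed normal distribution versus a fat-tailed Student's t with
# 2 degrees of freedom. Both distributions and the cutoff are illustrative.
from scipy import stats

p_thin = stats.norm.sf(3)      # thin-tailed: ~0.00135
p_fat = stats.t.sf(3, df=2)    # fat-tailed:  ~0.04773

print(f"P(X > 3), normal : {p_thin:.5f}")
print(f"P(X > 3), t(df=2): {p_fat:.5f}")

# The fat-tailed distribution makes the "rare" extreme observation roughly
# 35 times more likely, which is exactly the behaviour described above.
```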

...

Detailed ticket analysis

Wiveez lets you analyze the performance of each flow in detail by displaying the list of tickets associated with a column and the Flow Metrics details of each ticket.

...

Analyze with our AI Alice

Wiveez provides you with its AI, named Alice, to help you analyze graphs.

...