Mean Absolute Percentage Error

What is the Mean Absolute Percentage Error?

The Mean Absolute Percentage Error (MAPE) is a statistical measure of forecasting accuracy. It is calculated as the average absolute percentage difference between forecasted and actual values, giving a direct sense of how precise a model's predictions are.

Because MAPE is expressed as a percentage, it is easy to interpret and to compare across different datasets or models, which is a large part of its practical appeal.

MAPE is widely used in fields where prediction accuracy is crucial, such as finance, supply chain management, and weather forecasting. Its strength lies in its simplicity and clarity. However, it breaks down when actual values are zero or near zero, producing skewed or undefined results. We cover these limitations in more detail below.

How Do You Calculate Mean Absolute Percentage Error?

The Mean Absolute Percentage Error formula:

MAPE = (1/n) × Σ |(Aᵢ − Fᵢ) / Aᵢ| × 100%

with the sum taken over i = 1, …, n, where:

  • n – number of observations
  • Aᵢ – actual value
  • Fᵢ – forecasted value

To compute the Mean Absolute Percentage Error (MAPE), subtract each forecasted value from its corresponding actual value, divide the difference by that actual value, and take the absolute value of the ratio. Average these absolute percentage errors across all data points, then multiply by 100 to express the result as a percentage. The resulting metric summarizes forecast accuracy as an average percentage error.
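The steps above can be sketched in a few lines of NumPy; the function name and the sample values are illustrative, not from the original text:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, expressed as a percentage."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    # |(A_i - F_i) / A_i|, averaged over all observations, times 100
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# Example: per-point errors are 10%, 5%, 10%, 10% -> average 8.75%
actual = [100, 200, 300, 400]
forecast = [110, 190, 330, 360]
print(round(mape(actual, forecast), 2))  # 8.75
```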

Limitations of MAPE

  • Zero Values: MAPE cannot handle actual values of zero. When an actual value is zero, the division in the MAPE formula is undefined, making the calculation impossible or meaningless. This makes MAPE unreliable for datasets that inherently contain zeros. When zeros appear in a dataset, analysts must either choose an alternative metric or apply adjustments to work around this limitation.
  • Asymmetry: MAPE treats over-predictions and under-predictions unevenly. Because the error is divided by the actual value, depending on the data's nature the metric may give more weight to overestimations or underestimations. This bias can lead to misinterpretation of a predictive model's accuracy, particularly when a balance between over- and under-prediction is crucial.

In scenarios that require an unbiased error metric, this inherent asymmetry may compromise MAPE's reliability as a representation of true model performance.

  • Not suitable for all applications: MAPE is a poor fit for contexts such as intermittent demand forecasting, where actual values frequently reach zero or near-zero levels; division by very small actuals inflates the percentage error and skews the metric. Its reliability also diminishes when actual values vary widely in magnitude or are generally very small. In these contexts, alternative metrics can offer a more faithful reflection of forecast accuracy.
  • Scale Dependency: Because MAPE measures relative error, the scale or magnitude of the data strongly influences the result. Comparing datasets with different scales or units can therefore yield inconsistent MAPE values, making it difficult to assess and compare prediction accuracy objectively across contexts. This reduces MAPE's suitability for situations that demand a standard measure of error across varied scales.
  • Infinite or Undefined Values: MAPE can become infinite or undefined when actual values are zero, because the formula divides by the actual value. In datasets where zeros are common or probable, this often produces error metrics that are misleading or impossible to interpret. As noted above, in such situations it is safer to use alternative error metrics for precise and dependable model evaluation.
  • Overemphasis on Large Errors: MAPE can give disproportionate weight to large individual errors, potentially overshadowing smaller yet more frequent errors in the overall analysis. This can present a skewed view of a model's accuracy, particularly when many small but collectively impactful mistakes drive performance, and may lead to misinformed conclusions about the effectiveness or reliability of predictive models.

Mean Absolute Percentage Error and Model Monitoring

In model monitoring, particularly in predictive modeling contexts, the Mean Absolute Percentage Error (MAPE) plays a crucial role as an indicator of prediction accuracy. It quantifies how close predictions are to actual outcomes, expressing the average error magnitude as a percentage and thus providing a clear, comprehensible measure of model quality. In scenarios that demand consistent and precise predictions, such as financial forecasting or inventory management, this makes it possible to assess model performance over time and verify that models remain accurate and reliable.
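One simple way to use MAPE for monitoring is to track it over a sliding window and watch for drift. This is a minimal sketch, assuming a hypothetical daily series where accuracy degrades over time; the function name and data are illustrative:

```python
import numpy as np

def rolling_mape(actual, forecast, window=3):
    """MAPE over a sliding window -- a simple drift signal for model monitoring."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    pct_err = np.abs((actual - forecast) / actual) * 100
    return [pct_err[i:i + window].mean() for i in range(len(pct_err) - window + 1)]

# Hypothetical daily demand vs. model predictions; later forecasts drift further off.
actual   = [100, 105, 98, 102, 110, 95]
forecast = [101, 104, 99, 110, 125, 120]
scores = rolling_mape(actual, forecast)
print([round(s, 1) for s in scores])  # rising window MAPE signals degradation
```

A steadily rising window MAPE would typically trigger an alert or a retraining job in a production monitoring pipeline.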

