
Model drift

Model drift, or model decay, refers to the gradual decline in a machine-learning model’s accuracy and reliability over time. This happens when the world the model was trained to understand changes: data patterns evolve, customer behavior shifts, or the relationships between inputs and outcomes no longer hold true. In simple terms, a model that once performed well slowly becomes less accurate as real-world conditions change.

How does model drift work?

Model drift typically appears in two main forms. Feature or data drift occurs when the statistical distribution of input data changes—for example, when customer preferences or transaction patterns shift. Concept drift happens when the relationship between inputs and the predicted outcome changes, such as when a company launches a new product, updates a policy, or faces new regulations.
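
As an illustration of data drift, a lightweight check might compare the training-time distribution of a single numeric feature against a recent production sample. The sketch below uses a two-sample Kolmogorov–Smirnov test; the feature, sample sizes, and significance threshold are illustrative assumptions, not a specific production implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray, recent: np.ndarray,
                         p_threshold: float = 0.01) -> bool:
    """Flag drift when the recent distribution of one feature differs
    significantly from its reference (training-time) distribution."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold

# Illustrative usage: compare training-time transaction amounts
# against a recent week of production traffic.
rng = np.random.default_rng(0)
training_amounts = rng.normal(loc=50.0, scale=10.0, size=5_000)
recent_amounts = rng.normal(loc=65.0, scale=12.0, size=1_000)  # shifted mean

if detect_feature_drift(training_amounts, recent_amounts):
    print("Data drift detected: input distribution has shifted.")
```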

When drift occurs, model predictions start to lose relevance, leading to declining accuracy and weaker performance. Even well-trained models can become stale and produce unreliable or inconsistent results without active monitoring and timely updates. To stay effective, production models must be retrained, recalibrated, or refreshed at regular intervals to align with new data realities.
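
To make "timely updates" concrete, one simple approach is to track a rolling accuracy window against the accuracy the model had at deployment and flag a refresh when it falls below a tolerance. This is a minimal sketch assuming labeled feedback arrives for recent predictions; the window size and tolerance are illustrative.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling accuracy tracker that signals when a model may need refreshing.
    Baseline, tolerance, and window size are illustrative values."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = wrong

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_refresh(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent feedback to judge
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return rolling_accuracy < self.baseline - self.tolerance
```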

The model drift-customer satisfaction connection

Model drift is one of the most important operational challenges to address when it comes to AI-based customer service. If an intent-recognition or routing model drifts, the AI may misinterpret what customers are asking, send them to the wrong channel, or respond with outdated information. Over time, this erodes customer satisfaction and undermines the performance metrics that matter most—such as first contact resolution, contact rate, and overall service quality.

A strong drift management strategy combines retraining with continuous visibility into system performance. Here, AI observability plays a key role. Observability tools track model health across data inputs, predictions, feedback loops, and infrastructure layers, helping teams detect when drift begins and take corrective action before it affects customers.
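
One common observability signal for this kind of monitoring is the population stability index (PSI), which compares the distribution of a feature or model score in production against a reference sample. The sketch below is a generic implementation rather than any particular vendor's API; the ~0.2 cutoff often cited for "significant" drift is a rule of thumb, not a hard standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a production distribution against a reference sample.
    Values above roughly 0.2 are commonly treated as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) on empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))
```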

Drift monitoring is therefore part of maintaining true “AI operations” maturity. It ensures that models powering customer-service agents remain consistent and trustworthy even as customer needs and context evolve.

Considerations and best practices

Sustaining model performance over time requires structured, ongoing management practices:

  • Continuous monitoring: Track performance metrics over time, watch for upward trends in error rates, and monitor the distribution of inputs.  
  • Retraining/refresh schedule: Depending on the rate of change in your domain, you may need periodic retraining or model updates.
  • Data-pipeline vigilance: Changes in upstream systems (e.g., a new channel type or a change in user behavior) can trigger drift.
  • Governance and alerts: Set thresholds for alerting data-science teams and automate drift-detection tooling (see the sketch after this list).
  • Adaptability: Use models or architectures designed to adapt to change (online learning, incremental updates).
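
As referenced above, governance thresholds and alerting can be wired together with a small amount of glue code. This sketch assumes drift metrics (such as a per-feature PSI or an accuracy drop) are computed elsewhere and simply checks them against illustrative limits, logging a warning for each breach.

```python
import logging

logger = logging.getLogger("drift_alerts")

# Illustrative governance thresholds; real values depend on the domain.
DRIFT_THRESHOLDS = {
    "feature_psi": 0.2,     # population stability index per feature
    "accuracy_drop": 0.05,  # allowed drop versus baseline accuracy
}

def evaluate_drift_alerts(metrics: dict) -> list[str]:
    """Compare current drift metrics against thresholds and return the
    names of any that breached, logging a warning for each."""
    breached = []
    for name, limit in DRIFT_THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            logger.warning("Drift alert: %s=%.3f exceeds threshold %.3f",
                           name, value, limit)
            breached.append(name)
    return breached

# Illustrative usage: metric values would come from the monitoring jobs above.
alerts = evaluate_drift_alerts({"feature_psi": 0.31, "accuracy_drop": 0.02})
```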

When paired with strong AI observability and governance, these practices keep customer-service AI reliable and aligned with real-world conditions. They help ensure that models continue to serve the business effectively—supporting both customer satisfaction and long-term return on investment.

Model drift is a reminder that machine-learning models are not “set and forget”. For customer-service AI systems, where the business context is dynamic, planning for drift (monitoring, updating, governance) is essential to maintain effectiveness, reliability, and ROI.
