
Fallback intent

A fallback intent is the designated response that an AI agent or chatbot delivers when it cannot confidently match a user's input to any recognized intent. Rather than producing an error, guessing incorrectly, or falling silent, the AI responds with a fallback, typically acknowledging that it did not understand the request and offering alternative paths forward.

Fallback intent is a standard component of conversational AI design, but it is also one of the most consequential. A poorly designed fallback frustrates users and ends conversations prematurely. A well-designed fallback keeps the interaction alive, gives the user a productive next step, and provides data that helps improve the system over time.

How fallback intent works

When a user sends a message, the AI system runs it through its intent detection model to identify what the user is trying to accomplish. If the top-matching intent does not meet the system's confidence threshold, the fallback intent is triggered instead of forcing a low-confidence match.
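The threshold check described above can be sketched as a small routing function. This is an illustrative sketch, not any particular vendor's API; the threshold value and intent names are assumptions.

```python
# Hypothetical sketch of confidence-threshold routing.
# The 0.7 threshold and intent names are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.7

def route(scored_intents: dict[str, float]) -> str:
    """Return the best-matching intent, or 'fallback' if no score clears the threshold."""
    intent, score = max(scored_intents.items(), key=lambda kv: kv[1])
    if score < CONFIDENCE_THRESHOLD:
        return "fallback"  # do not force a low-confidence match
    return intent

print(route({"check_order_status": 0.91, "cancel_order": 0.04}))  # check_order_status
print(route({"check_order_status": 0.42, "cancel_order": 0.38}))  # fallback
```

The key design choice is that the system compares only the top-scoring intent against the threshold, so a message that weakly resembles several intents still falls through to the fallback rather than being forced into the closest guess.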

The fallback response typically:

  • Acknowledges the failure clearly: Without placing blame on the user, the agent states that it did not understand the request.
  • Offers structured alternatives: Suggestions might include rephrasing the question, selecting from a menu of common topics, or connecting to a human agent.
  • Collects the unrecognized input: The message that triggered the fallback is logged for review, allowing teams to identify gaps in intent coverage and add new training data.
  • Avoids repetition: Good fallback design avoids triggering the same generic message repeatedly if the user makes multiple attempts, instead escalating to a handoff after a set number of failed matches.
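The last two points above, logging unrecognized input and escalating after repeated misses, can be combined in a small tracker. This is a minimal sketch under assumed names; real systems would persist the log and hand off through their own escalation channel.

```python
# Hypothetical fallback tracker: logs unrecognized messages and
# escalates to a human handoff after a set number of consecutive misses.
class FallbackTracker:
    def __init__(self, max_attempts: int = 2):
        self.max_attempts = max_attempts
        self.failures = 0
        self.unrecognized_log: list[str] = []  # reviewed later for intent gaps

    def on_match(self) -> None:
        self.failures = 0  # a successful match resets the counter

    def on_fallback(self, user_message: str) -> str:
        self.unrecognized_log.append(user_message)
        self.failures += 1
        if self.failures >= self.max_attempts:
            return "handoff"  # stop repeating; escalate to a human agent
        return "retry"        # vary the retry prompt rather than repeating it

tracker = FallbackTracker(max_attempts=2)
print(tracker.on_fallback("flibbertigibbet"))  # retry
print(tracker.on_fallback("flbbrtgbbt??"))     # handoff
```

Resetting the counter on a successful match matters: it distinguishes a user who is genuinely stuck from one who hit a single unrecognized phrase mid-conversation.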

Types of fallback responses

Not all fallback situations are equal, and sophisticated conversational AI systems use tiered fallback approaches:

  • Soft fallback: Triggered when confidence is low but not zero. The agent might offer its best guess and ask for confirmation before proceeding.
  • Hard fallback: Triggered when confidence is below a minimum threshold. The agent does not attempt a match and instead immediately acknowledges it cannot help and redirects the user.
  • Domain fallback: Triggered when the user's request is outside the scope of what the AI is designed to handle, even if the intent is clear, such as asking a customer service bot a question about an unrelated topic.
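One way to express the three tiers above is a classifier that checks scope first, then applies two thresholds. The threshold values and the in-scope intent set here are illustrative assumptions, not prescribed values.

```python
# Hypothetical tiered fallback classification. Thresholds and the
# in-scope intent set are illustrative assumptions.
HARD_THRESHOLD = 0.3   # below this: no guess at all
SOFT_THRESHOLD = 0.7   # between hard and soft: guess, but confirm
IN_SCOPE = {"check_order_status", "cancel_order", "track_shipment"}

def classify(intent: str, confidence: float) -> str:
    if intent not in IN_SCOPE:
        return "domain_fallback"  # clear intent, but outside the bot's scope
    if confidence < HARD_THRESHOLD:
        return "hard_fallback"    # acknowledge and redirect immediately
    if confidence < SOFT_THRESHOLD:
        return "soft_fallback"    # offer best guess, ask for confirmation
    return "match"

print(classify("cancel_order", 0.55))        # soft_fallback
print(classify("cancel_order", 0.12))        # hard_fallback
print(classify("weather_forecast", 0.95))    # domain_fallback
```

Note the ordering: the domain check runs before the confidence checks, because a confidently recognized but out-of-scope request should get a scope explanation, not a "please rephrase" prompt.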

Why fallback design matters

Fallback intent is a direct indicator of AI system quality. High fallback rates signal that the system's intent coverage is incomplete, that training data does not reflect how real users phrase requests, or that the AI's confidence thresholds are miscalibrated. Monitoring fallback rate as part of AI observability helps teams identify where to invest in model improvement.
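Monitoring fallback rate reduces to a simple ratio over conversation turns. A minimal sketch, assuming each turn is logged with its routing outcome:

```python
# Hypothetical fallback-rate metric: the fraction of logged turns
# that triggered the fallback intent.
def fallback_rate(outcomes: list[str]) -> float:
    if not outcomes:
        return 0.0
    return sum(1 for o in outcomes if o == "fallback") / len(outcomes)

turns = ["match", "fallback", "match", "match", "fallback"]
print(f"{fallback_rate(turns):.0%}")  # 40%
```

Tracking this number per intent and over time, rather than as a single global figure, is what makes it actionable: a spike in one topic area points directly at a coverage gap.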

From the customer's perspective, a fallback that quickly connects them to a human agent via AI agent handoff is far better than a series of unhelpful responses that force them to abandon the conversation and try a different channel. According to Google's Dialogflow documentation, building clear and helpful default fallback responses is one of the foundational practices of good conversational AI design.

Reducing fallback rates over time

Fallback intent should be treated as a feedback mechanism, not a permanent solution. Every fallback event is a signal that something the AI could not handle came up in a real conversation. Teams should regularly review fallback logs, cluster unrecognized inputs by topic, and add or improve intents to cover those gaps. This continuous improvement loop is what separates conversational AI systems that get better over time from those that remain static. For more on how intent recognition and fallback logic fit into broader conversational design, see conversational AI design.
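A first pass at clustering fallback logs by topic can be as simple as counting keywords across unrecognized messages. This is a rough sketch with an assumed stopword list; production systems typically use embedding-based clustering instead.

```python
# Hypothetical keyword-based grouping of fallback logs.
# The stopword list and sample messages are illustrative assumptions.
from collections import Counter

STOPWORDS = {"the", "a", "an", "my", "i", "to", "is", "on", "can", "you", "how", "do"}

def top_fallback_topics(log: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Return the n most frequent non-stopword keywords in unrecognized messages."""
    words: Counter[str] = Counter()
    for message in log:
        words.update(w for w in message.lower().split() if w not in STOPWORDS)
    return words.most_common(n)

log = [
    "can I change my delivery address",
    "update shipping address",
    "how do I change the address on my order",
]
print(top_fallback_topics(log))  # "address" surfaces as the dominant gap
```

Here the cluster around "address" suggests a missing change-of-address intent, which is exactly the kind of signal the review loop is meant to surface.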
