Response Metrics

Every interaction in PromptCue is composed of User Prompts and AI Responses. While your prompts drive the conversation, the AI's responses come with additional information that provides insight into performance and cost. This page explains the three key metrics displayed above every AI response:

  • Response Time
  • Response Length
  • Number of Tokens Consumed

What Do These Metrics Mean?

Response Time

Response Time is the duration between sending your prompt to the AI model and receiving its response.

  • Why It Matters:
    A faster response time leads to a smoother, more real-time interaction. Monitoring this helps you gauge the efficiency of the AI model.
Response Timeout

We set a 15-second timeout: if the AI doesn't respond within that period, your request will be automatically canceled.
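The response-time measurement can be pictured as a simple client-side sketch. This is illustrative only, assuming a blocking call; `call_model` is a hypothetical stand-in for the actual request to the AI provider, and PromptCue's real cancellation happens server-side.

```python
import time

def timed_request(call_model, prompt, timeout_s=15.0):
    """Measure response time for a model call (illustrative sketch).

    `call_model` is a hypothetical placeholder for the provider request;
    if the call took longer than `timeout_s`, flag it as timed out.
    """
    start = time.monotonic()
    response = call_model(prompt)       # blocking call to the model
    elapsed = time.monotonic() - start  # response time in seconds
    if elapsed > timeout_s:
        raise TimeoutError(f"No response within {timeout_s:.0f} seconds")
    return response, elapsed
```

A sub-second `elapsed` value corresponds to the "Response Time" figure shown above each AI response.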

Response Length

Response Length measures the amount of text generated by the AI model, typically in characters (or words).

  • Why It Matters:
    The length of the response can indicate the level of detail provided. A longer response might offer more comprehensive insights, whereas a shorter one may be concise and focused.
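Character and word counts are straightforward to compute. The snippet below is purely illustrative, using the sample response from later on this page:

```python
# Illustrative: character- vs word-based length of a sample AI response.
response = (
    "The latest trends in AI include the rise of transformer models, "
    "advances in deep learning, and increasing integration of AI in "
    "various business processes."
)
char_count = len(response)          # total characters, including spaces
word_count = len(response.split())  # whitespace-separated words
```

Note that character and word counts measure the same text at different granularities, which is why docs sometimes report either one.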

Number of Tokens Consumed

Token Consumption refers to the number of tokens that were used by the AI model to generate the response. Tokens are the basic units (words or subwords) processed by the model.

  • Why It Matters:
    Tracking token usage is essential for managing costs and understanding the complexity of the response. A higher token count may indicate a more detailed answer, which can be useful for deeper analysis.
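Exact token counts come from each model's own tokenizer (and are usually reported back by the provider's API), but a rough rule of thumb is that English text averages about four characters per token. The helper below is a hedged approximation, not how PromptCue or any provider actually tokenizes:

```python
def approx_token_count(text: str) -> int:
    """Very rough token estimate, assuming ~4 characters per token
    for English text. Real counts come from the model's tokenizer
    and vary between providers."""
    return max(1, len(text) // 4)
```

Such an estimate is only useful for ballpark budgeting; always rely on the reported Tokens Consumed metric for billing.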

How These Metrics Enhance Your Experience

  • Transparency:
    With these metrics displayed above each AI response, you can easily see how long it took to get an answer, how detailed the response is, and how many tokens it consumed.
  • Performance Insights:
    Analyzing these metrics helps you optimize your prompts: if a particular query takes too long or consumes too many tokens, you can adjust your approach accordingly.
  • Cost Management:
    By keeping track of token consumption, you gain insight into the cost implications of your interactions, allowing you to make more informed decisions.
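Token counts map directly to cost. The sketch below uses a hypothetical per-token price purely for illustration; real rates vary by model and provider:

```python
# Hypothetical pricing used only for illustration; check your
# provider's actual rate card for real per-token costs.
PRICE_PER_1K_TOKENS = 0.002  # USD per 1,000 tokens (assumed)

def estimate_cost(tokens_consumed: int) -> float:
    """Estimate the dollar cost of a response from its token count."""
    return tokens_consumed / 1000 * PRICE_PER_1K_TOKENS
```

For example, at this assumed rate a 48-token response would cost well under a hundredth of a cent, which is why per-response costs only become noticeable at scale.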

Example: A Practical Look at AI Responses

Imagine you ask:

What are the latest trends in artificial intelligence?

The AI might respond with:

The latest trends in AI include the rise of transformer models, advances in deep learning, and increasing integration of AI in various business processes.

Above this response, PromptCue displays:

  • Response Time: 1.2 seconds
  • Response Length: 145 characters
  • Tokens Consumed: 48 tokens

This detailed breakdown not only informs you about the speed and efficiency of the AI model but also provides valuable insights into the complexity and cost of the response.

Response Analysis

How the Tokens Consumed metric is calculated depends on the AI model: each provider uses its own tokenizer, so the same text can yield different token counts.


Next Steps

  • Experiment with Your Prompts:
    Try different types of prompts and observe how these metrics change. This will help you refine your queries for optimal performance.

  • Learn More:
    For additional details on optimizing your AI interactions, check out our Chatbox.

  • Need Support?
    If you have questions about these metrics or encounter issues, visit our Support & Resources page or contact our support team at support@promptcue.com.


With PromptCue’s transparent response metrics, you have all the information you need to fine-tune your AI interactions. Start experimenting today and unlock the full potential of your AI experience!