Model Selection

PromptCue gives you the power to choose from over 10 advanced AI models from leading providers—all within one seamless interface. Our model selection dropdown is designed to help you quickly find the best model for your needs by displaying detailed information on each option.


How Model Selection Works

When you click the model selection dropdown, you will see a list of supported AI models. As you hover over any model, a detailed popover or tooltip appears with key details, including the following (see the sketch after this list):

  • Model Description:
    Learn about the model and its provider. Whether it’s OpenAI’s GPT-4o series, Google’s Gemini models, Anthropic’s Claude, or others, each entry provides a brief overview of the model's strengths and typical use cases.

  • Token Limits:
    • Max Input Tokens: The maximum number of tokens the model can accept in a single request.
    • Max Output Tokens: The maximum number of tokens the model can generate in a response.

  • Data Freshness:
    Indicates the model's knowledge cutoff. This shows you how current the model’s data is, which is essential for applications that require up-to-date information.

  • Response Speed:
    Displays the average response time of the model. Fast response speeds are critical for real-time interactions and high-efficiency workflows.

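If it helps to picture the popover fields above as structured data, here is a minimal sketch in TypeScript. The interface name, field names, and example values are illustrative assumptions only, not PromptCue's actual schema or official model specifications.

```ts
// Hypothetical shape of the metadata shown in the model popover.
// Names and numbers are illustrative, not PromptCue's real schema.
interface ModelInfo {
  id: string;                  // e.g. "gpt-4o"
  provider: string;            // e.g. "OpenAI", "Google", "Anthropic"
  description: string;         // strengths and typical use cases
  maxInputTokens: number;      // largest prompt the model accepts
  maxOutputTokens: number;     // largest response the model can generate
  knowledgeCutoff: string;     // data freshness, e.g. "2023-10"
  avgResponseSeconds: number;  // typical response time
}

// Example entry with placeholder values.
const exampleModel: ModelInfo = {
  id: "gpt-4o",
  provider: "OpenAI",
  description: "General-purpose multimodal model for most chat tasks.",
  maxInputTokens: 128_000,
  maxOutputTokens: 16_384,
  knowledgeCutoff: "2023-10",
  avgResponseSeconds: 2,
};

// The kind of check the popover data lets you make when picking a model:
// will this prompt fit within the model's input limit?
function fitsPrompt(model: ModelInfo, promptTokens: number): boolean {
  return promptTokens <= model.maxInputTokens;
}

console.log(fitsPrompt(exampleModel, 50_000)); // true
```

In practice you simply read these values from the popover; the helper above just mirrors the comparison you might make mentally, such as checking that a long prompt fits within a model's input limit.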

Why It Matters

Choosing the right AI model can make all the difference:

  • Tailor Your Experience:
    With detailed model information at your fingertips, you can select the model that best aligns with your task—whether you need rapid responses, extensive context, or the most up-to-date data.

  • Efficiency and Performance:
    Models with higher token limits can handle longer, more complex queries, and faster response times deliver results more quickly, improving your workflow.

  • Flexibility Across Providers:
    Whether you’re working with OpenAI, Google, Anthropic, DeepSeek, or local models via Ollama, PromptCue’s unified interface makes it easy to compare and switch between models.


How to Use the Model Dropdown

  1. Open the Dropdown:
    Click on the model selection dropdown in the PromptCue interface.

  2. Review Model Details:
    As you hover over each model, a detailed popover appears with:

    • A brief description of the model and its provider.
    • Token limits (input and output) to help you gauge the model’s capacity.
    • The model’s data freshness (knowledge cutoff) so you know how current the model’s training data is.
    • Response speed information to assess performance.

  3. Make Your Choice:
    Select the model that best meets your needs and continue with your conversation. The chosen model will be used for your current and subsequent chats until you change it.

After selecting the right model for your task, you’re ready to start chatting with PromptCue. To see which models we support, visit our Supported AI Models page.

Switching Models Mid-Chat

Currently, we do not allow switching AI models mid-chat. This keeps each conversation unambiguous about which model generated its responses.


Harness the full potential of your AI interactions by making informed model choices—start exploring PromptCue’s Model Selection today!