Forget prompt engineering, just run an experiment
With Prompt Picker, you can experiment with and assess multiple prompts in parallel, which means faster iteration and better outcomes for your end users (and your bank account!).
Use Prompt Picker to...
Craft better user experiences
Ensure your users are getting the best possible experience from your application.
Reduce costs per query
Minimise your query tokens by choosing the shortest and most effective system prompts.
Get more helpful chat responses
Identify the best custom instructions for your daily use of ChatGPT.
Don't leave your responses to chance
Large language models can be opaque – but their performance doesn't have to be. Whether you are building enterprise software or just trying to get more helpful responses from ChatGPT, Prompt Picker ensures that you are getting the best possible results from your AI system.
Better system prompts in three steps
Step #1: Configure your experiment
Define the setup of your experiment, including the system prompts or custom instructions you want to test, the user inputs you want to assess them against, and the parameters of your language model.
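To make this concrete, an experiment configuration might look something like the sketch below. The field names and values are illustrative assumptions only, not Prompt Picker's actual schema:

    # Hypothetical experiment configuration. Field names and values are
    # illustrative assumptions, not Prompt Picker's actual schema.
    experiment = {
        "system_prompts": [
            "You are a concise, friendly support agent.",
            "You are a support agent. Answer in three sentences or fewer.",
        ],
        "user_inputs": [
            "How do I reset my password?",
            "Can I get a refund for last month's invoice?",
        ],
        "model": "gpt-3.5-turbo",
        "parameters": {"temperature": 0.7, "max_tokens": 256},
    }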
Step #2: Assess your generations
Using a double-blind experimental setup, choose the generations that best meet your users' needs.
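Conceptually, each comparison works something like the sketch below (a hypothetical helper, not Prompt Picker's own code): the two generations are shown in a random order under neutral labels, so your choice can't be biased by knowing which prompt produced which output.

    import random

    def blind_comparison(generation_a, generation_b):
        """Show two generations in a random order under neutral labels and
        return the id of the prompt whose output the reviewer preferred.
        Purely illustrative; not Prompt Picker's implementation.
        Each argument is a (prompt_id, generated_text) pair."""
        candidates = [generation_a, generation_b]
        random.shuffle(candidates)  # hide which prompt produced which output
        for label, (_, text) in zip(("Response 1", "Response 2"), candidates):
            print(f"{label}:\n{text}\n")
        choice = input("Which response is better? (1/2): ").strip()
        winner_prompt_id, _ = candidates[0] if choice == "1" else candidates[1]
        return winner_prompt_id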
Step #3: Get your results
Once your experiment is complete, Prompt Picker returns a ranking of the system prompts or custom instructions you tested, leaving you confident that you are using the best possible configuration for your use case.
Frequently asked questions
What language models can I use?
Currently, only GPT-3.5-turbo is supported, but GPT-4 support is coming soon!
How much does it cost?
Prompt Picker is currently free to use, but we will be introducing a paid tier with more features soon.
How do you rank the candidate system prompts?
Prompt Picker uses the Elo rating system to rank the candidate system prompts or custom instructions based on the pairwise comparisons you provide.
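As a rough illustration of the idea (not Prompt Picker's internal implementation), every candidate starts with the same rating and each pairwise choice nudges the winner's rating up and the loser's down; a standard K-factor of 32 and a starting rating of 1000 are assumed here:

    # Minimal Elo update sketch: standard chess-style formula with an assumed
    # K-factor of 32 and a starting rating of 1000 for every candidate prompt.
    def elo_update(winner_rating, loser_rating, k=32):
        expected_win = 1 / (1 + 10 ** ((loser_rating - winner_rating) / 400))
        return (winner_rating + k * (1 - expected_win),
                loser_rating - k * (1 - expected_win))

    ratings = {"prompt_a": 1000.0, "prompt_b": 1000.0, "prompt_c": 1000.0}
    # Suppose you preferred prompt_a's generation over prompt_b's in one comparison:
    ratings["prompt_a"], ratings["prompt_b"] = elo_update(ratings["prompt_a"], ratings["prompt_b"])
    ranking = sorted(ratings, key=ratings.get, reverse=True)  # best prompt first

Sorting the candidates by their final ratings gives the ranking you see in your results.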