Telling an AI to hurry up and “answer directly” hurts accuracy. A new Wharton study puts the hit at close to 10%. Ethan Mollick posted about the finding; see the link to his post in the comments.
I asked ChatGPT to generate examples of OK vs. better prompts. The table below has six; each pairs the old “answer directly” prompt with a smarter one that lets the model think first.
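One illustrative pair (my own, in the spirit of the table):
OK: “What’s 15% of 1,240? Just give me the number.”
Better: “Calculate 15% of 1,240. Show your steps, double-check the math, then give the final answer.”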
The study says that asking for direct answers doesn’t work as well because the AI:
Doesn’t have time to think
Can’t double-check its work
Focuses more on format than facts
Looks confident even when it’s wrong
This is great news because my go-to use case for AI is as a thought partner, not a Q&A machine.
So fascinating, don’t you think?
