Why do chain-of-thought prompts work so well?


I have been reading papers on chain-of-thought prompting and the performance gains are remarkable, especially on math and logic tasks. The prevailing theory is that CoT forces the model to allocate computation to intermediate steps rather than compressing reasoning into a single forward pass. Has anyone tested whether the reasoning steps themselves need to be correct, or does the format alone help?
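For anyone who wants to probe this themselves, here is a minimal sketch of three few-shot prompt variants: a direct-answer exemplar, a standard CoT exemplar, and a "corrupted" CoT exemplar whose intermediate arithmetic is deliberately wrong but whose format and final answer are kept. The exemplar text and question are made up for illustration, not taken from any paper; you would feed each prompt to the same model and compare accuracy.

```python
# Three few-shot prompt variants for testing whether CoT's benefit
# comes from correct reasoning steps or from the step-by-step format alone.

QUESTION = "A farmer has 15 sheep, buys 8 more, then sells 5. How many are left?"

# 1. Direct answer: the exemplar shows no reasoning.
direct = (
    "Q: Tom has 3 apples and buys 4 more. How many does he have?\n"
    "A: 7\n"
    f"Q: {QUESTION}\n"
    "A:"
)

# 2. Standard CoT: the exemplar's intermediate step is correct.
cot_correct = (
    "Q: Tom has 3 apples and buys 4 more. How many does he have?\n"
    "A: Tom starts with 3 apples. Buying 4 more gives 3 + 4 = 7. "
    "The answer is 7.\n"
    f"Q: {QUESTION}\n"
    "A:"
)

# 3. Corrupted CoT: same step-by-step format, but the intermediate
#    arithmetic is deliberately wrong (the final answer is unchanged).
cot_corrupted = (
    "Q: Tom has 3 apples and buys 4 more. How many does he have?\n"
    "A: Tom starts with 3 apples. Buying 4 more gives 3 + 4 = 9. "
    "The answer is 7.\n"
    f"Q: {QUESTION}\n"
    "A:"
)

for name, prompt in [
    ("direct", direct),
    ("cot_correct", cot_correct),
    ("cot_corrupted", cot_corrupted),
]:
    print(f"--- {name} ---\n{prompt}\n")
```

If accuracy with `cot_corrupted` stays close to `cot_correct`, that would suggest the format itself drives much of the gain; a large drop would point to the correctness of the demonstrated reasoning mattering.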


