Community

A community of prompt engineers sharing insights, tips, and lessons learned


  • Tip: Always include output format in your prompts

    One of the most impactful improvements you can make to your prompts is specifying the exact output format you want. Whether you need JSON, markdown, a bulleted list, or a…

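A minimal sketch of the idea, assuming a simple helper (the function and example strings are illustrative, not from the post): stating the exact output format up front lets you parse the reply mechanically.

```python
import json

def build_prompt(task: str, format_spec: str) -> str:
    """Append an explicit output-format requirement to a task prompt."""
    return (
        f"{task}\n\n"
        f"Respond ONLY with {format_spec}. "
        "Do not include any prose outside that format."
    )

prompt = build_prompt(
    "Extract the product name and price from the review below.",
    'a JSON object with keys "product" (string) and "price" (number)',
)

# A reply that follows the spec parses directly (hypothetical model reply):
reply = '{"product": "Widget", "price": 9.99}'
data = json.loads(reply)
```

The payoff is downstream: a format-constrained reply can go straight into `json.loads` instead of a fragile regex.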

  • TIL

    Tip: Temperature 0 for deterministic outputs

    If you need reproducible, consistent outputs from an LLM, set temperature to 0. This makes the model always select the highest-probability token at each step, effectively making it deterministic. This…

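To see why temperature 0 behaves deterministically, here is a toy sketch of the sampling math (illustrative, not any particular model's implementation): lowering temperature sharpens the softmax until it collapses onto the argmax, which is exactly what greedy selection does.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over logits at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def sample_greedy(logits):
    """Temperature 0 is equivalent to always taking the argmax token."""
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [2.0, 5.0, 3.5]
# At low temperature the distribution concentrates on the top logit:
probs = softmax(logits, temperature=0.1)  # nearly one-hot on index 1
best = sample_greedy(logits)              # index 1, every time
```

Note that in practice some APIs still show tiny run-to-run variation at temperature 0 (e.g. from floating-point nondeterminism), so "effectively deterministic" is the right hedge.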

  • TIL

    My AI-powered code review setup

    Sharing my code review workflow that catches issues our team of five humans regularly misses. I feed each PR diff to Claude with a structured review prompt covering bugs, security,…

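A sketch of what a structured review prompt like this might look like; the helper name, focus areas, and template wording are hypothetical reconstructions, not the author's actual setup.

```python
def build_review_prompt(diff: str, focus_areas=("bugs", "security", "performance")) -> str:
    """Wrap a PR diff in a structured review prompt (illustrative template)."""
    checklist = "\n".join(f"- {area}" for area in focus_areas)
    return (
        "Review the following diff. For each focus area, list concrete "
        "findings with file and line references, or state 'no issues'.\n\n"
        f"Focus areas:\n{checklist}\n\n"
        f"<diff>\n{diff}\n</diff>"
    )

review_prompt = build_review_prompt("+ password = request.args.get('pw')")
```

Keeping the checklist in the prompt (rather than in your head) is what makes the review repeatable across PRs.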

  • TIL

    Tip: Use XML tags to structure complex prompts

    Wrapping prompt sections in XML tags dramatically improves output quality, especially with Claude. Instead of relying on markdown headers or plain text separators, use tags like <instructions>, <context>, <example>, and…

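A minimal sketch of assembling a tagged prompt; the tag names shown are common conventions from Anthropic's prompting docs, used here as illustrative choices.

```python
def tag(name: str, content: str) -> str:
    """Wrap a prompt section in an XML-style tag."""
    return f"<{name}>\n{content}\n</{name}>"

prompt = "\n\n".join([
    tag("instructions", "Summarize the document in three bullet points."),
    tag("context", "Audience: new team members."),
    tag("document", "...full text here..."),
])
```

The tags give the model unambiguous section boundaries, so instructions never bleed into the document being processed.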

  • TIL

    TIL: Few-shot examples beat zero-shot for classification tasks

    Ran a systematic comparison on a sentiment classification dataset: zero-shot hit 78% accuracy, one-shot reached 85%, and three-shot examples pushed it to 91%. The examples do not need to be…

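A sketch of how a few-shot sentiment prompt could be assembled (the example texts and labels are made up for illustration, not the post's dataset):

```python
def few_shot_prompt(examples, text):
    """Build a classification prompt from (text, label) example pairs."""
    shots = "\n\n".join(
        f"Text: {t}\nSentiment: {label}" for t, label in examples
    )
    return f"{shots}\n\nText: {text}\nSentiment:"

examples = [
    ("I love this phone", "positive"),
    ("Battery died in a day", "negative"),
    ("It arrived on Tuesday", "neutral"),
]
p = few_shot_prompt(examples, "Great screen, terrible speakers")
```

Ending the prompt at "Sentiment:" nudges the model to complete with just a label, which also makes the output trivial to parse.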

  • TIL

    Why do chain-of-thought prompts work so well?

    I have been reading papers on chain-of-thought prompting and the performance gains are remarkable, especially on math and logic tasks. The prevailing theory is that CoT forces the model to…

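For readers new to the technique, here is the contrast the papers study, as a small sketch (the question is illustrative; "Let's think step by step" is the standard zero-shot CoT trigger phrase from the literature):

```python
question = "If a train travels 60 miles in 1.5 hours, what is its average speed?"

# Direct prompting asks for the answer in one jump:
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting makes the intermediate reasoning explicit:
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```

The only difference is the trailing cue, yet it pushes the model to emit the intermediate steps (60 / 1.5 = 40 mph) before the final answer, which is where the measured gains come from.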

  • TIL

    How I automated my entire blog workflow with AI

    I built a complete blog automation pipeline using chained AI prompts. Step one generates topic ideas from trending keywords. Step two creates detailed outlines. Step three drafts each section with…

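The chained-prompt structure described can be sketched as a simple pipeline; the stage wording and the stub "model" below are hypothetical, standing in for whatever API the author actually calls.

```python
def run_pipeline(seed_keywords, llm):
    """Chain prompts: topic ideas -> outline -> drafted sections."""
    ideas = llm(f"Suggest blog topics about: {', '.join(seed_keywords)}")
    outline = llm(f"Write a detailed outline for: {ideas}")
    draft = llm(f"Draft each section of this outline:\n{outline}")
    return draft

# Stub "model" so the control flow is runnable without an API key:
def echo_llm(prompt):
    return f"[output for: {prompt[:30]}...]"

result = run_pipeline(["prompt engineering"], echo_llm)
```

Passing the `llm` callable in makes each stage easy to swap or test in isolation, which matters once a pipeline grows past two or three steps.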

  • TIL

    TIL: Claude handles 200K token contexts better than expected

    I tested Claude with a full 200K token context containing a large codebase and was surprised by how accurately it could reference specific functions deep in the input. The attention…

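One way to reproduce this kind of long-context recall test is a "needle in a haystack" setup; the sketch below is a hypothetical reconstruction of the experiment, not the author's actual script.

```python
def build_long_context_test(target_snippet, filler_snippet, n_filler, position):
    """Bury one target snippet among filler to probe long-context recall."""
    chunks = [filler_snippet.replace("NAME", f"helper_{i}") for i in range(n_filler)]
    chunks.insert(position, target_snippet)  # place the needle deep in the context
    context = "\n\n".join(chunks)
    question = "Which function computes the checksum, and what does it return?"
    return f"{context}\n\n{question}"

filler = "def NAME(x):\n    return x + 1"
target = "def compute_checksum(data):\n    return sum(data) % 256"
prompt = build_long_context_test(target, filler, n_filler=500, position=400)
```

Varying `position` and `n_filler` lets you check whether recall degrades for needles buried in the middle of very long inputs, which is the usual failure mode such tests look for.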