[Review] Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
The paper discusses prompt engineering, focusing mainly on GPT-3, and compiles several prompt programming approaches.
Background:
The recent rise of massive self-supervised language models such as GPT-3 has renewed interest in prompt engineering. For such models, well-crafted 0-shot prompts may significantly outperform few-shot prompts, which again highlights the importance of prompt engineering.
Some facts:
- 0-shot prompts may outperform few-shot prompts: rather than serving as a categorical guide that the model learns from, the examples' semantic content mainly signals which task (already learned during pretraining) is being asked for (see the sketch after this list).
- GPT-3 resembles not a single human author but a superposition of authors, so a prompt serves to narrow down which "author" the model emulates.
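To make the 0-shot vs. few-shot distinction concrete, below is a minimal sketch of the two prompt styles for a translation task. The exact wording and the `build_few_shot` helper are illustrative assumptions, not the paper's prompts; the `=>` example format follows the style popularized by the GPT-3 paper.

```python
# Illustrative prompt construction (assumed formats, not the paper's exact prompts).
# A few-shot prompt lists input => output examples; a 0-shot prompt states the
# task directly, relying on the model to locate a task it already knows.

def build_few_shot(task_description: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a GPT-3-style few-shot prompt from (input, output) example pairs."""
    lines = [task_description, ""]
    lines += [f"{x} => {y}" for x, y in examples]
    lines.append(f"{query} =>")  # leave the answer for the model to complete
    return "\n".join(lines)

def build_zero_shot(query: str) -> str:
    """Build a 0-shot prompt that describes the task in natural language."""
    return f'Translate the English word "{query}" into French:'

if __name__ == "__main__":
    print(build_few_shot(
        "Translate English to French:",
        [("sea otter", "loutre de mer"), ("peppermint", "menthe poivrée")],
        "cheese",
    ))
    print()
    print(build_zero_shot("cheese"))
```

The point of the comparison is that the few-shot examples do not teach translation from scratch; they mostly disambiguate what task the completion should perform, which a sufficiently explicit 0-shot description can also do.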
![[Review] Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](/blog/images/22/cover.png)