PaperCard ‣ CRITIC: LLMs Can Self-Correct With Tool-Interactive Critiquing

Andrei
2 min read · Apr 24, 2024


Hi-res version: https://d3ddy8balm3goa.cloudfront.net/paper-cards/2024_w16-critic.excalidraw.svg

Every so often, I read a paper and feel compelled to summarize its key aspects in a memorable format. This helps me not only deepen my understanding of the paper's contents but also refresh my memory of it whenever future me wants or needs to. I share these PaperCards with the public so that others, too, may benefit from them.

Paper

Gou, Zhibin, et al. “CRITIC: Large language models can self-correct with tool-interactive critiquing.” arXiv preprint arXiv:2305.11738 (2023).

Highlights

1. Reflection is one of the four agentic workflow patterns that Andrew Ng has said will drive progress for LLM agents in 2024. This design pattern adds an automated critique-and-correct step to the LLM's responses. Andrew Ng notes from personal experience that doing so often yields performance gains, which corroborates the results of the CRITIC paper.

2. The experiments in this paper demonstrate that LLMs alone cannot be relied on to critique and correct their own work, underscoring the importance of using external tools.

3. The CRITIC method requires no fine-tuning and can be applied on top of any black-box LLM. That said, CRITIC works better with stronger LLMs, which is quite intuitive. Perhaps further gains could be realized by using an LLM fine-tuned to produce better critiques with certain tools.
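For intuition, here is a minimal Python sketch of the verify-then-correct loop that CRITIC describes. The `llm` and `tool` callables, the prompt wording, and the "CORRECT" stopping convention are all illustrative assumptions on my part, not the paper's exact implementation.

```python
from typing import Callable

def critic_loop(
    llm: Callable[[str], str],   # any black-box text-completion callable
    tool: Callable[[str], str],  # external evidence, e.g. search or an interpreter
    question: str,
    max_rounds: int = 3,
) -> str:
    # Initial answer from the LLM, before any critiquing
    answer = llm(f"Question: {question}\nAnswer:")
    for _ in range(max_rounds):
        # Ground the critique in an external tool rather than the LLM alone
        evidence = tool(answer)
        critique = llm(
            f"Question: {question}\nAnswer: {answer}\n"
            f"Evidence: {evidence}\n"
            "Critique the answer against the evidence. "
            "Reply CORRECT if no revision is needed."
        )
        if "CORRECT" in critique:
            break
        # Revise the answer in light of the tool-backed critique
        answer = llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nRevised answer:"
        )
    return answer
```

Note that only the critique is tool-grounded; the correction step is still the same black-box LLM, which is why the method needs no fine-tuning.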

Written by Andrei

Founding software/machine-learning engineer at LlamaIndex. (https://ca.linkedin.com/in/nerdai)
