AI Blindspots

Know Your Limits

It is important to know when you are out of your depth or lack the tools to do your job, so you can escalate and ask for help.

Sonnet 3.7 is not very good at knowing its limits. If you want it to tell you when it doesn’t know how to do something, at minimum you will have to prompt it explicitly (for example, Sonnet’s system prompt instructs it to warn the user about possible hallucinations when it is asked about a very niche topic). It is very important to only ask the LLM to do things it actually can do, especially when it is acting as an agent.
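One way to apply this advice is to bake the "admit your limits" instruction into the system prompt yourself. Below is a minimal sketch assuming the Anthropic Python SDK's `messages.create` call; the model name, instruction wording, and `build_request` helper are all illustrative, not taken from Sonnet's actual system prompt:

```python
# Hypothetical system prompt that asks the model to flag its limits
# up front rather than guessing; the wording here is an assumption.
SYSTEM_PROMPT = (
    "If you are asked to do something you cannot do, or asked about a "
    "very niche topic where you may hallucinate, say so explicitly "
    "before answering instead of guessing."
)

def build_request(user_message: str) -> dict:
    """Build a request payload for the Anthropic Messages API.

    With the SDK installed and an API key configured, you would send it as
    client.messages.create(**build_request(...)).
    """
    return {
        "model": "claude-3-7-sonnet-latest",  # illustrative model name
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }
```

The point is not the exact wording but that the instruction must be explicit: without it, the model will usually attempt the task rather than tell you it is out of its depth.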

Examples