Prompt caching significantly reduces LLM costs and latency by storing and reusing responses to repeated or similar prompts. The core technique is to check a cache before sending a prompt to the LLM and return a prior result if one is available. Effective caching requires balancing cache size, retrieval speed, and a strategy for handling slight prompt variations, often by matching prompts on semantic similarity with a vector database rather than on exact text.
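
The sketch below illustrates the check-cache-before-calling pattern: an exact-match lookup for repeated prompts, plus a cosine-similarity fallback for slight variations. The `embed` and `call_llm` functions are placeholders for a real embedding model and LLM client, and the similarity threshold is an illustrative value; a production system would typically back the similarity search with a vector database.

```python
import hashlib
import numpy as np

# --- Stand-ins (assumptions): replace with a real embedding model and LLM client. ---
def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding; a real system would call an embedding model."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def call_llm(prompt: str) -> str:
    """Placeholder for the real (slow, paid) LLM call."""
    return f"<response to: {prompt!r}>"


class PromptCache:
    """Exact-match cache with a semantic-similarity fallback."""

    def __init__(self, similarity_threshold: float = 0.92):
        self.exact: dict[str, str] = {}        # prompt hash -> response
        self.entries: list[tuple[np.ndarray, str]] = []  # (embedding, response)
        self.similarity_threshold = similarity_threshold

    def _key(self, prompt: str) -> str:
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt: str) -> str | None:
        # 1. Cheap exact lookup for repeated prompts.
        key = self._key(prompt)
        if key in self.exact:
            return self.exact[key]
        # 2. Semantic lookup for slight variations: cosine similarity against
        #    cached prompt embeddings (a vector database in production).
        if self.entries:
            query = embed(prompt)
            for vec, response in self.entries:
                sim = float(np.dot(query, vec))  # embeddings are unit-normalized
                if sim >= self.similarity_threshold:
                    return response
        return None

    def put(self, prompt: str, response: str) -> None:
        self.exact[self._key(prompt)] = response
        self.entries.append((embed(prompt), response))


def cached_completion(cache: PromptCache, prompt: str) -> str:
    """Check the cache before paying for an LLM call."""
    hit = cache.get(prompt)
    if hit is not None:
        return hit
    response = call_llm(prompt)
    cache.put(prompt, response)
    return response
```

In this sketch, a repeated prompt hits the hash lookup immediately, while a lightly rephrased prompt can still hit the semantic fallback; the threshold controls the trade-off between cache hit rate and the risk of returning a stale or mismatched response.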