Jeremy Howard 10/1/2025

Cachy: How we made our notebooks 60x faster.

The article details how AnswerAI created Cachy, a Python package that patches the httpx library to automatically cache responses from LLM providers like OpenAI and Anthropic. This eliminates slow, non-deterministic LLM calls in tests and development, making notebooks 60x faster, enabling CI/CD integration, and producing cleaner code diffs without manual mocking.
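The post describes patching httpx itself so every provider SDK built on it gets caching for free; as a rough, hypothetical sketch of the underlying caching idea (the function name, keying scheme, and in-memory store here are assumptions for illustration, not Cachy's actual API), one could wrap any provider call with a cache keyed on a hash of the request:

```python
import hashlib
import json
from functools import wraps

def cached_llm_call(fn, cache=None):
    """Wrap an LLM-call function with a response cache.

    Hypothetical sketch: Cachy patches httpx internals instead, so it
    works transparently for any client library. Here we simply key on
    a hash of the JSON-serialized arguments.
    """
    cache = {} if cache is None else cache

    @wraps(fn)
    def wrapper(*args, **kwargs):
        key = hashlib.sha256(
            json.dumps([args, kwargs], sort_keys=True).encode()
        ).hexdigest()
        if key not in cache:
            # Only hit the (slow, non-deterministic) provider on a miss;
            # repeat calls with identical arguments return instantly.
            cache[key] = fn(*args, **kwargs)
        return cache[key]

    return wrapper

# Usage: wrap a stand-in for a provider call and invoke it twice.
calls = []

def fake_provider(prompt):
    calls.append(prompt)          # track real "network" invocations
    return f"response to {prompt}"

llm = cached_llm_call(fake_provider)
first = llm("hello")
second = llm("hello")             # served from cache; no second call
```

Because identical inputs now return identical cached outputs, tests become fast and deterministic, which is what makes CI/CD integration and clean notebook diffs possible.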
