Building better software with AI
How Attio uses internal AI tooling like Alexandria, together with Claude 3.7 Sonnet, to power smarter, context-aware software development.

Alexander Christie on June 02, 2025
4 min read
At Attio, we believe that AI is fundamentally changing the way that software gets built. As an AI-native company, it’s part of our mission to understand these tools and to advance what is possible.
With the recent release of Claude 3.7 Sonnet, we wanted to discuss some of the ways that we have used the model to upgrade our internal tooling and accelerate the way that we build Attio.
Level 0 - Tooling and Access
You can’t learn about tools you can’t access, so we make sure that we have accounts (with security and billing settings in place) with all of the major tools and releases in the AI development space.
As a baseline, we make sure that every member of the team at Attio has access to the tools they want to use, and that we can educate each other as new tools become available. When new tools are released, we test them out, share our learnings internally and offer everyone the time, space and budget to experiment where appropriate.
We have a diverse engineering team that enjoys using a wide variety of tools, ranging from classic Vim, through more conventional tools like IntelliJ, to cutting-edge ones like Cursor and Zed. This gives us the unique ability to try out a wide range of applications and bring together the best of their AI solutions for the team.
Level 1 - Alexandria (Lessons)
One of the most common problems we found with LLM-based development tools is that they try to learn about your codebase exclusively from the code in your repository. Even with great indexing, we haven’t found this to be an effective way for them to learn our standards and plan new development.
To solve this, we created Alexandria, an internal documentation library written specifically for LLMs. Alexandria comprises “Lessons”: markdown files that explain, in natural language, the rules and conventions of our codebase. Each lesson file contains a front matter annotation which specifies the areas of our codebase to which the lesson applies. We’ve found this to be a surprisingly accurate way to scope context to the underlying models, beyond what is achieved through commonly applied embedding techniques or tool calls.
Alexandria files are written in an RFC-inspired style which, whilst a little blunt for a human reader, works well for our silicon-based companions.
An example of an Alexandria file showing how to work with styled components.
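A minimal sketch of what such a lesson might look like; the front matter field names and glob syntax here are our own assumptions, not the exact internal format:

```markdown
---
# Hypothetical front matter: scopes this lesson to matching file paths
applies_to:
  - "src/components/**/*.tsx"
  - "src/**/*.styles.ts"
---

# Styled components

Components MUST define their styles in a sibling `*.styles.ts` file.
Styled component names MUST be prefixed with `Styled` (e.g. `StyledButton`).
Inline `style` props SHOULD NOT be used for static styling.
```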
Level 2 - Automated Code Review
We use our lesson files to drive an automated code review agent. Our agent runs on GitHub Actions and, whilst originally built to use the OpenAI o-series reasoning models, is now powered by Claude 3.7 Sonnet.
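A hypothetical workflow for wiring up an agent like this might look like the following; the script path and secret name are assumptions on our part:

```yaml
# Hypothetical GitHub Actions workflow for an AI review agent
name: ai-code-review
on: pull_request

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      # Script path and secret name are illustrative assumptions
      - run: npx tsx scripts/ai-review.ts
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```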
The agent works through a tool-based system: files are first grouped into logical sub-groups, relevant lessons are loaded into the context based on the file paths, and a review is undertaken.
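A sketch of the lesson-scoping and review steps, assuming front matter like the example above; the helper names, the `gray-matter` and `minimatch` dependencies, and the prompt are our own illustration, not the internal implementation:

```typescript
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";
import matter from "gray-matter"; // front matter parser
import { minimatch } from "minimatch"; // glob matching
import Anthropic from "@anthropic-ai/sdk";

interface Lesson {
  appliesTo: string[]; // glob patterns from the front matter
  body: string; // the natural-language rules themselves
}

// Load every lesson file from the Alexandria directory
function loadLessons(dir: string): Lesson[] {
  return readdirSync(dir)
    .filter((file) => file.endsWith(".md"))
    .map((file) => {
      const { data, content } = matter(readFileSync(join(dir, file), "utf8"));
      return { appliesTo: data.applies_to ?? [], body: content };
    });
}

// Keep only the lessons whose globs match a changed file, so the model
// sees the conventions that actually apply to this diff
function lessonsFor(changedFiles: string[], lessons: Lesson[]): Lesson[] {
  return lessons.filter((lesson) =>
    lesson.appliesTo.some((pattern) =>
      changedFiles.some((file) => minimatch(file, pattern)),
    ),
  );
}

// Run one review pass over a sub-group of files
async function review(diff: string, changedFiles: string[]): Promise<string> {
  const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the env
  const lessons = lessonsFor(changedFiles, loadLessons("alexandria"));
  const message = await client.messages.create({
    model: "claude-3-7-sonnet-latest",
    max_tokens: 4096,
    system: lessons.map((lesson) => lesson.body).join("\n\n"),
    messages: [{ role: "user", content: `Review this diff:\n\n${diff}` }],
  });
  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}
```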
Today, our agents act in an “advisory” mode, using the GitHub Annotations API to call out potential issues that the reviewer might need to be aware of.
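From inside a GitHub Actions job, one straightforward way to produce such annotations is the workflow-command syntax, which the runner turns into annotations on the pull request; the `Finding` shape here is our own:

```typescript
// Hypothetical shape for a single review finding
interface Finding {
  file: string;
  line: number;
  message: string;
}

function annotate(findings: Finding[]): void {
  for (const { file, line, message } of findings) {
    // "warning" keeps the feedback advisory rather than failing the check
    console.log(`::warning file=${file},line=${line}::${message}`);
  }
}
```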
As a rapidly growing engineering team, one of the key challenges we face is communicating the complex “ancestral wisdom” about our codebase to new members of the team. Overcoming this barrier has historically involved a long onboarding period spent pairing deeply with early members of the team, but we’re increasingly seeing Alexandria take over this role. AI annotations on pull requests rapidly deliver nuanced feedback to new joiners, whilst giving more experienced team members a quick and easy way to note down the points they see as lessons.
We don’t yet run the AI code review system as an actual reviewer in GitHub. This is largely because we want to prevent “AI fatigue”, where the team begin to ignore the automated reviews or mistrust the models due to the occasional mishap. Our current annotation-based system strikes a balance: it isn’t overly intrusive when it’s wrong, whilst still being incredibly helpful to everyone involved.
Level 3 - MCP servers for Alexandria
Since developing our library of Alexandria lessons, we have also expanded the system with a local MCP server that can be connected to a variety of models and tools. This MCP server gives models explicit access to the documentation in a convenient way, aiding better decision-making in autonomous tools.
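A minimal sketch of what such a server might look like using the public MCP TypeScript SDK; the tool name and schema are our own assumptions, and it reuses the hypothetical `loadLessons`/`lessonsFor` helpers from the earlier sketch:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "alexandria", version: "0.1.0" });

// Expose the lessons as a tool the model can call on demand;
// loadLessons / lessonsFor are the hypothetical helpers sketched earlier
server.tool(
  "get_lessons",
  "Return the Alexandria lessons relevant to a given file path",
  { path: z.string().describe("Repository-relative file path") },
  async ({ path }) => {
    const lessons = lessonsFor([path], loadLessons("alexandria"));
    return {
      content: [
        { type: "text", text: lessons.map((l) => l.body).join("\n\n") },
      ],
    };
  },
);

// Serve over stdio so local tools can launch the server as a subprocess
await server.connect(new StdioServerTransport());
```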
The tool we have seen the most exciting results from so far is Anthropic’s Claude Code, which, combined with the Alexandria MCP server, is able to reliably deliver code that “feels” like it was written at Attio.
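Claude Code can discover a project-scoped MCP server from a `.mcp.json` file at the repository root; a hypothetical registration for a server like the one above (the launch command and path are assumptions):

```json
{
  "mcpServers": {
    "alexandria": {
      "command": "node",
      "args": ["./tools/alexandria-mcp/server.js"]
    }
  }
}
```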
We’re really excited about the future of software engineering as LLMs become more capable and widely deployed. As we progress on our journey with AI, we’re increasingly finding that model capabilities are often held back by the infrastructure around them. This is particularly apparent when comparing the state of commercially available tools for AI code review or code modification against what can be built internally. At Attio, we’re constantly experimenting with implementation-led approaches, such as Alexandria, to support LLM capabilities.
Interested in redefining one of the world’s most important software categories? Check out our careers page.