I Built Alexandria for My AI Agents

I had finally had it. Anyone who knows me knows I’m not the most patient person when it comes to repetitive work. The same frameworks, learnings, structure, process… it had all become a grind.

Re-teaching my AI agents the same patterns across different projects was beyond annoying.

Getting agents to function as top performers in a specific role or discipline takes a lot of finessing domain knowledge into their functional context. But just as with humans, most of the teaching process is repetition.

It started months ago, when a client asked for a behavioural analysis of their product. This is something fundamental to the work I do, so I had already developed a specialized agent ready to run this specific type of analysis.

Now it’s important to note that the point of agents like this (at that time) was to get the 80% result and allow me to refine the last 20%. I’d apply the highly contextual analysis myself and feed each general learning back into the agent to make it stronger for the next task.

What’s worse is that I was manually connecting tasks between agents for higher-quality output, at the cost of my own time. More work on my side.

When clients needed a new agent (or agents) constructed from the ground up, well, that was front-loaded with tons of additional process, knowledge building, context, and the whole gamut of construction added to the work.

But this was becoming tedious.

Why wouldn’t an agent or agents be able to absorb these learnings faster? Why couldn’t these agents work together, naturally deferring to those with domain expertise and spawning new agents when necessary? Why couldn’t they evaluate each other, share learnings, and have real-time bidirectional conversations? You know…

Like a real team or org.

I built Alexandria to fix this.

Library of Alexandria - Ancient knowledge repository

Alexandria is my centralized hub of knowledge for 100+ specialized AI agents organized into functional teams. It’s the entire ecosystem where agents live, collaborate, and learn from each other. At its core, Alexandria is a learning API that enables cross-project knowledge sharing through vector-based storage. When one agent learns something valuable, every agent across every project can access it. My behavioural agent’s analysis patterns now help my sales agent, just as my documentation agent’s technical clarity improves my case study agent. Knowledge flows freely instead of staying siloed.

Since launching Alexandria, my agent onboarding dropped from hours to minutes. Repeated mistakes fell 90%. Quality became consistent across all my projects… old and new.

And I want to share with you how I built it and how it solves the knowledge silo problem.

Knowledge Silos Kill Productivity

Picture this scenario. You’re juggling three active projects:

Project 1: Your Content Creation Site (Jottly) Your suite of notetaking and content agents has learned exactly how to format your “jots”. It knows you prefer bullet points that start with action verbs. It understands your technical depth sweet spot. After 15 iterations, it finally nails your professional voice. It knows the type of writer you are and provides feedback on ways you can improve that are specialized to YOU.

Project 2: Your Blog (PM_Notebook) You switch to your blog project. The Chief Editor agent needs to review your article on AI tooling and content. It starts learning your preferred structure, your writing voice, and your technical level. But wait, didn’t your Jottly agents just learn all of this? Aren’t they actively learning and providing feedback?!

Project 3: Your SaaS Product (WorkerBee) Now you’re documenting WorkerBee’s API. The technical writer agent asks for style preferences. Active voice or passive? Code-first or concept-first? You’ve already explained this twice to other agents.

I use content as an example because it’s the easiest for people to conceptualize (you’re reading this, right?!). Applying this to domain-specific roles is easy to understand if you’ve worked on large teams. Software engineering, art, design, writing, product: literally every discipline has specializations that overlap with others. Not to mention direct dependencies that require close communication in tight iteration cycles.

We’ve felt this pain in real life and have clear frameworks to help solve these issues, but first…

No Pain, No Gain

If it weren’t for these pain points accumulating over months, I would have never created Alexandria. So let’s crystallize the pains it aims to solve.

Pain #1: Knowledge Silos Each project’s agents operate in isolation. Your CV agent’s learnings stay in the CV project. Your blog agent can’t access them. Knowledge that should be shared stays trapped.

Pain #2: Memory Loss Agents don’t persist learnings between sessions. You taught your agent how to write compelling hooks last week. Today, it asks again. The knowledge disappeared when the session ended. This is a fundamental limitation in agentic AI maturity that Alexandria addresses.

Pain #3: Repeated Mistakes Without cross-project learning, agents make the same mistakes in multiple projects. Your CV agent struggled with technical depth. Your blog agent hits the same issue. You fix it twice instead of once.

Pain #4: Real-time Communication Without agent-to-agent communication, all of this falls apart. The faster you allow individuals to communicate, the better the results.

Pain #5: Quality Assurance How do you verify the quality of an agent’s output?

Pain #6: Automation How do we leverage our time? By becoming an architect rather than a babysitter.

The Consequences

This isn’t just annoying. It’s expensive.

Time Cost: You spend 2-3 hours per project teaching agents patterns they should already know.

Quality Cost: Inconsistent agent performance across projects. Your CV looks professional. Your blog reads differently. Your product briefs use a third style.

Scaling Cost: Every new project requires full agent training. Want to start a fourth project? Clear your calendar for agent onboarding.

Agents lack cross-project learning capability. They can’t share knowledge. They can’t build on learnings from previous projects. They reset with every new context, forcing you to rebuild intelligence from scratch.

Alexandria fixes this by enabling cross-project learning through a centralized knowledge base for all your agents.

The Mouseion

The Mouseion was the structure on which the Library was built, containing lecture halls, gardens, dining areas, and numerous scrolls (over 700,000 at its peak).

Alexandria is my centralized agent ecosystem and learning platform. It contains more than 150 specialized agents organized into 16 functional teams, plus a learning API that enables cross-project knowledge sharing. Think of it as both a shared brain and an agent network hub that stores and distributes knowledge across all your projects.

By now, you get the gist: when you use AI agents across multiple projects, each one starts from scratch. Agents are cool, but teams of agents are better. And none of them share knowledge through cross-project learning.

Alexandria changes this by making AI agent learning transferable across projects. The learning API is the mechanism that captures and distributes knowledge across the agent network.

The Meat & Potatoes

WARNING: I’m going to take one step into the technical framework. Not too deep, and most of it should be understandable to anyone remotely into AI.

At its core, Alexandria uses vector embeddings to store and retrieve agent learnings. This enables semantic search across all your agents’ knowledge, helping them progress through maturity levels more efficiently. But you don’t need to understand vectors to use it.

Here’s the simple version:

1. Agents Learn Something Valuable When an agent creates high-quality work (quality score 9+), that learning gets stored in Alexandria.

2. Learnings Become Searchable Each learning is converted into a vector (a mathematical representation) that makes it searchable by meaning, not just keywords.

3. Agents Search Before Working Before an agent starts a task, it searches Alexandria: “Has any agent across any project learned how to do this?”

4. Cross-Project Learning Transfers If your CV agent learned how to write compelling narratives, your blog agent can access that same pattern through cross-project knowledge sharing.
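To make steps 2–4 concrete, here’s a minimal, self-contained sketch of meaning-based storage and retrieval. The `embed` function below is a toy stand-in (Alexandria uses a real embedding model and Qdrant), and the learning entries are invented for illustration:

```python
import math

# Toy stand-in for an embedding model: bucket words into a small vector.
# This only illustrates the idea of "searchable by meaning, not keywords".
def embed(text: str, dims: int = 32) -> list[float]:
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Step 2: learnings become searchable vectors, tagged with their project.
store = [
    {"text": "Start bullet points with action verbs", "project": "cv"},
    {"text": "Use short paragraphs for PM audiences", "project": "blog"},
]
for learning in store:
    learning["vector"] = embed(learning["text"])

# Step 3: an agent searches by meaning before starting a task.
def search(query: str, top_k: int = 1) -> list[dict]:
    qv = embed(query)
    ranked = sorted(store, key=lambda l: cosine(qv, l["vector"]), reverse=True)
    return ranked[:top_k]

# Step 4: a learning from the CV project transfers to any other agent.
hits = search("how should bullet points start")
```

The blog agent’s query never mentions the CV project, yet the CV learning ranks first because the vectors are close in meaning. That’s the whole trick behind cross-project transfer.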

Key Architecture Components

Qdrant Vector Database

  • Stores learnings as searchable vectors
  • Enables semantic search (meaning-based, not keyword-based)
  • Returns similar learnings ranked by relevance
  • Purpose-built for vector similarity search at scale

PostgreSQL Metadata Layer

  • Tracks agent metrics and learning history
  • Stores quality scores and confidence levels
  • Manages cross-project access patterns
  • Provides analytics foundation

Session Management

  • Tracks what agents learned during each work session
  • Prevents duplicate learnings
  • Provides learning history and analytics
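As a sketch of how duplicate prevention inside a session might work (this is my assumption about the mechanism, not Alexandria’s actual implementation), a session can fingerprint each learning’s normalized text and skip repeats:

```python
import hashlib

# Sketch of per-session duplicate prevention: fingerprint each learning's
# normalized text; a repeat fingerprint within the session is skipped.
class Session:
    def __init__(self, agent: str):
        self.agent = agent
        self.seen: set[str] = set()
        self.stored: list[str] = []

    def _fingerprint(self, text: str) -> str:
        normalized = " ".join(text.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def add_learning(self, text: str) -> bool:
        fp = self._fingerprint(text)
        if fp in self.seen:  # duplicate within this session: skip it
            return False
        self.seen.add(fp)
        self.stored.append(text)
        return True

session = Session("blog-agent")
first = session.add_learning("Lead with the outcome, then the process.")
repeat = session.add_learning("Lead with the  outcome, then the process.")
```

Note the second call differs only in whitespace; normalization catches it, so only one copy is stored.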

Quality Filtering

  • Only stores high-quality learnings (score 9+)
  • Confidence thresholds ensure reliability
  • Prevents noise from polluting the knowledge base
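Using the thresholds described in this article (quality 9+, confidence 0.7+), the ingest gate reduces to a simple predicate. The constant and function names below are illustrative, not Alexandria’s real API:

```python
# Ingest gate using the article's thresholds: quality 9+ and confidence 0.7+.
QUALITY_MIN = 9.0
CONFIDENCE_MIN = 0.7

def accept_learning(quality: float, confidence: float) -> bool:
    """Return True only if a learning clears both reliability gates."""
    return quality >= QUALITY_MIN and confidence >= CONFIDENCE_MIN
```

Both gates must pass: a quality-9.5 learning with 0.4 confidence is rejected just like a 7.5-quality one.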

Cross-Project Learning Access

  • Learnings from CV project available to Blog project
  • Learnings from Jottly available to WorkerBee
  • Centralized knowledge enables seamless cross-project learning

The Data Flow (PM-Friendly Explanation)

  1. Start Session: Agent begins work, creates session ID
  2. Search Phase: Agent queries Alexandria for relevant learnings
  3. Work Phase: Agent completes task using the retrieved knowledge
  4. Learn Phase: Agent stores new high-quality learnings
  5. End Session: Session closes, metrics tracked

This happens automatically once you integrate Alexandria.

Compounding Benefits

Alexandria transforms how your AI agents learn and work. Here’s what actually changed for me after I built it.

Benefit #1: Ship Faster with Compound Knowledge

Before Alexandria, each new project meant starting from scratch. I’d spend Sunday afternoons re-teaching agents patterns I knew they’d learned elsewhere. My blog agent knew how to write for PMs. My CV agent knew how to write for PMs. But they couldn’t talk to each other.

After integrating Alexandria, my blog agent searches the knowledge base before starting work. It finds learnings from my CV agent, my docs agent, and my case study agent. All three had contributed patterns for writing to a PM audience. Instead of starting with zero knowledge, it starts with compound knowledge.

What used to take 2-3 hours now takes 30 minutes. This moves your agents from basic prompt execution to specialized team capability significantly faster.

Benefit #2: Higher Quality Through Learning Transfer

I noticed something interesting after a few weeks. My CV agent had learned that action verbs in bullet points drive stronger engagement. That single learning automatically transferred to my blog agent (action-oriented headlines), my SaaS agent (imperative documentation), and my case study agent (outcome-focused narratives).

One pattern I taught once. Four quality improvements I got automatically.

The repeated mistakes I used to fix project by project just stopped happening. When one agent learns not to make a mistake, the rest benefit immediately.

Benefit #3: Scale Your Team Without Scaling Training

When my product team adopted Alexandria, something clicked. We had four PMs, each managing different parts of our SaaS platform. Before, each PM’s agents learned independently. Documentation standards? What documentation standards?

After Alexandria, the team’s agents share one knowledge base. When I taught my agents how to write API documentation for technical audiences, it took me three iterations to get right. Sarah (junior PM on our team) documented her feature the following week. Her agents immediately accessed those patterns. She didn’t train anything. Her agents just knew.

Team productivity jumped noticeably. Onboarding time for new PMs dropped by more than half.

Benefit #4: Learning Visibility and Trust

The unpredictability bothered me. I never quite knew what my agents knew. Sometimes they’d nail a task. Sometimes they’d forget something they clearly knew last week.

With Alexandria, I can query: “What has my blog agent learned about writing for PMs?” The full learning history shows up. I can see exactly what knowledge my agent is using, where it came from, and how confident it is.

Performance stopped feeling unpredictable. I can make data-driven decisions about which agents to use for what.

Benefit #5: Performance Metrics That Matter

Last month, my CV agent stored 12 high-quality learnings. My blog agent used 8 of them. That’s quantifiable knowledge transfer I can actually track.

Before Alexandria, I had no way to measure agent improvement. Now I track learnings stored, learnings used, quality scores, and confidence levels. I can prove the ROI with actual data instead of just feeling like things are getting better.

When Alexandria Isn’t Optimal

Alexandria solves cross-project learning, but it’s not for everyone. Here’s when you probably don’t need it:

Solo, Single-Project Work: If you’re only managing one project with no plans to expand, the overhead of centralized learning doesn’t pay off. Your agents can store learnings locally just fine.

Simple, Repetitive Tasks: For basic automation that doesn’t require learning or improvement (scheduled reports, data formatting, simple file operations), Alexandria is overkill. You don’t need cross-project intelligence for tasks that never change.

Early-Stage Exploration: If you’re just testing AI agents or experimenting with different approaches, wait until you have established patterns worth sharing. Alexandria shines when you have proven learnings to distribute.

Privacy-Critical Isolation: Some projects require strict knowledge isolation (client work, regulated industries, sensitive data). If cross-project learning poses compliance or confidentiality risks, keep agents siloed.

Resource-Constrained Environments: Alexandria requires infrastructure (vector database, API layer, session management). If you’re optimizing for minimal dependencies or offline operation, the complexity may outweigh the benefits.

The honest assessment: Alexandria is built for people managing multiple AI-powered projects who are tired of re-teaching the same patterns. If that’s not you yet, bookmark this and come back when it is.

How You Can Integrate Alexandria

I set up my first Alexandria integration during a Sunday afternoon. Took me 15 minutes with coffee in hand.

The integration follows five steps:

  1. Authentication: Generate a JWT token using your API key
  2. Create Session: Start a learning session for your agent
  3. Search Learnings: Query Alexandria for relevant knowledge before starting work
  4. Store Learnings: Save new high-quality learnings after completing tasks
  5. Close Session: End the session and calculate metrics

Here’s the pattern I follow: search first (leverage existing knowledge), work, then store (contribute new knowledge). Sessions track learning over time and prevent duplicates.
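That pattern can be sketched as a context manager. Everything below is an in-memory stand-in: a real integration would authenticate with a JWT and make HTTP calls to Alexandria’s API, and the class and method names here are my own invention for illustration (the auth step is omitted):

```python
import uuid

# Hypothetical wrapper illustrating the pattern: search first, work, then
# store. A real client would call Alexandria's API over HTTP instead.
class AlexandriaSession:
    def __init__(self, agent: str, knowledge_base: list[dict]):
        self.agent = agent
        self.kb = knowledge_base               # shared across projects
        self.session_id = str(uuid.uuid4())    # step 2: create session
        self.used, self.stored = [], []

    def __enter__(self):
        return self

    def search(self, topic: str) -> list[dict]:   # step 3: search before work
        hits = [l for l in self.kb if topic in l["text"].lower()]
        self.used.extend(hits)
        return hits

    def store(self, text: str, quality: float):   # step 4: store after work
        if quality >= 9.0:                        # quality gate
            learning = {"text": text, "agent": self.agent}
            self.kb.append(learning)
            self.stored.append(learning)

    def __exit__(self, *exc):                     # step 5: close, track metrics
        self.metrics = {"used": len(self.used), "stored": len(self.stored)}
        return False

kb = [{"text": "Start bullets with action verbs", "agent": "cv-agent"}]
with AlexandriaSession("blog-agent", kb) as s:
    prior = s.search("action verbs")   # reuses the CV agent's learning
    s.store("PM readers skim; front-load the takeaway", 9.4)
```

The blog agent enters with one borrowed learning and leaves having contributed one back; the session metrics capture both sides of that exchange.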

Time investment: 15 minutes for your first integration. After that? Less than 2 minutes for each new project.

When I integrated Alexandria into my blog project (the second time around), it took 90 seconds. The third project took even less.

For complete API documentation with code examples, authentication details, and best practices, see the full integration guide.

FAQ

Q: How long does it take to integrate Alexandria?

First time? About 15 minutes. I’ve been trying to make it easier with pip install, but that’s a WIP. Your second project takes about 90 seconds. Third project? Even faster.

Q: Do I need to understand vector databases to use Alexandria?

Nope. I barely understand Qdrant’s internals myself. Alexandria handles the complexity. You just make simple API calls.

Q: Can I search learnings from other team members?

Yes. This is where it gets powerful. When your teammate teaches their agents something valuable, your agents can access it immediately. No knowledge silos.

Q: What happens if I store low-quality learnings?

Alexandria won’t let you. The quality filter requires 9+ scores and 0.7+ confidence (both configurable). I once tested storing a mediocre learning (quality score 7.5). The system rejected it. Only high-quality knowledge gets through.

Q: Is my data secure?

Yes. Learnings are stored with project-level isolation. You control which projects can access your learnings. My CV agent’s knowledge doesn’t leak to anyone unless I explicitly share access.

Q: How much does it cost?

Alexandria is currently in private beta. I’m seriously considering going open source because why not? I’d love to hear if people are interested in something like this.

The Future: Collective Intelligence

The future of AI agents isn’t smarter individual agents. It’s agents that learn collectively.

Right now, your agents are isolated islands of knowledge. Each one starts from scratch. Each one learns independently. Each one makes the same mistakes in slightly different contexts.

I got tired of that pattern.

Alexandria connects these islands into a learning network. Your agents become a system where knowledge flows freely, similar to how the Cycle Nexus framework enables pattern recognition across different domains. Learnings compound. Every project makes every future project better.

This isn’t just about efficiency, though you’ll ship significantly faster. It’s not just about quality, though you’ll eliminate most repeated mistakes. It’s about building organizational AI capability that grows stronger over time.

My first project trained my agents from zero to competent. My tenth project started with nine projects worth of accumulated wisdom. My hundredth project will leverage a vast knowledge base that no individual agent could learn alone.

That’s the compound effect I was chasing.

Ready to Get Started?

Remember that Sunday afternoon I mentioned? When I was frustrated re-teaching the same patterns for the third time?

Alexandria fixed that.

Your agents are already smart. They just don’t remember. They don’t share. They don’t compound.

Integrate Alexandria into your next project. Watch what happens when your agents stop forgetting and start learning together.

Get started: Alexandria API Documentation
View the API: OpenAPI Specification
Questions: Open an issue on GitHub

Your agents are ready to remember. Are you ready to let them learn together?


Written by: 6 Alexandria agents working in parallel:

  • UX Researcher v3.2 (audience analysis)
  • Content Strategist v3.2 (article structure)
  • Technical Writer v3.2 (architecture & integration)
  • Blog Content Writer v3.2 (narrative sections)
  • SEO Specialist v3.2.1 (optimization)
  • Content Editor v3.2 (final polish)

Session: c1213803-90b9-48ce-a0db-64e99dc9afb4
Learnings Stored: 7
Workflow: Multi-agent content creation with full Alexandria integration

Quality Scores:

  • UX Research: 9/10
  • Content Strategy: 9/10
  • Technical Writing: 9/10
  • Blog Content: 9/10
  • SEO Optimization: 9.2/10
  • Editorial Review: 9.5/10

PATRICK MCGRATH

Product manager with 10+ years in gaming, having shipped 8 projects that hit $100M+ lifetime revenue (3 exceeded $500M). Currently building in Web3 gaming and writing about crypto, gaming, AI, and product management. Exploring the intersections where technology meets philosophy meets possibility.

TOPICS

#AI #Agentic AI #Product Management #Knowledge Management #Systems Thinking