
CrewAI Memory with Cognee

Introduction

Over the past month, I've been spending time in San Francisco, immersing myself in the tech scene through various events, demos, and presentations. While public events can be hit or miss, reputable ones often prove valuable. Our initial product announcement was made at a Weaviate event, and when I heard that Philipp from Weaviate was organizing a small hackathon, I knew I couldn't miss it.

After a long day at work, I headed to the GitHub office venue. While waiting for the event to start, I met Clovis, who works on fascinating projects exploring the post-AI labor market. We decided to pair up for the hackathon.

We had just two hours to build and submit this project, so what you see here was developed with speed in mind. Nevertheless, it opens up an interesting space for experimentation.

And we won!

The Project: Real Estate Research Agent with Agentic Memory

Problem Statement

When choosing a new place to live, we need to consider both the living space itself and its surroundings. The challenge is how to efficiently evaluate an area without wasting precious time.

Solution

We developed an AI Agent system with three main components:

  1. Planning Agent (Boss): Controls and coordinates the analysis
  2. Living Space Agent: Evaluates apartment features and personality profiles
  3. Area Vibes Agent: Analyzes neighborhood characteristics

Architecture Overview

Key Components

  • Cognee Memory Layer: Creates a semantic layer using a graph store and a Weaviate vector database
  • CrewAI Agents: Handle the analysis process
  • Remote Execution: Runs on the Phoenix platform

Data Structure

User Profile Example
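Since the user's preferences drive the whole analysis, here is a hypothetical profile in the shape we worked with; the field names and values are illustrative, not the exact schema from the hackathon:

```python
# Hypothetical user profile; field names are illustrative, not the exact
# schema used at the hackathon.
user_profile = {
    "name": "Alex",
    "budget_usd_per_month": 3200,
    "commute": {"workplace": "SoMa, San Francisco", "max_minutes": 30},
    "personality": "introvert who works from home and hosts small dinners",
    "preferences": {
        "apartment": ["natural light", "in-unit laundry", "pet friendly"],
        "neighborhood": ["quiet at night", "cafes nearby", "walkable"],
    },
}
```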

Apartment Listing Example
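And a hypothetical apartment listing in the same illustrative schema:

```python
# Hypothetical apartment listing; fields are illustrative.
apartment_listing = {
    "listing_id": "sf-mission-0042",
    "neighborhood": "Mission District, San Francisco",
    "rent_usd_per_month": 2950,
    "bedrooms": 1,
    "features": ["hardwood floors", "pet friendly", "south-facing windows"],
    "description": "Bright one-bedroom a short walk from Dolores Park.",
}
```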

Implementation

The complete code for this project is available in the GitHub repository.

Cognee Integration with CrewAI

First, we create a Cognee search tool:

Crew Definition

Task Definition

Crew Execution

How It Works

  1. Data Loading: Cognee loads and processes audio and text files from real estate listings
  2. Agent Coordination: The Planning Agent distributes relevant information to specialized agents
  3. Analysis: Each agent performs its specific analysis using Cognee's semantic search
  4. Decision Making: Agents make recommendations based on user preferences and available data
  5. Final Report: Results are compiled into a comprehensive analysis

Benefits of This Approach

  1. Efficient Information Processing: Cognee's semantic layer enables quick access to relevant information
  2. Specialized Analysis: Each agent focuses on specific aspects of the decision-making process
  3. Contextual Understanding: The system considers both explicit and implicit relationships in the data
  4. Scalable Architecture: The modular design allows for easy addition of new agents or analysis types

Future Improvements

  1. Enhanced Data Sources: Integration with more real estate data providers
  2. Advanced Analytics: Implementation of more sophisticated analysis algorithms
  3. User Interface: Development of a user-friendly interface for interaction
  4. Performance Optimization: Further optimization of the semantic search layer

Conclusion

This project demonstrates how Cognee can serve as an effective AI memory framework when combined with CrewAI. The system successfully processes complex real estate data while maintaining context and relationships, making it a powerful tool for decision-making in real estate research.

The combination of semantic search, graph storage, and specialized agents creates a robust foundation for building intelligent systems that can understand and process complex domain-specific information.

If you try implementing this approach, we'd love to hear about your experience! The full repo can be found on GitHub.
