The CrewAI Project
You can find the CrewAI project details and source code at:
- The Project on PyPI
- The CrewAI Source Code at GitHub
- License: MIT ❤️
CrewAI is a framework that makes it easy for us to get local AI agents interacting with each other.
Using CrewAI
Pre-Requisites - Get Docker! 👇
An important step, and highly recommended for any SelfHosting project: get Docker installed.
On Linux, it takes just one command:
sudo apt-get update && sudo apt-get upgrade -y && curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh && docker version
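Optionally, add your user to the docker group so you can run Docker commands without sudo (log out and back in for it to take effect):
sudo usermod -aG docker $USER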
Follow the steps below to get CrewAI running in a Docker container, so that all its dependencies stay contained.
With this approach, we will get our free AI agents interacting with each other locally.
And yes, we will be using local models thanks to Ollama - because why use OpenAI when you can SelfHost LLMs with Ollama?
We need three steps:
- Get Ollama ready (see the commands just below)
- Create our CrewAI Docker image: Dockerfile, requirements.txt and Python script
- Spin up the CrewAI service
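For the first step, here is a minimal sketch of getting Ollama ready on the host, assuming the official Linux install script and the openhermes model we use later in this post:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull openhermes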
Building the CrewAI Container
Prepare the files in a new folder and build the image with:
docker build -t crewai .
Dockerfile
# Use the Python 3.11 slim base image
FROM python:3.11-slim

# Set the working directory in the container
WORKDIR /app

# Install Python dependencies first, so Docker can cache this layer
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project (our Python script)
COPY . ./

# Keep the container running so we can exec into it
CMD ["tail", "-f", "/dev/null"]
Requirements File
We will be using CrewAI together with LangChain to integrate Ollama. The versions below are pinned on purpose: later CrewAI releases changed the API, so newer versions may not work with the script as written.
crewai==0.1.14
langchain==0.0.353
Our Python Script
Here we select the Ollama model(s) that we want for the interaction, and we give each agent its context and role.
import os
from crewai import Agent, Task, Crew, Process

### OLLAMA (THANKS TO LANGCHAIN)
from langchain.llms import Ollama
ollama_model = Ollama(model="openhermes")

### OPENAI
# os.environ["OPENAI_API_KEY"] = "Your Key"
# export OPENAI_API_KEY=sk-blablabla  # on Linux/Mac

# Define your agents with roles and goals
researcher = Agent(
    role='Researcher',
    goal='Discover new insights',
    backstory="You're a world-class researcher working at a major data science company",
    verbose=True,
    allow_delegation=False,
    llm=ollama_model,  ### OLLAMA VERSION!!
    # llm=ChatOpenAI(temperature=0.7, model_name="gpt-4")  ### OPENAI VERSION!! (CrewAI uses langchain.chat_models; GPT-4 is the default)
)

writer = Agent(
    role='Writer',
    goal='Create engaging content',
    backstory="You're a famous technical writer, specialized in writing data-related content",
    verbose=True,
    allow_delegation=False,
    llm=ollama_model  ### OLLAMA VERSION!!
)

# Create tasks for your agents
task1 = Task(description='Investigate the latest AI trends', agent=researcher)
task2 = Task(description='Write a blog post on AI advancements', agent=writer)

# Instantiate your crew with a sequential process - TWO AGENTS!
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    llm=ollama_model,  ### OLLAMA VERSION!!
    verbose=2,  # Crew verbose mode: set it to 1 or 2 for different logging levels
    process=Process.sequential  # Tasks run one after the other; the outcome of the previous task is passed as extra context to the next
)

# Get your crew to work!
result = crew.kickoff()
print(result)
SelfHosting AI Agents - CrewAI
Now, deploy with the Docker Stack:
version: '3.8'

services:
  localaigents:
    image: crewai
    container_name: aigen_crewai
    working_dir: /app
    #command: python3 main-crewai.py
    command: tail -f /dev/null # keep the container running
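Bring the stack up from the folder holding the compose file:
docker compose up -d
One caveat: from inside the container, the script cannot reach Ollama at its default localhost:11434 address, because localhost there is the container itself. A minimal sketch of one workaround, assuming Ollama runs directly on the Docker host: add an extra_hosts entry to the service so host.docker.internal resolves to the host gateway:
    extra_hosts:
      - "host.docker.internal:host-gateway"
Then point the LangChain client at it in the Python script: ollama_model = Ollama(model="openhermes", base_url="http://host.docker.internal:11434").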
Then, enter the container terminal and execute the Python script, main-crewai.py:
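One way to get that terminal is docker exec, using the container_name we set in the stack:
docker exec -it aigen_crewai /bin/bash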
python3 main-crewai.py
And this is it! We have CrewAI working with Ollama - now feel free to try different LLMs, prompts and roles to get the local models working on your tasks.
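For example, to try another local model, pull it with Ollama and change a single line in the script (mistral here is just an illustration; any model from the Ollama library should work the same way):
ollama pull mistral
Then, in main-crewai.py: ollama_model = Ollama(model="mistral")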
FAQ
CrewAI is built on top of the LangChain framework.
F/OSS Frameworks to Build Context-Aware AI Apps
- LangChain
- LlamaIndex
- PandasAI
- EmbedChain - Build resource-driven LLM-powered bots
- Chainlit
Chainlit is an open-source Python package to build production-ready conversational AI.