GPT4All is an awesome open source project that lets us interact with LLMs locally - we can use a regular CPU, or a GPU if you have one!

The project has a desktop interface version, but today I want to focus on the Python side of GPT4All.

GPT4All with Python

I would recommend using a clean Python environment: conda, venv, or an isolated Python container.

The GPT4All Python package we need is as simple to install as:

pip install gpt4all
#pip install gpt4all==1.0.0
#pip show gpt4all
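
To confirm the installation works, you can run a quick import smoke test. The list_models() helper mentioned in the comment is an assumption about recent releases of the package and may not exist in every version:

# smoke_test.py - verifies the gpt4all package imports correctly
import gpt4all

print(gpt4all.GPT4All)  # should print the class without raising ImportError

# Assumption: newer versions can also list the models available for download:
# print(gpt4all.GPT4All.list_models())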

We need to import the Python package and load a language model - make sure you have downloaded an open source model beforehand and placed it somewhere on disk.

Let’s use the Orca Mini model as an example:

from gpt4all import GPT4All
model = GPT4All("/home/yourlocaldirectory/Models/orca-mini-3b.ggmlv3.q4_0.bin")
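
If you would rather let the package fetch the model for you, recent versions of the constructor also accept a bare model file name plus an optional model_path - a minimal sketch, assuming your installed version supports these parameters:

from gpt4all import GPT4All

# Assumption: with allow_download=True (the default), GPT4All downloads the file
# if it is not already present in model_path.
model = GPT4All(
    "orca-mini-3b.ggmlv3.q4_0.bin",
    model_path="/home/yourlocaldirectory/Models",
    allow_download=True,
)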

Next step? Just use the model like so:

output = model.generate("what is a framework?", max_tokens=100)
#output = model.generate("The capital of France is ", max_tokens=3)
#output = model.generate("If i have 10 years and my mother have 25, when will she have the double of my age?", max_tokens=100)

print(output)
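
generate() also accepts sampling parameters such as temp, and newer releases of the package expose a chat_session() context manager that keeps the conversation history between calls - a sketch, assuming your installed version includes it:

# Assumption: chat_session() is available in your installed gpt4all version.
with model.chat_session():
    first = model.generate("What is a framework?", max_tokens=100, temp=0.7)
    follow_up = model.generate("Name two popular examples.", max_tokens=60)
    print(first)
    print(follow_up)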

A Docker Image for Python GPT4All

The requirements file we need is:

gpt4all==1.0.0

And the Dockerfile:

FROM python:3.11

# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME

COPY . ./

RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    software-properties-common \
    git \
    && rm -rf /var/lib/apt/lists/*

# Install production dependencies.
RUN pip install -r requirements.txt

#EXPOSE 8501

Feel free to add any extra dependencies for the Python app you want to incorporate the LLM into - and create the Docker image with:

DOCKER_BUILDKIT=1 docker build --no-cache --progress=plain -t py_gpt4all .

And that is it - you can now use your Python Docker image with GPT4All, for example through a docker-compose file:

version: '3.8'

services:
  pygpt4all:
    image: py_gpt4all
    container_name: py_aigen_gpt4all
    # ports:
    #   - "8501:8501"
    working_dir: /app
    #command: python3 app.py
    command: tail -f /dev/null #keep it running
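
The commented command: python3 app.py line assumes the image contains an entry script. A minimal app.py could look like the sketch below - the file name and the in-container model path are assumptions, and you would typically COPY the model into the image or mount your Models directory as a volume:

# app.py - minimal sketch of an entry script for the container
from gpt4all import GPT4All

# Assumption: the model file was copied or mounted into /app/Models inside the container.
model = GPT4All("/app/Models/orca-mini-3b.ggmlv3.q4_0.bin")

output = model.generate("What is a framework?", max_tokens=100)
print(output)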

FAQ

Using GPT4All with a GUI

You can also interact with GPT4All through its desktop app: https://github.com/nomic-ai/gpt4all.

See which models you can use at the GPT4All official site.