A MultiChat with Streamlit

With this project, we will have a single Python Streamlit UI to interact with several LLM providers: OpenAI, Groq, Anthropic (Claude), and Ollama.

If you want to try these projects yourself, first:

  1. Install Python 🐍
  2. Clone the repository
  3. And install Python dependencies
  • We will use venv first and later create a Docker version for SelfHosting the GenAI App.

Let's have a look at the projects.

Streamlit Chat with OpenAI

git clone https://github.com/JAlcocerT/openai-chatbot
cd openai-chatbot

python -m venv openaichatbot #create it

openaichatbot\Scripts\activate #activate venv (windows)
source openaichatbot/bin/activate #(linux)

#deactivate #when you are done

Once the venv is active, you can install the Python packages as usual, and they will only affect that venv:

pip install -r requirements.txt #all at once

#pip list
#pip show streamlit #check the installed version

These are the versions pinned in requirements.txt:

streamlit==1.26.0 #https://pypi.org/project/streamlit/#history
openai==0.28.0 #https://pypi.org/project/openai/#history
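
To give an idea of what the app does before containerizing it, here is a minimal sketch of a Streamlit chat against the OpenAI API. It is not the exact streamlit_app.py from the repo; the model name is just an example, and the legacy ChatCompletion call matches the openai==0.28.0 pin above:

# Minimal sketch of a Streamlit + OpenAI chat (not the exact app from the repo)
import streamlit as st
import openai  # openai==0.28.0, so the legacy ChatCompletion API applies

openai.api_key = st.secrets["OPENAI_API_KEY"]  # or read it from a sidebar text_input

st.title("Chat with OpenAI")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Read the next user prompt and ask the model
if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # example model
        messages=st.session_state.messages,
    )
    answer = response["choices"][0]["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)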

Now, to create the Docker Image:

Really, Just Get Docker 🐋👇

You can install Docker for any PC, Mac, or Linux at home or in any cloud provider that you wish. It will just take a few moments. If you are on Linux, just:

sudo apt-get update && sudo apt-get upgrade -y && curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
#sudo apt install docker-compose -y

And also install Docker Compose with:

sudo apt install docker-compose -y

When the process finishes, you can use Docker to self-host other services as well. You should see the installed versions with:

docker --version
docker-compose --version
#sudo systemctl status docker #and the status
With Docker ready, this is the Dockerfile for the chatbot:

FROM python:3.11

# Install git
RUN apt-get update && apt-get install -y git

# Set up the working directory
#WORKDIR /app

# Clone the repository
RUN git clone https://github.com/JAlcocerT/openai-chatbot

WORKDIR /openai-chatbot

# Install Python requirements
RUN pip install -r requirements.txt

#RUN sed -i 's/numpy==1\.26\.4/numpy==1.24.4/; s/pandas==2\.2\.2/pandas==2.0.2/' requirements.txt

# Set the entrypoint to a bash shell
CMD ["/bin/bash"]
Then build the image:

export DOCKER_BUILDKIT=1
docker build --no-cache -t openaichatbot . #> build_log.txt 2>&1

Or if you prefer, with Podman:

podman build -t openaichatbot .
#podman run -d -p 8501:8501 openaichatbot
docker run -d -p 8501:8501 --name openaichatbot openaichatbot:latest
docker exec -it openaichatbot /bin/bash #get a shell inside the running container

#sudo docker run -it -p 8502:8501 openaichatbot:latest /bin/bash

With Portainer:

version: '3'

services:
  streamlit-openaichatbot:
    image: openaichatbot
    container_name: openaichatbot
    volumes:
      - ai_openaichatbot:/app
    working_dir: /app  # Set the working directory to /app
    command: /bin/sh -c "streamlit run streamlit_app.py"    
    #command: tail -f /dev/null #streamlit run appv2.py # tail -f /dev/null
    ports:
      - "8507:8501"    

volumes:
  ai_openaichatbot:

Streamlit Chat with Groq
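
The Groq version follows the same pattern as the OpenAI one; only the client changes. A minimal sketch, assuming the official groq Python package and one of the open models Groq serves (llama3-70b-8192 here):

# Groq variant - a minimal sketch, assuming `pip install groq`
import streamlit as st
from groq import Groq

client = Groq(api_key=st.secrets["GROQ_API_KEY"])

if prompt := st.chat_input("Ask an open model on Groq"):
    with st.chat_message("user"):
        st.markdown(prompt)
    response = client.chat.completions.create(
        model="llama3-70b-8192",  # example model; use any model Groq currently serves
        messages=[{"role": "user", "content": prompt}],
    )
    with st.chat_message("assistant"):
        st.markdown(response.choices[0].message.content)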

Streamlit Chat with Anthropic
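
Same idea with Anthropic; a minimal sketch assuming the anthropic Python package (the model name is just an example, and max_tokens is required by their API):

# Anthropic variant - a minimal sketch, assuming `pip install anthropic`
import streamlit as st
import anthropic

client = anthropic.Anthropic(api_key=st.secrets["ANTHROPIC_API_KEY"])

if prompt := st.chat_input("Ask Claude"):
    with st.chat_message("user"):
        st.markdown(prompt)
    response = client.messages.create(
        model="claude-3-haiku-20240307",  # example model name
        max_tokens=1024,                  # Anthropic requires an explicit max_tokens
        messages=[{"role": "user", "content": prompt}],
    )
    with st.chat_message("assistant"):
        st.markdown(response.content[0].text)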

Streamlit Chat with Ollama
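
With Ollama everything runs locally; a minimal sketch assuming an Ollama server on its default port and the ollama Python package (llama3 stands for whichever model you have pulled):

# Ollama variant - a minimal sketch, assuming a local Ollama server and `pip install ollama`
import streamlit as st
import ollama  # talks to the Ollama server on http://localhost:11434 by default

if prompt := st.chat_input("Ask a local model"):
    with st.chat_message("user"):
        st.markdown(prompt)
    response = ollama.chat(
        model="llama3",  # any model you have pulled locally, e.g. `ollama pull llama3`
        messages=[{"role": "user", "content": prompt}],
    )
    with st.chat_message("assistant"):
        st.markdown(response["message"]["content"])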

The Streamlit MultiChat Project

SelfHosting Streamlit MultiChat

Build the image:

podman build -t streamlit-multichat .

and deploy with:

version: '3'

services:
  streamlit-multichat:
    image: streamlit-multichat #ghcr.io/jalcocert/streamlit-multichat:latest
    container_name: streamlit_multichat
    volumes:
      - ai_streamlit_multichat:/app
    working_dir: /app
    #command: tail -f /dev/null # Keep the container running
    command: /bin/sh -c "\
      mkdir -p /app/.streamlit && \
      echo 'OPENAI_API_KEY = \"sk-proj-yourkey\"' > /app/.streamlit/secrets.toml && \
      echo 'GROQ_API_KEY = \"gsk_yourkey\"' >> /app/.streamlit/secrets.toml && \
      echo 'ANTHROPIC_API_KEY = \"sk-ant-api03-yourkey\"' >> /app/.streamlit/secrets.toml && \      
      streamlit run Z_multichat.py"
    ports:
      - "8503:8501"
    networks:
      - cloudflare_tunnel
      # - nginx_default      

volumes:
  ai_streamlit_multichat:

networks:
  cloudflare_tunnel:
    external: true
  # nginx_default:
  #   external: true

#docker-compose up -d
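
The keys written to /app/.streamlit/secrets.toml by the command above become available to the app through st.secrets; roughly like this (the real Z_multichat.py may organize it differently):

# Reading the API keys that the compose command wrote to .streamlit/secrets.toml
import streamlit as st

openai_key = st.secrets["OPENAI_API_KEY"]
groq_key = st.secrets["GROQ_API_KEY"]
anthropic_key = st.secrets["ANTHROPIC_API_KEY"]

# Let the user pick the backend in the sidebar; each key feeds its own client
provider = st.sidebar.selectbox("Model provider", ["OpenAI", "Groq", "Anthropic"])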

Conclusion

Similar AI Projects 👇

And feel free to use any of these:

  • Groq - Groq API Keys; use open models, like Llama3-70B
  • Gemini (Google) - Gemini API Documentation
  • Mixtral - open models; you can use their API
  • Anthropic (Claude) - Anthropic API Documentation, Console, API Keys
  • OpenAI - GPT API Keys
  • Grok (Twitter)
  • Azure OpenAI
  • Amazon Bedrock
Using buildx with GitHub Actions to create x86 and ARM64 images ⏬

We need to define a GitHub Actions workflow with buildx:

name: CI/CD Build MultiArch

on:
  push:
    branches:
      - main

jobs:
  build-and-push:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    - name: Set up QEMU
      uses: docker/setup-qemu-action@v1

    - name: Set up Docker Buildx #here the cool thing happens
      uses: docker/setup-buildx-action@v1

    - name: Login to GitHub Container Registry
      uses: docker/login-action@v1
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.CICD_TOKEN_MultiChat }}

    - name: Build and push Docker image
      uses: docker/build-push-action@v2
      with:
        context: .
        push: true
        platforms: linux/amd64,linux/arm64 #any other
        tags: |
          ghcr.io/yourGHuser/multichat:v1.0
          ghcr.io/yourGHuser/multichat:latest          

It uses QEMU to emulate different CPU architectures, so the images can be built for platforms other than the runner's own.

Locally, you could do:

#build and push the image and manifest to DockerHub
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t yourDockerHubUser/multichat --push .

Chat with CSV, PDF, TXT files 📄 and YTB videos 🎥 | using Langchain🦜 | OpenAI | Streamlit ⚡

git clone https://github.com/yvann-hub/Robby-chatbot.git
cd Robby-chatbot

python3 -m venv robby #create it

robby\Scripts\activate #activate venv (windows)
source robby/bin/activate #(linux)

pip install -r requirements.txt #install the dependencies

streamlit run src/Home.py
#deactivate #when you are done

This one also summarizes YouTube videos, thanks to LangChain's summarization chain: https://python.langchain.com/v0.2/docs/tutorials/summarization/
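
That flow looks roughly like this; a sketch assuming the langchain, langchain-community, langchain-openai and youtube-transcript-api packages, plus an OPENAI_API_KEY in the environment:

# Sketch of the LangChain summarization flow the tutorial describes
from langchain_community.document_loaders import YoutubeLoader  # needs youtube-transcript-api
from langchain.chains.summarize import load_summarize_chain
from langchain_openai import ChatOpenAI

loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=VIDEO_ID")
docs = loader.load()  # the video transcript as LangChain documents

llm = ChatOpenAI(model="gpt-3.5-turbo")             # any chat model works here
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))                              # the summary of the video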

F/OSS RAGs

version: '3'
services:
  qdrant:
    container_name: my_qdrant_container
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  qdrant_data:
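
Once the container is up, a RAG pipeline can store and query embeddings in it. A minimal sketch with the qdrant-client package; the collection name, vector size and dummy vectors are placeholders:

# Sketch: talking to the Qdrant container above from Python (pip install qdrant-client)
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(url="http://localhost:6333")  # the port published by the compose file

# Create a collection sized for your embedding model (1536 matches OpenAI's small embeddings)
client.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

# Upsert one embedded chunk and search for the nearest neighbours of a query vector
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=[0.0] * 1536, payload={"text": "example chunk"})],
)
hits = client.search(collection_name="docs", query_vector=[0.0] * 1536, limit=3)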

Build resource-driven LLM-powered bots

  • LangChain
  • LlamaIndex

LlamaIndex is a data framework for your LLM applications
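
A minimal sketch of that idea, assuming a recent llama-index release where the core classes live under llama_index.core:

# Sketch: indexing a local folder with LlamaIndex (pip install llama-index)
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # any folder with your files
index = VectorStoreIndex.from_documents(documents)      # embeds and stores the chunks

query_engine = index.as_query_engine()
print(query_engine.query("What are these documents about?"))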

Chainlit is an open-source Python package to build production ready Conversational AI.

F/OSS Knowledge Graphs

  • Neo4j - A popular graph database that uses a property graph model. It supports complex queries and provides a rich ecosystem of tools and integrations.
  • Apache Jena - A Java framework for building semantic web and linked data applications. It provides tools for RDF data, SPARQL querying, and OWL reasoning.
What is GraphRAG ⏬

GraphRAG creates an LLM-derived knowledge graph which serves as the LLM's memory representation.

This is great for explainability!

How to use LLMs with Multi-Agent Frameworks

Try them together with LLMOps Tools like Pezzo AI or Agenta

F/OSS Conversational AI

Build Conversational AI Experiences

pip install langflow==1.0.0 #https://pypi.org/project/langflow/
python -m langflow run

Langflow is a no-code AI ecosystem, integrating seamlessly with the tools and stacks your team knows and loves.

FAQ

The GenAI Stack will get you started building your own GenAI application in no time

How to create an interesting readme.md ⏬

Similar Free and Open Tools for Generative AI

How can I use LLMs to help me code