We already used Groq together with Streamlit to create a YouTube video summarizer.

Now, it's time to use the Groq API to chat with websites.

Streamlit RAG with Groq

  • Fully available on GitHub

What is Powering this Project

  • FAISS as the Vector Store
  • Llama2 as the Embedding Model
  • Mixtral-8x7B as the LLM (via the Groq API); see the sketch below for how these pieces fit together
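
For context, here is a minimal sketch of how those pieces can be wired together with LangChain. The package names, the Ollama-served Llama2 embeddings, the Groq model identifier and the example URL are assumptions for illustration; the actual repository may organize this differently.

```python
# Minimal RAG sketch (assumptions: LangChain stack, Ollama serving Llama2 locally,
# GROQ_API_KEY exported in the environment). The real repo may differ.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_groq import ChatGroq
from langchain.chains import RetrievalQA

# 1. Load and chunk the website content (placeholder URL)
docs = WebBaseLoader("https://example.com").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 2. Embed the chunks with Llama2 and index them in FAISS
embeddings = OllamaEmbeddings(model="llama2")
vectorstore = FAISS.from_documents(chunks, embeddings)

# 3. Answer questions with Mixtral via the Groq API, grounded on retrieved chunks
llm = ChatGroq(model_name="mixtral-8x7b-32768")
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
result = qa.invoke({"query": "What is this website about?"})
print(result["result"])
```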

By default it will summarize the content of:

It first has to create the embeddings, which is the time-consuming part (a one-time cost).

After that, the calls to Mixtral through the Groq API will be pretty quick.
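
One way to keep that embedding cost truly one-time is to persist the FAISS index to disk and reload it on later runs. A hedged sketch follows; the folder name, the placeholder URL and the `allow_dangerous_deserialization` flag of recent LangChain versions are assumptions, not necessarily what the repo does.

```python
import os

from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

embeddings = OllamaEmbeddings(model="llama2")
INDEX_DIR = "faiss_index"  # hypothetical folder name

if os.path.isdir(INDEX_DIR):
    # Later runs: reload the index instead of re-embedding
    # (recent LangChain versions require opting in to pickle deserialization)
    vectorstore = FAISS.load_local(
        INDEX_DIR, embeddings, allow_dangerous_deserialization=True
    )
else:
    # First run: embed the website chunks (the slow part) and persist the index
    docs = WebBaseLoader("https://example.com").load()  # placeholder URL
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(docs)
    vectorstore = FAISS.from_documents(chunks, embeddings)
    vectorstore.save_local(INDEX_DIR)
```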


Conclusions

FAQ

Other F/OSS RAG Frameworks

  • EmbedChain: https://fossengineer.com/embedchain-ai/

But What is a RAG?

In short, Retrieval-Augmented Generation (RAG) retrieves the most relevant chunks of your documents from a vector store and passes them to the LLM as context, so the model can answer questions about data it was never trained on.

Other F/OSS Projects Implementing a RAG

  • The Ask Multiple PDF Project

  • DifyAI - a development platform specifically designed for LLMs

    • Integrates well with Retrieval-Augmented Generation (RAG) pipelines. RAG pipelines let your LLM workflows access and process information from the real world through search engines or other external sources, enhancing the capabilities of your LLM applications.

F/OSS Vector Databases for AI Projects

FAISS is great and open-sourced under the MIT license ❤️.

But that's not all regarding vector stores; there are other popular alternatives like ChromaDB, Qdrant, Pinecone, …
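
With LangChain the vector store is largely a drop-in choice. As a hedged sketch (not part of this project, and with an arbitrary persistence directory and placeholder URL), swapping FAISS for a persistent ChromaDB collection could look like this:

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Same pipeline as before, only the vector store changes
docs = WebBaseLoader("https://example.com").load()  # placeholder URL
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

embeddings = OllamaEmbeddings(model="llama2")
vectorstore = Chroma.from_documents(
    chunks, embeddings, persist_directory="./chroma_db"  # arbitrary local folder
)
retriever = vectorstore.as_retriever()
```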

You can Manage Vector DBs with a UI in the blink of an eye