# Developing a Web-based Q&A system using a RAG-based approach to LLMs - June 25, 2024

## Course Information
- Registration Link: https://tinyurl.com/3tnfey9n
- Cost: $300 for NET+ Subscribers, $400 for Internet2 members, $500 for non-members
- Workshop time: 8:00 am PST / 11:00 am EST to 2:00 pm PST / 5:00 pm EST
- Instructor: Sujee Maniyam, Node 51. See Instructor Profile [here](https://www.mongodb.com/developer/author/sujee-maniyam/).

## Introduction
We have all seen how powerful and impactful Large Language Models (LLMs) like ChatGPT are. These models are enabling a new generation of conversational applications. This 6-hour hands-on workshop introduces the audience to LLMs, data augmentation using the Retrieval-Augmented Generation (RAG) approach, and how to build a web-based Q&A application using open-source tools.

## Skills Level
Introductory to Intermediate
- Deploying containerized applications

## Prerequisites
- Comfortable with Python programming and notebooks
- Access to Google Colab
- A free subscription to a vector database such as MongoDB Atlas
- A subscription to use hosted models such as ChatGPT or Mistral


## Details
### 1 - Embeddings
- Understanding embeddings
- Various embedding models
- Semantic text search using embeddings
- Evaluating various embedding models
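The core idea behind semantic search can be shown with plain vectors. Below is a minimal sketch using toy 4-dimensional embeddings (real models such as sentence-transformers produce vectors with hundreds of dimensions, but the ranking logic is identical); the documents and vectors are invented for illustration:

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- stand-ins for real embedding model output.
docs = {
    "cats are small pets":         np.array([0.9, 0.1, 0.0, 0.2]),
    "dogs love long walks":        np.array([0.8, 0.2, 0.1, 0.1]),
    "the stock market fell today": np.array([0.0, 0.1, 0.9, 0.7]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, docs, k=2):
    # Rank documents by cosine similarity to the query vector.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

query = np.array([0.85, 0.15, 0.05, 0.15])  # pretend embedding of "animals"
print(search(query, docs))  # the two pet documents rank above the finance one
```

Swapping in a real embedding model changes only how the vectors are produced, not the search itself.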

### 2 - Vector Databases
- Introduction to vector databases
- Getting started with MongoDB Atlas
- Loading data into the database and populating embeddings
- Vector search with the database and embeddings
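In MongoDB Atlas, vector search runs as a `$vectorSearch` aggregation stage. The sketch below just builds that stage as a dictionary; the index name (`vector_index`) and field name (`embedding`) are assumptions that must match your Atlas configuration:

```python
# Sketch of a MongoDB Atlas $vectorSearch aggregation stage.
# "vector_index" and "embedding" are placeholder names -- they must match
# the search index and document field you configured in Atlas.
def vector_search_stage(query_vector, k=5):
    return {
        "$vectorSearch": {
            "index": "vector_index",       # Atlas search index name
            "path": "embedding",           # document field holding the vector
            "queryVector": query_vector,   # embedding of the user query
            "numCandidates": 10 * k,       # candidates considered before ranking
            "limit": k,                    # results returned
        }
    }

stage = vector_search_stage([0.1, 0.2, 0.3])
# With pymongo this would run as: collection.aggregate([stage, ...])
```

Raising `numCandidates` trades query speed for recall, which is why it is usually set to a multiple of `limit`.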

### 3 - LLMs
- Introduction to LLMs and the ecosystem
- Accessing LLMs via API
- Running LLMs locally using llama.cpp, Ollama, LM Studio, and Jan
- Experimenting with different LLMs
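Hosted LLM APIs (OpenAI, Mistral, and others) accept very similar JSON request bodies. As a sketch of the "access LLMs via API" step, here is the chat-completions payload shape; the model name is a placeholder and no network call is made:

```python
import json

# Sketch of an OpenAI-style chat-completions request body. The model name
# is a placeholder; hosted providers use closely related JSON shapes.
def chat_request(prompt, model="gpt-4o-mini", temperature=0.2):
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = chat_request("What is RAG?")
print(json.dumps(body, indent=2))
# To send it, POST this body (with an API key header) to the provider's
# chat-completions endpoint, e.g. https://api.openai.com/v1/chat/completions
```

Keeping the payload builder separate from the HTTP call makes it easy to swap providers during the workshop.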

### 4 - Running LLMs
- We will use frameworks such as llama.cpp, LlamaIndex, and LangChain to run local LLMs
- Run a local LLM and experiment with various open LLMs such as Mistral and Llama

### 5 - Develop a custom application using LLM
- Use frameworks such as Streamlit and Flask to build sample applications
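The workshop uses Streamlit and Flask; as a dependency-free sketch of the same request-to-answer shape, here is a minimal stdlib WSGI app (the `answer` function is a placeholder where a real app would call an LLM):

```python
from urllib.parse import parse_qs

def answer(question):
    # Placeholder "LLM": a real application would call a model here.
    return f"You asked: {question}"

def app(environ, start_response):
    # Read ?q=... from the query string and return a plain-text answer.
    query = parse_qs(environ.get("QUERY_STRING", ""))
    question = query.get("q", ["(no question)"])[0]
    body = answer(question).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [body]

# Serve locally with:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Flask and Streamlit hide this plumbing, but the structure -- parse the request, call the model, return text -- is the same.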

### 6 - Building RAG Applications
- Query PDF documents using LLMs
- Index documents with embeddings, using various embedding models (OpenAI, Mistral, open-source models)
- Query the documents using various LLMs (OpenAI, Mistral, Llama)
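The RAG loop above can be sketched end to end in a few lines: retrieve the most relevant chunk, then build a grounded prompt for the LLM. The "embedding" here is a bag-of-words count vector, a deliberately crude stand-in for a real embedding model; the chunks are invented examples:

```python
import math
import re
from collections import Counter

chunks = [
    "RAG retrieves relevant documents before generating an answer.",
    "MongoDB Atlas supports vector search over embeddings.",
    "Llama and Mistral are open-weight large language models.",
]

def embed(text):
    # Toy embedding: word-count vector (a real model returns dense floats).
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=1):
    # "R" in RAG: rank chunks by similarity to the question.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question, chunks):
    # "AG" in RAG: ground the LLM prompt in the retrieved context.
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Which databases support vector search?", chunks))
```

In the workshop this skeleton stays the same; only `embed` (a real embedding model) and the final LLM call change.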

### 7 - Deploying RAG Applications
- Containerizing applications
- Model serving
- Scalability
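As a sketch of the containerization step, a typical image for a Python app looks like the Dockerfile below; the filenames (`requirements.txt`, `app.py`) and port are placeholders for your own project layout:

```dockerfile
# Hedged sketch of a container image for the workshop app.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

Copying `requirements.txt` before the rest of the code lets Docker cache the dependency layer across rebuilds.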

### 8 - Workshop / Project
- Attendees will build a sample application using data of interest and an LLM of their choice


