
Commit e7d521d: formatting

1 parent da207ea commit e7d521d

File tree: 1 file changed (+8/-4 lines)


updates/website-11-21-23.md

Lines changed: 8 additions & 4 deletions
@@ -20,7 +20,7 @@ In the era of advanced technology, integrating Artificial Intelligence (AI) into

 Our AI chat system takes a unique approach by analyzing our documentation to find the most suitable information for user queries. If the AI can't provide a direct answer, it gracefully redirects users to our Discord community for further assistance.

-### 🚫Warning🚫 this is a very technical document going deep into code if you care great this is the blog post I wanted when we first started down this path.
+### 🚫Warning🚫 this is a very technical document going deep into code if you care great this is the blog post I wanted when we first started down this path. If not your take away can be we are testing an ai chat for the website.

 ## How Does It Get Info from Our Docs

@@ -108,7 +108,7 @@ metadata: {
 ```

 Now you can see mostly the same stuff but a new member called `embedding` these are all of the words as numbers.
-These we can use to do similarity searches using math instead of string, making the whole search very fast and more acurate.
+These we can use to do similarity searches using math instead of string, making the whole search very fast and more accurate.


 ## How We Generate the Answer
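
The line touched in that hunk is about replacing string matching with vector math. As a rough sketch of what a similarity search over those `embedding` arrays could look like, here is a minimal cosine-similarity ranker in TypeScript; the `EmbeddedDoc` shape, the `scoreDocs` helper, and the toy vectors are illustrative assumptions, not the site's actual code.

```typescript
// Minimal sketch (assumed, not the site's real implementation): rank document
// chunks by cosine similarity between their embedding vectors and the query's.

type EmbeddedDoc = { content: string; embedding: number[] };

// Dot product of two equal-length vectors.
const dot = (a: number[], b: number[]) =>
  a.reduce((sum, value, i) => sum + value * b[i], 0);

// Cosine similarity: close to 1 when vectors point the same way, near 0 when unrelated.
const cosineSimilarity = (a: number[], b: number[]) =>
  dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));

// Return the topK docs whose embeddings are closest to the query embedding.
const scoreDocs = (queryEmbedding: number[], docs: EmbeddedDoc[], topK = 3) =>
  docs
    .map((doc) => ({ doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);

// Toy usage with 3-dimensional vectors (real embeddings have hundreds of dimensions).
const docs: EmbeddedDoc[] = [
  { content: "How to install the CLI", embedding: [0.9, 0.1, 0.0] },
  { content: "Troubleshooting deploys", embedding: [0.2, 0.8, 0.1] },
];
console.log(scoreDocs([0.85, 0.15, 0.05], docs, 1)[0].doc.content); // "How to install the CLI"
```
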
@@ -123,13 +123,17 @@ To generate the final answer, we utilize prompt engineering. This involves takin

 ```typescript

-export const retriever = async (prompt: string, chatHistory: string[] = []) => { // this is our entry point where we take in the prompt and chat history then build the prompt we will send into the ai and seach our docs database for relevent info for the ai to use as context.
+export const retriever = async (prompt: string, chatHistory: string[] = []) => { // this is our
+//entry point where we take in the prompt and chat history then build the prompt we will send into
+//the ai and seach our docs database for relevent info for the ai to use as context.
 const formattedPrompt = await buildProptemplate(prompt, chatHistory)
 const result = await prompAi(formattedPrompt)
 return result
 }

-const buildProptemplate = async (prompt: string, chatHistory: string[] = []) => { // this function takes the user's question and the chat history and formats it into a prompt for the AI to use
+const buildProptemplate = async (prompt: string, chatHistory: string[] = []) => { // this function
+//takes the user's question and the chat history
+//and formats it into a prompt for the AI to use
 const embeddingsModel = new HuggingFaceTransformersEmbeddings({
 modelName: "Xenova/all-MiniLM-L6-v2",
 });
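
The hunk ends right where `buildProptemplate` starts creating its embeddings model, so the rest of that function is not visible in this commit. Purely as a hedged sketch of the prompt-assembly step the comment describes (the template wording, the `docSnippets` parameter, and `formatPrompt` itself are assumptions, not code from the repo), it might look something like this:

```typescript
// Hypothetical prompt-template assembly; not taken from the actual buildProptemplate.
const formatPrompt = (
  question: string,
  docSnippets: string[],      // chunks returned by the docs similarity search
  chatHistory: string[] = []  // prior turns, oldest first
): string => {
  const context = docSnippets
    .map((snippet, i) => `Doc ${i + 1}:\n${snippet}`)
    .join("\n\n");
  const history = chatHistory.length > 0
    ? `Previous conversation:\n${chatHistory.join("\n")}\n\n`
    : "";

  return [
    "You are a documentation assistant. Answer using only the context below.",
    "If the context does not contain the answer, say so and point the user to our Discord.",
    "",
    `${history}Context:\n${context}`,
    "",
    `Question: ${question}`,
  ].join("\n");
};
```
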
