Installation

Install the core package:

npm i @vectorstores/core

Then install the package for the vector database you want to use; for example, for Qdrant:

npm i @vectorstores/qdrant
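
Once both packages are installed, the adapter can be wired into an index. The snippet below is only an illustrative sketch: the QdrantVectorStore class name, its constructor options, and the second argument to fromDocuments are assumptions about the adapter's API, so check the package's own documentation for the exact shape.

import { Document, VectorStoreIndex } from "@vectorstores/core";
// Assumed export name; the real adapter may expose a different class.
import { QdrantVectorStore } from "@vectorstores/qdrant";

// Assumed constructor options: a local Qdrant instance and a collection name.
const vectorStore = new QdrantVectorStore({
  url: "http://localhost:6333",
  collectionName: "demo",
});

// Embedding setup via Settings.embedFunc is omitted here; see the full example below.
// Assumed: fromDocuments accepts the store as a configuration option.
const index = await VectorStoreIndex.fromDocuments(
  [new Document({ text: "Stored in Qdrant instead of in memory." })],
  { vectorStore },
);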

vectorstores is written in TypeScript and ships complete type definitions. Add these settings to your tsconfig.json:

{
  "compilerOptions": {
    // Essential for module resolution
    "moduleResolution": "bundler", // or "nodenext" | "node16" | "node"
    // Required for Web Stream API support
    "lib": ["DOM.AsyncIterable"],
    // Recommended for better compatibility
    "target": "es2020",
    "module": "esnext"
  }
}

If you don’t already have a project, you can create a new one in a new folder:

npm init
npm i -D typescript @types/node tsx
npm i @vectorstores/core openai

You’ll also need to set your OpenAI API key; the OpenAI client reads it from the OPENAI_API_KEY environment variable:

export OPENAI_API_KEY=your-api-key-here

Create the file example.ts. This code will:

  • Configure OpenAI embeddings for generating vector representations
  • Load a text file from your filesystem
  • Create a Document object from the text
  • Build a VectorStoreIndex that automatically splits the text and generates embeddings
  • Create a retriever to search the indexed content
  • Query the index and display results in a formatted table

import { Document, VectorStoreIndex, Settings, type TextEmbedFunc } from "@vectorstores/core";
import { OpenAI } from "openai";
import fs from "node:fs/promises";
import { fileURLToPath } from "node:url";

// Configure OpenAI embeddings
const openai = new OpenAI();
Settings.embedFunc = async (input: string[]): Promise<number[][]> => {
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input,
  });
  return data.map((d) => d.embedding);
};

async function main() {
  // Load a text file (create a sample.txt file with some text content)
  const filePath = fileURLToPath(new URL("./sample.txt", import.meta.url));
  const text = await fs.readFile(filePath, "utf-8");

  // Create a Document object with the text
  const document = new Document({ text, id_: filePath });

  // Split the text, generate embeddings, and store them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Create a retriever from the index
  const retriever = index.asRetriever();

  // Query the index
  const response = await retriever.retrieve({
    query: "What is the main topic?",
  });

  // Display results
  console.log(`Found ${response.length} result(s):\n`);
  response.forEach((result, i) => {
    console.log(`${i + 1}. Score: ${(result.score! * 100).toFixed(2)}%`);
    console.log(`   Text: ${result.node.text?.substring(0, 100)}...\n`);
  });
}

main().catch(console.error);

Create a sample text file sample.txt in the same directory:

Machine learning is a subset of artificial intelligence that focuses on algorithms
that can learn from data. These algorithms build mathematical models based on
training data to make predictions or decisions without being explicitly programmed
to perform the task.

To run the code:

npx tsx example.ts

You should see output similar to:

Found 1 result(s):
1. Score: 85.32%
Text: Machine learning is a subset of artificial intelligence that focuses on algorithms
that can learn from data. These algorithms build mathemat...

This example demonstrates the core workflow of vectorstores (the sketch after this list applies the same calls to several files):

  1. Ingestion: Converting your text into Document objects
  2. Indexing: Automatically splitting text into chunks and generating embeddings
  3. Retrieval: Querying the index to find semantically similar content
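
The ingestion and indexing steps use exactly the calls shown above, so they extend directly to more than one file. A minimal sketch reusing only APIs from the example; the file names are placeholders:

// Each file becomes its own Document; fromDocuments indexes them together.
const files = ["./notes.txt", "./report.txt"]; // placeholder paths
const documents = await Promise.all(
  files.map(async (path) => {
    const text = await fs.readFile(path, "utf-8");
    return new Document({ text, id_: path });
  }),
);

// One index over all documents; retrieval works the same as before.
const multiIndex = await VectorStoreIndex.fromDocuments(documents);
const results = await multiIndex.asRetriever().retrieve({ query: "What is the main topic?" });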
