
TypeScript

GitHub repo

The repo for our SDK is here and includes a quickstart section.

If you want to dive right in, we recommend cloning the repo and cracking on with the quickstart.

If you want to read more about it first, read on:

Install the TypeScript SDK

npm install @onecontext/ts-sdk

Initial Setup

To start using OneContext, set the following environment variables:

API_KEY=<your api key>
BASE_URL=<your base url>

You can put them in a .env file in the root of your project and load them like so:

import * as OneContext from "@onecontext/ts-sdk";
import * as dotenv from "dotenv";
import path from "path";
import { fileURLToPath } from "url";
import * as util from "util";

// Recreate __dirname, since this is an ES module
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Create a .env file in your project root and add your API_KEY and BASE_URL
dotenv.config({ path: path.join(__dirname, "../.env") });

// Make sure the env variables are read correctly and exposed as module-level constants
const API_KEY: string = process.env.API_KEY!;
const BASE_URL: string = process.env.BASE_URL!;
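The non-null assertions (!) above will happily pass undefined through at runtime. If you'd rather fail fast on a missing variable, here's a minimal sketch you can use instead:

// Fail fast if the environment is not configured, rather than
// deferring the error to the first API call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const API_KEY: string = requireEnv("API_KEY");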

🙋‍♂️ You can get an API key by signing up here.

🙋‍♂️ If you're on the serverless plan, your base URL will simply be https://api.onecontext.ai. That's the default, so you can leave it blank if you like. If you're on the dedicated plan, this will be the URL of your private instance of OneContext.
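If you're on the serverless plan, you can bake that default into code by replacing the BASE_URL line above, for example:

// Fall back to the public serverless endpoint when BASE_URL is unset.
const BASE_URL: string = process.env.BASE_URL ?? "https://api.onecontext.ai";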

Create your first knowledge base

A knowledge base is a collection of files. Let's create our first one:

// This name is referenced later by the ingestion pipeline YAML
const knowledgeBaseName: string = "demoKnowledgeBase";

const knowledgeBaseCreateArgs: OneContext.KnowledgeBaseCreateType = OneContext.KnowledgeBaseCreateSchema.parse({
  API_KEY: API_KEY,
  knowledgeBaseName: knowledgeBaseName
})

OneContext.createKnowledgeBase(knowledgeBaseCreateArgs).then((res) => {console.log(res)})
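If you prefer async/await with explicit error handling, the same call looks like this (the sketch assumes you're inside an async function or a module with top-level await):

try {
  const res = await OneContext.createKnowledgeBase(knowledgeBaseCreateArgs);
  console.log(res);
} catch (err) {
  console.error("Failed to create knowledge base:", err);
}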

Upload some content to this Knowledge Base

You can upload a single file, or a list of files:

const uploadFilesArgs: OneContext.UploadFilesType = OneContext.UploadFilesSchema.parse({
  API_KEY: API_KEY,
  knowledgeBaseName: knowledgeBaseName,
  file: "./quickstart/demo_data/instruct_gpt.pdf",
  metadataJson: {"tag": "longForm"}
})

OneContext.uploadFiles(uploadFilesArgs).then((res) => {console.log(res)})
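To pass a list of files instead, something like the following should work. Note that the array form of the file field is an assumption for illustration (check UploadFilesSchema for the exact accepted shape), and the second path is hypothetical:

// NOTE: the array form of `file` and the second path are assumptions
// for illustration; consult UploadFilesSchema for the exact shape.
const uploadFileListArgs: OneContext.UploadFilesType = OneContext.UploadFilesSchema.parse({
  API_KEY: API_KEY,
  knowledgeBaseName: knowledgeBaseName,
  file: ["./quickstart/demo_data/instruct_gpt.pdf", "./quickstart/demo_data/another_paper.pdf"],
  metadataJson: {"tag": "longForm"}
})

OneContext.uploadFiles(uploadFileListArgs).then((res) => {console.log(res)})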

Or upload all the compatible files in a directory:

const uploadDirectoryArgsLongForm: OneContext.UploadDirectoryType = OneContext.UploadDirectorySchema.parse({
  API_KEY: API_KEY,
  knowledgeBaseName: knowledgeBaseName,
  directory: "./quickstart/demo_data/long_form/",
  metadataJson: {"tag": "longForm"}
})

OneContext.uploadDirectory(uploadDirectoryArgsLongForm).then((res) => {console.log(res)})
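If you want different metadata per batch, you can call uploadDirectory once per directory, for instance (the short_form path here is hypothetical):

// Hypothetical second directory, tagged differently so we can
// filter on this metadata at query time.
const uploadDirectoryArgsShortForm: OneContext.UploadDirectoryType = OneContext.UploadDirectorySchema.parse({
  API_KEY: API_KEY,
  knowledgeBaseName: knowledgeBaseName,
  directory: "./quickstart/demo_data/short_form/",
  metadataJson: {"tag": "shortForm"}
})

OneContext.uploadDirectory(uploadDirectoryArgsShortForm).then((res) => {console.log(res)})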

Create a Vector Index

We want to chunk and embed the files in our knowledge base, but first we need somewhere to store the vectors. We create a vector index and specify the embedding model it should expect:

// This name is referenced later by both pipeline YAMLs
const vectorIndexName: string = "demoVectorIndex";

const vectorIndexCreateArgs: OneContext.VectorIndexCreateType = OneContext.VectorIndexCreateSchema.parse({
  API_KEY: API_KEY,
  vectorIndexName: vectorIndexName,
  modelName: "BAAI/bge-base-en-v1.5"
})

OneContext.createVectorIndex(vectorIndexCreateArgs).then((res) => {console.log(res)})

By specifying the model up front, the index is created with the appropriate dimensions (768 for BAAI/bge-base-en-v1.5), and embeddings produced by a different model can never be written to this index.

Create an Ingestion Pipeline

We are ready to deploy our first ingestion pipeline.

// Example name; pick anything you like
const indexPipelineName: string = "demoIndexPipeline";

const indexPipelineCreateArgs: OneContext.PipelineCreateType = OneContext.PipelineCreateSchema.parse({
  API_KEY: API_KEY,
  pipelineName: indexPipelineName,
  pipelineYaml: "./quickstart/example_yamls/index.yaml",
})

OneContext.createPipeline(indexPipelineCreateArgs).then((res) => {console.log(res)})

Where the file at index.yaml reads like so:

steps:
  - step: KnowledgeBaseFiles
    name: input
    step_args:
      # specify the source knowledge bases to watch
      knowledgebase_names: ["demoKnowledgeBase"]
    inputs: []

  - step: Preprocessor
    name: preprocessor
    step_args: {}
    inputs: [input]

  - step: Chunker
    name: simple_chunker
    step_args:
      chunk_size_words: 320
      chunk_overlap: 30
    inputs: [preprocessor]

  - step: SentenceTransformerEmbedder
    name: sentence-transformers
    step_args:
      model_name: BAAI/bge-base-en-v1.5
    inputs: [simple_chunker]

  - step: ChunkWriter
    name: save
    step_args:
      vector_index_name: demoVectorIndex
    inputs: [sentence-transformers]

Let's break down the steps.

The KnowledgeBaseFiles step tells the pipeline to watch the "demoKnowledgeBase" knowledge base. When the pipeline is first deployed, all files already in the knowledge base are run through it; any files uploaded subsequently will trigger the pipeline to run again.

The Preprocessor step prepares the raw files (e.g. extracting their text) so they can be chunked. The Chunker then defines how that text is split into chunks; a rough illustration of the windowing follows this breakdown.

The SentenceTransformerEmbedder step specifies the embedding model that will be used to embed the chunks. Note that it matches the model we set on the vector index.

Finally, the ChunkWriter step writes the embedded chunks to the vector index we created earlier.
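To make the Chunker's step_args concrete, here is a rough, illustrative sketch of what a word-based sliding window with overlap does. This is not the SDK's implementation, just the idea:

// Illustrative only: windows of `chunkSize` words, each window
// starting `chunkSize - overlap` words after the previous one.
function chunkWords(text: string, chunkSize = 320, overlap = 30): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const stride = chunkSize - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += stride) {
    chunks.push(words.slice(start, start + chunkSize).join(" "));
    if (start + chunkSize >= words.length) break;
  }
  return chunks;
}

With chunk_size_words: 320 and chunk_overlap: 30, consecutive chunks share 30 words, so text that falls on a chunk boundary keeps some surrounding context.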

Create a Query Pipeline

Having indexed the files, we now create a pipeline to query the vector index.

// Example name; pick anything you like
const QueryPipelineName: string = "demoQueryPipeline";

const QueryPipelineCreateArgs: OneContext.PipelineCreateType = OneContext.PipelineCreateSchema.parse({
  API_KEY: API_KEY,
  pipelineName: QueryPipelineName,
  pipelineYaml: "./quickstart/example_yamls/query.yaml",
})

OneContext.createPipeline(QueryPipelineCreateArgs).then((res) => {console.log(res)})

Where the file at query.yaml reads like so:

steps:
  - step: SentenceTransformerEmbedder
    name: query_embedder
    step_args:
      model_name: BAAI/bge-base-en-v1.5
      include_metadata: [title, file_name]
      # the placeholder query is overridden at runtime
      query: "placeholder"
    inputs: []

  - step: Retriever
    name: retriever
    step_args:
      vector_index_name: demoVectorIndex
      top_k: 100
      metadata_filters: {}
    inputs: [query_embedder]

  - step: Reranker
    name: reranker
    step_args:
      # the placeholder query is overridden at runtime
      query: "placeholder"
      model_name: BAAI/bge-reranker-base
      top_k: 5
      metadata_filters: {}
    inputs: [retriever]

Run the Query Pipeline

We can run the query pipeline and override any of the default step arguments defined in our pipeline at runtime by passing a dictionary of the form:

{step_name: {step_arg: step_arg_value}}

const query: string = "How much wood could a woodchuck chuck if a woodchuck could chuck wood?"

const QueryPipelineRunArgs: OneContext.RunType = OneContext.RunSchema.parse({
  API_KEY: API_KEY,
  pipelineName: QueryPipelineName,
  overrideArgs: {
    // the embedder and reranker get the real query; both top_k defaults are overridden
    "query_embedder": {"query": query},
    "retriever": {"top_k": 50},
    "reranker": {"top_k": 5, "query": query},
  }
})

OneContext.runPipeline(QueryPipelineRunArgs).then((res) => {console.log(util.inspect(res, {showHidden: true, colors: true}))})
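You can also narrow the search at runtime by overriding the retriever's metadata_filters, e.g. to only consider chunks we tagged "longForm" at upload time. The exact filter syntax below is an assumption for illustration:

const filteredRunArgs: OneContext.RunType = OneContext.RunSchema.parse({
  API_KEY: API_KEY,
  pipelineName: QueryPipelineName,
  overrideArgs: {
    "query_embedder": {"query": query},
    // NOTE: a simple equality filter is assumed here for illustration;
    // consult the pipelines page for the exact filter syntax.
    "retriever": {"top_k": 50, "metadata_filters": {"tag": "longForm"}},
    "reranker": {"top_k": 5, "query": query},
  }
})

OneContext.runPipeline(filteredRunArgs).then((res) => {console.log(res)})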

For much more information on the steps you can add to your pipeline, and what functionality you can get out of pipelines, see the pipelines page.