
Get Started with Vedaya

Build your first intelligent knowledge system that understands relationships, not just keywords. This guide takes you from zero to a working RAG system using real, tested examples.

What You’ll Build

In the next 5 minutes, you’ll:
  • ✅ Upload documents to create a knowledge graph
  • ✅ Query your data using natural language
  • ✅ Get context-aware answers powered by relationships
  • ✅ Build multi-turn conversations that maintain context

🎥 See It In Action (2 min)

Before You Begin

Requirements

  • Python 3.8+
  • 5 minutes to spare
  • Any text documents

Install Libraries

pip install requests openai

Step 1: Initialize Your Environment

import os
import requests
import json
import time
from openai import OpenAI

# Configure API (prefer environment variables)
API_BASE_URL = os.getenv("VEDAYA_API_BASE_URL", "https://vedaya-kg.fly.dev")  # no trailing slash
API_KEY = os.getenv("VEDAYA_API_KEY", "")  # leave empty if no auth

# Build headers
headers = {"Content-Type": "application/json"}
if API_KEY and API_KEY.strip() and API_KEY != "sk-mock-dummy-key":
    headers["Authorization"] = f"Bearer {API_KEY}"

print(f"Connecting to: {API_BASE_URL}")
print(f"Auth header set: {'Authorization' in headers}")
Quick Start Tip: Leave API_KEY empty to test immediately without authentication setup. For production, use environment variables.
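
Before moving on, you can sanity-check connectivity with the same pipeline-status endpoint that Step 2 polls later. A quick sketch:

# Optional: verify the API is reachable before uploading anything
check = requests.get(
    f"{API_BASE_URL}/documents/pipeline_status",
    headers=headers,
    timeout=10
)
print(f"API reachable: {check.status_code == 200}")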

Step 2: Your First Knowledge System (3 Steps)

1. Upload Your Documents

Transform any text into a knowledge graph:
# Example: Upload company documentation
documents = [
    "Our product uses advanced machine learning for customer analytics. "
    "We process millions of transactions daily using distributed systems.",
    
    "The analytics dashboard provides real-time insights. Users can track "
    "customer behavior patterns and predict future trends.",
    
    "Security is implemented through OAuth2 and data encryption. All customer "
    "data is processed in compliance with GDPR regulations."
]

# Upload to Vedaya
response = requests.post(
    f"{API_BASE_URL}/documents/texts",
    headers=headers,
    json={
        "texts": documents,
        "file_sources": [f"doc_{i}.txt" for i in range(len(documents))]
    }
)

if response.status_code == 200:
    print("✅ Documents uploaded successfully")
else:
    print(f"❌ Upload failed: {response.status_code} - {response.text}")
2. Wait for Knowledge Graph Generation

The system builds relationships between concepts (takes ~5 seconds):
print("🔄 Building knowledge graph...")
for _ in range(30):
    status = requests.get(
        f"{API_BASE_URL}/documents/pipeline_status",
        headers=headers
    ).json()

    if not status.get('busy', False):
        print("✅ Knowledge graph ready!")
        break

    print(f"  Status: {status.get('latest_message', 'Processing...')}")
    time.sleep(2)
What’s happening: Vedaya extracts entities, identifies relationships, and builds a knowledge graph for intelligent retrieval.
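
The polling pattern above is worth wrapping in a reusable helper with an explicit timeout. A small sketch (the function name is ours, not part of the API):

def wait_for_pipeline(timeout_seconds=60, poll_interval=2):
    """Poll the pipeline status until idle or until the timeout expires."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = requests.get(
            f"{API_BASE_URL}/documents/pipeline_status",
            headers=headers
        ).json()
        if not status.get("busy", False):
            return True  # knowledge graph is ready
        time.sleep(poll_interval)
    return False  # still busy when time ran out

if not wait_for_pipeline():
    print("⚠️ Pipeline still busy; consider a longer timeout")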
3. Query Your Knowledge

Ask questions and get relationship-aware answers:
# Setup OpenAI-compatible client
client = OpenAI(
    api_key="sk-dummy",
    base_url=f"{API_BASE_URL}/v1"
)

# Ask a question
response = client.chat.completions.create(
    model="vedaya-hybrid",
    messages=[{"role": "user", "content": "How does our security work with analytics?"}],
    temperature=0.7,
    max_tokens=500
)

print("Answer:", response.choices[0].message.content)

Step 3: Choose Your Query Strategy

Different questions need different retrieval strategies. Vedaya provides four specialized models: vedaya-naive, vedaya-local, vedaya-global, and vedaya-hybrid.

Model: vedaya-hybrid
Best for general questions combining facts and relationships:
response = client.chat.completions.create(
    model="vedaya-hybrid",
    messages=[{"role": "user", "content": "Explain our security architecture"}]
)
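
To get a feel for how the strategies differ, you can run one question through all four models and compare the answers side by side. A quick sketch using the client from Step 2:

# Compare all four retrieval strategies on the same question
question = "Explain our security architecture"
for mode in ["vedaya-naive", "vedaya-local", "vedaya-global", "vedaya-hybrid"]:
    response = client.chat.completions.create(
        model=mode,
        messages=[{"role": "user", "content": question}],
        max_tokens=200
    )
    print(f"--- {mode} ---")
    print(response.choices[0].message.content, "\n")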

Advanced Features

Build a Reusable Query Function

Create a robust function with automatic fallback:
def query_knowledge_base(question, mode="vedaya-hybrid"):
    """
    Query your knowledge base with automatic fallback.

    Args:
        question: Your natural language question
        mode: vedaya-naive, vedaya-local, vedaya-global, or vedaya-hybrid

    Returns:
        Answer from the knowledge base
    """

    client = OpenAI(
        api_key="sk-dummy",
        base_url=f"{API_BASE_URL}/v1"
    )

    try:
        # Try the OpenAI SDK first
        response = client.chat.completions.create(
            model=mode,
            messages=[{"role": "user", "content": question}],
            temperature=0.7,
            max_tokens=500
        )
        return response.choices[0].message.content

    except Exception:
        # Fall back to a direct HTTP call, reusing the headers from Step 1
        response = requests.post(
            f"{API_BASE_URL}/v1/chat/completions",
            headers=headers,
            json={
                "model": mode,
                "messages": [{"role": "user", "content": question}],
                "temperature": 0.7,
                "max_tokens": 500
            }
        )
        if response.status_code == 200:
            return response.json()['choices'][0]['message']['content']
        return f"Error: {response.status_code} - {response.text}"

# Example usage
answer = query_knowledge_base(
    "What security measures are in place?",
    mode="vedaya-global"  # Use relationship-aware mode
)
print(answer)

Multi-Turn Conversations

Build context-aware dialogues that remember previous questions:
class VedayaChat:
    def __init__(self):
        self.messages = []
        self.client = OpenAI(
            api_key="sk-dummy",
            base_url=f"{API_BASE_URL}/v1"
        )
    
    def ask(self, question):
        # Add user question
        self.messages.append({"role": "user", "content": question})
        
        # Get response
        response = self.client.chat.completions.create(
            model="vedaya-hybrid",
            messages=self.messages,
            max_tokens=300
        )
        
        # Store assistant response
        answer = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": answer})
        
        return answer

# Example conversation
chat = VedayaChat()
print("Q1:", chat.ask("What are our main security features?"))
print("\nQ2:", chat.ask("How do they protect customer data?"))
print("\nQ3:", chat.ask("What about compliance?"))
Pro Tip: The conversation context helps Vedaya understand pronouns and references, making answers more relevant.
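
One caveat: the full history is resent on every turn, so long sessions grow the prompt without bound. A hypothetical tweak that keeps only the most recent messages (the window size is arbitrary):

# Hypothetical variant: cap the history to bound prompt size
class TrimmedVedayaChat(VedayaChat):
    MAX_HISTORY = 10  # arbitrary window; tune for your documents

    def ask(self, question):
        # Drop the oldest turns before delegating to the base class
        self.messages = self.messages[-self.MAX_HISTORY:]
        return super().ask(question)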

Native Query API

For more control, use the direct query endpoint:
response = requests.post(
    f"{API_BASE_URL}/query",
    headers=headers,
    json={
        "query": "Explain our architecture",
        "mode": "hybrid",
        "top_k": 20,  # Number of chunks to retrieve
        "llm_model": "gpt-4",
        "llm_provider": "openai",
        "response_type": "Multiple Paragraphs"  # Detailed answer
    }
)
answer = response.json()["response"]
print(answer)
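
In production code it is safer to fail loudly than to index into an error payload. A small variant of the call above using requests' built-in error handling and a timeout:

response = requests.post(
    f"{API_BASE_URL}/query",
    headers=headers,
    json={"query": "Explain our architecture", "mode": "hybrid", "top_k": 20},
    timeout=60
)
response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx
print(response.json()["response"])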

Try It Now: Copy-Paste Example

Run this complete example in your Python environment:
# Complete working example - just copy and run!
import requests
from openai import OpenAI
import time

# 1. Upload sample documents
print("📤 Uploading documents...")
requests.post(
    "https://vedaya-kg.fly.dev/documents/texts",
    json={
        "texts": [
            "Vedaya uses knowledge graphs to understand document relationships. "
            "Unlike traditional search, it maps connections between concepts.",
            
            "The system supports multiple retrieval modes: keyword search for facts, "
            "entity search for specific topics, and graph search for relationships.",
            
            "Applications include customer support, research analysis, and "
            "regulatory compliance documentation."
        ]
    }
)

# 2. Wait for processing (fixed wait for simplicity; for reliability,
#    poll /documents/pipeline_status as shown in Step 2)
print("⚙️ Building knowledge graph...")
time.sleep(5)

# 3. Query the knowledge
print("🤔 Asking questions...\n")
client = OpenAI(api_key="sk-dummy", base_url="https://vedaya-kg.fly.dev/v1")

questions = [
    "What makes Vedaya different from traditional search?",
    "What are the main use cases?",
    "How do the retrieval modes work?"
]

for q in questions:
    response = client.chat.completions.create(
        model="vedaya-hybrid",
        messages=[{"role": "user", "content": q}],
        max_tokens=200
    )
    print(f"Q: {q}")
    print(f"A: {response.choices[0].message.content}\n")
Expected Output: You’ll see intelligent answers that understand the relationships between concepts, not just keyword matches.

Platform Differences

| Feature | Traditional RAG | Vedaya Knowledge Graph RAG |
| --- | --- | --- |
| Retrieval | Keyword/semantic similarity | Relationship-aware context |
| Context | Isolated chunks | Connected knowledge network |
| Accuracy | Can miss related information | Finds connected concepts |
| Setup Time | 15-30 minutes | < 5 minutes |
| Query Modes | Single approach | 4 specialized strategies |

Quick Reference

requests.post(
    "https://vedaya-kg.fly.dev/documents/texts",
    json={"texts": documents}
)
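
And the other calls used in this guide, in the same copy-paste style:

# Check pipeline status
requests.get("https://vedaya-kg.fly.dev/documents/pipeline_status")

# Native query endpoint
requests.post(
    "https://vedaya-kg.fly.dev/query",
    json={"query": "Your question", "mode": "hybrid"}
)

# OpenAI-compatible chat completions
client = OpenAI(api_key="sk-dummy", base_url="https://vedaya-kg.fly.dev/v1")
client.chat.completions.create(
    model="vedaya-hybrid",
    messages=[{"role": "user", "content": "Your question"}]
)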

🎉 Success! You now have a working knowledge graph RAG system. Your documents are searchable with relationship-aware intelligence.