Build an AI Astrologer Chatbot with the Vedic Astrology API
The VedIntel™ /api/v1/ai/chat endpoint is the only astrology API that combines Swiss Ephemeris precision (real birth-chart computation) with Claude AI (Anthropic's frontier model) in a single call. Your chatbot doesn't guess — it reads from an accurate Vedic chart computed on the fly.
This guide covers the API call, SSE streaming, multi-turn conversation history, a ready-to-ship React component, and the no-code embeddable widget.
Why This AI Astrologer API Is Different
Real chart data — Claude receives Lagna, Moon, all 9 planets, and the current dasha, not generic horoscope text
Swiss Ephemeris accuracy — zero external computation dependency; runs locally; mathematically verified
Streaming output — SSE token-by-token streaming that feels live, not like waiting on a blocking API call
Multi-turn memory — send history[] and the AI remembers everything you said this session
3 astrology types — Vedic, Western, or Tarot, each with its own system prompt and knowledge base
Embeddable widget — a 2-line snippet with birth form and chat UI included; zero frontend work
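The three systems share one endpoint; only the `type` field in the request body changes. A small sketch (the questions are illustrative placeholders):

```typescript
// Same endpoint, same birth data — only `type` selects the system prompt.
const base = { api_key: 'YOUR_KEY', dob: '01/10/1977', tob: '11:40', lat: 11, lon: 77, tz: 5.5 };

const vedic   = { ...base, type: 'vedic',   question: 'What is my current mahadasha?' };
const western = { ...base, type: 'western', question: 'What does my sun sign say about career?' };
const tarot   = { ...base, type: 'tarot',   question: 'Draw three cards for my week.' };
```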
The AI Astrologer API Call
One POST, birth data + question, and you get a streaming Vedic astrology response. The API automatically computes the chart and sends it to Claude as context before answering:
// Minimal AI astrologer API call
const response = await fetch('https://vedintelastroapi.com/api/v1/ai/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    api_key: 'YOUR_KEY',
    dob: '01/10/1977', // DD/MM/YYYY
    tob: '11:40',      // HH:MM (24hr)
    lat: 11,
    lon: 77,
    tz: 5.5,
    question: 'What does my current Jupiter mahadasha mean for my career?',
    type: 'vedic',     // 'vedic' | 'western' | 'tarot'
  }),
});
// Streams Server-Sent Events:
// data: {"token":"Jupiter"}
// data: {"token":" in your"}
// ...
// data: [DONE]
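The request body fields can be summarized as a TypeScript interface. This is inferred from the example above, not an official SDK type — field names follow the JSON body shown; the `validateBirthFields` helper is ours, added only to illustrate the expected formats:

```typescript
// Request shape for POST /api/v1/ai/chat (inferred from the example, not an official type)
type AstroType = 'vedic' | 'western' | 'tarot';

interface AiChatRequest {
  api_key: string;
  dob: string;        // DD/MM/YYYY
  tob: string;        // HH:MM, 24-hour clock
  lat: number;        // latitude in decimal degrees
  lon: number;        // longitude in decimal degrees
  tz: number;         // UTC offset in hours, e.g. 5.5 for IST
  question: string;
  type: AstroType;
  history?: { role: 'user' | 'assistant'; content: string }[]; // multi-turn memory
}

// Illustrative helper: sanity-check dob/tob formats before sending
function validateBirthFields(req: AiChatRequest): boolean {
  return /^\d{2}\/\d{2}\/\d{4}$/.test(req.dob) && /^\d{2}:\d{2}$/.test(req.tob);
}
```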
Reading the Streaming SSE Response
The response is Server-Sent Events — tokens arrive in real time. Read with the Fetch streaming API:
// Reading the SSE stream in JavaScript
async function askAstrologer(question, birthData) {
  const res = await fetch('/api/v1/ai/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ api_key: 'YOUR_KEY', ...birthData, question, type: 'vedic' }),
  });

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';     // holds partial SSE lines split across chunk boundaries
  let fullAnswer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the possibly incomplete last line for the next chunk
    for (const line of lines.filter(l => l.startsWith('data: '))) {
      const payload = line.slice(6).trim();
      if (payload === '[DONE]') return fullAnswer;
      try {
        const { token } = JSON.parse(payload);
        fullAnswer += token;
        // Update the UI in real time:
        document.getElementById('answer').textContent = fullAnswer;
      } catch { /* skip malformed lines */ }
    }
  }
  return fullAnswer;
}

Multi-Turn Conversation — Building Chat History
Append every user + assistant turn to a history[] array and send it on the next call. The AI remembers everything in the session:
// Multi-turn conversation — send history[] on every call
const history = [];

async function chat(userMessage, birthData) {
  const res = await fetch('/api/v1/ai/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      api_key: 'YOUR_KEY',
      ...birthData,
      question: userMessage,
      history, // ← previous turns
      type: 'vedic',
    }),
  });

  let aiResponse = '';
  let buffer = ''; // holds partial SSE lines split across chunk boundaries
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the possibly incomplete last line
    for (const line of lines.filter(l => l.startsWith('data: '))) {
      const payload = line.slice(6).trim();
      if (payload === '[DONE]') continue; // stream closes right after [DONE]
      try { aiResponse += JSON.parse(payload).token; } catch { /* skip malformed */ }
    }
  }

  // Append both turns to history for the next call
  history.push({ role: 'user', content: userMessage });
  history.push({ role: 'assistant', content: aiResponse });
  return aiResponse; // up to 20 turns before history is truncated automatically
}

Complete React AI Astrologer Chat Component
Drop this into any React or Next.js project. It handles streaming, history, and UI in under 80 lines:
// React chat component for AI astrologer
'use client';
import { useState, useRef } from 'react';

interface Message { role: 'user' | 'assistant'; content: string; }
interface BirthData { dob: string; tob: string; lat: number; lon: number; tz: number; }

export default function AstrologerChat({ birthData }: { birthData: BirthData }) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');
  const [streaming, setStreaming] = useState(false);
  const historyRef = useRef<Message[]>([]);

  async function sendMessage() {
    if (!input.trim() || streaming) return;
    const userMsg = input.trim();
    setInput('');
    setStreaming(true);
    setMessages(prev => [...prev, { role: 'user', content: userMsg }, { role: 'assistant', content: '' }]);

    const res = await fetch('/api/v1/ai/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        // NEXT_PUBLIC_ vars are bundled into client JS — for production,
        // proxy this call through a server route so the key stays private.
        api_key: process.env.NEXT_PUBLIC_ASTRO_KEY,
        ...birthData,
        question: userMsg,
        history: historyRef.current,
        type: 'vedic',
      }),
    });

    let aiContent = '';
    let buffer = ''; // holds partial SSE lines split across chunks
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop()!; // keep the possibly incomplete last line
      for (const line of lines.filter(l => l.startsWith('data: '))) {
        const payload = line.slice(6).trim();
        if (payload === '[DONE]') continue; // stream closes right after [DONE]
        try {
          aiContent += JSON.parse(payload).token;
          // Update the last (assistant) message live
          setMessages(prev => [...prev.slice(0, -1), { role: 'assistant', content: aiContent }]);
        } catch { /* skip malformed lines */ }
      }
    }

    historyRef.current = [
      ...historyRef.current,
      { role: 'user', content: userMsg },
      { role: 'assistant', content: aiContent },
    ];
    setStreaming(false);
  }

  return (
    <div style={{ display: 'flex', flexDirection: 'column', height: 500, border: '1px solid #30363d', borderRadius: 12, overflow: 'hidden' }}>
      <div style={{ flex: 1, overflowY: 'auto', padding: 16, background: '#0d1117' }}>
        {messages.map((m, i) => (
          <div key={i} style={{ marginBottom: 12, textAlign: m.role === 'user' ? 'right' : 'left' }}>
            <span style={{
              display: 'inline-block', padding: '10px 14px', borderRadius: 12, maxWidth: '80%',
              background: m.role === 'user' ? '#4f46e5' : '#1e2433',
              color: 'white', fontSize: 14, lineHeight: 1.6,
            }}>{m.content || (streaming ? '▋' : '')}</span>
          </div>
        ))}
      </div>
      <div style={{ display: 'flex', padding: 12, borderTop: '1px solid #30363d', background: '#161b22' }}>
        <input value={input} onChange={e => setInput(e.target.value)}
          onKeyDown={e => e.key === 'Enter' && sendMessage()}
          placeholder="Ask your AI astrologer..."
          style={{ flex: 1, padding: '10px 14px', borderRadius: 8, border: '1px solid #30363d', background: '#0d1117', color: 'white', fontSize: 14, outline: 'none' }}
        />
        <button onClick={sendMessage} disabled={streaming}
          style={{ marginLeft: 8, padding: '10px 18px', background: '#4f46e5', color: 'white', border: 'none', borderRadius: 8, fontWeight: 700, cursor: 'pointer' }}>
          {streaming ? '...' : '→'}
        </button>
      </div>
    </div>
  );
}

Embeddable Widget — No Frontend Work Required
If you just want an AI astrologer chatbot on your website without building a UI, use the 2-line embed snippet. It includes a birth data form, city autocomplete, type switcher (Vedic/Western/Tarot), and the full streaming chat:
<!-- Drop this anywhere on your website — no React, no build step -->
<script>
  (function(w,d,s,o,f,js,fjs){
    w['VedIntelWidget']=o; w[o]=w[o]||function(){(w[o].q=w[o].q||[]).push(arguments)};
    js=d.createElement(s); fjs=d.getElementsByTagName(s)[0];
    js.id=o; js.src=f; js.async=1; fjs.parentNode.insertBefore(js,fjs);
  })(window,document,'script','vai','https://vedintelastroapi.com/api/v1/widget.js');

  vai('init', {
    apiKey: 'YOUR_KEY',
    type: 'vedic', // or 'western' or 'tarot'
    // If you already have birth data:
    // dob: '01/10/1977', tob: '11:40', lat: 11, lon: 77, tz: 5.5,
  });
</script>
<!-- A floating chat button appears bottom-right. Users enter birth details, then chat. -->

Pricing — AI Chat Calls
Each AI chat call deducts 1 call from your plan quota — the same pool as every other endpoint. A Developer plan (5,000 calls/month) can be spent entirely on AI chat or mixed with the standard Vedic endpoints — your choice. For high-volume AI chat, use BYOLLM: pass your own OpenAI/Gemini/Mistral key, pay your LLM provider directly, and we charge only 1 call for the chart compute. See full pricing →
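A sketch of what a BYOLLM request body might look like. Note that `llm_provider` and `llm_api_key` are illustrative field names only, not confirmed by this guide — check the pricing docs for the actual parameters:

```typescript
// Hypothetical BYOLLM body — 'llm_provider' and 'llm_api_key' are placeholder
// names for illustration; the real field names may differ.
const byollmBody = {
  api_key: 'YOUR_KEY',     // VedIntel key — billed 1 call for the chart compute
  dob: '01/10/1977', tob: '11:40', lat: 11, lon: 77, tz: 5.5,
  question: 'What does my current dasha mean for my career?',
  type: 'vedic',
  llm_provider: 'openai',  // illustrative
  llm_api_key: 'sk-...',   // your own key — tokens billed by your LLM provider
};
```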
Start building your AI astrologer chatbot
500 free API calls on signup. No credit card. Claude AI + real Vedic chart data from the first message.