Build with AI

Leverage EzenIA's powerful APIs to integrate AI solutions into your applications seamlessly.

Getting Started

Authentication

```shell
curl -X POST "https://api.ezenia.com/auth/token" \
  -H "Content-Type: application/json" \
  -d '{"client_id": "your_client_id", "client_secret": "your_secret"}'
```

This endpoint returns your access token, which must be included in the Authorization header of every API request.
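As a minimal sketch, the token can be attached to subsequent requests like this (assuming the standard Bearer scheme; `authHeaders` is an illustrative helper, not part of the SDK):

```javascript
// Builds the common headers for authenticated EzenIA requests.
// `token` is the access token returned by /auth/token.
function authHeaders(token) {
  return {
    'Authorization': `Bearer ${token}`,
    'Content-Type': 'application/json'
  };
}

// Usage:
// fetch('https://api.ezenia.com/api/v1/models', { headers: authHeaders(token) })
```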

1. Install SDK

```shell
npm install ezenia-sdk
```
2. Configure Client

```javascript
import { EzenClient } from 'ezenia-sdk'

const client = new EzenClient({
  apiKey: 'your-api-key',
  endpoint: 'https://api.ezenia.com'
})
```

API Endpoints

🧠

AI Models

List Available Models

GET /api/v1/models

Public
```json
{
  "models": [
    {
      "id": "llm-001",
      "name": "Ezen-12B",
      "type": "conversation",
      "description": "General purpose language model for enterprise use",
      "capabilities": ["text", "code", "audio"]
    }
  ]
}
```
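Since this endpoint is public, a plain GET is enough. As a sketch, the response shape above can be filtered client-side by capability (`modelsWithCapability` is an illustrative helper, not part of the API):

```javascript
// Illustrative helper: pick models that support a given capability
// from the /api/v1/models response shape shown above.
function modelsWithCapability(listResponse, capability) {
  return listResponse.models.filter(m => m.capabilities.includes(capability));
}

// Usage (public endpoint, no token required):
// const res = await fetch('https://api.ezenia.com/api/v1/models');
// const data = await res.json();
// const codeModels = modelsWithCapability(data, 'code');
```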

Generate AI Response

POST /api/v1/models/{id}/generate

Protected
Request:

```json
{ "prompt": "Explain quantum computing in simple terms" }
```

Response:

```json
{
  "response": "Quantum computing uses qubits that can exist in multiple states simultaneously...",
  "model_used": "llm-001",
  "tokens_used": 357
}
```
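A minimal sketch of calling this endpoint with `fetch` (`buildGenerateRequest` is an illustrative helper, not part of the SDK; it assumes the Bearer token from Authentication above):

```javascript
// Illustrative helper: request options for POST /api/v1/models/{id}/generate.
function buildGenerateRequest(token, prompt) {
  return {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ prompt })
  };
}

// Usage:
// const res = await fetch('https://api.ezenia.com/api/v1/models/llm-001/generate',
//   buildGenerateRequest(token, 'Explain quantum computing in simple terms'));
// const { response, tokens_used } = await res.json();
```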
📊

Data Analysis

Upload Data

POST /api/v1/data/upload

Protected
```json
{
  "data": [{"column1": 123}, {"column2": "example"}],
  "format": "json"
}
```

Run Analysis

POST /api/v1/analysis

Protected
Request:

```json
{
  "analysis_type": "regression",
  "parameters": { "confidence": 0.95 }
}
```

Response:

```json
{
  "coefficient": 1.23,
  "p_value": 0.002,
  "confidence_interval": [0.89, 1.57]
}
```
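The two Data Analysis endpoints are typically chained: upload first, then run the analysis. A sketch of the request bodies (`buildUploadBody` and `buildAnalysisBody` are illustrative helpers, not part of the SDK):

```javascript
// Illustrative helper: body for POST /api/v1/data/upload.
function buildUploadBody(rows) {
  return JSON.stringify({ data: rows, format: 'json' });
}

// Illustrative helper: body for POST /api/v1/analysis.
function buildAnalysisBody(type, confidence) {
  return JSON.stringify({ analysis_type: type, parameters: { confidence } });
}

// Usage (headers as in Authentication above):
// await fetch('https://api.ezenia.com/api/v1/data/upload', {
//   method: 'POST', headers, body: buildUploadBody([{ column1: 123 }])
// });
// const res = await fetch('https://api.ezenia.com/api/v1/analysis', {
//   method: 'POST', headers, body: buildAnalysisBody('regression', 0.95)
// });
// const { coefficient, p_value } = await res.json();
```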
⚡

Real-time Inference

Start Inference Session

POST /api/v1/session

Protected
```json
{
  "model_id": "llm-001",
  "temperature": 0.7
}
```

Stream Response

POST /api/v1/session/{id}/stream

Protected
Request:

```json
{ "input": "Analyze this customer sentiment data" }
```

Example streamed output:

```
Type: positive
Score: 92%
Confidence: 95.4%
```

Build AI Chatbot

Here's how to implement a basic AI conversation interface with streaming responses.

React Example

```jsx
// src/App.jsx
import { useState, useRef } from 'react';

function Chatbot() {
  const [messages, setMessages] = useState([]);
  const session = useRef();

  const startSession = async () => {
    const res = await fetch('/api/v1/session', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ model_id: 'llm-001' })
    });
    session.current = await res.json();
  };

  const sendMessage = async (text) => {
    const res = await fetch(`/api/v1/session/${session.current.id}/stream`, {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ input: text })
    });

    // Read the response body as a stream, updating the UI chunk by chunk.
    // Append one placeholder AI message, then grow it as chunks arrive.
    setMessages(prev => [...prev, { from: 'ai', text: '' }]);
    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    let response = '';
    while (true) {
      const { value, done } = await reader.read();
      if (done) break;
      response += decoder.decode(value);
      const current = response;
      setMessages(prev => [...prev.slice(0, -1), { from: 'ai', text: current }]);
    }
  };

  return (
    <div>
      {/* Chat UI would go here */}
    </div>
  );
}

export default Chatbot;
```

Node.js Example

```javascript
// Requires Node 18+, which ships a built-in fetch with WHATWG streams.
async function getAIResponse(apiKey, prompt) {
  const sessionRes = await fetch('https://api.ezenia.com/api/v1/session', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ model_id: 'llm-001' })
  });
  const sessionData = await sessionRes.json();

  const streamRes = await fetch(
    `https://api.ezenia.com/api/v1/session/${sessionData.id}/stream`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ input: prompt })
    }
  );

  // Accumulate streamed chunks into a single string.
  const reader = streamRes.body.getReader();
  const decoder = new TextDecoder();
  let chunks = '';
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    chunks += decoder.decode(value);
  }
  return chunks;
}
```