Server-side Implementation

This guide covers implementing Addis AI services in server-side applications. Server-side integration offers advantages in security, performance, and architecture flexibility compared to direct client-side API calls.

Why Use Server-side Integration?

Server-side integration with Addis AI provides several key advantages:
  1. API Key Security: Keep your API keys secure on your server instead of exposing them in client-side code
  2. Request Validation: Validate and sanitize user inputs before forwarding to Addis AI
  3. Response Caching: Implement caching strategies to reduce API calls and improve performance
  4. Custom Logic: Add business logic, logging, and monitoring around AI capabilities
  5. Cost Control: Implement usage limits and throttling to manage API consumption
  6. CORS Avoidance: Sidestep cross-origin restrictions that can block direct client-side requests

Architecture Overview

A typical server-side integration follows this pattern:
  1. Client (web/mobile app) sends request to your server
  2. Your server validates the request and adds authentication
  3. Your server forwards the request to Addis AI API
  4. Addis AI responds to your server
  5. Your server processes the response and sends it back to the client
```text
┌─────────┐      ┌─────────┐      ┌──────────┐
│ Client  │ ───> │  Your   │ ───> │ Addis AI │
│ App/Web │ <─── │ Server  │ <─── │   API    │
└─────────┘      └─────────┘      └──────────┘
```

Node.js Implementation

Here's a complete implementation using Express.js:

Installation

```bash
npm install express axios cors dotenv multer form-data
```

(The `multer` and `form-data` packages are needed for the multi-modal endpoint below.)

Basic Setup

Create a .env file for your API key:
```text
ADDIS_AI_API_KEY=your_api_key_here
```
Create an app.js file:
```javascript
const express = require("express");
const axios = require("axios");
const cors = require("cors");
const dotenv = require("dotenv");
const multer = require("multer");
const FormData = require("form-data");
const fs = require("fs");

// Load environment variables
dotenv.config();

const app = express();
const port = process.env.PORT || 3000;
const upload = multer({ dest: "uploads/" });

// Middleware
app.use(cors());
app.use(express.json());

// Addis AI API config
const ADDIS_AI_API_KEY = process.env.ADDIS_AI_API_KEY;
const ADDIS_AI_BASE_URL = "https://api.addisassistant.com/api/v1";

// Rate limiting middleware (simple example)
const requestCounts = {};
const RATE_LIMIT = 100; // requests per hour
const RATE_WINDOW = 60 * 60 * 1000; // 1 hour in milliseconds

function rateLimiter(req, res, next) {
  const ip = req.ip;
  const now = Date.now();
  if (!requestCounts[ip]) {
    requestCounts[ip] = { count: 1, resetTime: now + RATE_WINDOW };
  } else if (requestCounts[ip].resetTime < now) {
    // Reset if window expired
    requestCounts[ip] = { count: 1, resetTime: now + RATE_WINDOW };
  } else if (requestCounts[ip].count >= RATE_LIMIT) {
    return res
      .status(429)
      .json({ error: "Rate limit exceeded. Try again later." });
  } else {
    requestCounts[ip].count++;
  }
  next();
}

// Chat generate endpoint
app.post("/api/chat", rateLimiter, async (req, res) => {
  try {
    const { prompt, target_language, conversation_history, generation_config } =
      req.body;

    // Validate required fields
    if (!prompt) {
      return res.status(400).json({ error: "Prompt is required" });
    }

    // Forward request to Addis AI
    const response = await axios.post(
      `${ADDIS_AI_BASE_URL}/chat_generate`,
      {
        prompt,
        target_language: target_language || "am",
        conversation_history: conversation_history || [],
        generation_config: generation_config || { temperature: 0.7 },
      },
      {
        headers: {
          "Content-Type": "application/json",
          "X-API-Key": ADDIS_AI_API_KEY,
        },
      },
    );

    res.json(response.data);
  } catch (error) {
    console.error(
      "Error calling Addis AI:",
      error.response?.data || error.message,
    );
    if (error.response) {
      // Forward Addis AI error status and message
      res.status(error.response.status).json({
        error: "Error from Addis AI API",
        details: error.response.data,
      });
    } else {
      res.status(500).json({ error: "Internal server error" });
    }
  }
});

// Text-to-speech endpoint
app.post("/api/tts", rateLimiter, async (req, res) => {
  try {
    const { text, language } = req.body;

    // Validate required fields
    if (!text) {
      return res.status(400).json({ error: "Text is required" });
    }

    // Forward request to Addis AI
    const response = await axios.post(
      `${ADDIS_AI_BASE_URL}/audio`,
      {
        text,
        language: language || "am",
      },
      {
        headers: {
          "Content-Type": "application/json",
          "X-API-Key": ADDIS_AI_API_KEY,
        },
      },
    );

    res.json(response.data);
  } catch (error) {
    console.error(
      "Error calling Addis AI TTS:",
      error.response?.data || error.message,
    );
    res.status(error.response?.status || 500).json({
      error: "Error from Addis AI API",
      details: error.response?.data || error.message,
    });
  }
});

// Multi-modal chat with image
app.post(
  "/api/chat-with-image",
  rateLimiter,
  upload.single("image"),
  async (req, res) => {
    try {
      const { prompt, target_language } = req.body;
      const imageFile = req.file;

      // Validate required fields
      if (!prompt) {
        return res.status(400).json({ error: "Prompt is required" });
      }
      if (!imageFile) {
        return res.status(400).json({ error: "Image file is required" });
      }

      // Create form data
      const formData = new FormData();
      formData.append("image1", fs.createReadStream(imageFile.path));
      formData.append(
        "request_data",
        JSON.stringify({
          prompt,
          target_language: target_language || "am",
          attachment_field_names: ["image1"],
        }),
      );

      // Forward request to Addis AI. Use the multipart headers generated by
      // form-data (including the boundary) rather than application/json.
      const response = await axios.post(
        `${ADDIS_AI_BASE_URL}/chat_generate`,
        formData,
        {
          headers: {
            ...formData.getHeaders(),
            "X-API-Key": ADDIS_AI_API_KEY,
          },
        },
      );

      // Clean up uploaded file
      fs.unlinkSync(imageFile.path);

      res.json(response.data);
    } catch (error) {
      console.error(
        "Error calling Addis AI with image:",
        error.response?.data || error.message,
      );
      // Clean up the uploaded file even when the request fails
      if (req.file) {
        fs.unlink(req.file.path, () => {});
      }
      res.status(error.response?.status || 500).json({
        error: "Error from Addis AI API",
        details: error.response?.data || error.message,
      });
    }
  },
);

// Start server
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
```

Streaming Implementation with Node.js

To implement streaming responses from Addis AI:
```javascript
const https = require("https");

app.post("/api/chat-stream", rateLimiter, (req, res) => {
  try {
    const { prompt, target_language, conversation_history } = req.body;

    // Validate required fields
    if (!prompt) {
      return res.status(400).json({ error: "Prompt is required" });
    }

    // Set up response headers for streaming (Server-Sent Events)
    res.setHeader("Content-Type", "text/event-stream");
    res.setHeader("Cache-Control", "no-cache");
    res.setHeader("Connection", "keep-alive");

    // Create request options
    const requestData = {
      prompt,
      target_language: target_language || "am",
      conversation_history: conversation_history || [],
      generation_config: {
        temperature: 0.7,
        stream: true,
      },
    };

    // Make a request to Addis AI
    const httpsRequest = https.request(
      {
        hostname: "api.addisassistant.com",
        path: "/api/v1/chat_generate",
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-API-Key": ADDIS_AI_API_KEY,
        },
      },
      (response) => {
        // Handle errors in the response
        if (response.statusCode !== 200) {
          res.write(
            `data: ${JSON.stringify({
              error: `API responded with status ${response.statusCode}`,
            })}\n\n`,
          );
          res.end();
          return;
        }
        // Forward chunks to the client
        response.on("data", (chunk) => {
          res.write(`data: ${chunk}\n\n`);
        });
        // End when Addis AI response ends
        response.on("end", () => {
          res.write("data: [DONE]\n\n");
          res.end();
        });
      },
    );

    // Handle request errors
    httpsRequest.on("error", (err) => {
      console.error("Error in Addis AI streaming request:", err);
      res.write(
        `data: ${JSON.stringify({ error: "Internal server error" })}\n\n`,
      );
      res.end();
    });

    // Send the request
    httpsRequest.write(JSON.stringify(requestData));
    httpsRequest.end();
  } catch (error) {
    console.error("Error setting up Addis AI stream:", error);
    res.status(500).json({ error: "Internal server error" });
  }
});
```

Python Implementation

Here's how to implement an Addis AI proxy server using FastAPI:

Installation

```bash
pip install fastapi uvicorn python-dotenv python-multipart requests
```

Basic Setup

Create a .env file:
```text
ADDIS_AI_API_KEY=your_api_key_here
```
Create a main.py file:
```python
import os
import json
import requests
from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException, Depends, File, UploadFile, Form
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from typing import Optional, List

# Load environment variables
load_dotenv()

app = FastAPI(title="Addis AI Proxy API")

# CORS configuration
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Restrict in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Addis AI API configuration
ADDIS_AI_API_KEY = os.getenv("ADDIS_AI_API_KEY")
ADDIS_AI_BASE_URL = "https://api.addisassistant.com/api/v1"

# Models
class Message(BaseModel):
    role: str
    content: str

class GenerationConfig(BaseModel):
    temperature: float = 0.7
    stream: bool = False

class ChatRequest(BaseModel):
    prompt: str
    target_language: str = "am"
    conversation_history: Optional[List[Message]] = None
    generation_config: Optional[GenerationConfig] = None

class TTSRequest(BaseModel):
    text: str
    language: str = "am"

# Dependency for API key validation
def verify_api_key():
    if not ADDIS_AI_API_KEY:
        raise HTTPException(status_code=500, detail="API key not configured on server")
    return ADDIS_AI_API_KEY

# Routes
@app.post("/api/chat")
async def chat_generate(request: ChatRequest, api_key: str = Depends(verify_api_key)):
    try:
        # Forward request to Addis AI (requests sets the JSON Content-Type)
        response = requests.post(
            f"{ADDIS_AI_BASE_URL}/chat_generate",
            json=request.dict(exclude_none=True),
            headers={"X-API-Key": api_key},
        )
        # Handle errors
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        status_code = e.response.status_code if e.response is not None else 500
        detail = e.response.json() if e.response is not None else str(e)
        raise HTTPException(status_code=status_code, detail=detail)

@app.post("/api/tts")
async def text_to_speech(request: TTSRequest, api_key: str = Depends(verify_api_key)):
    try:
        # Forward request to Addis AI
        response = requests.post(
            f"{ADDIS_AI_BASE_URL}/audio",
            json=request.dict(),
            headers={"X-API-Key": api_key},
        )
        # Handle errors
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        status_code = e.response.status_code if e.response is not None else 500
        detail = e.response.json() if e.response is not None else str(e)
        raise HTTPException(status_code=status_code, detail=detail)

@app.post("/api/chat-with-image")
async def chat_with_image(
    prompt: str = Form(...),
    target_language: str = Form("am"),
    image: UploadFile = File(...),
    api_key: str = Depends(verify_api_key),
):
    try:
        # Prepare multipart form data
        files = {"image1": (image.filename, await image.read(), image.content_type)}
        data = {
            "request_data": json.dumps({
                "prompt": prompt,
                "target_language": target_language,
                "attachment_field_names": ["image1"],
            })
        }
        # Forward request to Addis AI. Do not set Content-Type manually:
        # requests must generate the multipart boundary itself.
        response = requests.post(
            f"{ADDIS_AI_BASE_URL}/chat_generate",
            files=files,
            data=data,
            headers={"X-API-Key": api_key},
        )
        # Handle errors
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        status_code = e.response.status_code if e.response is not None else 500
        detail = e.response.json() if e.response is not None else str(e)
        raise HTTPException(status_code=status_code, detail=detail)

# Run the app
if __name__ == "__main__":
    import uvicorn
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
```

Streaming Implementation with Python

To implement streaming with FastAPI:
```python
from fastapi.responses import StreamingResponse

@app.post("/api/chat-stream")
async def chat_stream(request: ChatRequest, api_key: str = Depends(verify_api_key)):
    # Modify request to enable streaming
    request_data = request.dict(exclude_none=True)
    if not request_data.get("generation_config"):
        request_data["generation_config"] = {"temperature": 0.7, "stream": True}
    else:
        request_data["generation_config"]["stream"] = True

    def stream_response():
        try:
            # Set up streaming request to Addis AI
            with requests.post(
                f"{ADDIS_AI_BASE_URL}/chat_generate",
                json=request_data,
                headers={"X-API-Key": api_key},
                stream=True,
            ) as response:
                # Handle errors
                if response.status_code != 200:
                    yield f"data: {json.dumps({'error': f'API responded with status {response.status_code}'})}\n\n"
                    return
                # Stream each chunk back to the client
                for line in response.iter_lines():
                    if line:
                        yield f"data: {line.decode('utf-8')}\n\n"
                yield "data: [DONE]\n\n"
        except Exception as e:
            yield f"data: {json.dumps({'error': str(e)})}\n\n"

    # StreamingResponse forwards chunks as the generator yields them;
    # FastAPI runs the synchronous generator in a threadpool.
    return StreamingResponse(stream_response(), media_type="text/event-stream")
```

PHP Implementation

Here's how to implement a proxy server using PHP:

Basic Setup

Create an index.php file:
```php
<?php
// Addis AI API configuration
$addisAiApiKey = getenv('ADDIS_AI_API_KEY');
$addisAiBaseUrl = 'https://api.addisassistant.com/api/v1';

// Set headers for JSON responses
header('Content-Type: application/json');
header('Access-Control-Allow-Origin: *'); // Restrict in production
header('Access-Control-Allow-Methods: GET, POST, OPTIONS');
header('Access-Control-Allow-Headers: Content-Type, Authorization');

// Handle OPTIONS request for CORS preflight
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    exit(0);
}

// Verify API key is configured
if (!$addisAiApiKey) {
    http_response_code(500);
    echo json_encode(['error' => 'API key not configured on server']);
    exit;
}

// Parse request URI to determine endpoint.
// Check the longest prefix first so /api/chat-with-image
// is not matched by the /api/chat prefix.
$uri = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$endpoint = null;

if (strpos($uri, '/api/chat-with-image') === 0) {
    $endpoint = 'chat-with-image';
} elseif (strpos($uri, '/api/chat') === 0) {
    $endpoint = 'chat';
} elseif (strpos($uri, '/api/tts') === 0) {
    $endpoint = 'tts';
} else {
    http_response_code(404);
    echo json_encode(['error' => 'Endpoint not found']);
    exit;
}

// Get request data
$requestData = json_decode(file_get_contents('php://input'), true);

// Handle different endpoints
switch ($endpoint) {
    case 'chat':
        handleChatRequest($requestData, $addisAiApiKey, $addisAiBaseUrl);
        break;
    case 'tts':
        handleTtsRequest($requestData, $addisAiApiKey, $addisAiBaseUrl);
        break;
    case 'chat-with-image':
        handleChatWithImageRequest($_FILES, $_POST, $addisAiApiKey, $addisAiBaseUrl);
        break;
}

// Function to handle chat generation requests
function handleChatRequest($requestData, $apiKey, $baseUrl) {
    // Validate request
    if (!isset($requestData['prompt']) || empty($requestData['prompt'])) {
        http_response_code(400);
        echo json_encode(['error' => 'Prompt is required']);
        exit;
    }

    // Set default values if not provided
    if (!isset($requestData['target_language'])) {
        $requestData['target_language'] = 'am';
    }
    if (!isset($requestData['conversation_history'])) {
        $requestData['conversation_history'] = [];
    }
    if (!isset($requestData['generation_config'])) {
        $requestData['generation_config'] = ['temperature' => 0.7];
    }

    // Initialize cURL session
    $ch = curl_init($baseUrl . '/chat_generate');

    // Set cURL options
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($requestData));
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Content-Type: application/json',
        'X-API-Key: ' . $apiKey
    ]);

    // Execute cURL request
    $response = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);

    // Check for errors
    if (curl_errno($ch)) {
        http_response_code(500);
        echo json_encode(['error' => 'Error connecting to Addis AI: ' . curl_error($ch)]);
        curl_close($ch);
        exit;
    }

    // Forward Addis AI response
    http_response_code($httpCode);
    echo $response;

    // Close cURL session
    curl_close($ch);
}

// Function to handle text-to-speech requests
function handleTtsRequest($requestData, $apiKey, $baseUrl) {
    // Validate request
    if (!isset($requestData['text']) || empty($requestData['text'])) {
        http_response_code(400);
        echo json_encode(['error' => 'Text is required']);
        exit;
    }

    // Set default language if not provided
    if (!isset($requestData['language'])) {
        $requestData['language'] = 'am';
    }

    // Initialize cURL session
    $ch = curl_init($baseUrl . '/audio');

    // Set cURL options
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($requestData));
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Content-Type: application/json',
        'X-API-Key: ' . $apiKey
    ]);

    // Execute cURL request
    $response = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);

    // Check for errors
    if (curl_errno($ch)) {
        http_response_code(500);
        echo json_encode(['error' => 'Error connecting to Addis AI: ' . curl_error($ch)]);
        curl_close($ch);
        exit;
    }

    // Forward Addis AI response
    http_response_code($httpCode);
    echo $response;

    // Close cURL session
    curl_close($ch);
}

// Function to handle chat with image requests
function handleChatWithImageRequest($files, $post, $apiKey, $baseUrl) {
    // Validate request
    if (!isset($post['prompt']) || empty($post['prompt'])) {
        http_response_code(400);
        echo json_encode(['error' => 'Prompt is required']);
        exit;
    }
    if (!isset($files['image']) || $files['image']['error'] !== UPLOAD_ERR_OK) {
        http_response_code(400);
        echo json_encode(['error' => 'Image file is required']);
        exit;
    }

    // Set default language if not provided
    $targetLanguage = isset($post['target_language']) ? $post['target_language'] : 'am';

    // Prepare request data
    $requestData = json_encode([
        'prompt' => $post['prompt'],
        'target_language' => $targetLanguage,
        'attachment_field_names' => ['image1']
    ]);

    // Initialize cURL session
    $ch = curl_init($baseUrl . '/chat_generate');

    // Create a cURL file
    $cFile = curl_file_create(
        $files['image']['tmp_name'],
        $files['image']['type'],
        $files['image']['name']
    );

    // Prepare multipart form data
    $postFields = [
        'image1' => $cFile,
        'request_data' => $requestData
    ];

    // Set cURL options. Do not set a Content-Type header here:
    // cURL generates the multipart/form-data header and boundary itself.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $postFields);
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'X-API-Key: ' . $apiKey
    ]);

    // Execute cURL request
    $response = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);

    // Check for errors
    if (curl_errno($ch)) {
        http_response_code(500);
        echo json_encode(['error' => 'Error connecting to Addis AI: ' . curl_error($ch)]);
        curl_close($ch);
        exit;
    }

    // Forward Addis AI response
    http_response_code($httpCode);
    echo $response;

    // Close cURL session
    curl_close($ch);
}
?>
```

Configuration with Apache

Create a .htaccess file for URL rewriting:
```apache
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php [QSA,L]
```

Securing Your Server-side Implementation

API Key Security

  1. Environment Variables: Store API keys in environment variables, not in code
  2. Key Rotation: Regularly rotate your API keys
  3. Access Control: Limit which servers/processes can access the keys

Request Validation

Validate all user inputs to prevent:
  • Malicious prompts
  • Excessively long inputs
  • Invalid parameters
  • Input that could trigger unintended API behavior
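A minimal sketch of such validation for the chat proxy. The length limit and language list below are illustrative assumptions, not Addis AI requirements; adjust them to your application:

```javascript
// Validate a chat request body before forwarding to Addis AI.
// MAX_PROMPT_LENGTH and SUPPORTED_LANGUAGES are illustrative values.
const MAX_PROMPT_LENGTH = 4000;
const SUPPORTED_LANGUAGES = new Set(["am", "en"]); // assumed set

// Returns an error message string, or null when the body is valid.
function validateChatRequest(body) {
  if (typeof body.prompt !== "string" || body.prompt.trim() === "") {
    return "Prompt is required and must be a non-empty string";
  }
  if (body.prompt.length > MAX_PROMPT_LENGTH) {
    return `Prompt exceeds maximum length of ${MAX_PROMPT_LENGTH} characters`;
  }
  if (body.target_language && !SUPPORTED_LANGUAGES.has(body.target_language)) {
    return `Unsupported target_language: ${body.target_language}`;
  }
  if (body.conversation_history && !Array.isArray(body.conversation_history)) {
    return "conversation_history must be an array";
  }
  return null; // valid
}
```

In an Express route, call this before the axios forward and return a 400 response when it yields an error message.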

Authentication and Authorization

  1. Authenticate Users: Require users to authenticate to your server
  2. Authorization: Control which users can access which AI features
  3. Rate Limiting: Implement per-user rate limiting
  4. Usage Tracking: Track usage per user for billing or limits
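The per-user variant of rate limiting can be sketched like this (in-memory and fixed-window for brevity; in production you would key it to your authenticated user IDs and back it with a shared store such as Redis so counts survive restarts and apply across instances):

```javascript
// A minimal per-user fixed-window rate limiter (in-memory, illustrative).
class UserRateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.usage = new Map(); // userId -> { count, resetTime }
  }

  // Returns true if the request is allowed, false if over the limit.
  allow(userId, now = Date.now()) {
    const entry = this.usage.get(userId);
    if (!entry || entry.resetTime <= now) {
      // First request, or the window has expired: start a new window
      this.usage.set(userId, { count: 1, resetTime: now + this.windowMs });
      return true;
    }
    if (entry.count >= this.limit) return false;
    entry.count++;
    return true;
  }
}
```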

Error Handling

  1. Graceful Failures: Handle API errors without exposing sensitive information
  2. Logging: Log errors for debugging but sanitize sensitive data
  3. Fallbacks: Provide graceful degradation when the API is unavailable
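One way to sketch the "graceful failures" point: map upstream errors to safe, generic client responses instead of forwarding raw details. The error shape here follows axios conventions; the status choices are illustrative:

```javascript
// Map an upstream (axios-style) error to a safe client-facing response
// that does not leak API keys, internal URLs, or stack traces.
function toClientError(err) {
  if (err.response) {
    // Upstream responded with an error status: keep the status, hide details
    return {
      status: err.response.status,
      body: { error: "Upstream AI service error" },
    };
  }
  if (err.code === "ECONNABORTED" || err.code === "ETIMEDOUT") {
    return { status: 504, body: { error: "AI service timed out" } };
  }
  return { status: 500, body: { error: "Internal server error" } };
}
```

Log the full error server-side (with sensitive fields redacted) and send only the mapped result to the client.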

Performance Optimization

Caching

Implement caching for appropriate endpoints:
Caching suits deterministic endpoints like text-to-speech, where the same input reliably produces the same output; avoid caching chat responses that depend on conversation history or sampling temperature.
```javascript
// Example using Node.js with a simple in-memory cache
const cache = new Map();
const CACHE_TTL = 3600 * 1000; // 1 hour in milliseconds

app.post("/api/tts", rateLimiter, async (req, res) => {
  try {
    const { text, language } = req.body;

    // Create cache key from request parameters (apply the same default
    // language as the upstream request so keys stay consistent)
    const cacheKey = `tts:${language || "am"}:${text}`;

    // Check cache
    if (cache.has(cacheKey)) {
      const cachedData = cache.get(cacheKey);
      if (cachedData.expiry > Date.now()) {
        return res.json(cachedData.data);
      }
      // Remove expired cache entry
      cache.delete(cacheKey);
    }

    // Forward request to Addis AI
    const response = await axios.post(
      `${ADDIS_AI_BASE_URL}/audio`,
      { text, language: language || "am" },
      {
        headers: {
          "Content-Type": "application/json",
          "X-API-Key": ADDIS_AI_API_KEY,
        },
      },
    );

    // Cache the response
    cache.set(cacheKey, {
      data: response.data,
      expiry: Date.now() + CACHE_TTL,
    });

    res.json(response.data);
  } catch (error) {
    // Error handling...
  }
});
```

Connection Pooling

For high-volume applications, implement connection pooling to the Addis AI API:
```javascript
// Example using the agentkeepalive package with Node.js
const axios = require("axios");
const { HttpsAgent } = require("agentkeepalive");

const keepaliveAgent = new HttpsAgent({
  maxSockets: 100,
  maxFreeSockets: 10,
  timeout: 60000,
  freeSocketTimeout: 30000,
});

// Use the agent with axios so connections to the API are reused
const axiosInstance = axios.create({
  httpsAgent: keepaliveAgent,
});
```

Deployment Considerations

Scalability

To scale your server-side integration:
  1. Horizontal Scaling: Deploy multiple instances of your proxy API
  2. Load Balancing: Distribute requests across instances
  3. Asynchronous Processing: Use task queues for non-real-time requests

Monitoring and Logging

Implement monitoring for:
  1. API call volumes
  2. Response times
  3. Error rates
  4. Usage patterns
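As a starting point, the monitoring points above can be aggregated in-process before you wire up a full metrics system (this sketch is illustrative; in production you would export counters to something like Prometheus or your platform's monitoring service):

```javascript
// A minimal in-memory metrics collector for call volume,
// average latency, and error rate.
class Metrics {
  constructor() {
    this.requests = 0;
    this.errors = 0;
    this.totalLatencyMs = 0;
  }

  // Record one completed proxy request
  record(latencyMs, isError) {
    this.requests++;
    this.totalLatencyMs += latencyMs;
    if (isError) this.errors++;
  }

  // Aggregate view, suitable for a /metrics or health endpoint
  summary() {
    return {
      requests: this.requests,
      errorRate: this.requests ? this.errors / this.requests : 0,
      avgLatencyMs: this.requests ? this.totalLatencyMs / this.requests : 0,
    };
  }
}
```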

Containerization

Docker example for your Node.js proxy:
```dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
ENV PORT=3000
EXPOSE 3000
CMD ["node", "app.js"]
```

Best Practices Summary

  1. Never expose your API key: Keep it on the server side
  2. Validate all inputs: Before forwarding to Addis AI
  3. Implement authentication: Protect your proxy endpoints
  4. Rate limit requests: Prevent abuse and control costs
  5. Add caching: Improve performance and reduce API calls
  6. Handle errors gracefully: Provide meaningful responses to clients
  7. Monitor usage: Track API calls for billing and optimization
  8. Implement security headers: Protect your proxy API endpoints
  9. Use HTTPS: Encrypt all communications
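For point 8, a sketch of common security headers for the proxy's responses. In an Express app these are typically set via the `helmet` package; the header values below are widely used defaults, not Addis AI requirements:

```javascript
// Illustrative security headers for proxy responses.
function securityHeaders() {
  return {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "no-referrer",
    "Content-Security-Policy": "default-src 'self'",
  };
}
```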

Next Steps

Now that you've implemented a server-side integration with Addis AI, consider:
  1. Explore Streaming Implementation for real-time responses
  2. Implement Multi-modal Input through your proxy
  3. Add analytics to track usage patterns
  4. Develop business-specific features around Addis AI capabilities