Error Codes and Handling

This page provides a comprehensive reference for handling error responses from the Addis AI API.

Error Response Format

When an error occurs, the API returns a JSON response with the following structure:
```json
{
  "status": "error",
  "error": {
    "code": "ERROR_CODE",
    "message": "A user-friendly error message",
    "target": "Optional field indicating the target (e.g., language)"
  }
}
```

Common Error Codes

Authentication Errors

| Error Code | HTTP Status | Description | Troubleshooting |
| --- | --- | --- | --- |
| UNAUTHORIZED | 401 | Missing or invalid API key | Check that your X-API-Key header includes a valid API key |
| FORBIDDEN | 403 | API key is valid but lacks permission | Verify that your API key has the necessary permissions |
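
For example, a minimal sketch that distinguishes these two cases from the HTTP status before doing anything else (the promptForNewApiKey and reportPermissionIssue helpers are hypothetical placeholders):

```javascript
const response = await fetch("https://api.addisassistant.com/api/v1/chat_generate", {
  method: "POST",
  headers: { "X-API-Key": apiKey, "Content-Type": "application/json" },
  body: JSON.stringify(params),
});

if (response.status === 401) {
  // UNAUTHORIZED: the key is missing or not recognized
  promptForNewApiKey(); // hypothetical helper
} else if (response.status === 403) {
  // FORBIDDEN: the key is valid but lacks permission for this request
  reportPermissionIssue(); // hypothetical helper
}
```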

Input Validation Errors

| Error Code | HTTP Status | Description | Troubleshooting |
| --- | --- | --- | --- |
| INVALID_INPUT | 400 | Missing or invalid request parameters | Check that all required fields are provided and valid |
| INVALID_JSON | 400 | Malformed JSON in request body | Verify that your JSON is properly formatted |
| UNSUPPORTED_LANGUAGE | 400 | The requested language is not supported | Use only supported languages (am, om) |
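
Many of these errors can be caught before a request is ever sent. A minimal client-side validation sketch, assuming the chat_generate request shape used elsewhere on this page (prompt plus target_language):

```javascript
const SUPPORTED_LANGUAGES = ["am", "om"];

function buildChatRequestBody(params) {
  // Guards against INVALID_INPUT: required fields must be present
  if (!params.prompt || typeof params.prompt !== "string") {
    throw new Error("prompt is required and must be a string");
  }
  // Guards against UNSUPPORTED_LANGUAGE: only am and om are accepted
  if (!SUPPORTED_LANGUAGES.includes(params.target_language)) {
    throw new Error(`Unsupported target_language: ${params.target_language}`);
  }
  // Serializing a plain object avoids INVALID_JSON from hand-built strings
  return JSON.stringify(params);
}
```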

File and Attachment Errors

| Error Code | HTTP Status | Description | Troubleshooting |
| --- | --- | --- | --- |
| ATTACHMENT_FAILED | 400 | Attachment upload failed | Check file format, size, and ensure attachment_field_names matches the provided files |
| TRANSCRIPTION_FAILED | 400 | Audio transcription failed | Verify that the audio file is in a supported format and contains clear speech |

TTS Errors

| Error Code | HTTP Status | Description | Troubleshooting |
| --- | --- | --- | --- |
| TTS_FAILED | 500 | Text-to-speech conversion failed | Check that the text is in the specified language and not too long |

HTTP Method Errors

| Error Code | HTTP Status | Description | Troubleshooting |
| --- | --- | --- | --- |
| METHOD_NOT_ALLOWED | 405 | HTTP method not allowed | Use the correct HTTP method (POST) for all API endpoints |

Server Errors

| Error Code | HTTP Status | Description | Troubleshooting |
| --- | --- | --- | --- |
| INTERNAL_ERROR | 500 | Unexpected server error | Retry the request; if the error persists, contact support |
| SERVICE_UNAVAILABLE | 503 | Service temporarily unavailable | Retry the request after a short delay with exponential backoff |

Error Handling Best Practices

1. Check for Error Status

Always check for the "status": "error" field in the response to detect error conditions:
```javascript
const response = await fetch(
  "https://api.addisassistant.com/api/v1/chat_generate",
  {
    headers: {
      "X-API-Key": apiKey,
      "Content-Type": "application/json",
    },
    // other request options
  },
);

const data = await response.json();

if (data.status === "error") {
  // Handle error based on error.code
  console.error(`Error ${data.error.code}: ${data.error.message}`);
  // Implement appropriate error handling
} else {
  // Process successful response
}
```

2. Implement Code-based Error Handling

Use the error.code field for programmatic error handling rather than relying on the error message:
```javascript
if (data.status === "error") {
  switch (data.error.code) {
    case "INVALID_INPUT":
      // Handle input validation errors
      break;
    case "UNSUPPORTED_LANGUAGE":
      // Handle language support issues
      break;
    case "UNAUTHORIZED":
      // Handle authentication issues
      refreshAuthToken(); // example function
      break;
    case "TTS_FAILED":
      // Handle text-to-speech failures
      break;
    case "INTERNAL_ERROR":
    case "SERVICE_UNAVAILABLE":
      // Implement retry logic with backoff
      retryWithBackoff(request); // example function
      break;
    default:
      // Generic error handling
      displayErrorToUser(data.error.message);
  }
}
```

3. Implement Retry Logic with Exponential Backoff

For transient errors like INTERNAL_ERROR or SERVICE_UNAVAILABLE, implement retry logic with exponential backoff:
```javascript
async function retryWithBackoff(requestFn, maxRetries = 3) {
  let attempts = 0;
  while (attempts < maxRetries) {
    try {
      return await requestFn();
    } catch (error) {
      attempts++;
      if (attempts >= maxRetries) {
        throw error; // Give up after max retries
      }
      // Calculate delay with exponential backoff (e.g., 2s, 4s, 8s)
      const delay = Math.pow(2, attempts) * 1000;
      console.log(`Retry attempt ${attempts} after ${delay}ms`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage example
try {
  const result = await retryWithBackoff(() => callAddisAI(params));
  processResult(result);
} catch (error) {
  handleFinalError(error);
}
```

4. Gracefully Degrade Functionality

Design your application to gracefully degrade when API errors occur:
```javascript
async function getAIResponse(prompt) {
  try {
    // Try AI-generated response first
    const aiResponse = await callAddisAI(prompt);
    return aiResponse;
  } catch (error) {
    console.error("AI response failed:", error);
    // Fall back to pre-defined responses for common queries
    if (isCommonQuery(prompt)) {
      return getPreDefinedResponse(prompt);
    }
    // Or provide a generic fallback message
    return {
      response_text:
        "I'm sorry, I'm having trouble processing your request right now. Please try again later.",
    };
  }
}
```

5. Log Errors for Debugging

Log errors with sufficient context for debugging while being careful with sensitive information:
```javascript
function logAPIError(error, request) {
  // Strip out sensitive information like API keys
  // (copy the headers object too, so the original request is not mutated)
  const sanitizedRequest = { ...request, headers: { ...request.headers } };
  delete sanitizedRequest.headers["X-API-Key"];

  console.error("Addis AI API Error:", {
    code: error.code,
    message: error.message,
    timestamp: new Date().toISOString(),
    endpoint: request.url,
    method: request.method,
    // Don't log the full request body, which might contain sensitive user data
    requestExcerpt: truncate(JSON.stringify(sanitizedRequest), 200),
  });

  // You might send these logs to your error monitoring service
  // errorMonitoringService.captureException(error);
}
```

HTTP Status Codes

In addition to the JSON error format, the API also uses standard HTTP status codes:

| HTTP Status | Category | Description |
| --- | --- | --- |
| 200 | Success | The request was successful |
| 400 | Client Error | Bad request (missing or invalid parameters) |
| 401 | Client Error | Unauthorized (missing or invalid API key) |
| 403 | Client Error | Forbidden (insufficient permissions) |
| 404 | Client Error | Resource not found |
| 405 | Client Error | Method not allowed |
| 429 | Client Error | Too many requests (rate limit exceeded) |
| 500 | Server Error | Internal server error |
| 503 | Server Error | Service unavailable |
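
A common pattern is to branch on the status code before parsing the body, retrying only the transient categories. A minimal sketch (RetryableError is a hypothetical error class, not part of the API):

```javascript
async function handleResponse(response) {
  if (response.ok) {
    return response.json(); // 2xx: parse and return the successful payload
  }
  if (response.status === 429 || response.status >= 500) {
    // Rate limits and server errors are usually transient: retry with backoff
    throw new RetryableError(response.status); // hypothetical error class
  }
  // Other 4xx errors are caller mistakes: surface the JSON error body if present
  const body = await response.json().catch(() => null);
  throw new Error(body?.error?.message ?? `Request failed with HTTP ${response.status}`);
}
```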

Special Cases

Rate Limiting

When you exceed rate limits, you'll receive a 429 Too Many Requests response. The response includes a Retry-After header indicating the number of seconds to wait before retrying. Example handling:
```javascript
async function callAddisAI(params) {
  const response = await fetch(
    "https://api.addisassistant.com/api/v1/chat_generate",
    {
      headers: {
        "X-API-Key": apiKey,
        "Content-Type": "application/json",
      },
      method: "POST",
      body: JSON.stringify(params),
    },
  );

  if (response.status === 429) {
    const retryAfter = Number(response.headers.get("Retry-After")) || 60; // Default to 60 seconds
    console.warn(`Rate limit exceeded. Retry after ${retryAfter} seconds.`);
    // Implement waiting logic or queue the request for later
    return new Promise((resolve) => {
      setTimeout(() => {
        resolve(callAddisAI(params));
      }, retryAfter * 1000);
    });
  }

  // Handle other response cases
}
```

Streaming Errors

When using streaming endpoints with stream: true, errors may occur mid-stream. In this case:
  1. For SSE streams, an error event will be sent with the error details
  2. For HTTP streams, the connection may be closed prematurely (a fetch-based sketch for this case follows the SSE example below)
Example handling for streaming errors:
// Using EventSource for SSE
const eventSource = new EventSource(
`${apiUrl}?x_api_key=${encodeURIComponent(apiKey)}`,
);
// Handle messages
eventSource.onmessage = (event) => {
try {
const data = JSON.parse(event.data);
// Check for error in the data
if (data.error) {
console.error("Stream error:", data.error);
eventSource.close();
// Handle the error appropriately
return;
}
// Process normal data
processChunk(data);
} catch (err) {
console.error("Error processing stream chunk:", err);
}
};
// Handle connection errors
eventSource.onerror = (error) => {
console.error("EventSource error:", error);
eventSource.close();
// Implement fallback or retry logic
};
javascript
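
For HTTP streaming without SSE, the failure usually shows up as a connection that ends before the response is complete. A minimal fetch-based sketch, assuming newline-delimited JSON chunks and reusing the processChunk helper from the SSE example (the actual wire format may differ):

```javascript
async function readHttpStream(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break; // Stream ended normally
      for (const line of decoder.decode(value, { stream: true }).split("\n")) {
        if (!line.trim()) continue;
        const chunk = JSON.parse(line); // assumes one JSON object per line
        if (chunk.error) {
          throw new Error(`${chunk.error.code}: ${chunk.error.message}`);
        }
        processChunk(chunk);
      }
    }
  } catch (err) {
    // A network drop or premature close surfaces here; retry or fall back
    console.error("HTTP stream interrupted:", err);
    throw err;
  } finally {
    reader.releaseLock();
  }
}
```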

Common Error Scenarios and Solutions

| Scenario | Possible Errors | Solution |
| --- | --- | --- |
| API key issues | UNAUTHORIZED, FORBIDDEN | Verify API key is correct and has necessary permissions |
| Malformed request | INVALID_INPUT, INVALID_JSON | Validate request structure against documentation |
| Unsupported language | UNSUPPORTED_LANGUAGE | Use only supported languages (am, om) |
| File upload issues | ATTACHMENT_FAILED | Check file size, format, and ensure attachment_field_names is correct |
| Server overload | SERVICE_UNAVAILABLE | Implement retry with exponential backoff |
| Audio processing issues | TRANSCRIPTION_FAILED | Verify audio quality and format |
| Text-to-speech failures | TTS_FAILED | Check text length and language correctness |

For any persistent issues not addressed by these solutions, please contact support with detailed information about the error and the request that caused it.

Python Error Handling Examples

Basic Error Handling

```python
import requests


def call_addis_ai_api(prompt, target_language):
    """
    Call the Addis AI API with basic error handling.
    """
    api_key = "YOUR_API_KEY"
    url = "https://api.addisassistant.com/api/v1/chat_generate"

    headers = {
        "X-API-Key": api_key,
        "Content-Type": "application/json"
    }
    data = {
        "prompt": prompt,
        "target_language": target_language
    }

    try:
        response = requests.post(url, headers=headers, json=data)

        # Check if response is JSON
        try:
            response_data = response.json()
        except ValueError:
            print(f"Non-JSON response: {response.text}")
            return None

        # Check for error status
        if response_data.get("status") == "error":
            error = response_data.get("error", {})
            error_code = error.get("code", "UNKNOWN_ERROR")
            error_message = error.get("message", "Unknown error occurred")
            print(f"API Error: {error_code} - {error_message}")

            # Handle specific error types
            if error_code == "UNAUTHORIZED":
                print("Authentication failed. Please check your API key.")
            elif error_code == "UNSUPPORTED_LANGUAGE":
                print(f"Language not supported. Use 'am' or 'om' instead of '{error.get('target')}'")

            return None

        # If we reach here, the response was successful
        return response_data

    except requests.exceptions.RequestException as e:
        print(f"Request failed: {str(e)}")
        return None


# Example usage
response = call_addis_ai_api("Hello, how are you?", "am")
if response:
    print(response["response_text"])
```

Retry with Exponential Backoff

```python
import requests
import time
import random


def call_api_with_backoff(prompt, target_language, max_retries=5):
    """
    Call the Addis AI API with exponential backoff retry logic.
    """
    api_key = "YOUR_API_KEY"
    url = "https://api.addisassistant.com/api/v1/chat_generate"

    headers = {
        "X-API-Key": api_key,
        "Content-Type": "application/json"
    }
    data = {
        "prompt": prompt,
        "target_language": target_language
    }

    # Initialize retry counter
    retry_count = 0

    while retry_count < max_retries:
        try:
            response = requests.post(url, headers=headers, json=data)

            # Check for rate limit (429)
            if response.status_code == 429:
                retry_count += 1
                # Get retry-after time from the header, or default to 1 second
                retry_after = int(response.headers.get("Retry-After", "1"))
                # Add jitter to prevent thundering herd
                jitter = random.uniform(0, 0.1) * retry_after
                # Calculate backoff with exponential increase
                backoff = retry_after * (2 ** (retry_count - 1)) + jitter
                print(f"Rate limited. Retrying in {backoff:.2f} seconds (attempt {retry_count}/{max_retries})")
                time.sleep(backoff)
                continue

            # Parse response
            try:
                response_data = response.json()
            except ValueError:
                print(f"Invalid JSON response: {response.text}")
                return None

            # Check for error status
            if response.status_code >= 400 or response_data.get("status") == "error":
                error = response_data.get("error", {})
                error_code = error.get("code", "UNKNOWN_ERROR")
                error_message = error.get("message", "Unknown error occurred")
                print(f"API Error: {error_code} - {error_message}")

                # Determine if we should retry based on error type
                if error_code in ["INTERNAL_ERROR", "SERVICE_UNAVAILABLE"]:
                    retry_count += 1
                    backoff = 2 ** retry_count
                    print(f"Retrying in {backoff} seconds (attempt {retry_count}/{max_retries})")
                    time.sleep(backoff)
                    continue
                else:
                    # Non-retryable error
                    return None

            # Success case
            return response_data

        except requests.exceptions.RequestException as e:
            retry_count += 1
            backoff = 2 ** retry_count
            print(f"Request error: {str(e)}")
            print(f"Retrying in {backoff} seconds (attempt {retry_count}/{max_retries})")
            time.sleep(backoff)

    # If we've exhausted retries
    print(f"Failed after {max_retries} attempts")
    return None


# Example usage
response = call_api_with_backoff("Tell me about Ethiopia", "am")
if response:
    print(response["response_text"])
```

Error Handling Class

```python
import requests
import time
import random
import logging


class AddisAIClient:
    """
    A client for the Addis AI API with robust error handling.
    """

    def __init__(self, api_key, base_url="https://api.addisassistant.com/api/v1"):
        self.api_key = api_key
        self.base_url = base_url
        self.logger = logging.getLogger("AddisAIClient")

        # Configure default headers
        self.headers = {
            "X-API-Key": api_key,
            "Content-Type": "application/json"
        }

    def _handle_response(self, response):
        """Process API response and handle errors."""
        try:
            # Try to parse as JSON
            response_data = response.json()

            # Check for error status
            if response_data.get("status") == "error":
                error = response_data.get("error", {})
                error_code = error.get("code", "UNKNOWN_ERROR")
                error_message = error.get("message", "Unknown error occurred")
                self.logger.error(f"API Error: {error_code} - {error_message}")

                # Raise custom exception
                raise AddisAIError(error_code, error_message, response.status_code)

            return response_data
        except ValueError:
            # Response was not valid JSON
            self.logger.error(f"Invalid JSON response: {response.text}")
            raise AddisAIError("INVALID_RESPONSE", "Received non-JSON response", response.status_code)

    def chat_generate(self, prompt, target_language, conversation_history=None,
                      generation_config=None, max_retries=3):
        """
        Call the chat_generate endpoint with retry logic.
        """
        url = f"{self.base_url}/chat_generate"

        # Prepare request data
        data = {
            "prompt": prompt,
            "target_language": target_language
        }
        if conversation_history:
            data["conversation_history"] = conversation_history
        if generation_config:
            data["generation_config"] = generation_config

        # Initialize retry counter
        retry_count = 0

        while retry_count < max_retries:
            try:
                response = requests.post(url, headers=self.headers, json=data)

                # Handle rate limiting
                if response.status_code == 429:
                    retry_count += 1
                    # Get retry-after time
                    retry_after = int(response.headers.get("Retry-After", "1"))
                    # Add jitter to prevent thundering herd
                    jitter = random.uniform(0, 0.1) * retry_after
                    # Calculate backoff with exponential increase
                    backoff = retry_after * (2 ** (retry_count - 1)) + jitter
                    self.logger.warning(
                        f"Rate limited. Retrying in {backoff:.2f} seconds "
                        f"(attempt {retry_count}/{max_retries})"
                    )
                    time.sleep(backoff)
                    continue

                # For server errors, retry with exponential backoff
                if response.status_code >= 500:
                    retry_count += 1
                    backoff = 2 ** retry_count
                    self.logger.warning(
                        f"Server error {response.status_code}. Retrying in {backoff} seconds "
                        f"(attempt {retry_count}/{max_retries})"
                    )
                    time.sleep(backoff)
                    continue

                # For 4xx errors, don't retry (except 429)
                if response.status_code >= 400:
                    return self._handle_response(response)

                # Success case
                return self._handle_response(response)

            except requests.exceptions.RequestException as e:
                retry_count += 1
                backoff = 2 ** retry_count
                self.logger.error(f"Request error: {str(e)}")
                if retry_count < max_retries:
                    self.logger.warning(
                        f"Retrying in {backoff} seconds (attempt {retry_count}/{max_retries})"
                    )
                    time.sleep(backoff)
                else:
                    raise AddisAIError(
                        "CONNECTION_ERROR",
                        f"Failed to connect after {max_retries} attempts: {str(e)}",
                        0
                    )

        # If we've exhausted retries
        raise AddisAIError(
            "MAX_RETRIES_EXCEEDED",
            f"Failed after {max_retries} attempts",
            0
        )


class AddisAIError(Exception):
    """Custom exception for Addis AI API errors."""

    def __init__(self, code, message, status_code):
        self.code = code
        self.message = message
        self.status_code = status_code
        super().__init__(f"{code}: {message} (HTTP {status_code})")


# Example usage
try:
    # Set up logging
    logging.basicConfig(level=logging.INFO)

    # Initialize client
    client = AddisAIClient(api_key="YOUR_API_KEY")

    # Make request
    response = client.chat_generate(
        prompt="What is the capital of Ethiopia?",
        target_language="am",
        generation_config={"temperature": 0.7}
    )
    print(response["response_text"])

except AddisAIError as e:
    print(f"API Error: {e.code} - {e.message}")

    # Handle specific error types
    if e.code == "UNAUTHORIZED":
        print("Please check your API key")
    elif e.code == "RATE_LIMIT_EXCEEDED":
        print("Please reduce your request frequency")

except Exception as e:
    print(f"Unexpected error: {str(e)}")
```