FAQ and Troubleshooting

This page provides answers to frequently asked questions and solutions to common issues when working with the Addis AI API.

Authentication

Q: Why am I getting "401 Unauthorized" errors?

A: This typically happens when:
  • Your API key is invalid or expired
  • You're not including the API key correctly in the X-API-Key header
Make sure you're including your API key correctly:
```javascript
const headers = {
  "X-API-Key": "YOUR_API_KEY",
  "Content-Type": "application/json",
};
```
In Python:
```python
headers = {
    "X-API-Key": api_key,
    "Content-Type": "application/json"
}
```

Q: How do I get an API key?

A: API keys can be obtained through the Addis AI developer portal. Visit api.addisassistant.com/signup to create an account and request access.

Q: Is there a way to test the API without a production key?

A: Yes, we provide sandbox API keys for testing purposes. These have limited quotas but let you explore the API capabilities before committing to a paid plan. Contact our support team to request a sandbox key.

API Usage

Q: What languages does Addis AI support?

A: Currently, Addis AI supports two languages:
  • Amharic (am)
  • Afan Oromo (om)
Additional Ethiopian languages are in development.

Q: What is the maximum length of text I can send?

A: The prompt field is limited to 32,000 tokens (approximately 24,000 words). For text-to-speech requests, the text field is limited to 2,000 characters.
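If your text-to-speech input exceeds the 2,000-character limit, one workaround is to split it into chunks and make multiple requests. A rough sketch (the splitting strategy is ours; the API itself does not provide chunking):

```python
def chunk_text(text, limit=2000):
    """Split text into chunks no longer than `limit` characters,
    preferring to break at whitespace so words stay intact."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind(" ", 0, limit)
        if cut <= 0:
            cut = limit  # no whitespace found; hard split
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks
```

Each chunk can then be sent as a separate TTS request and the returned audio concatenated or played in sequence.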

Q: How do I check how many tokens I've used?

A: Every response includes a usage_metadata field with token usage information:
"usage_metadata": {
"prompt_token_count": 12,
"candidates_token_count": 8,
"total_token_count": 20
}
json
You can track these counts to monitor your usage against your quota.
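Aggregating these counts across requests gives you a client-side running total. A minimal sketch (the helper name and accumulation approach are ours, not part of the API):

```python
def total_usage(responses):
    """Sum the token counts reported in a list of API response dicts."""
    totals = {"prompt_token_count": 0, "candidates_token_count": 0, "total_token_count": 0}
    for resp in responses:
        usage = resp.get("usage_metadata", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals
```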

Q: Can I use the API for commercial applications?

A: Yes, commercial use is permitted under the appropriate plan tier. Standard and Enterprise tiers include commercial usage rights. Check our pricing page or contact sales for specific licensing terms.

Multi-modal Features

Q: Why isn't my image being processed correctly?

A: Common issues with multi-modal requests include:
  1. Incorrect format: Check that your image is in a supported format (JPEG, PNG, GIF, WebP)
  2. File size too large: Images must be under 10MB
  3. Missing attachment fields: Ensure your attachment_field_names array correctly lists all attachment field names
  4. Incorrect Content-Type: Multi-modal requests must use multipart/form-data
Example of a correct multi-modal request in Python:
```python
import requests
import json

api_key = "YOUR_API_KEY"
url = "https://api.addisassistant.com/api/v1/chat_generate"

# Prepare the JSON part of the request
request_data = {
    "prompt": "What's in this image?",
    "target_language": "am",
    "attachment_field_names": ["image1"]
}

# Create the multipart request; the context manager ensures
# the file handle is closed after the request is sent
with open("path/to/image.jpg", "rb") as image_file:
    files = {
        "request_data": (None, json.dumps(request_data), "application/json"),
        "image1": ("image.jpg", image_file, "image/jpeg")
    }
    headers = {
        "X-API-Key": api_key
    }
    response = requests.post(url, headers=headers, files=files)

print(response.json())
```

Q: How do I refer to previously uploaded images in a conversation?

A: To refer to previously uploaded images:
  1. Store the fileUri and mimeType from the uploaded_attachments field in the response
  2. Include these in the parts array of your conversation history in subsequent requests
```python
# First request with image upload
response = api.chat_with_image("What's in this image?", "image.jpg", "am")

# Get the file URI information
file_uri = response["uploaded_attachments"][0]["fileUri"]
mime_type = response["uploaded_attachments"][0]["mimeType"]

# Create conversation history referring to the uploaded image
conversation_history = [
    {
        "role": "user",
        "parts": [
            {"fileData": {"fileUri": file_uri, "mimeType": mime_type}},
            {"text": "What's in this image?"}
        ]
    },
    {
        "role": "assistant",
        "parts": [{"text": response["response_text"]}]
    }
]

# Follow-up question about the same image
response2 = api.chat_generate(
    prompt="What colors are in this image?",
    target_language="am",
    conversation_history=conversation_history
)
```

Streaming

Q: Why is my streaming request hanging or timing out?

A: Streaming can fail for several reasons:
  1. Network issues: Ensure you have a stable connection
  2. Proxy/firewall restrictions: Some firewalls block streaming connections
  3. Beta feature limitations: Chat streaming is in BETA and may not be stable
If chat streaming misbehaves, fall back to non-streaming mode. Audio streaming is stable and recommended for production use.

Q: How do I properly handle streaming responses?

A: For SSE-based chat streaming:
```javascript
async function streamChat(prompt, targetLanguage) {
  const response = await fetch(
    "https://api.addisassistant.com/api/v1/chat_generate",
    {
      method: "POST",
      headers: {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        prompt,
        target_language: targetLanguage,
        generation_config: { stream: true },
      }),
    },
  );

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let result = "";

  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = decoder.decode(value);
      const lines = chunk.split("\n").filter((line) => line.trim() !== "");

      for (const line of lines) {
        if (line.startsWith("data: ")) {
          try {
            const data = JSON.parse(line.substring(6));
            // Process the chunk
            result += data.response_text || "";

            // Check if this is the last chunk
            if (data.finish_reason || data.is_last_chunk) {
              console.log("Stream complete:", result);
              return result;
            }
          } catch (e) {
            console.error("Error parsing SSE data:", e);
          }
        }
      }
    }
  } catch (error) {
    console.error("Stream error:", error);
    // Fall back to non-streaming if needed
  }

  return result;
}
```
For Python:
```python
import requests
import json

def stream_chat(prompt, target_language):
    url = "https://api.addisassistant.com/api/v1/chat_generate"
    headers = {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json"
    }
    data = {
        "prompt": prompt,
        "target_language": target_language,
        "generation_config": {
            "stream": True
        }
    }

    response = requests.post(url, headers=headers, json=data, stream=True)
    if response.status_code != 200:
        print(f"Error: {response.status_code}")
        return None

    result = ""
    for line in response.iter_lines():
        if not line:
            continue
        line_text = line.decode('utf-8')

        # Check for SSE format (data: prefix)
        if line_text.startswith('data: '):
            try:
                data = json.loads(line_text[6:])  # Remove the 'data: ' prefix
                if "response_text" in data:
                    result += data["response_text"]
                    print(data["response_text"], end="", flush=True)

                # Check if this is the last chunk
                if "finish_reason" in data or "is_last_chunk" in data:
                    print("\nStream complete")
                    break
            except json.JSONDecodeError:
                print(f"Error parsing JSON: {line_text}")

    return result
```

Rate Limits and Quotas

Q: I'm hitting rate limits. What should I do?

A: If you're hitting rate limits, try these solutions:
  1. Implement backoff and retry: Use the Retry-After header to determine wait time
  2. Reduce request frequency: Implement client-side rate limiting
  3. Batch requests: Combine multiple small requests into fewer larger ones
  4. Upgrade your plan: Contact sales to discuss higher rate limits
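The wait time for points 1 and 2 can be computed from the Retry-After header when the server supplies one, falling back to exponential backoff with jitter. A sketch of the delay calculation (function name and constants are illustrative):

```python
import random

def retry_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before retry number `attempt` (starting at 0).
    A server-supplied Retry-After value always takes precedence."""
    if retry_after is not None:
        return float(retry_after)
    delay = min(cap, base * (2 ** attempt))
    # Add up to 10% jitter so clients don't retry in lockstep
    return delay + random.uniform(0, delay * 0.1)
```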

Q: How can I estimate token counts before making requests?

A: You can roughly estimate token counts using these guidelines:
  • For English and similar languages: 1 token ≈ 4 characters or ¾ of a word
  • For Amharic and Afan Oromo: 1 token ≈ 3 characters (non-Latin scripts use more tokens)
This Python function provides a rough estimation:
```python
def estimate_tokens(text):
    # Rough heuristic: 1 token ≈ 3 chars for Amharic/Afan Oromo
    return len(text) // 3
```

Q: My token usage seems high. How can I optimize it?

A: To reduce token usage:
  1. Be concise: Keep prompts short and specific
  2. Prune conversation history: Only keep relevant context
  3. Use streaming: Process partial responses without waiting for completion
  4. Implement caching: Store common responses locally
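For point 4, a local cache keyed on the request parameters avoids re-sending identical prompts. A minimal in-memory sketch (the key scheme and helper names are ours, not part of the API):

```python
import hashlib
import json

_cache = {}

def cache_key(prompt, target_language):
    """Stable key derived from the request parameters."""
    payload = json.dumps({"prompt": prompt, "lang": target_language}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def cached_chat(call_api, prompt, target_language):
    """Call the API only on a cache miss; `call_api` is your own request function."""
    key = cache_key(prompt, target_language)
    if key not in _cache:
        _cache[key] = call_api(prompt, target_language)
    return _cache[key]
```

In production you would bound the cache size and expire entries, but the lookup pattern stays the same.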

Error Handling

Q: How should I handle API errors in production?

A: For robust error handling:
  1. Check HTTP status codes: Handle 4xx and 5xx appropriately
  2. Parse error responses: Extract error codes and messages
  3. Implement retry logic: Use exponential backoff for transient errors
  4. Provide fallbacks: Have graceful degradation paths
Example production-level error handling in JavaScript:
```javascript
async function callAddisAI(params) {
  const maxRetries = 3;
  let attempts = 0;

  while (attempts < maxRetries) {
    try {
      const response = await fetch(
        "https://api.addisassistant.com/api/v1/chat_generate",
        {
          method: "POST",
          headers: {
            "X-API-Key": API_KEY,
            "Content-Type": "application/json",
          },
          body: JSON.stringify(params),
        },
      );

      // Handle rate limiting
      if (response.status === 429) {
        const retryAfter = parseInt(
          response.headers.get("Retry-After") || "60",
          10,
        );
        console.log(`Rate limited. Retrying after ${retryAfter}s`);
        await new Promise((r) => setTimeout(r, retryAfter * 1000));
        attempts++;
        continue;
      }

      // Handle other HTTP errors
      if (!response.ok) {
        const error = await response.json();
        console.error(`API error: ${error.error?.code}`, error);

        // Determine if the error is retriable
        if (response.status >= 500) {
          // Server errors are retriable
          attempts++;
          const backoff = Math.pow(2, attempts) * 1000; // Exponential backoff
          console.log(`Server error. Retrying in ${backoff / 1000}s`);
          await new Promise((r) => setTimeout(r, backoff));
          continue;
        }

        // Client errors are not retriable
        throw new Error(
          `API error: ${error.error?.code} - ${error.error?.message}`,
        );
      }

      // Success!
      return await response.json();
    } catch (error) {
      if (error.name === "TypeError" || error.name === "NetworkError") {
        // Network errors are retriable
        attempts++;
        const backoff = Math.pow(2, attempts) * 1000;
        console.log(`Network error. Retrying in ${backoff / 1000}s`);
        await new Promise((r) => setTimeout(r, backoff));
        continue;
      }
      // Re-throw other errors
      throw error;
    }
  }

  throw new Error(`Failed after ${maxRetries} attempts`);
}
```

Audio and TTS

Q: Why is text-to-speech not working correctly?

A: Common TTS issues include:
  1. Unsupported characters: Ensure your text contains only characters in the target language
  2. Text too long: Keep text under 2,000 characters per request
  3. Incorrect language: Verify you're using the right language code (am or om)
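Validating requests client-side catches all three problems before a round trip. A small sketch (the limits and language codes come from the answers above; the helper itself is ours):

```python
SUPPORTED_TTS_LANGUAGES = {"am", "om"}
MAX_TTS_CHARS = 2000

def validate_tts_request(text, language):
    """Return a list of problems; an empty list means the request looks OK."""
    errors = []
    if language not in SUPPORTED_TTS_LANGUAGES:
        errors.append(f"unsupported language code: {language!r} (use 'am' or 'om')")
    if not text:
        errors.append("text is empty")
    elif len(text) > MAX_TTS_CHARS:
        errors.append(f"text is {len(text)} characters; limit is {MAX_TTS_CHARS}")
    return errors
```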

Q: How do I play the returned audio in the browser?

A: To play audio in the browser:
```javascript
async function playTTS(text, language) {
  const response = await fetch("https://api.addisassistant.com/api/v1/audio", {
    method: "POST",
    headers: {
      "X-API-Key": API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text,
      language,
    }),
  });

  const data = await response.json();

  // Create audio from base64
  const audioData = atob(data.audio);
  const arrayBuffer = new ArrayBuffer(audioData.length);
  const view = new Uint8Array(arrayBuffer);
  for (let i = 0; i < audioData.length; i++) {
    view[i] = audioData.charCodeAt(i);
  }

  // Create a blob and play it
  const blob = new Blob([arrayBuffer], { type: "audio/wav" });
  const audioUrl = URL.createObjectURL(blob);
  const audio = new Audio(audioUrl);
  audio.play();
}
```

Q: How can I save the audio to a file?

A: In Python, you can save the audio to a file like this:
```python
import requests
import base64

def get_and_save_audio(text, language, output_file="output.wav"):
    url = "https://api.addisassistant.com/api/v1/audio"
    headers = {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json"
    }
    data = {
        "text": text,
        "language": language
    }

    response = requests.post(url, headers=headers, json=data)
    if response.status_code == 200:
        audio_data = response.json()["audio"]
        # Decode the base64 data
        audio_bytes = base64.b64decode(audio_data)
        # Save to file
        with open(output_file, "wb") as f:
            f.write(audio_bytes)
        print(f"Audio saved to {output_file}")
        return True
    else:
        print(f"Error: {response.status_code}")
        print(response.text)
        return False
```

Conversation Management

Q: How do I maintain context across multiple interactions?

A: Use the conversation_history parameter to maintain context:
```javascript
let conversationHistory = [];

async function chatWithHistory(prompt, targetLanguage) {
  const response = await fetch(
    "https://api.addisassistant.com/api/v1/chat_generate",
    {
      method: "POST",
      headers: {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        prompt,
        target_language: targetLanguage,
        conversation_history: conversationHistory,
      }),
    },
  );

  const data = await response.json();

  // Add the current exchange to history, using the same
  // role/parts shape as the multi-modal examples
  conversationHistory.push(
    { role: "user", parts: [{ text: prompt }] },
    { role: "assistant", parts: [{ text: data.response_text }] },
  );

  return data.response_text;
}
```

Q: My conversation history is getting too large. What should I do?

A: Implement a pruning strategy to keep conversation history manageable:
  1. Keep only recent messages: Maintain a sliding window of the most recent N messages
  2. Remove oldest messages first: When approaching token limits, remove oldest messages
  3. Summarize context: Use AI to summarize long conversations into shorter context
Example pruning function:
```javascript
function pruneHistory(history, maxMessages = 10) {
  // If history is within the limit, return it as is
  if (history.length <= maxMessages) {
    return history;
  }
  // Keep the most recent messages
  return history.slice(history.length - maxMessages);
}
```

Q: How do I reset a conversation?

A: To reset a conversation, simply clear the conversation history:
```javascript
// JavaScript
conversationHistory = [];
```

```python
# Python
conversation_history = []
```

Technical and Integration

Q: Can I self-host Addis AI?

A: No, Addis AI is available exclusively as a cloud-based API. We don't currently offer on-premises or self-hosted deployments.

Q: Is there a client library available?

A: We don't currently provide official client libraries, but you can use standard HTTP libraries in any programming language. The code examples in our documentation show how to build simple client wrappers.
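As a starting point, a thin wrapper over the standard library might look like this (the class name and method surface are illustrative, not an official SDK):

```python
import json
import urllib.request

class AddisAIClient:
    """Minimal wrapper around the chat endpoint using only the standard library."""

    BASE_URL = "https://api.addisassistant.com/api/v1"

    def __init__(self, api_key):
        self.api_key = api_key

    def build_request(self, path, payload):
        """Assemble the URL, headers, and encoded body for one call."""
        return (
            f"{self.BASE_URL}/{path}",
            {"X-API-Key": self.api_key, "Content-Type": "application/json"},
            json.dumps(payload).encode("utf-8"),
        )

    def chat_generate(self, prompt, target_language, conversation_history=None):
        payload = {"prompt": prompt, "target_language": target_language}
        if conversation_history:
            payload["conversation_history"] = conversation_history
        url, headers, body = self.build_request("chat_generate", payload)
        req = urllib.request.Request(url, data=body, headers=headers, method="POST")
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))
```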

Q: How can I integrate Addis AI with my mobile app?

A: For mobile app integration:
  1. Create a server-side proxy: Route API calls through your backend to protect your API key
  2. Implement streaming efficiently: Use proper streaming handlers to avoid memory issues
  3. Cache responses: Store common responses locally to reduce API calls
  4. Handle network issues: Design for intermittent connectivity
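For point 1, the proxy's job is to attach the API key server-side so it never ships in the app binary. A framework-agnostic sketch of the request-building step (the environment variable name and allowed-field list are our choices):

```python
import json
import os

def build_upstream_request(client_payload):
    """Turn a payload received from the mobile client into the upstream
    Addis AI request. The API key is read from the server environment,
    never from the client, and unexpected fields are dropped."""
    allowed = {"prompt", "target_language", "conversation_history", "generation_config"}
    body = {k: v for k, v in client_payload.items() if k in allowed}
    return {
        "url": "https://api.addisassistant.com/api/v1/chat_generate",
        "headers": {
            "X-API-Key": os.environ.get("ADDIS_AI_KEY", ""),
            "Content-Type": "application/json",
        },
        "body": json.dumps(body),
    }
```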

Q: What's the best way to debug API calls?

A: For debugging:
  1. Use request/response logging: Log all API interactions (sanitizing sensitive data)
  2. Set up monitoring: Track API response times and error rates
  3. Test with sandbox keys: Use sandbox environments for testing
  4. Implement verbose error modes: Add detailed logging in development
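For point 1, mask secrets before anything reaches your logs. A small sketch (the masking rule is our own convention):

```python
SENSITIVE_HEADERS = {"x-api-key", "authorization"}

def sanitize_headers(headers):
    """Return a copy of `headers` safe to log: secret values are masked,
    keeping only the first four characters as a hint."""
    clean = {}
    for key, value in headers.items():
        if key.lower() in SENSITIVE_HEADERS:
            clean[key] = value[:4] + "..." if len(value) > 4 else "***"
        else:
            clean[key] = value
    return clean
```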

Miscellaneous

Q: What's the difference between Amharic and Afan Oromo support?

A: Both languages are fully supported with equal capabilities. The primary difference is in the language models and training data used behind the scenes. Specify the language using:
  • "am" for Amharic
  • "om" for Afan Oromo

Q: Are there any content restrictions?

A: Yes, Addis AI follows ethical AI guidelines and restricts:
  • Harmful, illegal, or malicious content
  • Explicit or inappropriate content
  • Content that promotes hate speech or discrimination
Requests that violate these guidelines will return appropriate error codes.

Q: How do I report issues or bugs?

A: Contact us through:
  • Email: [email protected]
  • Developer forum: forum.addisassistant.com
  • Issue tracker: github.com/addisassistant/api-issues
When reporting bugs, please include:
  1. Request parameters (with sensitive data removed)
  2. Response received
  3. Expected behavior
  4. Steps to reproduce

Q: Where can I find example applications?

A: Example applications and integration templates are available at:
  • GitHub: github.com/addisassistant/examples
  • Documentation: api.addisassistant.com/docs/examples
These examples cover web, mobile, and backend integration patterns.