FAQ and Troubleshooting
This page provides answers to frequently asked questions and solutions to common issues when working with the Addis AI API.
Authentication
Q: Why am I getting "401 Unauthorized" errors?
A: This typically happens when:
- Your API key is invalid or expired
- You're not including the API key correctly in the X-API-Key header
Make sure you're including your API key correctly:
const headers = {
"X-API-Key": "YOUR_API_KEY",
"Content-Type": "application/json",
};
In Python:
headers = {
"X-API-Key": api_key,
"Content-Type": "application/json"
}
Q: How do I get an API key?
A: API keys can be obtained through the Addis AI developer portal. Visit
api.addisassistant.com/signup to create an account and request access.
Q: Is there a way to test the API without a production key?
A: Yes, we provide sandbox API keys for testing purposes. These have limited quotas but let you explore the API capabilities before committing to a paid plan. Contact our support team to request a sandbox key.
API Usage
Q: What languages does Addis AI support?
A: Currently, Addis AI supports two languages:
- Amharic (`am`)
- Afan Oromo (`om`)
Additional Ethiopian languages are in development.
Q: What is the maximum length of text I can send?
A: The `prompt` field is limited to 32,000 tokens (approximately 24,000 words). For text-to-speech requests, the `text` field is limited to 2,000 characters.
Q: How do I check how many tokens I've used?
A: Every response includes a `usage_metadata` field with token usage information:
"usage_metadata": {
"prompt_token_count": 12,
"candidates_token_count": 8,
"total_token_count": 20
}
You can track these counts to monitor your usage against your quota.
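As a minimal sketch of such tracking (assuming only the `usage_metadata` shape shown above; the quota figure is an illustrative placeholder):

```python
class UsageTracker:
    """Accumulates token counts from usage_metadata in API responses."""

    def __init__(self, quota_tokens):
        self.quota_tokens = quota_tokens
        self.total_tokens = 0

    def record(self, response_json):
        # Responses without usage_metadata are simply ignored
        usage = response_json.get("usage_metadata", {})
        self.total_tokens += usage.get("total_token_count", 0)

    @property
    def remaining(self):
        return max(self.quota_tokens - self.total_tokens, 0)

tracker = UsageTracker(quota_tokens=1_000_000)
tracker.record({"usage_metadata": {"prompt_token_count": 12,
                                   "candidates_token_count": 8,
                                   "total_token_count": 20}})
```

Call `tracker.record(response.json())` after each request and alert when `tracker.remaining` drops below a threshold.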
Q: Can I use the API for commercial applications?
A: Yes, commercial use is permitted under the appropriate plan tier. Standard and Enterprise tiers include commercial usage rights. Check our pricing page or contact sales for specific licensing terms.
Multi-modal Features
Q: Why isn't my image being processed correctly?
A: Common issues with multi-modal requests include:
- Incorrect format: Check that your image is in a supported format (JPEG, PNG, GIF, WebP)
- File size too large: Images must be under 10MB
- Missing attachment fields: Ensure your `attachment_field_names` array correctly lists all attachment field names
- Incorrect Content-Type: Multi-modal requests must use `multipart/form-data`
Example of a correct multi-modal request in Python:
import requests
import json

api_key = "YOUR_API_KEY"
url = "https://api.addisassistant.com/api/v1/chat_generate"

request_data = {
    "prompt": "What's in this image?",
    "target_language": "am",
    "attachment_field_names": ["image1"]
}

headers = {
    "X-API-Key": api_key
}

# Open the image in a context manager so the file handle is closed after the request
with open("path/to/image.jpg", "rb") as image_file:
    files = {
        "request_data": (None, json.dumps(request_data), "application/json"),
        "image1": ("image.jpg", image_file, "image/jpeg")
    }
    response = requests.post(url, headers=headers, files=files)

print(response.json())
Q: How do I refer to previously uploaded images in a conversation?
A: To refer to previously uploaded images:
- Store the `fileUri` and `mimeType` from the `uploaded_attachments` field in the response
- Include these in the `parts` array of your conversation history in subsequent requests
response = api.chat_with_image("What's in this image?", "image.jpg", "am")
file_uri = response["uploaded_attachments"][0]["fileUri"]
mime_type = response["uploaded_attachments"][0]["mimeType"]

conversation_history = [
    {
        "role": "user",
        "parts": [
            {"fileData": {"fileUri": file_uri, "mimeType": mime_type}},
            {"text": "What's in this image?"}
        ]
    },
    {
        "role": "assistant",
        "parts": [{"text": response["response_text"]}]
    }
]

response2 = api.chat_generate(
    prompt="What colors are in this image?",
    target_language="am",
    conversation_history=conversation_history
)
Streaming
Q: Why is my streaming request hanging or timing out?
A: Streaming can fail for several reasons:
- Network issues: Ensure you have a stable connection
- Proxy/firewall restrictions: Some firewalls block streaming connections
- Beta feature limitations: Chat streaming is in BETA and may not be stable
For chat streaming issues, try using non-streaming mode instead, as chat streaming is currently in beta. Audio streaming is stable and recommended for production use.
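One way to apply that advice is a wrapper that tries the beta streaming path first and retries once without streaming on failure. This is a sketch only: `stream_fn` and `non_stream_fn` are hypothetical names for your own client functions.

```python
def chat_with_fallback(prompt, target_language, stream_fn, non_stream_fn):
    """Try streaming first; on any failure, retry once in non-streaming mode.

    stream_fn and non_stream_fn are your own client wrappers (hypothetical
    names), each taking (prompt, target_language) and returning the text.
    """
    try:
        return stream_fn(prompt, target_language)
    except Exception as exc:
        print(f"Streaming failed ({exc}); falling back to non-streaming")
        return non_stream_fn(prompt, target_language)
```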
Q: How do I properly handle streaming responses?
A: For SSE-based chat streaming:
async function streamChat(prompt, targetLanguage) {
  const response = await fetch(
    "https://api.addisassistant.com/api/v1/chat_generate",
    {
      method: "POST",
      headers: {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        prompt,
        target_language: targetLanguage,
        generation_config: { stream: true },
      }),
    },
  );
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let result = "";
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      const chunk = decoder.decode(value);
      const lines = chunk.split("\n").filter((line) => line.trim() !== "");
      for (const line of lines) {
        if (line.startsWith("data: ")) {
          try {
            const data = JSON.parse(line.substring(6));
            result += data.response_text || "";
            if (data.finish_reason || data.is_last_chunk) {
              console.log("Stream complete:", result);
              return result;
            }
          } catch (e) {
            console.error("Error parsing SSE data:", e);
          }
        }
      }
    }
  } catch (error) {
    console.error("Stream error:", error);
  }
  return result;
}
For Python:
import requests
import json
def stream_chat(prompt, target_language):
    url = "https://api.addisassistant.com/api/v1/chat_generate"
    headers = {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json"
    }
    data = {
        "prompt": prompt,
        "target_language": target_language,
        "generation_config": {
            "stream": True
        }
    }
    response = requests.post(url, headers=headers, json=data, stream=True)
    if response.status_code != 200:
        print(f"Error: {response.status_code}")
        return None
    result = ""
    for line in response.iter_lines():
        if not line:
            continue
        line_text = line.decode('utf-8')
        if line_text.startswith('data: '):
            try:
                data = json.loads(line_text[6:])
                if "response_text" in data:
                    result += data["response_text"]
                    print(data["response_text"], end="", flush=True)
                if "finish_reason" in data or "is_last_chunk" in data:
                    print("\nStream complete")
                    break
            except json.JSONDecodeError:
                print(f"Error parsing JSON: {line_text}")
    return result
Rate Limits and Quotas
Q: I'm hitting rate limits. What should I do?
A: If you're hitting rate limits, try these solutions:
- Implement backoff and retry: Use the `Retry-After` header to determine wait time
- Reduce request frequency: Implement client-side rate limiting
- Batch requests: Combine multiple small requests into fewer larger ones
- Upgrade your plan: Contact sales to discuss higher rate limits
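The first two points can be sketched in Python as follows. The retry count and base delay are illustrative defaults; `post_fn` is injectable so the helper can be tested without network access.

```python
import time

def post_with_backoff(url, headers, payload, max_retries=3, post_fn=None):
    """POST with retries: honor the Retry-After header on 429 responses,
    falling back to exponential backoff (1s, 2s, 4s, ...) when absent."""
    if post_fn is None:
        import requests  # deferred so a stub can be injected in tests
        post_fn = requests.post
    response = None
    for attempt in range(max_retries):
        response = post_fn(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Prefer the server's hint on how long to wait
        retry_after = response.headers.get("Retry-After")
        delay = int(retry_after) if retry_after else 2 ** attempt
        time.sleep(delay)
    return response
```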
Q: How can I estimate token counts before making requests?
A: You can roughly estimate token counts using these guidelines:
- For English and similar languages: 1 token ≈ 4 characters or ¾ of a word
- For Amharic and Afan Oromo: 1 token ≈ 3 characters (non-Latin scripts use more tokens)
This Python function provides a rough estimation:
def estimate_tokens(text):
    return len(text) // 3
Q: My token usage seems high. How can I optimize it?
A: To reduce token usage:
- Be concise: Keep prompts short and specific
- Prune conversation history: Only keep relevant context
- Use streaming: Process partial responses without waiting for completion
- Implement caching: Store common responses locally
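The caching point can be sketched as a small in-memory cache keyed on the request payload. This is illustrative only (no expiry or size bound); for production you would want a TTL and a bounded store such as an LRU.

```python
import hashlib
import json

class ResponseCache:
    """Minimal local cache keyed on the serialized request payload."""

    def __init__(self):
        self._store = {}

    def _key(self, payload):
        # sort_keys makes the key stable regardless of field order
        return hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()

    def get(self, payload):
        return self._store.get(self._key(payload))

    def put(self, payload, response_text):
        self._store[self._key(payload)] = response_text
```

Check the cache before calling the API, and store `response_text` after a successful call; identical prompts then cost zero tokens.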
Error Handling
Q: How should I handle API errors in production?
A: For robust error handling:
- Check HTTP status codes: Handle 4xx and 5xx appropriately
- Parse error responses: Extract error codes and messages
- Implement retry logic: Use exponential backoff for transient errors
- Provide fallbacks: Have graceful degradation paths
Example production-level error handling in JavaScript:
async function callAddisAI(params) {
  const maxRetries = 3;
  let attempts = 0;
  while (attempts < maxRetries) {
    try {
      const response = await fetch(
        "https://api.addisassistant.com/api/v1/chat_generate",
        {
          method: "POST",
          headers: {
            "X-API-Key": API_KEY,
            "Content-Type": "application/json",
          },
          body: JSON.stringify(params),
        },
      );
      if (response.status === 429) {
        const retryAfter = parseInt(
          response.headers.get("Retry-After") || "60",
          10,
        );
        console.log(`Rate limited. Retrying after ${retryAfter}s`);
        await new Promise((r) => setTimeout(r, retryAfter * 1000));
        attempts++;
        continue;
      }
      if (!response.ok) {
        const error = await response.json();
        console.error(`API error: ${error.error?.code}`, error);
        if (response.status >= 500) {
          attempts++;
          const backoff = Math.pow(2, attempts) * 1000;
          console.log(`Server error. Retrying in ${backoff / 1000}s`);
          await new Promise((r) => setTimeout(r, backoff));
          continue;
        }
        throw new Error(
          `API error: ${error.error?.code} - ${error.error?.message}`,
        );
      }
      return await response.json();
    } catch (error) {
      if (error.name === "TypeError" || error.name === "NetworkError") {
        attempts++;
        const backoff = Math.pow(2, attempts) * 1000;
        console.log(`Network error. Retrying in ${backoff / 1000}s`);
        await new Promise((r) => setTimeout(r, backoff));
        continue;
      }
      throw error;
    }
  }
  throw new Error(`Failed after ${maxRetries} attempts`);
}
Audio and TTS
Q: Why is text-to-speech not working correctly?
A: Common TTS issues include:
- Unsupported characters: Ensure your text contains only characters in the target language
- Text too long: Keep text under 2,000 characters per request
- Incorrect language: Verify you're using the right language code (`am` or `om`)
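A pre-flight check covering these failure modes can catch problems before the request is sent. The 2,000-character limit and the `am`/`om` codes come from this documentation; the function itself is a sketch.

```python
def validate_tts_request(text, language):
    """Return a list of problems with a TTS request (empty if it looks OK)."""
    problems = []
    if language not in ("am", "om"):
        problems.append(f"unsupported language code: {language!r}")
    if not text.strip():
        problems.append("text is empty")
    if len(text) > 2000:
        problems.append(f"text is {len(text)} characters; limit is 2000")
    return problems
```

Run this before every `/audio` call and surface the problems to the user instead of waiting for an API error.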
Q: How do I play the returned audio in the browser?
A: To play audio in the browser:
async function playTTS(text, language) {
  const response = await fetch("https://api.addisassistant.com/api/v1/audio", {
    method: "POST",
    headers: {
      "X-API-Key": API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text,
      language,
    }),
  });
  const data = await response.json();
  // Decode the base64 audio payload into a byte buffer
  const audioData = atob(data.audio);
  const arrayBuffer = new ArrayBuffer(audioData.length);
  const view = new Uint8Array(arrayBuffer);
  for (let i = 0; i < audioData.length; i++) {
    view[i] = audioData.charCodeAt(i);
  }
  const blob = new Blob([arrayBuffer], { type: "audio/wav" });
  const audioUrl = URL.createObjectURL(blob);
  const audio = new Audio(audioUrl);
  audio.play();
}
Q: How can I save the audio to a file?
A: In Python, you can save the audio to a file like this:
import requests
import base64
def get_and_save_audio(text, language, output_file="output.wav"):
    url = "https://api.addisassistant.com/api/v1/audio"
    headers = {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json"
    }
    data = {
        "text": text,
        "language": language
    }
    response = requests.post(url, headers=headers, json=data)
    if response.status_code == 200:
        audio_data = response.json()["audio"]
        audio_bytes = base64.b64decode(audio_data)
        with open(output_file, "wb") as f:
            f.write(audio_bytes)
        print(f"Audio saved to {output_file}")
        return True
    else:
        print(f"Error: {response.status_code}")
        print(response.text)
        return False
Conversation Management
Q: How do I maintain context across multiple interactions?
A: Use the `conversation_history` parameter to maintain context:
let conversationHistory = [];
async function chatWithHistory(prompt, targetLanguage) {
  const response = await fetch(
    "https://api.addisassistant.com/api/v1/chat_generate",
    {
      method: "POST",
      headers: {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        prompt,
        target_language: targetLanguage,
        conversation_history: conversationHistory,
      }),
    },
  );
  const data = await response.json();
  // Use the same { role, parts } shape as the multi-modal history examples
  conversationHistory.push(
    { role: "user", parts: [{ text: prompt }] },
    { role: "assistant", parts: [{ text: data.response_text }] },
  );
  return data.response_text;
}
Q: My conversation history is getting too large. What should I do?
A: Implement a pruning strategy to keep conversation history manageable:
- Keep only recent messages: Maintain a sliding window of the most recent N messages
- Remove oldest messages first: When approaching token limits, remove oldest messages
- Summarize context: Use AI to summarize long conversations into shorter context
Example pruning function:
function pruneHistory(history, maxMessages = 10) {
  if (history.length <= maxMessages) {
    return history;
  }
  return history.slice(history.length - maxMessages);
}
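The function above prunes by message count; the second strategy (pruning against a token budget) can be sketched in Python, reusing the rough 3-characters-per-token estimator from the quota section. It assumes a flat `{"role": ..., "content": ...}` message shape; adapt the text extraction if your history uses the parts-array shape.

```python
def estimate_tokens(text):
    # Rough heuristic from the quota section: ~3 characters per token
    return len(text) // 3

def prune_history_by_tokens(history, max_tokens):
    """Keep the most recent messages whose combined estimated tokens
    fit the budget; always keep at least the latest message."""
    pruned = []
    total = 0
    for message in reversed(history):
        cost = estimate_tokens(message.get("content", ""))
        if total + cost > max_tokens and pruned:
            break
        pruned.append(message)
        total += cost
    pruned.reverse()
    return pruned
```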
Q: How do I reset a conversation?
A: To reset a conversation, simply clear the conversation history. In JavaScript:

conversationHistory = [];

In Python:

conversation_history = []
Technical and Integration
Q: Can I self-host Addis AI?
A: No, Addis AI is available exclusively as a cloud-based API. We don't currently offer on-premises or self-hosted deployments.
Q: Is there a client library available?
A: We don't currently provide official client libraries, but you can use standard HTTP libraries in any programming language. The code examples in our documentation show how to build simple client wrappers.
Q: How can I integrate Addis AI with my mobile app?
A: For mobile app integration:
- Create a server-side proxy: Route API calls through your backend to protect your API key
- Implement streaming efficiently: Use proper streaming handlers to avoid memory issues
- Cache responses: Store common responses locally to reduce API calls
- Handle network issues: Design for intermittent connectivity
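The first point (a server-side proxy) is the piece most often gotten wrong. As a sketch of the server-side logic, the function below attaches the API key from the server's environment and forwards only an allow-listed set of fields; the function name, env var name, and allow-list are all illustrative.

```python
import json
import os

UPSTREAM_URL = "https://api.addisassistant.com/api/v1/chat_generate"

def build_upstream_request(client_payload):
    """Build the upstream request on the server: the API key comes from the
    server environment, never from the mobile client."""
    api_key = os.environ.get("ADDIS_API_KEY", "")
    headers = {
        "X-API-Key": api_key,
        "Content-Type": "application/json",
    }
    # Forward only the fields the client is allowed to set
    allowed = {"prompt", "target_language", "conversation_history",
               "generation_config", "attachment_field_names"}
    body = {k: v for k, v in client_payload.items() if k in allowed}
    return UPSTREAM_URL, headers, json.dumps(body)
```

Your proxy endpoint then POSTs the returned URL/headers/body with whatever HTTP client your backend uses.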
Q: What's the best way to debug API calls?
A: For debugging:
- Use request/response logging: Log all API interactions (sanitizing sensitive data)
- Set up monitoring: Track API response times and error rates
- Test with sandbox keys: Use sandbox environments for testing
- Implement verbose error modes: Add detailed logging in development
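For the first point, sanitizing logs mostly means redacting credentials. A small sketch, keeping a short prefix of each secret so requests can still be correlated (the header set is illustrative):

```python
SENSITIVE_HEADERS = {"x-api-key", "authorization"}

def sanitize_headers(headers):
    """Redact sensitive header values before logging."""
    sanitized = {}
    for name, value in headers.items():
        if name.lower() in SENSITIVE_HEADERS:
            # Keep a 4-char prefix for correlation; mask very short values entirely
            sanitized[name] = value[:4] + "..." if len(value) > 4 else "***"
        else:
            sanitized[name] = value
    return sanitized
```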
Miscellaneous
Q: What's the difference between Amharic and Afan Oromo support?
A: Both languages are fully supported with equal capabilities. The primary difference is in the language models and training data used behind the scenes. Specify the language using:
"am"
for Amharic
"om"
for Afan Oromo
Q: Are there any content restrictions?
A: Yes, Addis AI follows ethical AI guidelines and restricts:
- Harmful, illegal, or malicious content
- Explicit or inappropriate content
- Content that promotes hate speech or discrimination
Requests that violate these guidelines will return appropriate error codes.
Q: How do I report issues or bugs?
A: Contact us through:
- Email: [email protected]
- Developer forum: forum.addisassistant.com
- Issue tracker: github.com/addisassistant/api-issues
When reporting bugs, please include:
- Request parameters (with sensitive data removed)
- Response received
- Expected behavior
- Steps to reproduce
Q: Where can I find example applications?
A: Example applications and integration templates are available at:
- GitHub: github.com/addisassistant/examples
- Documentation: api.addisassistant.com/docs/examples
These examples cover web, mobile, and backend integration patterns.