Integrating with Mobile Applications

This guide explains how to integrate Addis AI into your mobile applications for both iOS and Android platforms.

Getting Started

Before integrating Addis AI into your mobile application, you'll need:
  1. An API key from Addis AI
  2. Basic knowledge of mobile app development (iOS or Android)
  3. Understanding of HTTP requests in mobile environments

Integration Strategy for Mobile

When integrating Addis AI into mobile applications, consider these approaches:
  1. Direct API Integration: Make HTTP requests directly from your mobile app
  2. Backend Proxy: Route requests through your own backend so the API key never ships inside the app (see the sketch after this list)
  3. Hybrid Approach: Use web views for certain features while keeping native UI
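
For production apps, the backend proxy is usually the safest choice. The sketch below is a minimal illustration in Kotlin, assuming a hypothetical proxy endpoint (https://your-backend.example.com/chat) that holds the Addis AI key server-side and forwards requests; the endpoint URL and payload shape are assumptions about your backend, not part of the Addis AI API:

// Minimal proxy-based client sketch (Kotlin + OkHttp 4).
// Assumes a hypothetical backend endpoint that injects the Addis AI
// API key server-side; adjust the URL and payload to your backend.
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

class ProxyChatClient(
    private val backendUrl: String = "https://your-backend.example.com/chat" // hypothetical
) {
    private val httpClient = OkHttpClient()

    // Blocking call: invoke from a background thread or Dispatchers.IO.
    fun sendMessage(prompt: String, language: String = "am"): String {
        // The app authenticates to *your* backend (e.g. with a user session
        // token), never with the Addis AI key itself.
        val payload = JSONObject()
            .put("prompt", prompt)
            .put("target_language", language)
            .toString()
        val request = Request.Builder()
            .url(backendUrl)
            .post(payload.toRequestBody("application/json".toMediaType()))
            .build()
        httpClient.newCall(request).execute().use { response ->
            if (!response.isSuccessful) error("Proxy call failed: ${response.code}")
            return response.body?.string() ?: error("Empty response")
        }
    }
}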

Android Implementation (Kotlin)

Here's how to integrate Addis AI into an Android application using Kotlin:

Required Dependencies

Add these dependencies to your build.gradle file:

// For HTTP requests
implementation 'com.squareup.retrofit2:retrofit:2.9.0'
implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
implementation 'com.squareup.okhttp3:okhttp:4.9.3'
implementation 'com.squareup.okhttp3:logging-interceptor:4.9.3'
// For coroutines and lifecycleScope (used by the chat activity below)
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.6.4'
implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.5.1'
// For UI
implementation 'androidx.recyclerview:recyclerview:1.2.1'

API Interface and Models

Create the necessary data models and Retrofit interface:
import retrofit2.http.Body
import retrofit2.http.POST

// Data models
data class ChatRequest(
    val prompt: String,
    val target_language: String,
    val conversation_history: List<Message>? = null,
    val generation_config: GenerationConfig = GenerationConfig()
)

data class Message(
    val role: String, // "user" or "assistant"
    val content: String
)

data class GenerationConfig(
    val temperature: Double = 0.7,
    val stream: Boolean = false
)

data class ChatResponse(
    val response_text: String,
    val finish_reason: String? = null,
    val usage_metadata: UsageMetadata? = null
)

data class UsageMetadata(
    val prompt_token_count: Int,
    val candidates_token_count: Int,
    val total_token_count: Int
)

// Retrofit API interface
interface AddisAIService {
    @POST("chat_generate")
    suspend fun generateChat(@Body request: ChatRequest): ChatResponse

    @POST("audio")
    suspend fun textToSpeech(@Body request: Map<String, Any>): Map<String, String>
}

API Client Setup

Create a client class to handle API communication:
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory

class AddisAIClient(private val apiKey: String) {
    // Matches the base URL used by the iOS and React Native clients below;
    // Retrofit requires the base URL to end with '/'
    private val baseUrl = "https://api.addisassistant.com/api/v1/"
    private val conversationHistory = mutableListOf<Message>()

    private val httpClient = OkHttpClient.Builder()
        .addInterceptor { chain ->
            val request = chain.request().newBuilder()
                .addHeader("X-API-Key", apiKey)
                .build()
            chain.proceed(request)
        }
        .addInterceptor(HttpLoggingInterceptor().apply {
            // Avoid Level.BODY in release builds: it logs full request payloads
            level = HttpLoggingInterceptor.Level.BODY
        })
        .build()

    private val retrofit = Retrofit.Builder()
        .baseUrl(baseUrl)
        .client(httpClient)
        .addConverterFactory(GsonConverterFactory.create())
        .build()

    private val service = retrofit.create(AddisAIService::class.java)

    suspend fun sendMessage(message: String, language: String = "am"): ChatResponse {
        val userMessage = Message("user", message)
        val request = ChatRequest(
            prompt = message,
            target_language = language,
            conversation_history = conversationHistory.toList()
        )
        val response = service.generateChat(request)
        // Update conversation history only after a successful call
        conversationHistory.add(userMessage)
        conversationHistory.add(Message("assistant", response.response_text))
        return response
    }

    suspend fun textToSpeech(text: String, language: String = "am"): String {
        val request = mapOf(
            "text" to text,
            "language" to language
        )
        val response = service.textToSpeech(request)
        return response["audio"] ?: throw Exception("No audio in response")
    }

    fun clearConversation() {
        conversationHistory.clear()
    }
}
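
The textToSpeech call returns the audio as a Base64-encoded string. Below is a minimal playback sketch; it assumes the decoded bytes are in a container Android's MediaPlayer can play (the audio format is not specified in this guide, so verify it against the API reference):

import android.content.Context
import android.media.MediaPlayer
import android.util.Base64
import java.io.File

// Decode the Base64 audio returned by textToSpeech() and play it.
// Assumption: the bytes are in a MediaPlayer-compatible format (e.g. MP3/WAV).
fun playBase64Audio(context: Context, audioBase64: String) {
    val audioBytes = Base64.decode(audioBase64, Base64.DEFAULT)
    // Write to a temp file; MediaPlayer needs a seekable data source
    val tempFile = File.createTempFile("addis_tts", ".audio", context.cacheDir)
    tempFile.writeBytes(audioBytes)
    MediaPlayer().apply {
        setDataSource(tempFile.absolutePath)
        setOnCompletionListener { player ->
            player.release()
            tempFile.delete()
        }
        prepare() // consider prepareAsync() for large clips
        start()
    }
}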

Chat Activity Implementation

Create a chat activity to interact with Addis AI:
import android.os.Bundle
import android.view.Gravity
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.lifecycleScope
import androidx.recyclerview.widget.LinearLayoutManager
import androidx.recyclerview.widget.RecyclerView
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

class ChatActivity : AppCompatActivity() {
    private lateinit var binding: ActivityChatBinding
    private lateinit var adapter: ChatAdapter
    private lateinit var client: AddisAIClient
    private val messages = mutableListOf<ChatMessage>()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityChatBinding.inflate(layoutInflater)
        setContentView(binding.root)

        // Initialize API client. ADDIS_AI_API_KEY must be defined as a
        // buildConfigField in build.gradle; never hard-code the key in source.
        client = AddisAIClient(BuildConfig.ADDIS_AI_API_KEY)

        // Set up RecyclerView
        adapter = ChatAdapter(messages)
        binding.recyclerView.adapter = adapter
        binding.recyclerView.layoutManager = LinearLayoutManager(this)

        // Send button click handler
        binding.sendButton.setOnClickListener {
            val message = binding.messageInput.text.toString().trim()
            if (message.isNotEmpty()) {
                sendMessage(message)
                binding.messageInput.setText("")
            }
        }
    }

    private fun sendMessage(message: String) {
        // Add user message to UI
        val userMessage = ChatMessage("user", message)
        messages.add(userMessage)
        adapter.notifyItemInserted(messages.size - 1)
        binding.recyclerView.scrollToPosition(messages.size - 1)

        // Show loading indicator
        binding.progressBar.visibility = View.VISIBLE
        binding.sendButton.isEnabled = false

        // Call API in background
        lifecycleScope.launch(Dispatchers.IO) {
            try {
                val response = client.sendMessage(message)
                withContext(Dispatchers.Main) {
                    // Add assistant response to UI
                    val assistantMessage = ChatMessage("assistant", response.response_text)
                    messages.add(assistantMessage)
                    adapter.notifyItemInserted(messages.size - 1)
                    binding.recyclerView.scrollToPosition(messages.size - 1)
                }
            } catch (e: Exception) {
                withContext(Dispatchers.Main) {
                    // Show error message
                    Toast.makeText(this@ChatActivity,
                        "Error: ${e.message}", Toast.LENGTH_SHORT).show()
                }
            } finally {
                withContext(Dispatchers.Main) {
                    // Hide loading indicator
                    binding.progressBar.visibility = View.GONE
                    binding.sendButton.isEnabled = true
                }
            }
        }
    }

    data class ChatMessage(val sender: String, val text: String)

    class ChatAdapter(private val messages: List<ChatMessage>) :
        RecyclerView.Adapter<ChatAdapter.MessageViewHolder>() {

        override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MessageViewHolder {
            val inflater = LayoutInflater.from(parent.context)
            val binding = ItemMessageBinding.inflate(inflater, parent, false)
            return MessageViewHolder(binding)
        }

        override fun onBindViewHolder(holder: MessageViewHolder, position: Int) {
            holder.bind(messages[position])
        }

        override fun getItemCount() = messages.size

        class MessageViewHolder(private val binding: ItemMessageBinding) :
            RecyclerView.ViewHolder(binding.root) {

            fun bind(message: ChatMessage) {
                binding.messageText.text = message.text
                // Apply different styles based on sender
                if (message.sender == "user") {
                    binding.messageContainer.setBackgroundResource(R.drawable.bg_message_user)
                    binding.messageContainer.gravity = Gravity.END
                } else {
                    binding.messageContainer.setBackgroundResource(R.drawable.bg_message_assistant)
                    binding.messageContainer.gravity = Gravity.START
                }
            }
        }
    }
}

Multi-modal Implementation (Images + Text)

To send an image along with text in Android, add a method like the following to AddisAIClient (it uses the client's baseUrl, apiKey, and httpClient, and is shown with its required imports):
import com.google.gson.Gson
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import okhttp3.MediaType.Companion.toMediaTypeOrNull
import okhttp3.MultipartBody
import okhttp3.Request
import okhttp3.RequestBody.Companion.asRequestBody
import java.io.File
import java.io.IOException

suspend fun sendImageWithText(
    image: File,
    prompt: String,
    language: String = "am"
): ChatResponse {
    val gson = Gson()
    // Serialize request_data with Gson so prompts containing quotes or
    // newlines cannot produce invalid JSON
    val requestData = gson.toJson(
        mapOf(
            "prompt" to prompt,
            "target_language" to language,
            "attachment_field_names" to listOf("image1")
        )
    )
    val requestBody = MultipartBody.Builder()
        .setType(MultipartBody.FORM)
        .addFormDataPart(
            "image1",
            image.name,
            image.asRequestBody("image/*".toMediaTypeOrNull())
        )
        .addFormDataPart("request_data", requestData)
        .build()
    // The X-API-Key header is already added by the client's interceptor
    val request = Request.Builder()
        .url("${baseUrl}chat_generate")
        .post(requestBody)
        .build()
    return withContext(Dispatchers.IO) {
        httpClient.newCall(request).execute().use { response ->
            if (!response.isSuccessful) {
                throw IOException("API call failed with code ${response.code}")
            }
            gson.fromJson(response.body?.string(), ChatResponse::class.java)
        }
    }
}
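
For illustration, a hedged call-site sketch from ChatActivity; photoFile is a hypothetical placeholder for an image the app has already captured or picked:

// Hypothetical call site inside ChatActivity
lifecycleScope.launch(Dispatchers.IO) {
    try {
        val response = client.sendImageWithText(
            image = photoFile, // assumed to exist already
            prompt = "Describe this image"
        )
        withContext(Dispatchers.Main) {
            // Append response.response_text to the chat UI here
        }
    } catch (e: Exception) {
        withContext(Dispatchers.Main) {
            Toast.makeText(this@ChatActivity, "Error: ${e.message}", Toast.LENGTH_SHORT).show()
        }
    }
}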

iOS Implementation (Swift)

Here's how to integrate Addis AI into an iOS application using Swift:

API Client Implementation

Create an API client class to handle communication with Addis AI:
import Foundation

class AddisAIClient {
    private let baseURL = "https://api.addisassistant.com/api/v1"
    private let apiKey: String
    private var conversationHistory: [Message] = []

    struct Message: Codable {
        let role: String // "user" or "assistant"
        let content: String
    }

    struct ChatRequest: Codable {
        let prompt: String
        let target_language: String
        let conversation_history: [Message]?
        let generation_config: GenerationConfig
    }

    struct GenerationConfig: Codable {
        let temperature: Double
    }

    struct ChatResponse: Codable {
        let response_text: String
        let finish_reason: String?
        let usage_metadata: UsageMetadata?
    }

    struct UsageMetadata: Codable {
        let prompt_token_count: Int
        let candidates_token_count: Int
        let total_token_count: Int
    }

    init(apiKey: String) {
        self.apiKey = apiKey
    }

    func sendMessage(
        message: String,
        language: String = "am",
        completion: @escaping (Result<ChatResponse, Error>) -> Void
    ) {
        let userMessage = Message(role: "user", content: message)
        let request = ChatRequest(
            prompt: message,
            target_language: language,
            conversation_history: self.conversationHistory,
            generation_config: GenerationConfig(temperature: 0.7)
        )

        guard let url = URL(string: "\(baseURL)/chat_generate") else {
            completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "Invalid URL"])))
            return
        }

        var urlRequest = URLRequest(url: url)
        urlRequest.httpMethod = "POST"
        urlRequest.addValue(apiKey, forHTTPHeaderField: "X-API-Key")
        urlRequest.addValue("application/json", forHTTPHeaderField: "Content-Type")

        do {
            let jsonData = try JSONEncoder().encode(request)
            urlRequest.httpBody = jsonData
        } catch {
            completion(.failure(error))
            return
        }

        URLSession.shared.dataTask(with: urlRequest) { data, response, error in
            if let error = error {
                completion(.failure(error))
                return
            }
            guard let data = data else {
                completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "No data in response"])))
                return
            }
            do {
                let response = try JSONDecoder().decode(ChatResponse.self, from: data)
                // Update conversation history
                self.conversationHistory.append(userMessage)
                self.conversationHistory.append(Message(role: "assistant", content: response.response_text))
                completion(.success(response))
            } catch {
                completion(.failure(error))
            }
        }.resume()
    }

    func textToSpeech(
        text: String,
        language: String = "am",
        completion: @escaping (Result<String, Error>) -> Void
    ) {
        guard let url = URL(string: "\(baseURL)/audio") else {
            completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "Invalid URL"])))
            return
        }

        var urlRequest = URLRequest(url: url)
        urlRequest.httpMethod = "POST"
        urlRequest.addValue(apiKey, forHTTPHeaderField: "X-API-Key")
        urlRequest.addValue("application/json", forHTTPHeaderField: "Content-Type")

        let requestBody: [String: Any] = [
            "text": text,
            "language": language
        ]
        do {
            let jsonData = try JSONSerialization.data(withJSONObject: requestBody)
            urlRequest.httpBody = jsonData
        } catch {
            completion(.failure(error))
            return
        }

        URLSession.shared.dataTask(with: urlRequest) { data, response, error in
            if let error = error {
                completion(.failure(error))
                return
            }
            guard let data = data else {
                completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "No data in response"])))
                return
            }
            do {
                if let json = try JSONSerialization.jsonObject(with: data) as? [String: Any],
                   let audioBase64 = json["audio"] as? String {
                    completion(.success(audioBase64))
                } else {
                    completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "Invalid response format"])))
                }
            } catch {
                completion(.failure(error))
            }
        }.resume()
    }

    func clearConversation() {
        self.conversationHistory.removeAll()
    }
}

Chat View Controller

Create a view controller to handle the chat interface:
import UIKit

class ChatViewController: UIViewController {
    private let tableView = UITableView()
    private let messageField = UITextField()
    private let sendButton = UIButton()
    private let loadingIndicator = UIActivityIndicatorView(style: .medium)
    // Hard-coded key shown for brevity only; in production, load it from
    // secure storage or route requests through your backend (see Best Practices)
    private let client = AddisAIClient(apiKey: "YOUR_API_KEY")
    private var messages: [(sender: String, text: String)] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        setupUI()
    }

    private func setupUI() {
        // Configure tableView
        tableView.translatesAutoresizingMaskIntoConstraints = false
        tableView.register(MessageCell.self, forCellReuseIdentifier: "MessageCell")
        tableView.dataSource = self
        tableView.separatorStyle = .none
        view.addSubview(tableView)

        // Configure input area
        let inputContainer = UIView()
        inputContainer.translatesAutoresizingMaskIntoConstraints = false
        inputContainer.backgroundColor = .systemGray6
        view.addSubview(inputContainer)

        messageField.translatesAutoresizingMaskIntoConstraints = false
        messageField.placeholder = "Type a message..."
        messageField.borderStyle = .roundedRect
        inputContainer.addSubview(messageField)

        sendButton.translatesAutoresizingMaskIntoConstraints = false
        sendButton.setTitle("Send", for: .normal)
        sendButton.setTitleColor(.systemBlue, for: .normal)
        sendButton.addTarget(self, action: #selector(sendMessage), for: .touchUpInside)
        inputContainer.addSubview(sendButton)

        loadingIndicator.translatesAutoresizingMaskIntoConstraints = false
        loadingIndicator.hidesWhenStopped = true
        inputContainer.addSubview(loadingIndicator)

        // Set up constraints
        NSLayoutConstraint.activate([
            inputContainer.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            inputContainer.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            inputContainer.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
            inputContainer.heightAnchor.constraint(equalToConstant: 60),

            tableView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            tableView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            tableView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            tableView.bottomAnchor.constraint(equalTo: inputContainer.topAnchor),

            messageField.leadingAnchor.constraint(equalTo: inputContainer.leadingAnchor, constant: 10),
            messageField.centerYAnchor.constraint(equalTo: inputContainer.centerYAnchor),

            sendButton.trailingAnchor.constraint(equalTo: inputContainer.trailingAnchor, constant: -10),
            sendButton.centerYAnchor.constraint(equalTo: inputContainer.centerYAnchor),
            sendButton.leadingAnchor.constraint(equalTo: messageField.trailingAnchor, constant: 10),
            sendButton.widthAnchor.constraint(equalToConstant: 60),

            loadingIndicator.centerYAnchor.constraint(equalTo: sendButton.centerYAnchor),
            loadingIndicator.trailingAnchor.constraint(equalTo: sendButton.leadingAnchor, constant: -10)
        ])
    }

    @objc private func sendMessage() {
        guard let text = messageField.text?.trimmingCharacters(in: .whitespacesAndNewlines), !text.isEmpty else {
            return
        }

        // Add user message to UI
        messages.append(("user", text))
        tableView.reloadData()
        scrollToBottom()

        // Clear input field
        messageField.text = ""

        // Show loading indicator
        loadingIndicator.startAnimating()
        sendButton.isEnabled = false

        // Call API
        client.sendMessage(message: text) { [weak self] result in
            DispatchQueue.main.async {
                self?.loadingIndicator.stopAnimating()
                self?.sendButton.isEnabled = true
                switch result {
                case .success(let response):
                    // Add assistant message to UI
                    self?.messages.append(("assistant", response.response_text))
                    self?.tableView.reloadData()
                    self?.scrollToBottom()
                case .failure(let error):
                    // Show error alert
                    let alert = UIAlertController(
                        title: "Error",
                        message: error.localizedDescription,
                        preferredStyle: .alert
                    )
                    alert.addAction(UIAlertAction(title: "OK", style: .default))
                    self?.present(alert, animated: true)
                }
            }
        }
    }

    private func scrollToBottom() {
        guard !messages.isEmpty else { return }
        let indexPath = IndexPath(row: messages.count - 1, section: 0)
        tableView.scrollToRow(at: indexPath, at: .bottom, animated: true)
    }
}

// MARK: - UITableViewDataSource
extension ChatViewController: UITableViewDataSource {
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return messages.count
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "MessageCell", for: indexPath) as! MessageCell
        let message = messages[indexPath.row]
        cell.configure(with: message.text, sender: message.sender)
        return cell
    }
}

// MARK: - MessageCell
class MessageCell: UITableViewCell {
    private let bubbleView = UIView()
    private let messageLabel = UILabel()
    // Keep references to the horizontal constraints so they can be swapped
    // on reuse; activating fresh constraints in every configure() call would
    // accumulate conflicting constraints as cells are recycled
    private var leadingConstraint: NSLayoutConstraint?
    private var trailingConstraint: NSLayoutConstraint?

    override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
        super.init(style: style, reuseIdentifier: reuseIdentifier)
        setupUI()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setupUI() {
        selectionStyle = .none

        bubbleView.translatesAutoresizingMaskIntoConstraints = false
        bubbleView.layer.cornerRadius = 12
        contentView.addSubview(bubbleView)

        messageLabel.translatesAutoresizingMaskIntoConstraints = false
        messageLabel.numberOfLines = 0
        bubbleView.addSubview(messageLabel)

        NSLayoutConstraint.activate([
            bubbleView.topAnchor.constraint(equalTo: contentView.topAnchor, constant: 4),
            bubbleView.bottomAnchor.constraint(equalTo: contentView.bottomAnchor, constant: -4),
            bubbleView.widthAnchor.constraint(lessThanOrEqualTo: contentView.widthAnchor, multiplier: 0.75),

            messageLabel.topAnchor.constraint(equalTo: bubbleView.topAnchor, constant: 8),
            messageLabel.bottomAnchor.constraint(equalTo: bubbleView.bottomAnchor, constant: -8),
            messageLabel.leadingAnchor.constraint(equalTo: bubbleView.leadingAnchor, constant: 12),
            messageLabel.trailingAnchor.constraint(equalTo: bubbleView.trailingAnchor, constant: -12)
        ])
    }

    func configure(with message: String, sender: String) {
        messageLabel.text = message
        leadingConstraint?.isActive = false
        trailingConstraint?.isActive = false
        if sender == "user" {
            bubbleView.backgroundColor = .systemBlue
            messageLabel.textColor = .white
            leadingConstraint = bubbleView.leadingAnchor.constraint(greaterThanOrEqualTo: contentView.leadingAnchor, constant: 60)
            trailingConstraint = bubbleView.trailingAnchor.constraint(equalTo: contentView.trailingAnchor, constant: -8)
        } else {
            bubbleView.backgroundColor = .systemGray5
            messageLabel.textColor = .label
            leadingConstraint = bubbleView.leadingAnchor.constraint(equalTo: contentView.leadingAnchor, constant: 8)
            trailingConstraint = bubbleView.trailingAnchor.constraint(lessThanOrEqualTo: contentView.trailingAnchor, constant: -60)
        }
        leadingConstraint?.isActive = true
        trailingConstraint?.isActive = true
    }
}

Multi-modal Implementation (Images + Text) - iOS

To send an image along with text in iOS, add this method to the AddisAIClient class (it uses the client's baseURL and apiKey, and requires UIKit for UIImage):
func sendImageWithText(
    image: UIImage,
    prompt: String,
    language: String = "am",
    completion: @escaping (Result<ChatResponse, Error>) -> Void
) {
    guard let url = URL(string: "\(baseURL)/chat_generate") else {
        completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "Invalid URL"])))
        return
    }
    guard let imageData = image.jpegData(compressionQuality: 0.8) else {
        completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "Failed to convert image to data"])))
        return
    }

    // Build request_data with JSONSerialization so prompts containing
    // quotes or newlines cannot produce invalid JSON
    let requestFields: [String: Any] = [
        "prompt": prompt,
        "target_language": language,
        "attachment_field_names": ["image1"]
    ]
    guard let requestData = try? JSONSerialization.data(withJSONObject: requestFields) else {
        completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "Failed to encode request data"])))
        return
    }

    // Create multipart request
    let boundary = UUID().uuidString
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue(apiKey, forHTTPHeaderField: "X-API-Key")
    request.addValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")

    var body = Data()

    // Add image part
    body.append("--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"image1\"; filename=\"image.jpg\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: image/jpeg\r\n\r\n".data(using: .utf8)!)
    body.append(imageData)
    body.append("\r\n".data(using: .utf8)!)

    // Add request_data part
    body.append("--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"request_data\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: application/json\r\n\r\n".data(using: .utf8)!)
    body.append(requestData)
    body.append("\r\n".data(using: .utf8)!)

    // End boundary
    body.append("--\(boundary)--\r\n".data(using: .utf8)!)
    request.httpBody = body

    URLSession.shared.dataTask(with: request) { data, response, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        guard let data = data else {
            completion(.failure(NSError(domain: "AddisAI", code: 0, userInfo: [NSLocalizedDescriptionKey: "No data in response"])))
            return
        }
        do {
            let response = try JSONDecoder().decode(ChatResponse.self, from: data)
            completion(.success(response))
        } catch {
            completion(.failure(error))
        }
    }.resume()
}

React Native Implementation

For cross-platform mobile development, you can integrate Addis AI into a React Native application:
// AddisAIClient.js
export default class AddisAIClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseUrl = "https://api.addisassistant.com/api/v1";
    this.conversationHistory = [];
  }

  async sendMessage(message, language = "am") {
    try {
      const response = await fetch(`${this.baseUrl}/chat_generate`, {
        method: "POST",
        headers: {
          "X-API-Key": this.apiKey,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          prompt: message,
          target_language: language,
          conversation_history: this.conversationHistory,
          generation_config: {
            temperature: 0.7,
          },
        }),
      });
      if (!response.ok) {
        throw new Error(`API error: ${response.status}`);
      }
      const data = await response.json();
      // Update conversation history
      this.conversationHistory.push(
        { role: "user", content: message },
        { role: "assistant", content: data.response_text },
      );
      return data;
    } catch (error) {
      console.error("Error sending message:", error);
      throw error;
    }
  }

  async sendImageWithText(imageUri, prompt, language = "am") {
    const formData = new FormData();
    // Add image
    formData.append("image1", {
      uri: imageUri,
      type: "image/jpeg",
      name: "image.jpg",
    });
    // Add request data
    formData.append(
      "request_data",
      JSON.stringify({
        prompt,
        target_language: language,
        attachment_field_names: ["image1"],
      }),
    );
    try {
      const response = await fetch(`${this.baseUrl}/chat_generate`, {
        method: "POST",
        // Do not set Content-Type manually when sending FormData: fetch
        // generates the multipart header with the correct boundary itself
        headers: {
          "X-API-Key": this.apiKey,
        },
        body: formData,
      });
      if (!response.ok) {
        throw new Error(`API error: ${response.status}`);
      }
      return await response.json();
    } catch (error) {
      console.error("Error sending image with text:", error);
      throw error;
    }
  }

  clearConversation() {
    this.conversationHistory = [];
  }
}

Best Practices for Mobile Integration

  1. API Key Security: Never store API keys in your mobile app code. Use secure storage or a backend proxy.
  2. Network Handling (see the OkHttp sketch after this list):
    • Implement proper timeout handling
    • Add retry logic for failed requests
    • Handle different network conditions (offline, slow connection)
  3. Battery and Data Usage:
    • Minimize unnecessary API calls
    • Compress images before uploading
    • Consider implementing a "Wi-Fi only" option for large uploads or TTS
  4. Performance Optimization:
    • Implement caching for responses
    • Use background threads for API calls
    • Handle UI updates on the main thread
  5. User Experience:
    • Show clear loading indicators
    • Implement typing indicators for chat interfaces
    • Provide fallbacks for API failures
  6. Permissions:
    • Request only necessary permissions (camera, microphone)
    • Explain why permissions are needed
    • Handle permission denials gracefully
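
As a concrete starting point for item 2, here is a minimal Kotlin/OkHttp configuration sketch with explicit timeouts and a simple retry interceptor; the retry count, backoff, and timeout values are illustrative assumptions, not Addis AI requirements:

import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response
import java.io.IOException
import java.util.concurrent.TimeUnit

// Application interceptor that retries transient failures with linear backoff.
class RetryInterceptor(private val maxRetries: Int = 3) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        var lastException: IOException? = null
        for (attempt in 1..maxRetries) {
            try {
                val response = chain.proceed(chain.request())
                // Return non-5xx responses; on the final attempt, return whatever we got
                if (response.code < 500 || attempt == maxRetries) return response
                response.close() // discard and retry on 5xx
            } catch (e: IOException) {
                lastException = e
            }
            Thread.sleep(1000L * attempt) // linear backoff
        }
        throw lastException ?: IOException("Request failed after $maxRetries attempts")
    }
}

val resilientClient = OkHttpClient.Builder()
    .connectTimeout(15, TimeUnit.SECONDS)
    .readTimeout(60, TimeUnit.SECONDS) // generation responses can take a while
    .writeTimeout(30, TimeUnit.SECONDS)
    .addInterceptor(RetryInterceptor())
    .build()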

Common Issues and Solutions

Network Security Configuration

For Android apps targeting API level 28+, cleartext (HTTP) traffic is blocked by default, which is what you want since the Addis AI API is served over HTTPS. If your app ships a custom network security configuration, make sure it keeps HTTPS enforced for the API domain:

<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">api.addisassistant.com</domain>
    </domain-config>
</network-security-config>
Then reference it in your AndroidManifest.xml:

<application
    android:networkSecurityConfig="@xml/network_security_config"
    ...>
</application>

API Key Protection

To protect your API key in mobile apps:
  1. Use a backend proxy service instead of directly calling Addis AI
  2. Implement app-specific authentication
  3. Use secure storage (KeyStore on Android, Keychain on iOS); see the Android sketch below
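
For option 3 on Android, a sketch using Jetpack Security's EncryptedSharedPreferences; it assumes the androidx.security:security-crypto dependency (a 1.1.0 alpha or later for the MasterKey-based overload used here), and the file and key names are arbitrary:

import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Store/read the API key in Keystore-backed encrypted preferences.
// Note: this protects keys delivered at runtime (e.g. from your backend);
// a key bundled inside the APK can still be extracted by a determined attacker.
fun secureKeyStore(context: Context) = EncryptedSharedPreferences.create(
    context,
    "addis_ai_secure_prefs", // arbitrary file name
    MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build(),
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)

fun saveApiKey(context: Context, apiKey: String) =
    secureKeyStore(context).edit().putString("addis_ai_api_key", apiKey).apply()

fun readApiKey(context: Context): String? =
    secureKeyStore(context).getString("addis_ai_api_key", null)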

Large File Uploads

For sending large files:
  1. Implement progress indicators for uploads
  2. Compress files before uploading (see the sketch after this list)
  3. Check network connectivity before uploading
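
For item 2 with images on Android, a minimal compression sketch; the JPEG quality value is an illustrative starting point, not a requirement:

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import java.io.File
import java.io.FileOutputStream

// Re-encode an image as JPEG at reduced quality before uploading.
fun compressImageFile(source: File, cacheDir: File, quality: Int = 80): File {
    val bitmap = BitmapFactory.decodeFile(source.absolutePath)
        ?: error("Could not decode ${source.path}")
    val compressed = File.createTempFile("upload", ".jpg", cacheDir)
    FileOutputStream(compressed).use { out ->
        bitmap.compress(Bitmap.CompressFormat.JPEG, quality, out)
    }
    bitmap.recycle()
    return compressed
}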

Next Steps

Now that you've integrated Addis AI into your mobile application, you might want to:
  1. Explore Server-side Implementation for a more secure architecture
  2. Learn about Streaming Implementation for real-time responses
  3. Check out Multi-modal Input for richer interactions