
Admin LLM Providers

Introduction to Admin LLM Providers

The admin-llm-providers domain within the FastAPI platform is designed to manage and interact with Large Language Model (LLM) providers. This domain is crucial for administrators who need to oversee various LLM providers and their associated models. By leveraging these endpoints, administrators can efficiently list available LLM providers and explore the models offered by each provider, facilitating better integration and management of AI capabilities.

Key Concepts

LLM Providers

LLM Providers are entities that offer access to large language models. These models are essential for various applications, including natural language processing, text generation, and more. Each provider may offer a range of models with different capabilities and configurations.

Models

Models are specific implementations of language models provided by LLM providers. They vary in size, complexity, and purpose. Understanding the available models from each provider is essential for selecting the right tool for your application needs.

Common Workflows

Listing LLM Providers

To get an overview of all available LLM providers, you can use the /v1/admin/llm-providers endpoint. This call is typically the starting point for administrators to understand which providers are accessible and to plan further interactions with specific models.

Exploring Provider Models

Once you have identified a provider of interest, the next step is to list the models they offer. This can be done using the /v1/admin/llm-providers/{provider_id}/models endpoint. This call provides detailed information about each model, enabling informed decisions about which models to integrate into your applications.

Practical Examples

Example: List All LLM Providers

To list all LLM providers, you can use the following curl command:

curl -X GET "https://api.example.com/v1/admin/llm-providers" \
     -H "X-API-Key: your_api_key_here"

This request will return a JSON array of LLM providers, each with its unique identifier and metadata.
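The same call can be made from application code. The sketch below is a minimal example using Python's standard library; the base URL is a placeholder, and the `id` field in the response is an assumption about the schema — adjust both to match your deployment.

```python
# Hypothetical sketch: fetch the provider list and pull out the identifiers.
# Assumes each provider object carries an "id" field (not confirmed by the
# API reference above); adapt to the actual response schema.
import json
import urllib.request

API_BASE = "https://api.example.com"  # placeholder base URL


def list_providers(api_key):
    """GET /v1/admin/llm-providers and return the parsed JSON array."""
    req = urllib.request.Request(
        f"{API_BASE}/v1/admin/llm-providers",
        headers={"X-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def provider_ids(providers):
    """Extract the unique identifier from each provider object."""
    return [p["id"] for p in providers]
```

Separating the HTTP call (`list_providers`) from the response handling (`provider_ids`) keeps the parsing logic easy to unit-test without a live API.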

Example: List Models for a Specific Provider

After identifying a provider, you can list its models using:

curl -X GET "https://api.example.com/v1/admin/llm-providers/{provider_id}/models" \
     -H "X-API-Key: your_api_key_here"

Replace {provider_id} with the actual ID of the provider you are interested in. This request will return a list of models, each with details such as model name, description, and capabilities.

Important Considerations

Authentication

Although the X-API-Key header is marked optional, sending it is recommended. Authenticated requests confirm that you hold the permissions needed to access administrative data and help prevent unauthorized access.

Pagination

For endpoints that return lists, consider implementing pagination to handle large datasets efficiently. This will help manage response sizes and improve performance.

Error Handling

Ensure that your application can gracefully handle errors, such as network issues or invalid provider IDs. Implementing robust error handling will improve the reliability and user experience of your application.
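A small wrapper can translate the two failure modes mentioned above — an invalid provider ID (typically an HTTP 404) and a network problem — into a fallback value instead of an unhandled exception. This is a sketch of one possible policy, not the platform's prescribed error contract:

```python
# Hypothetical sketch: wrap an API call with basic error handling.
# `fetch` is any zero-argument callable that performs the request,
# e.g. lambda: list_models("some-provider", api_key).
import urllib.error


def fetch_with_handling(fetch, fallback=None):
    """Run `fetch()`; return `fallback` on 404 or network failure."""
    try:
        return fetch()
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return fallback  # e.g. unknown provider_id
        raise  # other HTTP errors are likely bugs; surface them
    except urllib.error.URLError:
        return fallback  # network issue (DNS failure, connection refused, ...)
```

Re-raising non-404 HTTP errors is a deliberate choice: a 401 or 500 usually indicates a configuration or server problem that should be surfaced rather than silently swallowed.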

By understanding these concepts and workflows, administrators can effectively manage LLM providers and their models, enhancing the capabilities of their applications with powerful language models.