django-anyllm 0.1.2 metadata and description

| Field | Value |
|---|---|
| author_email | Diogo Rosa <[email protected]> |
| classifiers | |
| description_content_type | text/markdown |
| requires_dist | |
| requires_python | >=3.11 |

Files: `django_anyllm-0.1.2-py3-none-any.whl`, `django_anyllm-0.1.2.tar.gz`
# django-anyllm

A reusable Django package that exposes your model layer as LLM-callable tools via `any-llm-sdk`. Install it in any Django project and let an LLM introspect your models, query instances, and manage resources safely.
## Features

- **Model Introspection** — List registered models, describe their fields (including `help_text` and `verbose_name`), and generate JSON schemas.
- **CRUD Tools** — Query, create, update, and delete instances through a permission-aware registry.
- **any-llm-sdk Native** — Tools are plain Python callables that `any-llm-sdk` converts to OpenAI tool format automatically.
- **Safe by Default** — Only explicitly registered models are accessible, and per-model permissions control read/write/delete access.
- **High-Level Agent** — `ModelAgent` handles prompts, streaming, reasoning models, and the full tool-call conversation loop.
- **Chat UI** — Ready-to-use HTMX-based chat interface with multiple-chat support, file uploads, and SSE streaming.
- **Persistent Chats** — Abstract `Chat`/`Message`/`MessageFile` models for storing conversation history.
## Quick Start

### 1. Install

```shell
pip install django-anyllm
```
### 2. Add to INSTALLED_APPS

```python
INSTALLED_APPS = [
    # ...
    "django_anyllm",
]
```
### 3. Run Migrations

```shell
python manage.py migrate django_anyllm
```

This creates the default `Chat`, `Message`, and `MessageFile` tables.
### 4. Register Your Models

In your app's `apps.py` (or anywhere after Django is ready):
```python
from django.apps import AppConfig

from django_anyllm import default_registry

from .models import Order, Product


class ShopConfig(AppConfig):
    name = "shop"

    def ready(self):
        default_registry.register(Product)
        default_registry.register(Order, permissions={"read", "create"})
```
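To make the `permissions` argument concrete, here is an illustrative sketch of the gating logic a permission-aware registry might apply. The operation names and the default of full access are assumptions for illustration, not django-anyllm internals:

```python
# Hypothetical sketch of permission gating; the real check lives
# inside django_anyllm's registry.
FULL_PERMISSIONS = {"read", "create", "update", "delete"}


def is_allowed(operation, permissions=None):
    """Return True if `operation` is permitted for a registered model.

    A model registered without explicit permissions gets full access;
    an explicit set restricts it to exactly those operations.
    """
    allowed = FULL_PERMISSIONS if permissions is None else set(permissions)
    return operation in allowed
```

Under this reading, `Order` above would accept `query_instances` and `create_instance` calls but reject updates and deletes.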
### 5. Wire Up URLs (for the Chat UI)

```python
# urls.py
from django.urls import include, path

urlpatterns = [
    # ...
    path("chat/", include("django_anyllm.urls")),
]
```
Visit `/chat/` to start using the built-in chat interface.

## Using the Agent

The `ModelAgent` automatically builds a system prompt from your registry, manages the conversation history, and handles tool execution:
```python
from django_anyllm import ModelAgent

agent = ModelAgent(
    model="gpt-4",
    provider="openai",
    api_key="sk-...",
)

# Non-streaming
answer = agent.chat("List all products under $20")
print(answer)

# Streaming
for event in agent.chat("Create a new order for product #1", stream=True):
    if event.type == "content":
        print(event.text, end="")
    elif event.type == "reasoning":
        print(f"\n[thinking: {event.text}]\n")
    elif event.type == "tool_call":
        print(f"\n[calling tool: {event.tool_name}]\n")
    elif event.type == "tool_result":
        print(f"\n[tool result: {event.tool_result}]\n")

# Reasoning models
answer = agent.chat(
    "Analyze which products are out of stock and suggest reorders",
    reasoning_effort="high",
)
```
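When consuming the stream yourself, it can be handy to fold the events back into the final answer text. A minimal sketch, using a hypothetical `StreamEvent` stand-in that mirrors the `.type`/`.text` attributes shown above (the real event class comes from the library):

```python
from dataclasses import dataclass


# Stand-in for the events yielded by agent.chat(..., stream=True);
# the real objects expose at least .type and .text per the example above.
@dataclass
class StreamEvent:
    type: str
    text: str = ""
    tool_name: str = ""
    tool_result: str = ""


def collect_answer(events):
    """Concatenate only the 'content' events into the final answer text."""
    return "".join(e.text for e in events if e.type == "content")
```

Reasoning and tool events pass through untouched, so the collected string matches what a non-streaming `chat()` call would return.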
### Async Support

```python
answer = await agent.achat("List all authors")

async for event in await agent.achat("List all authors", stream=True):
    print(event.text or "", end="")
```
## Persistent Chats

If you want the agent to save every message to the database, use `PersistentModelAgent`:

```python
from django_anyllm import PersistentModelAgent
from django_anyllm.models import Chat

chat = Chat.objects.create(model="gpt-4", provider="openai")
agent = PersistentModelAgent(chat=chat)

# Messages are automatically saved to the DB
answer = agent.chat("List all authors")

# Stream and persist in real time
for event in agent.chat("Create a new author", stream=True):
    print(event.text or "", end="")
```
## Custom Chat / Message Models

If the default models don't fit your needs, subclass the abstract bases and add your own fields:

```python
# myapp/models.py
from django.db import models

from django_anyllm.models import AbstractChat, AbstractMessage, AbstractMessageFile


class Chat(AbstractChat):
    user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    project = models.ForeignKey("myapp.Project", on_delete=models.CASCADE)


class Message(AbstractMessage):
    chat = models.ForeignKey(Chat, on_delete=models.CASCADE, related_name="messages")
    tokens_used = models.PositiveIntegerField(null=True, blank=True)


class MessageFile(AbstractMessageFile):
    message = models.ForeignKey(Message, on_delete=models.CASCADE, related_name="files")
```
Then override the package's views to point them at your custom models, or use the low-level agent API and build your own UI.
## Low-Level Tools

If you prefer full control, use the tools directly with `any-llm-sdk`:

```python
from any_llm import completion

from django_anyllm.tools import get_tools

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "List all authors and their books."},
]

response = completion(
    model="gpt-4",
    provider="openai",
    messages=messages,
    tools=get_tools(),
)
```
The LLM can call:

- `list_registered_models` — discover what's available
- `describe_model` — understand fields, types, and `help_text`
- `get_model_schema` — get a JSON schema for structured responses
- `query_instances` — fetch data with filters and ordering
- `get_instance` — retrieve a single record
- `create_instance` — insert new data
- `update_instance` — modify existing data
- `delete_instance` — remove data
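If you drive the conversation loop yourself instead of using `ModelAgent`, each tool call returned by the model must be routed to the matching callable. A minimal dispatch sketch, assuming the OpenAI-style tool-call payload shape that `any-llm-sdk` mirrors (the exact payload fields here are an assumption):

```python
import json


def dispatch_tool_call(tool_call, tools_by_name):
    """Look up a tool by name and invoke it with its decoded JSON arguments.

    `tool_call` is assumed to follow the OpenAI shape:
    {"function": {"name": ..., "arguments": "<json string>"}}.
    """
    fn = tools_by_name[tool_call["function"]["name"]]
    kwargs = json.loads(tool_call["function"]["arguments"])
    return fn(**kwargs)
```

The result would then be appended to `messages` as a `"tool"`-role message before calling `completion` again, which is exactly the loop `ModelAgent` automates for you.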
## Custom Registry

If you prefer isolation over the global default:

```python
from django_anyllm import ModelAgent
from django_anyllm.registry import ModelRegistry
from django_anyllm.tools import get_tools

registry = ModelRegistry()
registry.register(MyModel, permissions={"read"})

tools = get_tools(registry)
agent = ModelAgent(model="gpt-4", provider="openai", registry=registry)
```
## Settings

```python
# settings.py
DJANGO_ANYLLM = {
    "MAX_QUERY_LIMIT": 100,       # hard cap on query_instances limit
    "DEFAULT_MODEL": "gpt-4",     # default LLM model for new chats
    "DEFAULT_PROVIDER": "openai", # default LLM provider for new chats
}
```
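Every key is optional. A minimal sketch of how settings like these are typically resolved against defaults (the `get_setting` helper is hypothetical, not part of the public API; the names and defaults come from the table above):

```python
# Documented defaults, applied when a key is absent from DJANGO_ANYLLM.
DEFAULTS = {
    "MAX_QUERY_LIMIT": 100,
    "DEFAULT_MODEL": "gpt-4",
    "DEFAULT_PROVIDER": "openai",
}


def get_setting(name, user_settings=None):
    """Return the user's override if present, else the documented default."""
    user_settings = user_settings or {}
    return user_settings.get(name, DEFAULTS[name])
```

In a real Django project, `user_settings` would be `getattr(settings, "DJANGO_ANYLLM", {})`.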
## Development

```shell
uv sync --extra dev
uv run pytest
```
## License
MIT