root/private/: django-anyllm-0.1.0 metadata and description


author_email Diogo Rosa <[email protected]>
classifiers
  • Framework :: Django
  • Programming Language :: Python :: 3
description_content_type text/markdown
requires_dist
  • any-llm-sdk[all]>=1.14.0
  • django>=4.2
  • markdown>=3.0
requires_python >=3.11

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

File                                    Size    Type
django_anyllm-0.1.0-py3-none-any.whl    39 KB   Python Wheel (Python 3)
django_anyllm-0.1.0.tar.gz              268 KB  Source

django-anyllm

A reusable Django package that exposes your model layer as LLM-callable tools via any-llm-sdk. Install it in any Django project and let an LLM introspect your models, query instances, and manage resources safely.

Features

Quick Start

1. Install

pip install django-anyllm

2. Add to INSTALLED_APPS

INSTALLED_APPS = [
    # ...
    "django_anyllm",
]

3. Run Migrations

python manage.py migrate django_anyllm

This creates the default Chat, Message, and MessageFile tables.

4. Register Your Models

In your app's apps.py (or anywhere after Django is ready):

from django.apps import AppConfig
from django_anyllm import default_registry

from .models import Product, Order


class ShopConfig(AppConfig):
    name = "shop"

    def ready(self):
        default_registry.register(Product)
        default_registry.register(Order, permissions={"read", "create"})
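The permission sets passed to register() can be illustrated with a small, self-contained sketch. This is not the actual django_anyllm implementation, just an assumption about how a permission-gated registry behaves: models registered without a permission set get full access, and each tool action is checked against the stored set.

```python
# Illustrative sketch of a permission-checked registry (an assumption,
# not django_anyllm's actual code).
class SketchRegistry:
    ALL = {"read", "create", "update", "delete"}

    def __init__(self):
        self._models = {}

    def register(self, model, permissions=None):
        # No permission set means full access.
        self._models[model] = set(permissions) if permissions else set(self.ALL)

    def allows(self, model, action):
        return action in self._models.get(model, set())


class Product: ...
class Order: ...


registry = SketchRegistry()
registry.register(Product)
registry.register(Order, permissions={"read", "create"})

print(registry.allows(Order, "read"))    # True
print(registry.allows(Order, "delete"))  # False
```

Under this model, the Order registration above would let the LLM list and create orders but never update or delete them.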

5. Wire Up URLs (for the Chat UI)

# urls.py
from django.urls import include, path

urlpatterns = [
    # ...
    path("chat/", include("django_anyllm.urls")),
]

Visit /chat/ to start using the built-in chat interface.


Using the Agent

The ModelAgent automatically builds a system prompt from your registry, manages the conversation history, and handles tool execution:

from django_anyllm import ModelAgent

agent = ModelAgent(
    model="gpt-4",
    provider="openai",
    api_key="sk-...",
)

# Non-streaming
answer = agent.chat("List all products under $20")
print(answer)

# Streaming
for event in agent.chat("Create a new order for product #1", stream=True):
    if event.type == "content":
        print(event.text, end="")
    elif event.type == "reasoning":
        print(f"\n[thinking: {event.text}]\n")
    elif event.type == "tool_call":
        print(f"\n[calling tool: {event.tool_name}]\n")
    elif event.type == "tool_result":
        print(f"\n[tool result: {event.tool_result}]\n")

# Reasoning models
answer = agent.chat(
    "Analyze which products are out of stock and suggest reorders",
    reasoning_effort="high",
)

Async Support

answer = await agent.achat("List all authors")

async for event in await agent.achat("List all authors", stream=True):
    print(event.text or "", end="")

Persistent Chats

If you want the agent to save every message to the database, use PersistentModelAgent:

from django_anyllm import PersistentModelAgent
from django_anyllm.models import Chat

chat = Chat.objects.create(model="gpt-4", provider="openai")
agent = PersistentModelAgent(chat=chat)

# Messages are automatically saved to the DB
answer = agent.chat("List all authors")

# Stream and persist in real time
for event in agent.chat("Create a new author", stream=True):
    print(event.text or "", end="")

Custom Chat / Message Models

If the default models don't fit your needs, subclass the abstract bases and add your own fields:

# myapp/models.py
from django.db import models
from django_anyllm.models import AbstractChat, AbstractMessage, AbstractMessageFile


class Chat(AbstractChat):
    user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    project = models.ForeignKey("myapp.Project", on_delete=models.CASCADE)


class Message(AbstractMessage):
    chat = models.ForeignKey(Chat, on_delete=models.CASCADE, related_name="messages")
    tokens_used = models.PositiveIntegerField(null=True, blank=True)


class MessageFile(AbstractMessageFile):
    message = models.ForeignKey(Message, on_delete=models.CASCADE, related_name="files")

Then point the built-in views at your custom models by subclassing and overriding them in your own project, or use the low-level agent API and build your own UI.


Low-Level Tools

If you prefer full control, use the tools directly with any-llm-sdk:

from any_llm import completion
from django_anyllm.tools import get_tools

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "List all authors and their books."},
]

response = completion(
    model="gpt-4",
    provider="openai",
    messages=messages,
    tools=get_tools(),
)

The LLM can call the query and mutation tools generated from your registered models (for example, query_instances, whose result size is capped by the MAX_QUERY_LIMIT setting).
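When the model responds with a tool call, you dispatch it yourself in this low-level mode. The sketch below shows the general pattern with an OpenAI-style tool call; the handler name query_instances is taken from the settings section, but its signature here is an assumption for illustration, not the package's actual tool API.

```python
# Illustrative only: dispatching an OpenAI-style tool call to a local
# handler. The handler signature is an assumption, not django_anyllm's
# actual tool API.
import json


def query_instances(model, filters=None, limit=10):
    # Stand-in for the registry-backed query tool.
    return {"model": model, "filters": filters or {}, "limit": limit}


HANDLERS = {"query_instances": query_instances}

# A tool call as it might appear in a completion response:
# arguments arrive as a JSON-encoded string.
tool_call = {
    "name": "query_instances",
    "arguments": json.dumps({"model": "Author", "limit": 5}),
}

result = HANDLERS[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)  # {'model': 'Author', 'filters': {}, 'limit': 5}
```

You would then append the result as a tool message and call completion() again to let the model finish its answer.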

Custom Registry

If you prefer isolation over the global default:

from django_anyllm.registry import ModelRegistry
from django_anyllm.tools import get_tools

registry = ModelRegistry()
registry.register(MyModel, permissions={"read"})

tools = get_tools(registry)
agent = ModelAgent(model="gpt-4", provider="openai", registry=registry)

Settings

# settings.py
DJANGO_ANYLLM = {
    "MAX_QUERY_LIMIT": 100,           # hard cap on query_instances limit
    "DEFAULT_MODEL": "gpt-4",         # default LLM model for new chats
    "DEFAULT_PROVIDER": "openai",     # default LLM provider for new chats
}
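MAX_QUERY_LIMIT is described as a hard cap, which plausibly means any larger limit requested by the LLM is clamped rather than rejected. The one-liner below is an assumption about that behavior, not the package's actual code.

```python
# Illustrative: how a hard cap like MAX_QUERY_LIMIT might be enforced
# (an assumption for the sketch, not django_anyllm's actual code).
def effective_limit(requested, max_query_limit=100):
    # Requests above the cap are silently clamped to the maximum.
    return min(requested, max_query_limit)


print(effective_limit(500))  # 100 under the default cap
print(effective_limit(25))   # 25 is within the cap
```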

Development

uv sync --extra dev
uv run pytest

License

MIT