Metadata-Version: 2.4
Name: django-anyllm
Version: 0.1.0
Summary: Short description
Author-email: Diogo Rosa <diogopt.rosa@gmail.com>
License-Expression: MIT
Classifier: Framework :: Django
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.11
Requires-Dist: any-llm-sdk[all]>=1.14.0
Requires-Dist: django>=4.2
Requires-Dist: markdown>=3.0
Description-Content-Type: text/markdown

# django-anyllm

A reusable Django package that exposes your model layer as **LLM-callable tools** via [`any-llm-sdk`](https://github.com/any-llm/any-llm-sdk). Install it in any Django project and let an LLM introspect your models, query instances, and manage resources safely.

## Features

- **Model Introspection** — List registered models, describe their fields (including `help_text` and `verbose_name`), and generate JSON schemas.
- **CRUD Tools** — Query, create, update, and delete instances through a permission-aware registry.
- **any-llm-sdk Native** — Tools are plain Python callables that `any-llm-sdk` converts to OpenAI tool format automatically.
- **Safe by Default** — Only explicitly registered models are accessible, and per-model permissions control read/write/delete access.
- **High-Level Agent** — `ModelAgent` handles prompts, streaming, reasoning models, and the full tool-call conversation loop.
- **Chat UI** — Ready-to-use HTMX-based chat interface with multiple chat support, file uploads, and SSE streaming.
- **Persistent Chats** — Abstract `Chat` / `Message` / `MessageFile` models for storing conversation history.

## Quick Start

### 1. Install

```bash
pip install django-anyllm
```

### 2. Add to `INSTALLED_APPS`

```python
INSTALLED_APPS = [
    # ...
    "django_anyllm",
]
```

### 3. Run Migrations

```bash
python manage.py migrate django_anyllm
```

This creates the default `Chat`, `Message`, and `MessageFile` tables.

### 4. Register Your Models

In your app's `apps.py` (or anywhere else that runs after the app registry is ready):

```python
from django.apps import AppConfig
from django_anyllm import default_registry

from .models import Product, Order


class ShopConfig(AppConfig):
    name = "shop"

    def ready(self):
        default_registry.register(Product)
        default_registry.register(Order, permissions={"read", "create"})
```
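
With the registration above, `Order` would accept read and create calls but reject updates and deletes, while `Product` (registered with no explicit `permissions`) presumably gets full access. Conceptually, the gate inside the registry might look like the following sketch. The names (`ALL_PERMISSIONS`, `check_permission`, `ToolPermissionError`) and the exact permission strings beyond `read`/`create` are illustrative assumptions, not the package's internals:

```python
# Illustrative sketch of a per-model permission gate (not django-anyllm internals).
ALL_PERMISSIONS = {"read", "create", "update", "delete"}


class ToolPermissionError(Exception):
    """Raised when a tool call is not allowed for a model."""


def check_permission(registrations: dict, model_name: str, action: str) -> None:
    """Raise unless `action` is permitted for `model_name`."""
    perms = registrations.get(model_name)
    if perms is None:
        raise ToolPermissionError(f"{model_name} is not registered")
    if action not in perms:
        raise ToolPermissionError(f"{action!r} is not permitted on {model_name}")


registrations = {"Product": ALL_PERMISSIONS, "Order": {"read", "create"}}
check_permission(registrations, "Order", "read")      # allowed, returns None
# check_permission(registrations, "Order", "delete")  # would raise ToolPermissionError
```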

### 5. Wire Up URLs (for the Chat UI)

```python
# urls.py
from django.urls import include, path

urlpatterns = [
    # ...
    path("chat/", include("django_anyllm.urls")),
]
```

Visit `/chat/` to start using the built-in chat interface.

---

## Using the Agent

The `ModelAgent` automatically builds a system prompt from your registry, manages the conversation history, and handles tool execution:

```python
from django_anyllm import ModelAgent

agent = ModelAgent(
    model="gpt-4",
    provider="openai",
    api_key="sk-...",
)

# Non-streaming
answer = agent.chat("List all products under $20")
print(answer)

# Streaming
for event in agent.chat("Create a new order for product #1", stream=True):
    if event.type == "content":
        print(event.text, end="")
    elif event.type == "reasoning":
        print(f"\n[thinking: {event.text}]\n")
    elif event.type == "tool_call":
        print(f"\n[calling tool: {event.tool_name}]\n")
    elif event.type == "tool_result":
        print(f"\n[tool result: {event.tool_result}]\n")

# Reasoning models
answer = agent.chat(
    "Analyze which products are out of stock and suggest reorders",
    reasoning_effort="high",
)
```

### Async Support

```python
answer = await agent.achat("List all authors")

async for event in await agent.achat("List all authors", stream=True):
    print(event.text or "", end="")
```

---

## Persistent Chats

If you want the agent to save every message to the database, use `PersistentModelAgent`:

```python
from django_anyllm import PersistentModelAgent
from django_anyllm.models import Chat

chat = Chat.objects.create(model="gpt-4", provider="openai")
agent = PersistentModelAgent(chat=chat)

# Messages are automatically saved to the DB
answer = agent.chat("List all authors")

# Stream and persist in real time
for event in agent.chat("Create a new author", stream=True):
    print(event.text or "", end="")
```

---

## Custom Chat / Message Models

If the default models don't fit your needs, subclass the abstract bases and add your own fields:

```python
# myapp/models.py
from django.db import models
from django_anyllm.models import AbstractChat, AbstractMessage, AbstractMessageFile


class Chat(AbstractChat):
    user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    project = models.ForeignKey("myapp.Project", on_delete=models.CASCADE)


class Message(AbstractMessage):
    chat = models.ForeignKey(Chat, on_delete=models.CASCADE, related_name="messages")
    tokens_used = models.PositiveIntegerField(null=True, blank=True)


class MessageFile(AbstractMessageFile):
    message = models.ForeignKey(Message, on_delete=models.CASCADE, related_name="files")
```

Then either subclass or replace the package's views so they operate on your custom models, or skip the bundled UI entirely and build your own on top of the low-level agent API.

---

## Low-Level Tools

If you prefer full control, use the tools directly with `any-llm-sdk`:

```python
from any_llm import completion
from django_anyllm.tools import get_tools

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "List all authors and their books."},
]

response = completion(
    model="gpt-4",
    provider="openai",
    messages=messages,
    tools=get_tools(),
)
```

The LLM can call:

- `list_registered_models` — discover what's available
- `describe_model` — understand fields, types, and `help_text`
- `get_model_schema` — get a JSON schema for structured responses
- `query_instances` — fetch data with filters and ordering
- `get_instance` — retrieve a single record
- `create_instance` — insert new data
- `update_instance` — modify existing data
- `delete_instance` — remove data
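
At this level you are responsible for the tool-call loop that `ModelAgent` would otherwise run for you: executing each call the model requests and feeding the result back. Assuming the response follows the OpenAI tool-calling shape (which `any-llm-sdk` converts tools into), one dispatch step might look like this sketch (`execute_tool_calls` and `tools_by_name` are illustrative names, not part of the package):

```python
import json


def execute_tool_calls(message, tools_by_name, messages):
    """Run each requested tool and append its result as a 'tool' message (sketch)."""
    messages.append(message)  # keep the assistant turn that requested the calls
    for call in message.tool_calls or []:
        tool = tools_by_name[call.function.name]
        args = json.loads(call.function.arguments or "{}")
        result = tool(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
    return messages
```

After appending the tool results, call `completion` again with the extended `messages` list, and repeat until the model answers without requesting further tools.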

## Custom Registry

If you prefer an isolated registry over the global default:

```python
from django_anyllm import ModelAgent
from django_anyllm.registry import ModelRegistry
from django_anyllm.tools import get_tools

from .models import MyModel

registry = ModelRegistry()
registry.register(MyModel, permissions={"read"})

tools = get_tools(registry)
agent = ModelAgent(model="gpt-4", provider="openai", registry=registry)
```

## Settings

```python
# settings.py
DJANGO_ANYLLM = {
    "MAX_QUERY_LIMIT": 100,           # hard cap on query_instances limit
    "DEFAULT_MODEL": "gpt-4",         # default LLM model for new chats
    "DEFAULT_PROVIDER": "openai",     # default LLM provider for new chats
}
```
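
All keys are optional; a key you omit presumably falls back to a built-in default. The lookup likely amounts to something like this sketch, where `DEFAULTS` mirrors the values in the snippet above and the helper name is hypothetical:

```python
# Hypothetical sketch of how DJANGO_ANYLLM settings might be resolved.
DEFAULTS = {
    "MAX_QUERY_LIMIT": 100,
    "DEFAULT_MODEL": "gpt-4",
    "DEFAULT_PROVIDER": "openai",
}


def get_setting(name, user_settings=None):
    """Return the project's override if present, else the package default."""
    user_settings = user_settings or {}
    return user_settings.get(name, DEFAULTS[name])


get_setting("MAX_QUERY_LIMIT", {"MAX_QUERY_LIMIT": 50})  # -> 50
get_setting("DEFAULT_MODEL")                             # -> "gpt-4"
```

In a real project `user_settings` would come from `getattr(settings, "DJANGO_ANYLLM", {})` via `django.conf`.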

## Development

```bash
uv sync --extra dev
uv run pytest
```

## License

MIT
