feat: generic OpenAI-compatible LLM plugin + PPQ refactor #96
## Summary

- Adds an `openai_compat` plugin — a generic LLM provider for any OpenAI-compatible endpoint.
- Refactors PPQ to a thin 35-line wrapper that inherits from `OpenAICompatPlugin`.

## Motivation
For budget reasons, we want to support a Claude Max subscription via claude-max-api-proxy ($200/mo flat) as an alternative to per-query PPQ. Both speak the OpenAI wire format, so the right abstraction is "OpenAI-compatible endpoint".
## Config examples
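The original config blocks did not survive extraction from the page. A plausible sketch, assuming a TOML config file and the section names implied by `CONFIG_SECTION` in the Architecture notes — every key and value below is an assumption, not the actual cobot config schema:

```toml
# Per-query PPQ (existing behavior; section name assumed)
[ppq]
api_key = "sk-placeholder"   # or supply via the API_KEY_ENV variable

# Generic OpenAI-compatible endpoint, e.g. a local claude-max-api-proxy
[openai_compat]
api_base = "http://localhost:8080/v1"   # assumed key name
model = "some-model-id"                 # placeholder model id
# no api_key here: the Authorization header is omitted (no-auth mode)
```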
## Architecture

`OpenAICompatPlugin` is the base class, with configurable class attributes: `DEFAULT_API_BASE`, `DEFAULT_MODEL`, `CONFIG_SECTION`, and `API_KEY_ENV`. Creating a new service wrapper is ~10 lines (see the PPQ plugin).
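The subclassing pattern can be sketched as follows. This is a stand-in, not the real cobot code: the four attribute names come from the PR, while the defaults, config handling, and PPQ values are placeholders.

```python
# Hypothetical sketch of the "override four class attributes" pattern.
# OpenAICompatPlugin here is a minimal stand-in for the real base class
# in cobot/plugins/openai_compat/; all defaults are placeholders.
import os


class OpenAICompatPlugin:
    DEFAULT_API_BASE = "http://localhost:8080/v1"  # placeholder
    DEFAULT_MODEL = "default-model"                # placeholder
    CONFIG_SECTION = "openai_compat"
    API_KEY_ENV = "OPENAI_API_KEY"

    def __init__(self, config=None):
        # Read this plugin's section of the config, falling back to the
        # class-level defaults and the API_KEY_ENV environment variable.
        section = (config or {}).get(self.CONFIG_SECTION, {})
        self._api_base = section.get("api_base", self.DEFAULT_API_BASE)
        self._model = section.get("model", self.DEFAULT_MODEL)
        self._api_key = section.get("api_key") or os.environ.get(self.API_KEY_ENV)


class PPQPlugin(OpenAICompatPlugin):
    """An entire service wrapper: just override the four class attributes."""
    DEFAULT_API_BASE = "https://example.invalid/v1"  # placeholder, not PPQ's real URL
    DEFAULT_MODEL = "ppq-default"                    # placeholder
    CONFIG_SECTION = "ppq"
    API_KEY_ENV = "PPQ_API_KEY"
```

A subclass only declares data; all request logic stays in the base class, which is what lets the PPQ plugin shrink to ~10 lines.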
## Changes

- `cobot/plugins/openai_compat/` — generic provider + tests + README
- `cobot/plugins/ppq/plugin.py` — now inherits from `OpenAICompatPlugin` (113 lines removed, 10 remain)

Ref #91
## 🔍 Code Review — PR #96: OpenAI-Compatible LLM Plugin
### ✅ What looks good

- **Clean inheritance model** — The 4 class attributes (`DEFAULT_API_BASE`, `DEFAULT_MODEL`, `CONFIG_SECTION`, `API_KEY_ENV`) make subclassing trivial. PPQ went from ~120 lines to ~10. This is the textbook Swappable Provider pattern from our plugin design guide.
- **Backward compatibility** — Existing PPQ configs work unchanged. `InsufficientFundsError` is re-exported via `__all__`. No breaking changes.
- **No-auth mode** — Gracefully omitting the `Authorization` header when no API key is set is essential for local proxies like claude-max-api-proxy. Good design choice.
- **Test coverage** — 29 tests covering config, chat, errors, tool calls, and the PPQ wrapper. All passing.
- **Configurable timeout** — Nice addition over the original PPQ plugin.
### ⚠️ Suggestions (non-blocking)

- **`httpx.ConnectError` handling** — The base plugin catches `ConnectError`, which the original PPQ plugin didn't. This is an improvement, but the error message "Is the endpoint running?" is generic. Consider including the `api_base` URL in the error for easier debugging. ✅ Already done — I see `self._api_base` in the error string. Good.
- **Plugin discovery** — Both `openai_compat` and `ppq` have `create_plugin()` and both declare `capabilities=["llm"]`. When both are present in the plugins directory, how does the registry pick which one to load? The `provider` config key presumably selects one, but it might be worth a note in the README about not loading both simultaneously (or does the registry already handle it via `provider` selection?).
- **`max_tokens` default** — The `LLMProvider.chat()` interface defaults to `max_tokens=2048`, and the plugin already has a `_timeout` config. Consider making `max_tokens` configurable too (as a config default that can be overridden per-call), since different models/subscriptions have different sweet spots.
- **Rate limit handling (429)** — For the Claude Max proxy and other endpoints, 429 responses are common. Currently these fall through to the generic `HTTPStatusError` handler. A future improvement could add retry-with-backoff for 429s, but that's out of scope for this PR.
- **PPQ test patch path** — The updated PPQ tests patch `cobot.plugins.openai_compat.plugin.httpx.post` (the parent class's module). This is correct but couples PPQ tests to the openai_compat module path: if openai_compat ever moves, PPQ tests break. Minor concern — acceptable for now.

### 📊 Verdict
**APPROVE** — Clean refactor, good test coverage, no breaking changes. The architecture enables easy addition of new OpenAI-compatible providers with minimal code. Ship it. 🚀
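For the record, the retry-with-backoff idea flagged in the suggestions could look roughly like this. It is a hypothetical helper, not part of this PR; the `post` and `sleep` callables are injected so the logic is testable without a live endpoint (pass `httpx.post` and `time.sleep` in real use).

```python
import time


def post_with_backoff(post, url: str, payload: dict,
                      retries: int = 3, sleep=time.sleep):
    """Retry a POST on HTTP 429 with exponential backoff.

    Hypothetical sketch: `post` is any callable with httpx.post's shape
    (e.g. httpx.post itself). Honors a numeric Retry-After header when
    the server sends one, otherwise doubles the delay each attempt.
    """
    delay = 1.0
    resp = post(url, json=payload)
    for _ in range(retries):
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        sleep(float(retry_after) if retry_after else delay)
        delay *= 2
        resp = post(url, json=payload)
    return resp
```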
Reviewed by Doxios 🦊