# testai

**Repository Path**: alamhubb/testai

## Basic Information

- **Project Name**: testai
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-03-19
- **Last Updated**: 2026-03-19

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# testai

Protocol probe and OpenCode config examples for two custom OpenAI-style endpoints:

- `https://timicc.com`
- `https://aixj.vip`

Tested on 2026-03-19.

## Files

- `src/probe-openai-compat.mjs`: probes `/v1/models`, `/v1/chat/completions`, and `/v1/responses`
- `src/opencode.providers.example.jsonc`: ready-to-copy OpenCode provider examples
- `package.json`: npm scripts for direct execution
- `url列表.json`: controls which endpoint entries appear in `timicc.json` and `aixj.json`
- `result/`: generated probe outputs for each target plus an overall explanation file

## Run

```bash
npm run probe:gpt54
```

Other commands:

- `npm run probe`: same script, defaulting to `gpt-5.4`
- `npm run probe:auto`: uses the first model returned by `/v1/models`
- `node src/probe-openai-compat.mjs --model gpt-5.4`: explicit model override

The script also writes:

- `result/timicc.json`
- `result/aixj.json`
- `result/output.json`

## Compatibility Proxy

`src/openai-compatible-proxy.mjs` starts a small reverse proxy that:

- keeps non-problematic routes as pass-through
- keeps ordinary `POST /v1/chat/completions` on the original upstream path
- repairs the `tool_calls` shape when the upstream chat response is slightly malformed
- keeps tool follow-up turns on `POST /v1/chat/completions` by default
- can optionally bridge tool follow-up turns to `/v1/responses` for upstreams that really support that protocol
- only applies compatibility fixes on `POST /v1/chat/completions`

Default local address:

- `http://127.0.0.1:8787`
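The `tool_calls` repair is the proxy's core fix. The real logic lives in `src/openai-compatible-proxy.mjs`; the sketch below is illustrative only, assuming two common malformations that are hypothetical here, not taken from the repo: `function.arguments` arriving as an object instead of a JSON string, and missing `id`/`type` fields.

```javascript
// Illustrative sketch of the kind of tool_calls repair a compatibility
// proxy might apply. The assumed failure modes are hypothetical:
// `arguments` sent as an object, and `id`/`type` fields omitted.
function normalizeToolCalls(message) {
  if (!Array.isArray(message.tool_calls)) return message; // nothing to fix
  const tool_calls = message.tool_calls.map((call, i) => {
    const fn = call.function ?? {};
    return {
      id: call.id ?? `call_${i}`,        // fill a missing id
      type: call.type ?? "function",     // OpenAI clients expect "function"
      function: {
        name: fn.name ?? "",
        // Clients expect a JSON *string*; re-serialize if the upstream
        // sent a plain object.
        arguments: typeof fn.arguments === "string"
          ? fn.arguments
          : JSON.stringify(fn.arguments ?? {}),
      },
    };
  });
  return { ...message, tool_calls };
}
```

Messages without `tool_calls` pass through untouched, matching the proxy's stated pass-through behavior for non-problematic traffic.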
### npm Commands

Development mode:

```bash
npm run dev
```

Production mode:

```bash
npm run start
```

Build-time validation:

```bash
npm run build
```

Compatibility verification:

```bash
npm run verify:proxy
```

Behavior by mode:

- `npm run dev`: enables console logging, `log/proxy.log`, and `log/requests/*.json`
- `npm run start`: production mode; no request or runtime file logs by default
- `npm run build`: syntax validation only; it does not start the proxy

Useful flags:

- `--port 9999`
- `--upstream https://aixj.vip`
- `--api-key sk-...`
- `--compat-path /v1/chat/completions`
- `--bridge-tool-turns true|false`
- `--mode development`
- `--mode production`
- `--console-log true|false`
- `--runtime-log true|false`
- `--request-log true|false`
- `--verbose`

If you do not pass `--api-key`, the proxy forwards the caller's `Authorization` header upstream. The intended production setup is:

- callers send their own `Authorization: Bearer ...`
- the proxy forwards that header to `aixj.vip`
- only `POST /v1/chat/completions` is normalized
- tool follow-up turns stay on chat completions unless you explicitly enable `--bridge-tool-turns true`
- all other upstream paths stay as pass-through

Each proxied response also includes:

- `x-proxy-request-id`

### Linux Background Scripts

Production start on port `8787`:

```bash
bash scripts/linux-proxy-start.sh
```

Development start on port `8787`:

```bash
bash scripts/linux-proxy-start.sh dev
```

Stop:

```bash
bash scripts/linux-proxy-stop.sh
```

Status:

```bash
bash scripts/linux-proxy-status.sh
```

Notes:

- these scripts use `nohup`, so the process keeps running after the terminal window closes
- the default port is `8787`
- set `PORT=9999` before the command if you want a different port
- set `UPSTREAM_BASE_URL=https://...` before the command if you want another upstream
- development mode also writes an extra stdout file such as `log/linux-proxy-dev-8787.out`
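The `--api-key` passthrough behavior described in the npm Commands section reduces to a small selection rule. This is a sketch; the function name and shape are assumptions for illustration, not the repo's actual code:

```javascript
// Decide which Authorization header to send upstream (illustrative only).
// configuredKey: the value of --api-key / UPSTREAM_API_KEY, or undefined.
// callerAuth: the incoming request's Authorization header, or undefined.
function upstreamAuthHeader(configuredKey, callerAuth) {
  if (configuredKey) return `Bearer ${configuredKey}`; // proxy-owned key wins
  return callerAuth; // otherwise forward the caller's header as-is
}
```

With no configured key, clients must send a key the upstream accepts; with a configured key, the caller's header is ignored upstream and you can layer your own auth in front.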
### Nginx + SSL Deployment

Recommended production chain:

- `Cline -> https://your-domain -> nginx -> http://127.0.0.1:8787 -> this proxy -> aixj`

Why this works:

- nginx handles your public domain and SSL certificate
- the Node proxy keeps the compatibility fixes
- Cline only sees your own HTTPS domain, so the protocol issue is solved at your gateway layer

Files included:

- `deploy/nginx/testai-proxy.conf.example`
- `deploy/systemd/openai-compatible-proxy.service.example`
- `deploy/systemd/openai-compatible-proxy.env.example`

Typical Linux deployment steps:

1. Put this project on the server, for example under `/opt/testai`
2. Install dependencies with `npm install`
3. Start the proxy in production mode with `bash scripts/linux-proxy-start.sh`
4. Or install the systemd service example and run it with `systemctl`
5. Copy the nginx example, replace the domain and certificate paths, then reload nginx

If you prefer `systemd`, the rough steps are:

```bash
sudo cp deploy/systemd/openai-compatible-proxy.service.example /etc/systemd/system/openai-compatible-proxy.service
sudo cp deploy/systemd/openai-compatible-proxy.env.example /etc/default/openai-compatible-proxy
sudo systemctl daemon-reload
sudo systemctl enable --now openai-compatible-proxy
sudo systemctl status openai-compatible-proxy
```

Switch the service mode later:

```bash
bash scripts/linux-proxy-service-mode.sh dev
bash scripts/linux-proxy-service-mode.sh prod
```

Mode behavior for the `systemd` service:

- `dev`: enables console logs plus `log/proxy.log` and `log/requests/*.json`
- `prod`: keeps those logs off by default

If you prefer nginx, the rough steps are:

```bash
sudo cp deploy/nginx/testai-proxy.conf.example /etc/nginx/conf.d/testai-proxy.conf
sudo nginx -t
sudo systemctl reload nginx
```
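For orientation, here is a minimal sketch of the kind of server block the shipped nginx example presumably contains. The shipped `deploy/nginx/testai-proxy.conf.example` is the source of truth; the domain, certificate paths, and directives below are placeholders and assumptions:

```nginx
# Sketch only — use deploy/nginx/testai-proxy.conf.example as the
# source of truth. Domain and certificate paths are placeholders.
server {
    listen 443 ssl;
    server_name your-domain;

    ssl_certificate     /etc/letsencrypt/live/your-domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8787;
        proxy_http_version 1.1;
        # Forward the caller's Authorization header so the Node proxy
        # can pass it upstream when no --api-key is configured.
        proxy_set_header Authorization $http_authorization;
        proxy_set_header Host $host;
        # Streaming chat completions need unbuffered responses.
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
```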
After deployment, test:

```bash
curl https://your-domain/healthz
curl https://your-domain/v1/models -H "Authorization: Bearer sk-..."
```

For Cline usage, point the base URL at your own domain:

- base URL: `https://your-domain`
- path used by the client: `/v1/chat/completions`
- API key: the same key you want the proxy to forward upstream

Important notes:

- if you do not set `--api-key` or `UPSTREAM_API_KEY`, the proxy forwards the caller's `Authorization` header upstream
- so in the simplest setup, Cline can use the upstream provider key directly against your domain
- if you later want to hide the upstream key from clients, set `UPSTREAM_API_KEY` on the server and pair nginx with your own auth layer
- leave `--bridge-tool-turns` set to `false` for `aixj.vip`, because its `/v1/responses` implementation is not reliable enough for tool follow-up turns

## Current Findings

- `timicc.com` supports `/v1/chat/completions` and `/v1/responses`
- `timicc.com` also supports streaming on both endpoints
- `timicc.com` lists and successfully runs `gpt-5.4`
- `aixj.vip` supports `/v1/chat/completions`
- `aixj.vip` also supports streaming chat completions
- `aixj.vip` lists and successfully runs `gpt-5.4` on chat completions
- `aixj.vip` returns `502 Bad Gateway` for `/v1/responses`
- `aixj.vip` also fails on streaming `/v1/responses`

## OpenCode Mapping

- Use `@ai-sdk/openai` when the provider really supports `/v1/responses`
- Use `@ai-sdk/openai-compatible` when the provider only supports `/v1/chat/completions`

For these two endpoints, that means:

- `timicc.com`: either package works
- `aixj.vip`: should use `@ai-sdk/openai-compatible`

## Reference

OpenCode provider docs:

- https://opencode.ai/docs/providers/
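The OpenCode mapping above can be expressed as a tiny selection rule. This is a sketch: the `chatCompletionsOk`/`responsesOk` flags summarizing a probe run are assumed inputs for illustration, not fields the probe script actually emits:

```javascript
// Pick an AI SDK package for an endpoint from summarized probe results.
// Both boolean flags are assumed inputs, not the probe script's output shape.
function chooseSdkPackage({ chatCompletionsOk, responsesOk }) {
  if (responsesOk) return "@ai-sdk/openai";                  // full /v1/responses support
  if (chatCompletionsOk) return "@ai-sdk/openai-compatible"; // chat completions only
  return null; // endpoint usable with neither package
}
```

Applied to the findings above: `timicc.com` (both endpoints work) can take either package, while `aixj.vip` (502 on `/v1/responses`) falls through to `@ai-sdk/openai-compatible`.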