# easy-proxy-wrap

**Repository Path**: alamhubb/easy-proxy-wrap

Compatibility wrapper for the current `easy-proxies` deployment.

This repository does not vendor the `easy-proxies` source tree. It keeps only:

- the local `easy-proxies-subsync` sidecar
- the custom `easy-proxies` entrypoint wrapper
- the `docker compose` wiring for `easy-proxies`

`easy-proxies` is pulled from upstream at runtime:

- `ghcr.io/jasonwong1991/easy_proxies:latest`

## First Start

```bash
./start.sh
```

If `easy-proxies` needs to be reachable from another Docker stack on the same host, run with `EXTERNAL_DOCKER_NETWORK=<network> ./start.sh`. The script connects the `easy-proxies` container to that external network after `docker compose up -d --build`.

The first run copies the required `*.example` files into local runtime files and exits. Edit the generated files, then rerun `./start.sh`.

## Files To Edit

- `config.json`
- `easy-proxies.yaml`
- `easy-proxies-subscriptions.txt`

This repository is intended to stay private. The live `easy-proxies-subscriptions.txt` file is tracked in Git so private deployments can update and roll back subscription endpoints with normal commits. Do not reuse this layout in a public repository unless you remove the live subscription file and rotate any exposed tokens.
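The first-run seeding behavior described above can be sketched as follows. This is an illustrative simplification, not the actual `./start.sh` implementation; the function name is made up for the example.

```python
# Sketch of the described first-run behavior: any missing runtime file is
# seeded from its *.example template so the operator can edit it before
# rerunning ./start.sh. Not the actual start.sh logic.
import shutil
from pathlib import Path

RUNTIME_FILES = [
    "config.json",
    "easy-proxies.yaml",
    "easy-proxies-subscriptions.txt",
]

def seed_runtime_files(root: Path) -> list[str]:
    """Copy <name>.example to <name> for each missing runtime file.

    Returns the list of files that were seeded; an empty list means the
    deployment already has all of its runtime files.
    """
    seeded = []
    for name in RUNTIME_FILES:
        target = root / name
        example = root / f"{name}.example"
        if not target.exists() and example.exists():
            shutil.copyfile(example, target)
            seeded.append(name)
    return seeded
```

On a first run the script seeds these files and exits; a second run with all runtime files present proceeds to bring the stack up.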
`easy-proxies.yaml` controls the upstream runtime pool behavior itself, including:

- `pool.mode`
- `pool.failure_threshold`
- `pool.blacklist_duration`
- `management.probe_target`

Optional legacy files:

- `priority-proxies.txt` when `config.json` sets `enable_priority_proxies` to `true`

  This is layer 1 / self-owned direct exits. If `config.json` also defines `priority_tunnels`, `./start.sh` renders `priority-proxies.generated.txt` automatically, and you normally do not edit `priority-proxies.txt` by hand.

- `easy-proxies-subscriptions.txt`

  This is layer 2 / subscription fallback. In the current lucen.cc deployment this points at the BBY subscription URL.

- `preferred-proxies.txt` when `config.json` sets `enable_preferred_proxies` to `true`

  This is layer 3 / DataImpulse-style static fallback proxies.

- `primary-proxy-list-urls.txt` when `config.json` sets `enable_primary_proxy_lists` to `true`

  This is layer 4 / NovProxy-style rotating proxy list URLs.

## Config Flags

`config.json` controls the optional layers:

- `enable_priority_proxies`
- `enable_direct_proxies`
- `enable_preferred_proxies`
- `enable_primary_proxy_lists`
- `priority_max_nodes`
- `priority_selection_policy`
- `priority_tunnels`
- `filter_include_regex`
- `filter_max_nodes`
- `probe_wait_seconds`
- `filter_region_priority`
- `filter_exclude_regex`

The priority, preferred, and primary layers default to `false`. Local direct fallback defaults to `true`. With the default config, `easy-proxies-subsync` keeps the layer 5 local direct fallback enabled and evaluates the other layers above it in order.

When all layers are enabled, the effective order is:

1. `priority-proxies.txt` or `priority-proxies.generated.txt`
2. `easy-proxies-subscriptions.txt`
3. `preferred-proxies.txt`
4. `primary-proxy-list-urls.txt`
5. local direct sidecar `http://local-direct-exit:2324#local-direct-exit`

## Ordered Direct-Exit Failover

`priority_tunnels` turns the old single direct-exit tunnel into an ordered list of local WireGuard-backed HTTP sidecars. `./start.sh` renders:

- `docker-compose.priority-tunnels.generated.yml`
- `priority-proxies.generated.txt`

The generated priority proxy list keeps the same order as `priority_tunnels`.

`easy-proxies-subsync` probes the layers every 300 seconds and writes only the highest healthy layer into `easy-proxies-nodes.txt`. Layer selection behaves as follows:

1. keep all healthy direct exits from layer 1 in config order, for example `pool5`, then `pool6`
2. `easy-proxies` uses that ordered list sequentially, so if `pool5` fails at runtime it can fall through to `pool6` immediately
3. if layer 1 has no usable node during a refresh cycle, switch the whole active layer to layer 2 subscription nodes
4. if layer 2 has no usable node, continue to layer 3 DataImpulse, then layer 4 NovProxy, then layer 5 local direct
5. once an earlier layer recovers, the next refresh cycle switches back to it

Minimal example:

```json
{
  "enable_priority_proxies": true,
  "enable_direct_proxies": true,
  "priority_selection_policy": "config-order",
  "priority_max_nodes": 0,
  "priority_tunnels": [
    {
      "name": "pool5",
      "endpoint_private_ip": "10.4.0.4",
      "client_address": "10.250.240.2/32",
      "client_private_key_file": ".secrets/pool4_wg_client_privatekey"
    },
    {
      "name": "pool6",
      "endpoint_private_ip": "10.4.0.5",
      "client_address": "10.250.241.2/32"
    }
  ]
}
```

Each `priority_tunnels` item supports these useful fields:

- `name`

  Defaults the endpoint to `<name>.lucen.cc:51820`, the service name to `priority-tunnel-<name>`, the proxy tag to `<name>-direct-exit`, and the key paths under `.secrets/`.

- `endpoint`

  Optional override for when the WireGuard server is not `<name>.lucen.cc:51820`.

- `endpoint_private_ip`

  Optional VPC IP to inject into the container via `extra_hosts`, so the tunnel stays on the internal network.

- `client_address`

  Required WireGuard client address for that exit. Use a unique subnet per exit.

- `client_private_key_file`

  Optional override. Defaults to `.secrets/<name>_wg_client_privatekey`.

- `server_public_key_file`

  Optional override. Defaults to `.secrets/<name>_wg_server_publickey`.

To clone `pool5` into `pool6`, `pool7`, `pool8`, or `pool9`, the local flow is:

1. clone the VM and give it a unique internal IP and WireGuard subnet
2. provision the new server public key and matching client private key under `.secrets/`
3. append one new object to `priority_tunnels`
4. rerun `./start.sh`

No additional `docker-compose.yml` edits are required for each new pool.

### One-Command Pool Bootstrap

If the new VM is a direct clone of `pool5` and still accepts the inherited SSH key, you can run the whole attach flow from `lucen.cc` with one command:

```bash
python3 ./scripts/add_pool.py \
  --name pool6 \
  --host 10.4.0.5 \
  --wg-subnet 10.250.241.0/24
```

The script will:

1. generate a fresh WireGuard client/server keypair for the new pool
2. SSH to the cloned host and replace the old `/root/*-wireguard-exit` layout
3. enable `net.ipv4.ip_forward=1` on the cloned host
4. start the new remote `pool6-wireguard-egress` container
5. append `pool6` to `config.json`
6. rerun `./start.sh`
7. run a real OpenAI exit probe against `priority-tunnel-pool6`

Manual prerequisite:

- the cloned pool still needs the cloud-side security group / NACL rule that allows `51820/udp` from `lucen.cc`

Useful flags:

- `--endpoint-private-ip` when the SSH host is not the same IP the local WireGuard client should dial
- `--ssh-key` if the clone does not use `.secrets/pool5_tunnel_id_ed25519`
- `--dry-run` to print the derived addresses and file paths without changing anything

### Inventory-Driven Auto Sync

If you want the whole pool fleet to be managed from one simple inventory file, use `pool-hosts.txt` with this format:

```text
root 2012aihO
10.4.0.4
10.4.0.10
```

Rules:

- the first non-comment line is `username password`
- each following non-comment line is one cloned pool host private IP
- by default the IP list is sorted numerically before naming, so the smallest IP becomes `pool1`, the next becomes `pool2`, and so on
- `pool1` gets WireGuard subnet `10.250.240.0/24`, `pool2` gets `10.250.241.0/24`, and so on

One-time setup:

```bash
./scripts/install_pool_inventory_sync.sh
```

That installer will:

1. install `sshpass` if missing
2. create `pool-hosts.txt` from `pool-hosts.txt.example` if needed
3. install a root cron entry that runs every 30 minutes

The recurring sync job is:

```bash
python3 ./scripts/sync_pool_inventory.py --inventory ./pool-hosts.txt
```

If you already cloned a new internal VM manually and only want to append its private IP and then run the full convergence immediately, use:

```bash
python3 ./scripts/enroll_pool_host.py 10.4.0.20
```

That wrapper will:

1. append the IP to `pool-hosts.txt` when it is not already present
2. auto-install local `sshpass` when missing
3. run `sync_pool_inventory.py` immediately
4. install or refresh the 30-minute cron entry automatically
5. let the normal inventory naming logic assign `pool1`, `pool2`, `pool3`, ... by sorted IP

Manual prerequisites:

- the VM clone already exists
- the new host accepts the inventory username/password
- the cloud firewall / security-group / NACL already allows `51820/udp` from `lucen.cc`

What the sync job does:

1. reads the inventory file
2. maps hosts to `pool1`, `pool2`, `pool3`, ... automatically
3. generates missing local WireGuard keys for any new pool
4. connects to each host with the inventory username/password
5. bootstraps or repairs the remote `*-wireguard-egress` container only when the remote host is not already converged
6. rewrites the `priority_tunnels` section of `config.json`
7. reruns `./start.sh` only when the local pool config changed
8. probes each pool locally after the sync
9. triggers one immediate `easy-proxies-subsync` refresh so the active layer updates right away instead of waiting for the next 5-minute cycle

The layer 1 direct-exit probe uses three recent OpenAI OAuth accounts from the local `sub2api` deployment and sends a real request to `https://chatgpt.com/backend-api/codex/responses` through the candidate exit. This lets the selector distinguish "the proxy can really reach OpenAI" from simple TCP reachability.

The subscription layer keeps at most `filter_max_nodes` entries, filters obvious non-node labels via `filter_exclude_regex`, probes candidates against the current `management.probe_target`, then ranks them by success rate first and latency second. `filter_region_priority` is optional; leave it blank to disable region bias entirely.

## Default Local Ports

- `127.0.0.1:2323` for the `easy-proxies` listener
- `127.0.0.1:9091` for the `easy-proxies` management API

## Runtime Artifacts

These files are generated locally and ignored by Git:

- `easy-proxies-nodes.txt`
- `docker-compose.priority-tunnels.generated.yml`
- `priority-proxies.generated.txt`

## Update Flow

Run `./start.sh` again on the target server.
The script pulls the latest upstream `easy-proxies` image before starting the default stack.
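The layer-failover selection described earlier (probe the layers, keep only the highest healthy one, preserve config order within a layer) can be sketched as follows. This is an illustrative simplification, not the actual `easy-proxies-subsync` code; apart from the local-direct sidecar URL, the node strings here are made up.

```python
# Simplified sketch of the five-layer failover described in this README:
# return every healthy node from the highest-priority layer that has any,
# in config order, so the runtime can fall through node-by-node within
# the active layer. Layer contents below are illustrative placeholders.
from typing import Callable

LAYERS = [
    ("priority", ["http://priority-tunnel-pool5:2324#pool5-direct-exit",
                  "http://priority-tunnel-pool6:2324#pool6-direct-exit"]),
    ("subscription", ["ss://example-node-1", "ss://example-node-2"]),
    ("preferred", ["http://preferred-exit-1:8080"]),
    ("primary-lists", ["http://rotating-exit-1:8080"]),
    ("local-direct", ["http://local-direct-exit:2324#local-direct-exit"]),
]

def select_active_layer(is_healthy: Callable[[str], bool]) -> list[str]:
    """Pick the active layer for one refresh cycle.

    Scans layers top-down; the first layer with at least one healthy node
    wins outright, and its healthy nodes are kept in config order. When an
    earlier layer recovers, the next cycle naturally switches back to it.
    """
    for _name, nodes in LAYERS:
        healthy = [node for node in nodes if is_healthy(node)]
        if healthy:
            return healthy
    return []

# Example cycle: every layer 1 exit fails its probe, so the whole active
# layer switches to the layer 2 subscription nodes.
active = select_active_layer(lambda n: not n.startswith("http://priority"))
print(active)  # → ['ss://example-node-1', 'ss://example-node-2']
```

In the real deployment the health check is the OpenAI exit probe for layer 1 and the `management.probe_target` probe for subscription nodes; the winning list is what gets written into `easy-proxies-nodes.txt`.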