# tsr-test

**Repository Path**: tsrmy/tsr-test

## Basic Information

- **Project Name**: tsr-test
- **Description**: No description available
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-03-13
- **Last Updated**: 2026-03-13

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# AI Assistant Project

A general-purpose AI assistant built with LangChain and LangGraph.

## Features

- Multi-model support (OpenAI GPT, Anthropic Claude, Ollama local models)
- LangGraph workflow management
- Vector database integration (Milvus)
- HTTP client for external API calls
- REST API server with FastAPI
- Streaming response support
- Modular agent architecture
- Easy configuration via environment variables

## Installation

1. Install dependencies:

   ```bash
   poetry install
   ```

2. Create a `.env` file based on `.env.example`:

   ```bash
   cp .env.example .env
   ```

3. Add your API keys to `.env`:

   ```
   OPENAI_API_KEY=your_openai_api_key_here
   ANTHROPIC_API_KEY=your_anthropic_api_key_here

   # Ollama Configuration
   OLLAMA_BASE_URL=http://localhost:11434
   OLLAMA_MODEL=llama3.2

   # Milvus Configuration
   MILVUS_HOST=localhost
   MILVUS_PORT=19530
   MILVUS_COLLECTION_NAME=default
   ```

4. Start Ollama (if using local models):

   ```bash
   # Install Ollama from https://ollama.ai
   # Pull a model
   ollama pull llama3.2

   # Start Ollama server
   ollama serve
   ```

5.
   Start Milvus (if using vector database):

   ```bash
   # Install Milvus from https://milvus.io
   # Start Milvus server
   docker run -d --name milvus-standalone -p 19530:19530 -p 9091:9091 milvusdb/milvus:latest
   ```

## Usage

### Basic Usage with OpenAI/Anthropic

```python
from tsr_test.agent import create_openai_agent, create_simple_graph

# Create an agent
agent = create_openai_agent(api_key="your_api_key")

# Create a workflow
graph = create_simple_graph()
```

### Using Ollama (Local Models)

```python
from tsr_test.services import OllamaAgent

# Create Ollama agent
agent = OllamaAgent(base_url="http://localhost:11434", model="llama3.2")

# Chat with the model
response = agent.chat("Hello, how are you?")
print(response)

# Stream chat
for chunk in agent.stream_chat("Tell me a story"):
    print(chunk, end="", flush=True)
```

### Using Milvus Vector Database

```python
from tsr_test.services import MilvusVectorStore

# Create vector store
store = MilvusVectorStore(host="localhost", port=19530, collection_name="documents")

# Insert vectors
data = [
    {"vector": [0.1, 0.2, 0.3, ...], "text": "Document 1", "metadata": {"source": "web"}},
    {"vector": [0.4, 0.5, 0.6, ...], "text": "Document 2", "metadata": {"source": "pdf"}},
]
store.insert(data)

# Search vectors
query_vector = [0.1, 0.2, 0.3, ...]
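# Note: the "..." above is a placeholder; a real query vector must have
# exactly the dimensionality the collection was created with. For a
# 3-dimensional toy collection, the query would simply be:
query_vector = [0.1, 0.2, 0.3]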
results = store.search(query_vector, limit=5)
for result in results:
    print(f"Score: {result['distance']}, Text: {result['entity']['text']}")
```

### Using HTTP Client

```python
from tsr_test.services import HTTPClient

# Create HTTP client
client = HTTPClient(timeout=30.0)

# Synchronous GET request
response = client.get("https://api.example.com/data", params={"page": 1})
print(response)

# Synchronous POST request
response = client.post("https://api.example.com/create", json_data={"name": "test"})
print(response)

# Asynchronous requests (requires async context)
import asyncio

async def async_example():
    response = await client.get_async("https://api.example.com/data")
    print(response)

asyncio.run(async_example())
```

### Complete AI Service

```python
from tsr_test.services import AIService

# Create AI service with all components
service = AIService(
    ollama_base_url="http://localhost:11434",
    milvus_host="localhost",
    milvus_port=19530
)

# Chat with Ollama
response = service.chat_with_ollama("Hello!")
print(response)

# Store vectors in Milvus
vectors = [{"vector": [0.1, 0.2, 0.3, ...], "text": "Sample text"}]
service.store_in_milvus(vectors)

# Search Milvus
results = service.search_milvus([0.1, 0.2, 0.3, ...], limit=5)

# Call external API
import asyncio

async def call_api():
    result = await service.call_external_api("https://api.example.com/data", method="GET")
    print(result)

asyncio.run(call_api())
```

## API Server

### Starting the Server

**Windows:**

```bash
start_server.bat
```

**Linux/Mac:**

```bash
chmod +x start_server.sh
./start_server.sh
```

**Or manually:**

```bash
poetry run uvicorn tsr_test.api:app --host 0.0.0.0 --port 8000 --reload
```

The API server will start at `http://localhost:8000`.

### API Documentation

Once the server is running, visit:

- **Swagger UI**: http://localhost:8000/docs
- **ReDoc**: http://localhost:8000/redoc

### API Endpoints

#### Health Check

```bash
GET /health
```

#### Chat with AI

```bash
POST /api/v1/chat
Content-Type: application/json
{
  "message": "Hello, how are you?",
  "model": "llama3.2",
  "temperature": 0.7,
  "stream": false
}
```

#### Streaming Chat

```bash
POST /api/v1/chat/stream
Content-Type: application/json

{
  "message": "Tell me a story",
  "model": "llama3.2",
  "temperature": 0.7,
  "stream": true
}
```

#### Insert Vectors

```bash
POST /api/v1/vectors/insert
Content-Type: application/json

{
  "vectors": [
    {"vector": [0.1, 0.2, 0.3], "text": "Document 1"},
    {"vector": [0.4, 0.5, 0.6], "text": "Document 2"}
  ],
  "collection_name": "default"
}
```

#### Search Vectors

```bash
POST /api/v1/vectors/search
Content-Type: application/json

{
  "query_vector": [0.1, 0.2, 0.3],
  "limit": 10,
  "collection_name": "default"
}
```

#### HTTP Proxy

```bash
POST /api/v1/http/proxy
Content-Type: application/json

{
  "url": "https://api.example.com/data",
  "method": "GET",
  "params": {"page": 1}
}
```

#### List Available Models

```bash
GET /api/v1/models
```

### Example API Usage with curl

```bash
# Chat with AI
curl -X POST "http://localhost:8000/api/v1/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "model": "llama3.2"}'

# Insert vectors
curl -X POST "http://localhost:8000/api/v1/vectors/insert" \
  -H "Content-Type: application/json" \
  -d '{"vectors": [{"vector": [0.1, 0.2, 0.3], "text": "Test"}]}'

# Search vectors
curl -X POST "http://localhost:8000/api/v1/vectors/search" \
  -H "Content-Type: application/json" \
  -d '{"query_vector": [0.1, 0.2, 0.3], "limit": 5}'

# HTTP proxy
curl -X POST "http://localhost:8000/api/v1/http/proxy" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://httpbin.org/get", "method": "GET"}'
```

### Example API Usage with Python

```python
import requests

# Chat with AI
response = requests.post(
    "http://localhost:8000/api/v1/chat",
    json={"message": "Hello!", "model": "llama3.2"}
)
print(response.json())

# Insert vectors
response = requests.post(
    "http://localhost:8000/api/v1/vectors/insert",
    json={
        "vectors": [{"vector": [0.1, 0.2, 0.3], "text": "Test"}]
    }
)
print(response.json())

# Search vectors
response = requests.post(
    "http://localhost:8000/api/v1/vectors/search",
    json={"query_vector": [0.1, 0.2, 0.3], "limit": 5}
)
print(response.json())

# HTTP proxy
response = requests.post(
    "http://localhost:8000/api/v1/http/proxy",
    json={"url": "https://httpbin.org/get", "method": "GET"}
)
print(response.json())
```

## Running Tests

```bash
poetry run pytest
```

## Project Structure

```
.
├── src/
│   └── tsr_test/
│       ├── __init__.py
│       ├── agent.py
│       ├── services.py
│       └── api.py
├── tests/
│   ├── __init__.py
│   ├── test_agent.py
│   ├── test_services.py
│   └── test_api.py
├── .env.example
├── .gitignore
├── start_server.bat
├── start_server.sh
├── pyproject.toml
└── README.md
```

## Dependencies

- langchain>=0.3.0
- langchain-openai>=0.2.0
- langchain-anthropic>=0.2.0
- langchain-ollama>=0.2.0
- langgraph>=0.2.0
- openai>=1.0.0
- anthropic>=0.40.0
- python-dotenv>=1.0.0
- pymilvus>=2.4.0
- httpx>=0.27.0
- requests>=2.32.0
- fastapi>=0.115.0
- uvicorn[standard]>=0.32.0
- pydantic>=2.0.0
- python-multipart>=0.0.9

## License

MIT
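## Appendix: What Vector Search Does, Conceptually

For readers new to vector databases, the `store.search(...)` call in the Usage section ranks stored records by distance to a query vector. The following is a dependency-free sketch of that idea under simplifying assumptions: it is a brute-force nearest-neighbor loop with hypothetical helper names, mimicking only the result shape of the Milvus example, while Milvus itself uses approximate indexes and scales far beyond this.

```python
def l2_distance(a, b):
    """Euclidean (L2) distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def brute_force_search(data, query_vector, limit=5):
    """Rank records by distance to query_vector, returning dicts shaped
    like the search results in the Milvus example above."""
    scored = [
        {"distance": l2_distance(rec["vector"], query_vector), "entity": rec}
        for rec in data
    ]
    scored.sort(key=lambda r: r["distance"])
    return scored[:limit]

# Toy 3-dimensional records, matching the shape used in the Usage section
data = [
    {"vector": [0.1, 0.2, 0.3], "text": "Document 1"},
    {"vector": [0.4, 0.5, 0.6], "text": "Document 2"},
]
results = brute_force_search(data, [0.1, 0.2, 0.3], limit=1)
print(results[0]["entity"]["text"])  # prints "Document 1" (the exact match)
```

In a real deployment, the toy vectors would be embeddings produced by an embedding model, and ranking would be delegated to Milvus rather than computed in Python.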