feat: add anthropic channel
README.md
@@ -12,7 +12,7 @@

 ## Features

-- **Transparent Proxy**: Complete preservation of native API formats, supporting OpenAI and Google Gemini among other formats (continuously expanding)
+- **Transparent Proxy**: Complete preservation of native API formats, supporting OpenAI, Google Gemini, and Anthropic Claude among other formats (continuously expanding)
 - **Intelligent Key Management**: High-performance key pool with group-based management, automatic rotation, and failure recovery
 - **Load Balancing**: Weighted load balancing across multiple upstream endpoints to enhance service availability
 - **Smart Failure Handling**: Automatic key blacklist management and recovery mechanisms to ensure service continuity
@@ -29,6 +29,7 @@ GPT-Load serves as a transparent proxy service, completely preserving the native APIs of all AI service providers

 - **OpenAI Format**: Official OpenAI API, Azure OpenAI, and other OpenAI-compatible services
 - **Google Gemini Format**: Native APIs for Gemini Pro, Gemini Pro Vision, and other models
+- **Anthropic Claude Format**: Claude series models, supporting high-quality conversations and text generation
 - **Extensibility**: Plugin-based architecture design for rapid integration of new AI service providers and their native formats

 ## Quick Start
@@ -326,7 +327,36 @@ curl -X POST http://localhost:3001/proxy/gemini/v1beta/models/gemini-2.5-pro:gen
 - Replace `https://generativelanguage.googleapis.com` with `http://localhost:3001/proxy/gemini`
 - Replace `key=your-gemini-key` in the URL parameter with the unified authentication key `sk-123456` (default value)

-#### 5. Supported Interfaces
+#### 5. Anthropic Interface Example
+
+Assuming a group named `anthropic` was created:
+
+**Original invocation:**
+
+```bash
+curl -X POST https://api.anthropic.com/v1/messages \
+  -H "x-api-key: sk-ant-api03-your-anthropic-key" \
+  -H "anthropic-version: 2023-06-01" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "Hello"}]}'
+```
+
+**Proxy invocation:**
+
+```bash
+curl -X POST http://localhost:3001/proxy/anthropic/v1/messages \
+  -H "x-api-key: sk-123456" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "Hello"}]}'
+```
+
+**Changes required:**
+
+- Replace `https://api.anthropic.com` with `http://localhost:3001/proxy/anthropic`
+- Replace the original API key in the `x-api-key` header with the unified authentication key `sk-123456` (default value)
+- No need to set the `anthropic-version` header manually; the proxy adds it automatically
+
+#### 6. Supported Interfaces

 **OpenAI Format:**

@@ -342,7 +372,13 @@ curl -X POST http://localhost:3001/proxy/gemini/v1beta/models/gemini-2.5-pro:gen
 - `/v1beta/models` - Model list
 - And all other Gemini native interfaces

-#### 6. Client SDK Configuration
+**Anthropic Format:**
+
+- `/v1/messages` - Message conversations
+- `/v1/models` - Model list (if available)
+- And all other Anthropic native interfaces
+
+#### 7. Client SDK Configuration

 **OpenAI Python SDK:**

@@ -375,6 +411,22 @@ model = genai.GenerativeModel('gemini-2.5-pro')
 response = model.generate_content("Hello")
 ```

+**Anthropic SDK (Python):**
+
+```python
+from anthropic import Anthropic
+
+client = Anthropic(
+    api_key="sk-123456",  # use the unified authentication key
+    base_url="http://localhost:3001/proxy/anthropic"  # use the proxy endpoint
+)
+
+response = client.messages.create(
+    model="claude-sonnet-4-20250514",
+    messages=[{"role": "user", "content": "Hello"}]
+)
+```
+
 > **Important Note**: As a transparent proxy service, GPT-Load completely preserves the native API formats and authentication methods of various AI services. You only need to replace the endpoint address and use the unified key value for seamless migration.

 ## License
README_EN.md
@@ -12,7 +12,7 @@ For detailed documentation, please visit [Official Documentation](https://www.gp

 ## Features

-- **Transparent Proxy**: Complete preservation of native API formats, supporting OpenAI and Google Gemini among other formats (continuously expanding)
+- **Transparent Proxy**: Complete preservation of native API formats, supporting OpenAI, Google Gemini, and Anthropic Claude among other formats (continuously expanding)
 - **Intelligent Key Management**: High-performance key pool with group-based management, automatic rotation, and failure recovery
 - **Load Balancing**: Weighted load balancing across multiple upstream endpoints to enhance service availability
 - **Smart Failure Handling**: Automatic key blacklist management and recovery mechanisms to ensure service continuity
@@ -29,6 +29,7 @@ GPT-Load serves as a transparent proxy service, completely preserving the native

 - **OpenAI Format**: Official OpenAI API, Azure OpenAI, and other OpenAI-compatible services
 - **Google Gemini Format**: Native APIs for Gemini Pro, Gemini Pro Vision, and other models
+- **Anthropic Claude Format**: Claude series models, supporting high-quality conversations and text generation
 - **Extensibility**: Plugin-based architecture design for rapid integration of new AI service providers and their native formats

 ## Quick Start
@@ -326,7 +327,36 @@ curl -X POST http://localhost:3001/proxy/gemini/v1beta/models/gemini-2.5-pro:gen
 - Replace `https://generativelanguage.googleapis.com` with `http://localhost:3001/proxy/gemini`
 - Replace `key=your-gemini-key` in the URL parameter with the unified authentication key `sk-123456` (default value)

-#### 5. Supported Interfaces
+#### 5. Anthropic Interface Example
+
+Assuming a group named `anthropic` was created:
+
+**Original invocation:**
+
+```bash
+curl -X POST https://api.anthropic.com/v1/messages \
+  -H "x-api-key: sk-ant-api03-your-anthropic-key" \
+  -H "anthropic-version: 2023-06-01" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "Hello"}]}'
+```
+
+**Proxy invocation:**
+
+```bash
+curl -X POST http://localhost:3001/proxy/anthropic/v1/messages \
+  -H "x-api-key: sk-123456" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "Hello"}]}'
+```
+
+**Changes required:**
+
+- Replace `https://api.anthropic.com` with `http://localhost:3001/proxy/anthropic`
+- Replace the original API key in the `x-api-key` header with the unified authentication key `sk-123456` (default value)
+- No need to set the `anthropic-version` header manually; the proxy adds it automatically
+
+#### 6. Supported Interfaces

 **OpenAI Format:**

@@ -342,7 +372,13 @@ curl -X POST http://localhost:3001/proxy/gemini/v1beta/models/gemini-2.5-pro:gen
 - `/v1beta/models` - Model list
 - And all other Gemini native interfaces

-#### 6. Client SDK Configuration
+**Anthropic Format:**
+
+- `/v1/messages` - Message conversations
+- `/v1/models` - Model list (if available)
+- And all other Anthropic native interfaces
+
+#### 7. Client SDK Configuration

 **OpenAI Python SDK:**

@@ -375,6 +411,22 @@ model = genai.GenerativeModel('gemini-2.5-pro')
 response = model.generate_content("Hello")
 ```

+**Anthropic SDK (Python):**
+
+```python
+from anthropic import Anthropic
+
+client = Anthropic(
+    api_key="sk-123456",  # Use unified authentication key
+    base_url="http://localhost:3001/proxy/anthropic"  # Use proxy endpoint
+)
+
+response = client.messages.create(
+    model="claude-sonnet-4-20250514",
+    messages=[{"role": "user", "content": "Hello"}]
+)
+```
+
 > **Important Note**: As a transparent proxy service, GPT-Load completely preserves the native API formats and authentication methods of various AI services. You only need to replace the endpoint address and use the unified key value for seamless migration.

 ## License
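The migration rule described above (swap the base URL, swap the key, drop `anthropic-version`) can also be sketched in Go for clients not using the official SDKs. This is a minimal illustration, assuming a gateway at `http://localhost:3001` with the default `sk-123456` key; the `buildProxyRequest` helper name is hypothetical, not part of this commit:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildProxyRequest builds an Anthropic-format request aimed at the
// GPT-Load proxy instead of api.anthropic.com. Only the base URL and
// the x-api-key value differ from a direct Anthropic call.
func buildProxyRequest(baseURL, proxyKey, model, prompt string) (*http.Request, error) {
	payload := map[string]any{
		"model":      model,
		"max_tokens": 1024,
		"messages": []map[string]string{
			{"role": "user", "content": prompt},
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}

	req, err := http.NewRequest("POST", baseURL+"/v1/messages", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	// The unified proxy key replaces the real Anthropic key; the
	// anthropic-version header is added by the proxy, so the client
	// does not set it.
	req.Header.Set("x-api-key", proxyKey)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildProxyRequest("http://localhost:3001/proxy/anthropic", "sk-123456", "claude-sonnet-4-20250514", "Hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	fmt.Println("x-api-key:", req.Header.Get("x-api-key"))
	// Actually sending it requires a running gateway:
	// resp, err := http.DefaultClient.Do(req)
}
```

The request is constructed but not sent, so the snippet runs without a live gateway; dispatching it is one `http.DefaultClient.Do(req)` away.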
internal/channel/anthropic_channel.go
@@ -0,0 +1,133 @@
+package channel
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	app_errors "gpt-load/internal/errors"
+	"gpt-load/internal/models"
+	"io"
+	"net/http"
+	"strings"
+
+	"github.com/gin-gonic/gin"
+)
+
+func init() {
+	Register("anthropic", newAnthropicChannel)
+}
+
+type AnthropicChannel struct {
+	*BaseChannel
+}
+
+func newAnthropicChannel(f *Factory, group *models.Group) (ChannelProxy, error) {
+	base, err := f.newBaseChannel("anthropic", group)
+	if err != nil {
+		return nil, err
+	}
+
+	return &AnthropicChannel{
+		BaseChannel: base,
+	}, nil
+}
+
+// ModifyRequest sets the required headers for the Anthropic API.
+func (ch *AnthropicChannel) ModifyRequest(req *http.Request, apiKey *models.APIKey, group *models.Group) {
+	req.Header.Set("x-api-key", apiKey.KeyValue)
+	req.Header.Set("anthropic-version", "2023-06-01")
+}
+
+// IsStreamRequest checks if the request is for a streaming response using the pre-read body.
+func (ch *AnthropicChannel) IsStreamRequest(c *gin.Context, bodyBytes []byte) bool {
+	if strings.Contains(c.GetHeader("Accept"), "text/event-stream") {
+		return true
+	}
+
+	if c.Query("stream") == "true" {
+		return true
+	}
+
+	type streamPayload struct {
+		Stream bool `json:"stream"`
+	}
+	var p streamPayload
+	if err := json.Unmarshal(bodyBytes, &p); err == nil {
+		return p.Stream
+	}
+
+	return false
+}
+
+// ExtractKey extracts the API key from the x-api-key header.
+func (ch *AnthropicChannel) ExtractKey(c *gin.Context) string {
+	// Check the x-api-key header (Anthropic's standard).
+	if key := c.GetHeader("x-api-key"); key != "" {
+		return key
+	}
+
+	// Fall back to the Authorization header for compatibility.
+	authHeader := c.GetHeader("Authorization")
+	if authHeader != "" {
+		const bearerPrefix = "Bearer "
+		if strings.HasPrefix(authHeader, bearerPrefix) {
+			return authHeader[len(bearerPrefix):]
+		}
+	}
+
+	return ""
+}
+
+// ValidateKey checks if the given API key is valid by making a messages request.
+func (ch *AnthropicChannel) ValidateKey(ctx context.Context, key string) (bool, error) {
+	upstreamURL := ch.getUpstreamURL()
+	if upstreamURL == nil {
+		return false, fmt.Errorf("no upstream URL configured for channel %s", ch.Name)
+	}
+
+	reqURL := upstreamURL.String() + "/v1/messages"
+
+	// Use a minimal, low-cost payload for validation.
+	payload := gin.H{
+		"model":      ch.TestModel,
+		"max_tokens": 100,
+		"messages": []gin.H{
+			{"role": "user", "content": "hi"},
+		},
+	}
+	body, err := json.Marshal(payload)
+	if err != nil {
+		return false, fmt.Errorf("failed to marshal validation payload: %w", err)
+	}
+
+	req, err := http.NewRequestWithContext(ctx, "POST", reqURL, bytes.NewBuffer(body))
+	if err != nil {
+		return false, fmt.Errorf("failed to create validation request: %w", err)
+	}
+	req.Header.Set("x-api-key", key)
+	req.Header.Set("anthropic-version", "2023-06-01")
+	req.Header.Set("Content-Type", "application/json")
+
+	resp, err := ch.HTTPClient.Do(req)
+	if err != nil {
+		return false, fmt.Errorf("failed to send validation request: %w", err)
+	}
+	defer resp.Body.Close()
+
+	// A 200 OK status code indicates the key is valid and can make requests.
+	if resp.StatusCode == http.StatusOK {
+		return true, nil
+	}
+
+	// For non-200 responses, parse the body to provide a more specific error reason.
+	errorBody, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return false, fmt.Errorf("key is invalid (status %d), but failed to read error body: %w", resp.StatusCode, err)
+	}
+
+	// Parse the upstream error body to extract a clean error message.
+	parsedError := app_errors.ParseUpstreamError(errorBody)
+
+	return false, fmt.Errorf("[status %d] %s", resp.StatusCode, parsedError)
+}
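The body-based part of the stream detection above can be exercised in isolation. The following standalone sketch mirrors only the JSON check in `IsStreamRequest` (the `detectStream` helper is illustrative, not part of the commit; in the real handler an `Accept: text/event-stream` header or `?stream=true` query parameter also triggers streaming):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// detectStream mirrors the body check in IsStreamRequest: unmarshal
// only the "stream" field and default to non-streaming when the body
// is not valid JSON or the field is absent.
func detectStream(bodyBytes []byte) bool {
	type streamPayload struct {
		Stream bool `json:"stream"`
	}
	var p streamPayload
	if err := json.Unmarshal(bodyBytes, &p); err == nil {
		return p.Stream
	}
	return false
}

func main() {
	fmt.Println(detectStream([]byte(`{"model": "claude-sonnet-4-20250514", "stream": true}`)))  // true
	fmt.Println(detectStream([]byte(`{"model": "claude-sonnet-4-20250514"}`)))                  // false
	fmt.Println(detectStream([]byte(`not json`)))                                               // false
}
```

Failing closed (non-streaming) on malformed JSON is the safe default here: the upstream will reject the malformed body anyway, and the proxy avoids holding an SSE connection open for a request that was never a stream.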