feat: tool module UI polish + OpenRouter integration

- Improve tool list display: tool name shown first, description in a hover tooltip
- Integrate OpenRouter: one-click access to 400+ models
- Improve model selection UI: search filtering and categorized display
- Auto-sync newly added provider configurations
- Bump version to 0.1.7

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Meghan Morrow 2025-06-15 19:01:26 +08:00
parent 596e42e110
commit dad8d41e46
13 changed files with 615 additions and 20 deletions

CLAUDE.md (new file)

@@ -0,0 +1,169 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands
### Setup and Installation
```bash
npm run setup # Install dependencies and prepare OCR resources
```
### Development
```bash
npm run serve # Start all services in development mode (uses Turbo)
npm run build # Build all modules
npm run build:all # Build all modules (alias)
npm run build:electron # Build only Electron app
```
### Service Development
```bash
cd service
npm run serve # Start service with hot reload (nodemon + tsx)
npm run build # Compile TypeScript to dist/
npm run debug # Start with Node.js inspector
npm run typecheck # Type checking without emit
```
### Renderer Development
```bash
cd renderer
npm run serve # Start Vite dev server
npm run build # Build for production
npm run serve:website # Start in website mode
npm run type-check # Vue TypeScript checking
```
### VSCode Extension
```bash
npm run vscode:prepublish # Prepare for VSCode publishing (Rollup build)
npm run compile # Compile TypeScript
npm run watch # Watch mode compilation
vsce package # Create VSIX package for distribution
vsce publish # Publish to VSCode Marketplace (requires auth)
```
### Quality Assurance
```bash
npm run lint # ESLint for TypeScript files
npm run pretest # Run compile and lint
npm run test # Run tests
```
## Architecture Overview
### Multi-Module Structure
OpenMCP follows a **layered modular architecture** with three main deployment targets:
1. **VSCode Extension** (`src/extension.ts`) - IDE integration
2. **Service Layer** (`service/`) - Node.js backend handling MCP protocol
3. **Renderer Layer** (`renderer/`) - Vue.js frontend for UI
### Key Architectural Patterns
#### Message Bridge Communication
The system uses a **message bridge pattern** for cross-platform communication:
- **VSCode**: Uses `vscode.postMessage` API
- **Electron**: Uses IPC communication
- **Web**: Uses WebSocket connections
- **Node.js**: Uses EventEmitter for SDK mode
All communication flows through `MessageBridge` class in `renderer/src/api/message-bridge.ts`.
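A minimal sketch of a renderer-side call through the bridge (`useMessageBridge` and `commandRequest` appear in this repository's code; the `tools/list` command, import path, and payload shapes are illustrative assumptions):
```typescript
// Hypothetical usage sketch -- command name and import path are assumptions.
import { useMessageBridge } from '@/api/message-bridge';

async function listTools(): Promise<unknown[]> {
    const bridge = useMessageBridge();
    // The same request/response call works in VSCode (postMessage),
    // Electron (IPC), Web (WebSocket), and SDK (EventEmitter) modes.
    const { code, msg } = await bridge.commandRequest('tools/list', {});
    return code === 200 ? (msg as unknown[]) : [];
}
```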
#### MCP Client Management
- **Connection Management**: `service/src/mcp/connect.service.ts` handles multiple MCP server connections
- **Client Pooling**: `clientMap` maintains active MCP client instances with UUID-based identification
- **Transport Abstraction**: Supports STDIO, SSE, and StreamableHTTP transports
- **Auto-reconnection**: `McpServerConnectMonitor` handles connection monitoring
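A sketch of the pooling idea (only `clientMap` and `getClient` are names from the repo; the client shape and registration helper are assumptions):
```typescript
// Illustrative sketch, not the repo's exact code.
import { randomUUID } from 'node:crypto';

interface McpClient { close(): Promise<void>; /* transport, tools, ... */ }

const clientMap = new Map<string, McpClient>();

function registerClient(client: McpClient): string {
    const uuid = randomUUID();   // UUID-based identification
    clientMap.set(uuid, client);
    return uuid;                 // handed back to the frontend
}

function getClient(uuid: string): McpClient | undefined {
    return clientMap.get(uuid);  // later requests address the same connection
}
```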
#### Request/Response Flow
```
Frontend (Vue) → MessageBridge → Service Router → MCP Controllers → MCP SDK → External MCP Servers
```
### Important Service Patterns
#### Preprocessing Commands
`service/src/mcp/connect.service.ts` includes **automatic environment setup**:
- Python projects: Auto-runs `uv sync` and installs MCP CLI
- Node.js projects: Auto-runs `npm install` if node_modules missing
- Path resolution: Handles `~/` home directory expansion
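A hedged sketch of that preprocessing flow (the helper name and detection heuristics below are assumptions; the real implementation in `connect.service.ts` differs):
```typescript
// Sketch only: auto-prepare a server's working directory before launch.
import { execSync } from 'node:child_process';
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

function prepareServerCwd(cwd: string): void {
    // Expand "~/" to the user's home directory
    if (cwd.startsWith('~/')) {
        cwd = path.join(os.homedir(), cwd.slice(2));
    }
    if (fs.existsSync(path.join(cwd, 'pyproject.toml'))) {
        execSync('uv sync', { cwd });     // Python project: sync dependencies
    } else if (
        fs.existsSync(path.join(cwd, 'package.json')) &&
        !fs.existsSync(path.join(cwd, 'node_modules'))
    ) {
        execSync('npm install', { cwd }); // Node.js project: install dependencies
    }
}
```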
#### OCR Integration
Built-in OCR using Tesseract.js:
- Images from MCP responses are automatically processed
- Base64 images saved to temp files and queued for OCR
- Results delivered via worker threads
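A minimal sketch of the recognition step, assuming tesseract.js v5's `createWorker` API (the queueing and worker-thread plumbing in the real service is omitted):
```typescript
// Sketch: OCR one temp image file saved from an MCP response.
import { createWorker } from 'tesseract.js';

async function ocrTempImage(tempFilePath: string): Promise<string> {
    const worker = await createWorker('eng');         // load language pack
    const { data } = await worker.recognize(tempFilePath);
    await worker.terminate();
    return data.text;                                 // delivered back to the renderer
}
```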
### Frontend Architecture (Vue 3)
#### State Management
- **Panel System**: Tab-based interface in `renderer/src/components/main-panel/`
- **Reactive Connections**: MCP connection state managed reactively
- **Multi-language**: Vue i18n with support for 9 languages
#### Core Components
- **Chat Interface**: `main-panel/chat/` - LLM interaction with MCP tools
- **Tool Testing**: `main-panel/tool/` - Direct MCP tool invocation
- **Resource Browser**: `main-panel/resource/` - MCP resource exploration
- **Prompt Manager**: `main-panel/prompt/` - System prompt templates
### Build System
#### Turbo Monorepo
Uses Turbo for coordinated builds across modules:
- **Dependency ordering**: Renderer builds before Electron
- **Parallel execution**: Service and Renderer can build concurrently
- **Task caching**: Disabled for development iterations
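A plausible `turbo.json` shape under these constraints (a sketch assuming Turbo 1.x field names such as `pipeline` and `dependsOn`; the repo's actual file may differ):
```json
{
    "$schema": "https://turbo.build/schema.json",
    "pipeline": {
        "build": {
            "dependsOn": ["^build"],
            "cache": false
        },
        "serve": {
            "cache": false,
            "persistent": true
        }
    }
}
```
Here `"^build"` tells Turbo to build workspace dependencies first, which is what gives the "Renderer before Electron" ordering.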
#### Rollup Configuration
VSCode extension uses Rollup for optimal bundling:
- **ES modules**: Output as ESM format
- **External dependencies**: VSCode API marked as external
- **TypeScript**: Direct compilation without webpack
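A hedged sketch of what `rollup.config.js` could look like given these constraints (entry point, output dir, and plugin choice are assumptions):
```javascript
// Sketch only -- the repo's actual config may differ.
import typescript from '@rollup/plugin-typescript';

export default {
    input: 'src/extension.ts',
    output: {
        dir: 'dist',
        format: 'es'          // ES module output
    },
    external: ['vscode'],     // VSCode API is provided by the host
    plugins: [typescript()]   // direct TypeScript compilation, no webpack
};
```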
## Development Workflow
### Adding New MCP Features
1. **Service Layer**: Add a controller in `service/src/mcp/` (see the sketch after this list)
2. **Router Registration**: Add to `ModuleControllers` in `service/src/common/router.ts`
3. **Frontend Integration**: Add API calls in `renderer/src/api/`
4. **UI Components**: Create components in `renderer/src/components/`
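For example, a minimal controller following the `@Controller` pattern used by `LlmController` in this commit (the import paths and the `example/echo` command are assumptions):
```typescript
// Sketch of a new controller; the feature itself is hypothetical.
import { Controller, RequestData } from '../common/index.js'; // assumed path
import { PostMessageble } from '../hook/adapter.js';           // assumed path

export class ExampleController {
    @Controller('example/echo')
    async echo(data: RequestData, webview: PostMessageble) {
        // Controllers return { code, msg }; the router serializes it back
        // to the frontend through the message bridge.
        return {
            code: 200,
            msg: { echoed: data }
        };
    }
}
```
Registering `ExampleController` in `ModuleControllers` (step 2) is what makes the router dispatch `example/echo` requests to it.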
### Testing MCP Servers
1. **Connection**: Configure in connection panel (STDIO/SSE/HTTP)
2. **Validation**: Test tools/resources in respective panels
3. **Integration**: Verify LLM interaction in chat interface
### Packaging VSCode Extension
1. **Build Dependencies**: Run `npm run build` to build all modules
2. **Prepare Extension**: Run `npm run vscode:prepublish` to bundle extension code
3. **Create Package**: Run `vsce package` to generate `.vsix` file
4. **Install Locally**: Use `code --install-extension openmcp-x.x.x.vsix` for testing
5. **Publish**: Run `vsce publish` (requires marketplace publisher account)
### Platform-Specific Considerations
- **VSCode**: Uses webview API, limited to extension context
- **Electron**: Full desktop app capabilities, local service spawning
- **Web**: Requires external service, WebSocket limitations
- **SDK**: Embedded in other Node.js applications
## Important Files
### Configuration
- `turbo.json` - Monorepo build orchestration
- `rollup.config.js` - VSCode extension bundling
- `service/package.json` - Backend dependencies and scripts
- `renderer/package.json` - Frontend dependencies and scripts
### Core Architecture
- `src/extension.ts` - VSCode extension entry point
- `service/src/main.ts` - Service WebSocket server
- `service/src/common/router.ts` - Request routing system
- `renderer/src/api/message-bridge.ts` - Cross-platform communication
- `service/src/mcp/client.service.ts` - MCP client implementation

package-lock.json (generated)

@@ -1,12 +1,12 @@
 {
     "name": "openmcp",
-    "version": "0.1.6",
+    "version": "0.1.7",
     "lockfileVersion": 3,
     "requires": true,
     "packages": {
         "": {
             "name": "openmcp",
-            "version": "0.1.6",
+            "version": "0.1.7",
             "workspaces": [
                 "service",
                 "renderer"

package.json

@@ -2,7 +2,7 @@
     "name": "openmcp",
     "displayName": "OpenMCP",
     "description": "An all in one MCP Client/TestTool",
-    "version": "0.1.6",
+    "version": "0.1.7",
     "publisher": "kirigaya",
     "author": {
         "name": "kirigaya",


@@ -0,0 +1,3 @@
# OpenRouter Icon Placeholder
# This would normally be an actual .ico file
# For now, using a placeholder that follows the naming convention


@@ -17,7 +17,13 @@
 <div class="item" :class="{ 'active': tabStorage.currentToolName === tool.name }"
     v-for="tool of client.tools?.values()" :key="tool.name" @click="handleClick(tool)">
     <span>{{ tool.name }}</span>
-    <span>{{ tool.description || '' }}</span>
+    <el-tooltip
+        :content="tool.description || ''"
+        :disabled="!tool.description || tool.description.length <= 30"
+        placement="top"
+        :show-after="500">
+        <span class="tool-description">{{ tool.description || '' }}</span>
+    </el-tooltip>
 </div>
 </div>
 </el-scrollbar>
@@ -27,7 +33,7 @@
 </template>
 <script setup lang="ts">
-import { onMounted, defineProps, ref, type Reactive } from 'vue';
+import { onMounted, defineProps, type Reactive } from 'vue';
 import { useI18n } from 'vue-i18n';
 import type { ToolStorage } from './tools';
 import { tabs } from '../panel';
@@ -122,16 +128,18 @@ onMounted(async () => {
 }
 .tool-list-container>.item>span:first-child {
-    max-width: 200px;
+    min-width: 120px;
+    max-width: 250px;
     overflow: hidden;
     text-overflow: ellipsis;
     white-space: nowrap;
+    flex-shrink: 0;
 }
-.tool-list-container>.item>span:last-child {
+.tool-description {
     opacity: 0.6;
     font-size: 12.5px;
-    max-width: 200px;
+    max-width: 150px;
     overflow: hidden;
     text-overflow: ellipsis;
     white-space: nowrap;


@@ -237,13 +237,22 @@ async function updateModels() {
     const proxyServer = mcpSetting.proxyServer;
     const bridge = useMessageBridge();
-    const { code, msg } = await bridge.commandRequest('llm/models', {
-        apiKey,
-        baseURL,
-        proxyServer
-    });
+    // OpenRouter models are loaded through a dedicated endpoint
+    let result;
+    if (llm.isDynamic && llm.id === 'openrouter') {
+        result = await bridge.commandRequest('llm/models/openrouter', {});
+    } else {
+        result = await bridge.commandRequest('llm/models', {
+            apiKey,
+            baseURL,
+            proxyServer
+        });
+    }
+    const { code, msg } = result;
     const isGemini = baseURL.includes('googleapis');
+    const isOpenRouter = llm.id === 'openrouter';
     if (code === 200 && Array.isArray(msg)) {
         const models = msg
@@ -257,9 +266,27 @@ async function updateModels() {
         });
         llm.models = models;
+        // For OpenRouter, pre-select a recommended model (GPT-4 / Claude / Gemini first)
+        if (isOpenRouter && !llm.userModel && models.length > 0) {
+            const recommendedModel = models.find(model =>
+                model.includes('gpt-4') ||
+                model.includes('claude') ||
+                model.includes('gemini')
+            ) || models[0];
+            llm.userModel = recommendedModel;
+        }
         saveLlmSetting();
+        if (isOpenRouter) {
+            ElMessage.success(`已更新 ${models.length} 个 OpenRouter 模型`);
+        } else {
+            ElMessage.success('模型列表更新成功');
+        }
     } else {
-        ElMessage.error('模型列表更新失败' + msg);
+        ElMessage.error('模型列表更新失败: ' + msg);
     }
     updateModelLoading.value = false;
 }


@@ -4,18 +4,42 @@
 <span class="iconfont icon-llm"></span>
 <span class="option-title">{{ t('model') }}</span>
 </span>
-<div style="width: 160px;">
+<div style="width: 240px;">
     <el-select v-if="llms[llmManager.currentModelIndex]"
-        name="language-setting"
+        name="model-setting"
         v-model="llms[llmManager.currentModelIndex].userModel"
         @change="onmodelchange"
+        filterable
+        :placeholder="getPlaceholderText()"
+        :reserve-keyword="false"
+        size="default"
+        :popper-class="isOpenRouter ? 'openrouter-select-dropdown' : ''"
     >
         <el-option
             v-for="option in llms[llmManager.currentModelIndex].models"
             :value="option"
             :label="option"
             :key="option"
-        ></el-option>
+        >
+            <div v-if="isOpenRouter" class="openrouter-model-option">
+                <div class="model-info">
+                    <span class="model-name">{{ getModelDisplayName(option) }}</span>
+                    <span class="model-provider">{{ getModelProvider(option) }}</span>
+                </div>
+                <span v-if="getModelBadge(option)" class="model-badge">{{ getModelBadge(option) }}</span>
+            </div>
+            <span v-else class="regular-model-option">{{ option }}</span>
+        </el-option>
+        <!-- Shown when no models match the search -->
+        <el-option v-if="filteredModels.length === 0 && searchKeyword"
+            value=""
+            disabled
+            class="search-result-info">
+            <div class="search-empty">
+                <span>{{ `找到 0 个模型匹配 "${searchKeyword}"` }}</span>
+            </div>
+        </el-option>
     </el-select>
 </div>
 </div>
@@ -46,7 +70,7 @@
 <script setup lang="ts">
 /* eslint-disable */
-import { defineComponent } from 'vue';
+import { defineComponent, computed, ref, watch } from 'vue';
 import { useI18n } from 'vue-i18n';
 import { llmManager, llms } from './llm';
 import { pinkLog } from './util';
@@ -57,6 +81,29 @@ defineComponent({ name: 'connect-interface-openai' });
 const { t } = useI18n();
+// Current search keyword typed into the model select
+const searchKeyword = ref('');
+// Whether the active provider is OpenRouter
+const isOpenRouter = computed(() => {
+    return llms[llmManager.currentModelIndex]?.id === 'openrouter';
+});
+// Models filtered by the search keyword (matches id, display name, or provider)
+const filteredModels = computed(() => {
+    if (!llms[llmManager.currentModelIndex]) return [];
+    const models = llms[llmManager.currentModelIndex].models;
+    if (!searchKeyword.value) return models;
+    const keyword = searchKeyword.value.toLowerCase();
+    return models.filter(model =>
+        model.toLowerCase().includes(keyword) ||
+        getModelDisplayName(model).toLowerCase().includes(keyword) ||
+        getModelProvider(model).toLowerCase().includes(keyword)
+    );
+});
 function saveLlmSetting() {
     saveSetting(() => {
         ElMessage({
@@ -71,7 +118,140 @@ function onmodelchange() {
     saveLlmSetting();
 }
+// Custom filter hook for the select
+function filterModels(query: string) {
+    searchKeyword.value = query;
+    // Return false so Element Plus does not filter options itself;
+    // filteredModels handles the filtering instead
+    return false;
+}
+// Placeholder text for the model select
+function getPlaceholderText() {
+    if (isOpenRouter.value) {
+        const modelCount = llms[llmManager.currentModelIndex]?.models?.length || 0;
+        return `搜索 ${modelCount} 个模型... (可输入模型名或提供商)`;
+    }
+    return '选择模型';
+}
+// Display name: the part after the provider prefix, e.g. "openai/gpt-4o" -> "gpt-4o"
+function getModelDisplayName(modelId: string) {
+    if (!modelId.includes('/')) return modelId;
+    return modelId.split('/')[1] || modelId;
+}
+// Provider prefix, e.g. "openai/gpt-4o" -> "openai"
+function getModelProvider(modelId: string) {
+    if (!modelId.includes('/')) return '';
+    return modelId.split('/')[0];
+}
+// Badge derived from model id suffixes such as ":free"
+function getModelBadge(modelId: string) {
+    if (modelId.includes(':free')) return 'FREE';
+    if (modelId.includes(':thinking')) return 'THINKING';
+    if (modelId.includes(':beta')) return 'BETA';
+    if (modelId.includes('preview')) return 'PREVIEW';
+    if (modelId.includes('exp')) return 'EXP';
+    return '';
+}
 </script>
-<style>
+<style scoped>
+/* OpenRouter model option styles */
+.openrouter-model-option {
+    display: flex;
+    justify-content: space-between;
+    align-items: center;
+    width: 100%;
+    padding: 4px 0;
+}
+.model-info {
+    display: flex;
+    flex-direction: column;
+    flex: 1;
+    min-width: 0;
+}
+.model-name {
+    font-size: 14px;
+    font-weight: 500;
+    color: var(--el-text-color-primary, #303133);
+    white-space: nowrap;
+    overflow: hidden;
+    text-overflow: ellipsis;
+}
+.model-provider {
+    font-size: 12px;
+    color: var(--el-text-color-regular, #606266);
+    margin-top: 2px;
+}
+.model-badge {
+    font-size: 10px;
+    padding: 2px 6px;
+    border-radius: 4px;
+    background-color: #f0f9ff;
+    color: #0369a1;
+    font-weight: 600;
+    margin-left: 8px;
+    white-space: nowrap;
+}
+/* Regular (non-OpenRouter) model option */
+.regular-model-option {
+    font-size: 14px;
+    color: var(--el-text-color-primary, #303133);
+    font-weight: 500;
+}
+/* Empty search result */
+.search-empty {
+    text-align: center;
+    color: var(--el-text-color-secondary, #909399);
+    font-size: 13px;
+    padding: 12px 0;
+}
+/* Dropdown tweaks */
+:global(.openrouter-select-dropdown) {
+    max-height: 300px !important;
+}
+:global(.openrouter-select-dropdown .el-select-dropdown__item) {
+    height: auto !important;
+    padding: 8px 12px !important;
+    line-height: normal !important;
+}
+/* Search input styles */
+:global(.el-select .el-input .el-input__wrapper) {
+    transition: all 0.3s ease;
+}
+:global(.el-select .el-input.is-focus .el-input__wrapper) {
+    box-shadow: 0 0 0 1px var(--el-color-primary, #409eff) inset;
+}
+/* Text contrast tweaks for dark theme */
+@media (prefers-color-scheme: dark) {
+    .model-name {
+        color: #e5eaf3 !important;
+    }
+    .model-provider {
+        color: #a3a6ad !important;
+    }
+    .regular-model-option {
+        color: #e5eaf3 !important;
+    }
+    .search-empty {
+        color: #8b949e !important;
+    }
+}
 </style>


@@ -22,6 +22,9 @@ export interface BasicLlmDescription {
     website: string,
     userToken: string,
     userModel: string,
+    isDynamic?: boolean,
+    modelsEndpoint?: string,
+    supportsPricing?: boolean,
     [key: string]: any
 }


@@ -130,6 +130,21 @@ export const llms = [
         website: 'https://kimi.moonshot.cn',
         userToken: '',
         userModel: 'moonshot-v1-8k'
+    },
+    {
+        id: 'openrouter',
+        name: 'OpenRouter',
+        baseUrl: 'https://openrouter.ai/api/v1',
+        models: [], // loaded dynamically
+        provider: 'OpenRouter',
+        isOpenAICompatible: true,
+        description: '400+ AI models from multiple providers in one API',
+        website: 'https://openrouter.ai',
+        userToken: '',
+        userModel: '',
+        isDynamic: true,
+        modelsEndpoint: 'https://openrouter.ai/api/v1/models',
+        supportsPricing: true
     }
 ];

service/src/hook/openrouter.ts (new file)

@@ -0,0 +1,100 @@
export interface OpenRouterModel {
    id: string;
    name: string;
    description?: string;
    context_length: number;
    pricing: {
        prompt: string;
        completion: string;
    };
    architecture?: {
        input_modalities?: string[];
        output_modalities?: string[];
        tokenizer?: string;
    };
    supported_parameters?: string[];
}
export interface OpenRouterModelsResponse {
    data: OpenRouterModel[];
}
// Cache models to avoid hitting the API too often
let modelsCache: { models: OpenRouterModel[]; timestamp: number } | null = null;
const CACHE_DURATION = 5 * 60 * 1000; // 5-minute cache
export async function fetchOpenRouterModels(): Promise<OpenRouterModel[]> {
    const now = Date.now();
    // Serve from cache while it is still fresh
    if (modelsCache && (now - modelsCache.timestamp) < CACHE_DURATION) {
        return modelsCache.models;
    }
    try {
        const response = await fetch('https://openrouter.ai/api/v1/models');
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        const data: OpenRouterModelsResponse = await response.json();
        const models = data.data.map(model => ({
            id: model.id,
            name: model.name,
            description: model.description,
            context_length: model.context_length,
            pricing: model.pricing,
            architecture: model.architecture,
            supported_parameters: model.supported_parameters
        }));
        // Refresh the cache
        modelsCache = {
            models,
            timestamp: now
        };
        console.log(`Fetched ${models.length} OpenRouter models`);
        return models;
    } catch (error) {
        console.error('Failed to fetch OpenRouter models:', error);
        // Fall back to cached models if any, otherwise an empty array
        return modelsCache?.models || [];
    }
}
export async function getOpenRouterModelsByCategory(category?: string): Promise<OpenRouterModel[]> {
    try {
        const url = category
            ? `https://openrouter.ai/api/v1/models?category=${encodeURIComponent(category)}`
            : 'https://openrouter.ai/api/v1/models';
        const response = await fetch(url);
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        const data: OpenRouterModelsResponse = await response.json();
        return data.data;
    } catch (error) {
        console.error(`Failed to fetch OpenRouter models for category ${category}:`, error);
        return [];
    }
}
// Clear the module-level cache
export function clearOpenRouterCache(): void {
    modelsCache = null;
}
// Simplified model info for dropdown display
export function getSimplifiedModels(models: OpenRouterModel[]): { id: string; name: string; pricing?: string }[] {
    return models.map(model => ({
        id: model.id,
        name: model.name,
        pricing: model.pricing ? `$${model.pricing.prompt}/1K` : undefined
    }));
}


@@ -6,6 +6,7 @@ import { getClient } from "../mcp/connect.service.js";
 import { abortMessageService, streamingChatCompletion } from "./llm.service.js";
 import { OpenAI } from "openai";
 import { axiosFetch } from "src/hook/axios-fetch.js";
+import { fetchOpenRouterModels, getSimplifiedModels } from "../hook/openrouter.js";
 export class LlmController {
     @Controller('llm/chat/completions')
@@ -57,4 +58,66 @@
             msg: models.data
         }
     }
+    @Controller('llm/models/openrouter')
+    async getOpenRouterModels(data: RequestData, webview: PostMessageble) {
+        try {
+            const models = await fetchOpenRouterModels();
+            const simplifiedModels = getSimplifiedModels(models);
+            // Normalize to the standard format used by the other model APIs
+            const standardModels = simplifiedModels.map(model => ({
+                id: model.id,
+                object: 'model',
+                name: model.name,
+                pricing: model.pricing
+            }));
+            return {
+                code: 200,
+                msg: standardModels
+            };
+        } catch (error) {
+            console.error('Failed to fetch OpenRouter models:', error);
+            return {
+                code: 500,
+                msg: `Failed to fetch OpenRouter models: ${error instanceof Error ? error.message : String(error)}`
+            };
+        }
+    }
+    @Controller('llm/models/dynamic')
+    async getDynamicModels(data: RequestData, webview: PostMessageble) {
+        const { providerId } = data;
+        try {
+            if (providerId === 'openrouter') {
+                const models = await fetchOpenRouterModels();
+                const simplifiedModels = getSimplifiedModels(models);
+                const standardModels = simplifiedModels.map(model => ({
+                    id: model.id,
+                    object: 'model',
+                    name: model.name,
+                    pricing: model.pricing
+                }));
+                return {
+                    code: 200,
+                    msg: standardModels
+                };
+            } else {
+                return {
+                    code: 400,
+                    msg: `Unsupported dynamic provider: ${providerId}`
+                };
+            }
+        } catch (error) {
+            console.error(`Failed to fetch dynamic models for ${providerId}:`, error);
+            return {
+                code: 500,
+                msg: `Failed to fetch models: ${error instanceof Error ? error.message : String(error)}`
+            };
+        }
+    }
 }

llm.service.ts

@@ -35,9 +35,17 @@ export async function streamingChatCompletion(
     });
+    // Build OpenRouter-specific request headers
+    const defaultHeaders: Record<string, string> = {};
+    if (baseURL && baseURL.includes('openrouter.ai')) {
+        defaultHeaders['HTTP-Referer'] = 'https://github.com/openmcp/openmcp-client';
+        defaultHeaders['X-Title'] = 'OpenMCP Client';
+    }
     const client = new OpenAI({
         baseURL,
         apiKey,
+        defaultHeaders: Object.keys(defaultHeaders).length > 0 ? defaultHeaders : undefined
     });
     const seriableTools = (tools.length === 0) ? undefined : tools;


@@ -53,9 +53,28 @@ export function loadSetting(): IConfig {
     try {
         const configData = fs.readFileSync(configPath, 'utf-8');
         const config = JSON.parse(configData) as IConfig;
         if (!config.LLM_INFO || (Array.isArray(config.LLM_INFO) && config.LLM_INFO.length === 0)) {
             config.LLM_INFO = llms;
+        } else {
+            // Auto-sync new providers: add default providers missing from the user config
+            const existingIds = new Set(config.LLM_INFO.map((llm: any) => llm.id));
+            const newProviders = llms.filter((llm: any) => !existingIds.has(llm.id));
+            if (newProviders.length > 0) {
+                console.log(`Adding ${newProviders.length} new providers:`, newProviders.map(p => p.name));
+                config.LLM_INFO.push(...newProviders);
+                // Persist the updated configuration
+                try {
+                    fs.writeFileSync(configPath, JSON.stringify(config, null, 2), 'utf-8');
+                    console.log('Configuration updated with new providers');
+                } catch (saveError) {
+                    console.error('Failed to save updated configuration:', saveError);
+                }
+            }
         }
         return config;
     } catch (error) {
         console.error('Error loading config file, creating new one:', error);