Compare commits

..

No commits in common. "567a9c7cb096920389feee895e52c55d001e1260" and "84b01a07a2a905e3f8565b131a05dadbeefff6f5" have entirely different histories.

42 changed files with 128 additions and 2387 deletions

.gitignore (vendored, 10 changed lines)

@@ -1,10 +0,0 @@
# Python bytecode cache
__pycache__/
*.py[cod]
*$py.class
# Project-specific storage/cache folder
.storage/
# Environment variable file (usually contains secrets)
.env


@@ -1 +0,0 @@
3.12

Binary file not shown.

Before: image, 355 KiB


@@ -1,6 +0,0 @@
{
"provider": "AIHubMix",
"api_key": "sk-yd8Tik0nFW5emKYcBdFc433b7c8b4dC182848f76819bBe73",
"base_url": "https://aihubmix.com/v1",
"language": "Chinese"
}


@@ -1,16 +0,0 @@
{
"id": "1767772490",
"timestamp": 1767772490,
"date": "2026-01-07 15:54:50",
"type": "council",
"topic": "Light novels",
"content": "\n\n[Error: Error code: 401 - {'error': {'message': 'Invalid token: ca812c913baa474182f6d4e83e078302 (tid: 2026010707545042546382958168401)', 'type': 'Aihubmix_api_error'}}]",
"metadata": {
"rounds": 1,
"experts": [
"Expert 1",
"Expert 2"
],
"language": "Chinese"
}
}


@@ -1,17 +0,0 @@
{
"id": "1767772724",
"timestamp": 1767772724,
"date": "2026-01-07 15:58:44",
"type": "council",
"topic": "Light novels",
"content": "\n\n[Error: Error code: 400 - {'error': {'message': 'Model Not Exist', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}]",
"metadata": {
"rounds": 2,
"experts": [
"Expert 1",
"Expert 2",
"Expert 3 (Synthesizer)"
],
"language": "Chinese"
}
}


@@ -1,16 +0,0 @@
{
"id": "1767772840",
"timestamp": 1767772840,
"date": "2026-01-07 16:00:40",
"type": "council",
"topic": "Light novels",
"content": "Alright. As Expert 2, I will build on the discussion with Expert 1 and integrate both of our insights into a final comprehensive plan for building a sustainable, substantive \"light novel\" ecosystem.\n\n### **Integrated Analysis and Action Plan: \"Light Novels\" and the \"Play Relationship\"**\n\nOur core consensus (Expert 1 and Expert 2) after the discussion is that **the essence of the \"light novel\" is a cultural product driven by a \"play relationship\".** This relationship goes beyond mere reading: it builds a dynamic \"narrative playground\" jointly inhabited by **creators, works, reader communities, and the industry ecosystem**. Expert 1 precisely identified its \"co-play\" character and blurring of boundaries, while my additions focused on the \"player mindset\" behind this relationship and the \"fluidity\" of meaning-making.\n\nWe agree that both the challenges of light novels (e.g., formulaic writing, doubts about depth) and their opportunities (e.g., strong community cohesion, cross-media potential) are rooted here. Our action plan therefore does not seek to negate or weaken this \"playfulness\"; it aims to **guide, deepen, and structure the relationship so that it develops in a healthier, more creative, and more sustainable direction.**\n\nBelow is our three-part action plan:\n\n---\n\n### **Action Plan: Building a \"Co-creative Narrative Playground\" Ecosystem**\n\nThe plan revolves around the three core participants, governed by one core principle, and aims to upgrade the loose \"play relationship\" into a vibrant \"co-creation ecosystem\".\n\n#### **Core Principle: From \"One-way Supply\" to \"Rule-bound Co-play\"**\nEstablish the consensus that \"the author holds the main narrative rights; the community holds the rights to extend it\". The official work supplies a solid, self-consistent, elastic \"core setting and initial plot\" (the main game) while explicitly welcoming and reserving space for the community's derivative works and interpretations (player-made mods). This protects the work's original pull while energizing the community.\n\n#### **I. For creators: become \"world architects\" and \"game designers\"**\n1. **Design an \"open world\", not a \"linear track\"**\n * **Action**: at the conception stage, beyond the main plot, deliberately build extensible underlying world rules (e.g., the magic system, social structure) and character backstories that leave \"blanks\". This provides legitimate \"soil\" for readers' imagination and derivative creation.\n * **Goal**: turn the work from a closed story into an explorable \"world\",",
"metadata": {
"rounds": 1,
"experts": [
"Expert 1",
"Expert 2"
],
"language": "Chinese"
}
}

README.md (127 changed lines)

@@ -1,91 +1,82 @@
# 🍎 Smart Decision Workshop (Multi-Agent Council V4)
# Multi-Agent Decision Workshop & Deep Research
An AI-driven multi-agent decision-analysis system built on a multi-model council
This is a multi-agent decision-support and deep-research system. It has two core modes:
1. **Deep Research Mode**: modeled on Gemini's research mode, it performs deep analysis in three stages: planning, execution, and writing.
2. **Debate Workshop**: multiple AI personas debate from different viewpoints to help you make a more well-rounded decision.
## ✨ Core Features
## ✨ Features
### 🧪 Multi-Model Council V4 (Council Mode)
- **Multi-round discussion**: experts converse over several rounds like a real meeting, critiquing and building on each other's points
- **Dynamic panel assembly**: customize 2-5 experts and assign each the model it handles best
- **🪄 Smart expert generation**: AI automatically recommends the expert roles best suited to the topic
- **Final decision synthesis**: the last expert synthesizes all viewpoints, produces a plan, and draws a Mermaid roadmap
- **Dual-mode switching**: toggle between "Deep Research" and "Debate Workshop" with one click in the sidebar.
- **Custom model roles**:
- In Deep Research mode, the `Planner`, `Researcher`, and `Writer` can each be assigned a different LLM.
- **Multi-model support**: works with mainstream models such as OpenAI (GPT-4o), Anthropic (Claude 3.5), and Gemini.
- **Interactive research**: once the research plan is generated, the user can step in and edit it to keep the research on track.
- **Streaming output**: research progress and debates are displayed in real time.
### 🎯 Built-in Decision Scenarios
The system ships with 4 typical decision scenarios, each preconfigured with professional, representative questions:
## 🛠️ Installation & Usage
| Scenario | Description |
|------|------|
| 🚀 New product launch review | Assess product feasibility, market potential, and the implementation plan |
| 💰 Investment approval | Analyze an investment project's ROI, risks, and strategic value |
| 🤝 Partner evaluation | Assess a partner's fit and the value of the collaboration |
| 📦 Supplier evaluation | Compare and analyze suppliers' overall capabilities |
### 🎭 Debate Workshop
Have AI play roles with different stances and debate to clarify the trade-offs of a complex decision
### 💬 User Feedback
Built-in user-feedback system for collecting feature suggestions and usage impressions
### 🌐 Multi-Provider Support
- **DeepSeek**: V3, R1, Coder
- **OpenAI**: GPT-4o, GPT-4o-mini
- **Anthropic**: Claude 3.5 Sonnet
- **Google**: Gemini 1.5/2.0
- **SiliconFlow / AIHubMix / Deepseek**
---
## 🛠️ Installation
### 1. Clone the project
```bash
# Clone the project
git clone https://github.com/HomoDeusss/multi-agent.git
cd multi-agent
# Initialize the uv project (first time with uv)
uv init
# Install dependencies
uv add streamlit openai anthropic python-dotenv
# Or sync existing dependencies
uv sync
```
## 🚀 Quick Start
### 2. Install dependencies
Make sure Python 3.8+ is installed.
```bash
uv run streamlit run app.py
pip install -r requirements.txt
```
### Usage
### 3. Configure the API Key
1. **Configure the API**: choose a provider in the sidebar and enter your API key
2. **Pick a scenario**: click a preset decision scenario or enter a custom topic
3. **Generate experts**: click "🪄 Auto-generate experts from the topic" or configure them manually
4. **Start deciding**: watch the experts converse with each other and produce a combined plan
You can configure the API key in either of two ways:
---
## 📁 Project Structure
**Option A: create a `.env` file (recommended)**
Copy `.env.example` to `.env` and fill in your API key.
```bash
cp .env.example .env
```
multi_agent_workshop/
├── app.py # Streamlit main app
├── config.py # Configuration
├── agents/ # Agent definitions
│ ├── agent_profiles.py # Preset role profiles
│ ├── base_agent.py # Base Agent class
│ └── research_agent.py # Research Agent
├── orchestrator/ # Orchestrators
│ ├── debate_manager.py # Debate management
│ └── research_manager.py # Council management
├── utils/
│ ├── llm_client.py # LLM client wrapper
│ ├── storage.py # Storage management
│ └── auto_agent_generator.py # Smart expert generation
└── report/ # Report generation
Edit the `.env` file:
```env
AIHUBMIX_API_KEY=your_api_key_here
```
**Option B: enter it in the UI**
After launching the app, fill it in under "Settings" -> "API Key" in the sidebar.
### 4. Launch the app
Run the Streamlit app:
```bash
streamlit run app.py
```
Your browser will open `http://localhost:8501` automatically.
## 📖 User Guide
### 🧪 Deep Research Mode
1. In the sidebar, set the mode to **"Deep Research"**.
2. Under "Research model configuration", pick suitable models for the Planner, Researcher, and Writer (GPT-4o, Gemini-1.5-pro, and Claude-3.5-sonnet respectively are recommended).
3. Enter your **research topic** (e.g., "The outlook for commercial quantum computing in 2025").
4. Click **"Generate research plan"**.
5. Once the plan is generated, you can **edit its steps** directly in the text box.
6. Click **"Start deep research"** and watch the agents work through the research tasks step by step.
7. Download the final Markdown report.
### 🎭 Debate Workshop
1. In the sidebar, set the mode to **"Debate Workshop"**.
2. Enter a **decision topic** (e.g., "Should I quit my job to start a company?").
3. Choose the **AI roles** for the debate (e.g., CEO, risk-control specialist, career advisor).
4. Click **"Start debate"**.
5. Watch the roles spar, ending with a synthesized decision recommendation.
## 📝 License
[MIT License](LICENSE)
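The three-stage flow the README describes for Deep Research (Planner → Researcher → Writer) boils down to a short pipeline. A minimal sketch, assuming a generic `ask(role, prompt)` chat call rather than the app's real clients:

```python
def deep_research(topic: str, ask) -> str:
    """Plan, research each step, then write — the three stages in sequence.
    `ask(role, prompt)` stands in for any chat-completion call."""
    # Stage 1: the planner breaks the topic into research steps
    plan = ask("planner", f"Break '{topic}' into 3 research steps, one per line.")
    # Stage 2: the researcher works through each step
    findings = [
        ask("researcher", f"Research and summarize: {step}")
        for step in plan.splitlines() if step.strip()
    ]
    # Stage 3: the writer turns the notes into a report
    return ask("writer", "Write a report from these notes:\n" + "\n\n".join(findings))

# Toy stand-in so the sketch runs without an API key
def fake_ask(role: str, prompt: str) -> str:
    return f"[{role}] {prompt.splitlines()[0]}"

report = deep_research("quantum computing", fake_ask)
print(report)
```

Swapping `fake_ask` for a real per-role client reproduces the README's "different LLM per role" idea.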

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -19,18 +19,16 @@ class AgentMessage:
class BaseAgent:
"""Base Agent class"""
def __init__(self, agent_id: str, llm_client, language: str = "Chinese"):
def __init__(self, agent_id: str, llm_client):
"""
Initialize the Agent
Args:
agent_id: agent identifier (e.g., 'ceo', 'cto')
llm_client: LLM client instance
language: output language
"""
self.agent_id = agent_id
self.llm_client = llm_client
self.language = language
profile = get_agent_profile(agent_id)
if not profile:
@@ -40,7 +38,7 @@ class BaseAgent:
self.emoji = profile["emoji"]
self.perspective = profile["perspective"]
self.focus_areas = profile["focus_areas"]
self.system_prompt = f"{profile['system_prompt']}\n\nIMPORTANT: You MUST output your response in {self.language}."
self.system_prompt = profile["system_prompt"]
# Conversation history
self.conversation_history = []


@@ -5,28 +5,26 @@ import config
class ResearchAgent:
"""Research-mode Agent"""
def __init__(self, role: str, llm_client: LLMClient, name: str = None, language: str = "Chinese"):
def __init__(self, role: str, llm_client: LLMClient, name: str = None):
self.role = role
self.llm_client = llm_client
self.role_config = config.RESEARCH_MODEL_ROLES.get(role, {})
self.name = name if name else self.role_config.get("name", role.capitalize())
self.language = language
@property
def model_name(self) -> str:
return self.llm_client.model
def _get_system_prompt(self, context: str = "") -> str:
base_prompt = ""
if self.role == "council_member":
base_prompt = f"""You are {self.name}, a member of the Multi-Model Decision Council.
return f"""You are {self.name}, a member of the Multi-Model Decision Council.
Your goal is to participate in a round-table discussion to solve the user's problem.
Be conversational, insightful, and constructive.
Build upon others' ideas or respectfully disagree with valid reasoning.
Context: {context}"""
elif self.role == "expert_a":
base_prompt = f"""You are Expert A, a Senior Analyst.
return f"""You are Expert A, a Senior Analyst.
You are participating in a round-table discussion.
Your goal is to analyze the topic and propose solutions.
Be conversational, direct, and responsive to other experts.
@@ -34,23 +32,21 @@ Do not write a full final report; focus on the current discussion turn.
Context: {context}"""
elif self.role == "expert_b":
base_prompt = f"""You are Expert B, a Critical Reviewer.
return f"""You are Expert B, a Critical Reviewer.
You are participating in a round-table discussion.
Your goal is to critique Expert A's points and offer alternative perspectives.
Be conversational and constructive. Challenge assumptions directly.
Context: {context}"""
elif self.role == "expert_c":
base_prompt = f"""You are Expert C, a Senior Strategist and Visual Thinker.
return f"""You are Expert C, a Senior Strategist and Visual Thinker.
Your goal is to synthesize the final output.
Combine the structural strength of Expert A with the critical insights of Expert B.
Produce a final, polished, comprehensive plan or report.
CRITICAL: You MUST include a Mermaid.js diagram (using ```mermaid code block) to visualize the timeline, process, or architecture."""
else:
base_prompt = "You are a helpful assistant."
return f"{base_prompt}\n\nIMPORTANT: You MUST output your response in {self.language}."
return "You are a helpful assistant."
def generate(self, prompt: str, context: str = "") -> Generator[str, None, None]:
"""Generate response stream"""

app.py (828 changed lines)

File diff suppressed because it is too large.


@@ -9,7 +9,7 @@ load_dotenv()
# API configuration
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY", "")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
AIHUBMIX_API_KEY = os.getenv("AIHUBMIX_API_KEY", "")
AIHUBMIX_API_KEY = os.getenv("AIHUBMIX_API_KEY", "sk-yd8Tik0nFW5emKYcBdFc433b7c8b4dC182848f76819bBe73")
DEEPSEEK_API_KEY = os.getenv("DEEPSEEK_API_KEY", "")
SILICONFLOW_API_KEY = os.getenv("SILICONFLOW_API_KEY", "")
@@ -93,12 +93,6 @@ AVAILABLE_MODELS = {
MAX_DEBATE_ROUNDS = 3 # maximum number of debate rounds
MAX_AGENTS = 6 # maximum number of participating agents
# Supported output languages
SUPPORTED_LANGUAGES = ["Chinese", "English", "Japanese", "Spanish", "French", "German"]
# Generation settings
MAX_OUTPUT_TOKENS = 300 # cap single-reply length to keep output concise
# Research-mode model role configuration
RESEARCH_MODEL_ROLES = {
"expert_a": {
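One side effect of the config change above: hardcoding a key as the `os.getenv` fallback means the secret ships with the source even when no `.env` file exists. A safer pattern (a sketch, not this repo's code) keeps the default empty and fails loudly:

```python
import os

def require_key(name: str) -> str:
    """Read a required secret from the environment; no hardcoded fallback."""
    value = os.getenv(name, "")
    if not value:
        raise RuntimeError(f"{name} is not set; put it in .env or the environment")
    return value

os.environ["AIHUBMIX_API_KEY"] = "sk-example"  # simulate a configured environment
print(require_key("AIHUBMIX_API_KEY"))
```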


@@ -1,6 +0,0 @@
def main():
print("Hello from multi-agent!")
if __name__ == "__main__":
main()


@@ -18,7 +18,6 @@ class DebateConfig:
agent_ids: List[str] = None
max_rounds: int = 2
agent_clients: dict = None # Map[agent_id, LLMClient]
language: str = "Chinese"
@dataclass
@@ -65,7 +64,7 @@ class DebateManager:
if hasattr(debate_config, 'agent_clients') and debate_config.agent_clients and agent_id in debate_config.agent_clients:
client = debate_config.agent_clients[agent_id]
agent = BaseAgent(agent_id, client, language=debate_config.language)
agent = BaseAgent(agent_id, client)
self.agents.append(agent)
def run_debate_stream(


@@ -10,7 +10,6 @@ class ResearchConfig:
context: str = ""
# Dynamic list of experts: [{"name": "Expert 1", "model": "gpt-4o", "role": "analyst"}, ...]
experts: List[Dict[str, str]] = None
language: str = "Chinese"
class ResearchManager:
"""Manages the Multi-Model Council workflow"""
@@ -34,12 +33,19 @@ class ResearchManager:
self.agents = []
if config.experts:
for idx, expert_conf in enumerate(config.experts):
# Assign roles by position: every expert joins the discussion as a member;
# one is explicitly chosen later to handle the final synthesis.
role_type = "council_member"
agent = ResearchAgent(
role=role_type,
llm_client=self._get_client(expert_conf["model"]),
name=expert_conf.get("name", f"Expert {idx+1}"),
language=config.language
name=expert_conf.get("name", f"Expert {idx+1}")
)
self.agents.append(agent)


@@ -1,13 +0,0 @@
[project]
name = "multi-agent"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"anthropic>=0.75.0",
"openai>=2.14.0",
"pydantic>=2.12.5",
"python-dotenv>=1.2.1",
"streamlit>=1.52.2",
]


@@ -1,108 +0,0 @@
"""
Auto Agent Generator - automatically generate expert configurations from a topic
Uses LLM to analyze the topic and suggest appropriate expert agents.
"""
import json
import re
from typing import List, Dict
from utils.llm_client import LLMClient
EXPERT_GENERATION_PROMPT = """You are an expert team composition advisor. Given a research/decision topic, you need to suggest the most appropriate team of experts to analyze it.
Instructions:
1. Analyze the topic carefully to understand its domain and key aspects
2. Generate {num_experts} distinct expert roles that would provide the most valuable perspectives
3. Each expert should have a unique focus area relevant to the topic
4. The LAST expert should always be a "Synthesizer" role who can integrate all perspectives
Output Format (MUST be valid JSON array):
[
{{"name": "Expert Name", "perspective": "Brief description of their viewpoint", "focus": "Key areas they analyze"}},
...
]
Examples of good expert names based on topic:
- For "Should we launch an e-commerce platform?": "市场渠道分析师", "电商运营专家", "供应链顾问", "数字化转型综合师"
- For "Career transition to AI field": "职业发展顾问", "AI行业专家", "技能评估分析师", "综合规划师"
IMPORTANT:
- Use {language} for all names and descriptions
- Make names specific to the topic, not generic like "Expert 1"
- The last expert MUST be a synthesizer/integrator type
Topic: {topic}
Generate exactly {num_experts} experts as a JSON array:"""
def generate_experts_for_topic(
topic: str,
num_experts: int,
llm_client: LLMClient,
language: str = "Chinese"
) -> List[Dict[str, str]]:
"""
Use LLM to generate appropriate expert configurations based on the topic.
Args:
topic: The research/decision topic
num_experts: Number of experts to generate (2-5)
llm_client: LLM client instance for API calls
language: Output language (Chinese/English)
Returns:
List of expert dicts: [{"name": "...", "perspective": "...", "focus": "..."}, ...]
"""
if not topic.strip():
return []
prompt = EXPERT_GENERATION_PROMPT.format(
topic=topic,
num_experts=num_experts,
language=language
)
try:
response = llm_client.chat(
system_prompt="You are a helpful assistant that generates JSON output only. No markdown, no explanation.",
user_prompt=prompt,
max_tokens=800
)
# Extract JSON from response (handle potential markdown wrapping)
json_match = re.search(r'\[[\s\S]*\]', response)
if json_match:
experts = json.loads(json_match.group())
# Validate structure
if isinstance(experts, list) and len(experts) >= 1:
validated = []
for exp in experts[:num_experts]:
if isinstance(exp, dict) and "name" in exp:
validated.append({
"name": exp.get("name", "Expert"),
"perspective": exp.get("perspective", ""),
"focus": exp.get("focus", "")
})
return validated
except (json.JSONDecodeError, Exception) as e:
print(f"[AutoAgentGenerator] Error parsing LLM response: {e}")
# Fallback: return generic experts
fallback = []
for i in range(num_experts):
if i == num_experts - 1:
fallback.append({"name": f"综合分析师", "perspective": "整合视角", "focus": "综合决策"})
else:
fallback.append({"name": f"专家 {i+1}", "perspective": "分析视角", "focus": "专业分析"})
return fallback
def get_default_model_for_expert(expert_index: int, total_experts: int, available_models: list) -> str:
"""
Assign a default model to an expert based on their position.
Spreads experts across available models for diversity.
"""
if not available_models:
return "gpt-4o"
return available_models[expert_index % len(available_models)]
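The deleted generator's parsing step — fishing a JSON array out of an LLM reply that may be wrapped in markdown — can be exercised standalone. A sketch of the same regex-then-validate approach (the `raw` reply below is made up):

```python
import json
import re

def extract_expert_list(raw: str, num_experts: int) -> list:
    """Find the outermost JSON array in `raw` and keep well-formed expert dicts."""
    match = re.search(r'\[[\s\S]*\]', raw)  # greedy: first '[' to last ']'
    if not match:
        return []
    try:
        experts = json.loads(match.group())
    except json.JSONDecodeError:
        return []
    return [
        {"name": e.get("name", "Expert"),
         "perspective": e.get("perspective", ""),
         "focus": e.get("focus", "")}
        for e in experts[:num_experts]
        if isinstance(e, dict) and "name" in e
    ]

raw = '```json\n[{"name": "Market Analyst", "perspective": "demand side", "focus": "TAM"}]\n```'
print(extract_expert_list(raw, 3))
```

The greedy regex deliberately spans from the first `[` to the last `]`, so markdown fences around the array do not matter; malformed JSON simply falls through to the empty-list fallback.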


@@ -5,8 +5,6 @@ from typing import Generator
import os
import config
class LLMClient:
"""Unified LLM API client"""
@@ -64,7 +62,7 @@ class LLMClient:
self,
system_prompt: str,
user_prompt: str,
max_tokens: int = config.MAX_OUTPUT_TOKENS
max_tokens: int = 1024
) -> Generator[str, None, None]:
"""
Streaming chat
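The method whose `max_tokens` default changes above returns a token generator; how such a cap interacts with streaming can be sketched with a toy generator (not the repo's client — `chat_stream` here just echoes the prompt):

```python
from typing import Generator

def chat_stream(system_prompt: str, user_prompt: str,
                max_tokens: int = 1024) -> Generator[str, None, None]:
    """Toy stream: yield the reply one token at a time, stopping at max_tokens."""
    reply = f"Echo: {user_prompt}"
    for token in reply.split():
        if max_tokens <= 0:
            break  # budget exhausted: truncate the stream
        max_tokens -= 1
        yield token + " "

print("".join(chat_stream("be brief", "hello stream", max_tokens=2)))
```

Consumers iterate the generator and concatenate chunks, which is why raising the default from 300 to 1024 changes only how late the stream cuts off, not the calling code.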


@@ -1,184 +0,0 @@
"""
Storage Manager - Handle local persistence of configuration, history/reports, and assets.
"""
import os
import json
import time
from typing import List, Dict, Any
from pathlib import Path
# Constants
STORAGE_DIR = ".storage"
CONFIG_FILE = "config.json"
HISTORY_DIR = "history"
ASSETS_DIR = "assets"
class StorageManager:
def __init__(self):
self.root_dir = Path(STORAGE_DIR)
self.config_path = self.root_dir / CONFIG_FILE
self.history_dir = self.root_dir / HISTORY_DIR
self.assets_dir = self.root_dir / ASSETS_DIR
# Ensure directories exist
self.root_dir.mkdir(exist_ok=True)
self.history_dir.mkdir(exist_ok=True)
self.assets_dir.mkdir(exist_ok=True)
def save_config(self, config_data: Dict[str, Any]):
"""Save UI configuration to file"""
try:
with open(self.config_path, 'w', encoding='utf-8') as f:
json.dump(config_data, f, indent=2, ensure_ascii=False)
except Exception as e:
print(f"Error saving config: {e}")
def load_config(self) -> Dict[str, Any]:
"""Load UI configuration from file"""
if not self.config_path.exists():
return {}
try:
with open(self.config_path, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
print(f"Error loading config: {e}")
return {}
def save_asset(self, uploaded_file) -> str:
"""Save an uploaded file (e.g., background image) into assets directory.
Args:
uploaded_file: a file-like object (Streamlit UploadedFile) or bytes-like
Returns:
The saved file path as string, or None on failure.
"""
try:
# Determine filename
if hasattr(uploaded_file, 'name'):
filename = uploaded_file.name
else:
filename = f"asset_{int(time.time())}"
# sanitize
safe_name = "".join([c for c in filename if c.isalnum() or c in (' ', '.', '_', '-')]).strip().replace(' ', '_')
dest = self.assets_dir / f"{int(time.time())}_{safe_name}"
# Write bytes
with open(dest, 'wb') as out:
# Streamlit UploadedFile has getbuffer()
if hasattr(uploaded_file, 'getbuffer'):
out.write(uploaded_file.getbuffer())
else:
# try reading
data = uploaded_file.read()
if isinstance(data, str):
data = data.encode('utf-8')
out.write(data)
return str(dest)
except Exception as e:
print(f"Error saving asset: {e}")
return None
def save_history(self, session_type: str, topic: str, content: str, metadata: Dict[str, Any] = None):
"""
Save a session report/history
Args:
session_type: 'council' or 'debate'
topic: The main topic
content: The full markdown report or content
metadata: Additional info (model used, date, etc)
"""
timestamp = int(time.time())
date_str = time.strftime("%Y-%m-%d %H:%M:%S")
# Create a safe filename
safe_topic = "".join([c for c in topic[:20] if c.isalnum() or c in (' ', '_', '-')]).strip().replace(' ', '_')
filename = f"{timestamp}_{session_type}_{safe_topic}.json"
data = {
"id": str(timestamp),
"timestamp": timestamp,
"date": date_str,
"type": session_type,
"topic": topic,
"content": content,
"metadata": metadata or {}
}
try:
with open(self.history_dir / filename, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
return True
except Exception as e:
print(f"Error saving history: {e}")
return False
def list_history(self) -> List[Dict[str, Any]]:
"""List all history items (metadata only)"""
items = []
if not self.history_dir.exists():
return []
for file in self.history_dir.glob("*.json"):
try:
with open(file, 'r', encoding='utf-8') as f:
data = json.load(f)
# Return summary info
items.append({
"id": data.get("id"),
"date": data.get("date"),
"type": data.get("type"),
"topic": data.get("topic"),
"filename": file.name
})
except Exception:
continue
# Sort by timestamp desc
return sorted(items, key=lambda x: x.get("date", ""), reverse=True)
def load_history_item(self, filename: str) -> Dict[str, Any]:
"""Load full content of a history item"""
path = self.history_dir / filename
if not path.exists():
return None
try:
with open(path, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception:
return None
# ==================== Session Cache (Resume Functionality) ====================
def save_session_state(self, key: str, data: Dict[str, Any]):
"""Save temporary session state for recovery"""
try:
# We use a dedicated cache file per key
cache_file = self.root_dir / f"{key}_cache.json"
data["_timestamp"] = int(time.time())
with open(cache_file, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
except Exception as e:
print(f"Error saving session cache: {e}")
def load_session_state(self, key: str) -> Dict[str, Any]:
"""Load temporary session state"""
cache_file = self.root_dir / f"{key}_cache.json"
if not cache_file.exists():
return None
try:
with open(cache_file, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception:
return None
def clear_session_state(self, key: str):
"""Clear temporary session state"""
cache_file = self.root_dir / f"{key}_cache.json"
if cache_file.exists():
try:
os.remove(cache_file)
except Exception:
pass

uv.lock (generated, 1126 changed lines)

File diff suppressed because it is too large.