feat: Implement Multi-Model Council mode with 3-expert workflow
parent 5931e9280f
commit 9dda930868

README.md — 83 changed lines
@@ -1 +1,82 @@
-# multi-agent

# Multi-Agent Decision Workshop & Deep Research

This is a multi-agent decision-support and deep-research system. It has two core modes:

1. **Deep Research Mode**: Modeled on Gemini's research mode, it performs a deep analysis in three stages: planning, execution, and writing.
2. **Debate Workshop**: Multiple AI personas debate a question from different perspectives to help you reach a more well-rounded decision.

## ✨ Features

- **Dual-mode switching**: Toggle between "Deep Research" and "Debate Workshop" with one click in the sidebar.
- **Custom model roles**:
  - In Deep Research mode, you can assign a different LLM to each of the `Planner`, `Researcher`, and `Writer` roles.
- **Multi-model support**: Works with mainstream models such as OpenAI (GPT-4o), Anthropic (Claude 3.5), and Gemini.
- **Interactive research**: Once the research plan is generated, you can step in and edit it to keep the research on track.
- **Streaming output**: Research progress and debate turns are rendered in real time.

## 🛠️ Installation & Usage

### 1. Clone the repository

```bash
git clone https://github.com/HomoDeusss/multi-agent.git
cd multi-agent
```

### 2. Install dependencies

Make sure you have Python 3.8+ installed.

```bash
pip install -r requirements.txt
```

### 3. Configure your API key

You can configure the API key in either of two ways:

**Option A: Create a `.env` file (recommended)**

Copy `.env.example` to `.env` and fill in your API key.

```bash
cp .env.example .env
```

Edit the `.env` file:

```env
AIHUBMIX_API_KEY=your_api_key_here
```

**Option B: Enter it in the UI**

After launching the app, paste the key into the sidebar under "Settings" -> "API Key".

### 4. Launch the app

Run the Streamlit app:

```bash
streamlit run app.py
```

Your browser will open `http://localhost:8501` automatically.

## 📖 Usage Guide

### 🧪 Deep Research Mode

1. In the sidebar, set the mode to **"Deep Research"**.
2. Under "Research model configuration", choose a model for each of Planner, Researcher, and Writer (GPT-4o, Gemini-1.5-pro, and Claude-3.5-sonnet are recommended, respectively).
3. Enter your **research topic** (e.g., "Commercialization prospects for quantum computing in 2025").
4. Click **"Generate research plan"**.
5. Once the plan is generated, you can **edit the plan steps** directly in the text box.
6. Click **"Start deep research"** and watch the agents work through the research tasks step by step.
7. Download the final Markdown report. (A programmatic sketch of this flow follows the list.)
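If you would rather script the flow than click through the UI, here is a minimal sketch based on the `ResearchManager` / `ResearchConfig` API this commit introduces. The import path and the API key are placeholders, not confirmed by the repo; note the commit replaces the plan/execute/write pipeline with a three-expert council, so the sketch consumes the new `collaborate()` event stream:

```python
# Sketch only — module path and key are assumptions; the API shape follows this commit's diff.
from research_manager import ResearchManager, ResearchConfig  # import path assumed

manager = ResearchManager(api_key="your_api_key_here")
cfg = ResearchConfig(
    topic="Commercialization prospects for quantum computing in 2025",
    context="",
    expert_a_model="gpt-4o",
    expert_b_model="gemini-1.5-pro",
    expert_c_model="claude-3-5-sonnet-20241022",
)
manager.create_agents(cfg)

# collaborate() yields dict events: "step_start" -> streamed "content" chunks -> "step_end".
for event in manager.collaborate(cfg.topic, cfg.context):
    if event["type"] == "step_start":
        print(f"\n== {event['step']} [{event['agent']}] ({event['model']}) ==")
    elif event["type"] == "content":
        print(event["content"], end="", flush=True)
```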

### 🎭 Debate Workshop

1. In the sidebar, set the mode to **"Debate Workshop"**.
2. Enter the **decision topic** (e.g., "Should I quit my job to start a company?").
3. Select the **AI personas** that will join the debate (e.g., CEO, risk-control expert, career advisor).
4. Click **"Start debate"**.
5. Watch the personas argue it out, then receive a synthesized decision recommendation. (See the sketch after this list.)
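The debate can be scripted as well. A minimal sketch, assuming the `DebateManager` / `DebateConfig` API shown in this commit's diff; the module path and agent ids are illustrative, and `run_debate_stream` is called without arguments since its full signature is truncated in the diff:

```python
# Sketch only — module path, agent ids, key, and the run_debate_stream() call are assumptions.
from debate_manager import DebateManager, DebateConfig
from utils.llm_client import LLMClient

client = LLMClient(
    provider="aihubmix",
    api_key="your_api_key_here",
    base_url="https://aihubmix.com/v1",
    model="gpt-4o",
)

manager = DebateManager(client)
manager.setup_debate(DebateConfig(
    topic="Should I quit my job to start a company?",
    agent_ids=["ceo", "risk_expert"],  # hypothetical ids
    max_rounds=2,
))

# Events mirror what app.py renders; "speech_start" carries the speaker and its model.
for event in manager.run_debate_stream():
    if event["type"] == "speech_start":
        print(f"\n{event['emoji']} {event['agent_name']} ({event['model_name']}) — round {event['round']}")
```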

## 📝 License

[MIT License](LICENSE)
(4 binary files not shown)
@@ -42,6 +42,13 @@ class BaseAgent:
         # Store conversation history
         self.conversation_history = []
 
+    @property
+    def model_name(self) -> str:
+        """Return the name of the model currently in use."""
+        if hasattr(self.llm_client, "model"):
+            return self.llm_client.model
+        return "Unknown Model"
+
     def generate_response(
         self,
@@ -1,4 +1,4 @@
-from typing import Generator, List, Dict
+from typing import Generator
 from utils.llm_client import LLMClient
 import config
@@ -11,25 +11,28 @@ class ResearchAgent:
         self.role_config = config.RESEARCH_MODEL_ROLES.get(role, {})
         self.name = self.role_config.get("name", role.capitalize())
 
+    @property
+    def model_name(self) -> str:
+        return self.llm_client.model
+
     def _get_system_prompt(self, context: str = "") -> str:
-        if self.role == "planner":
-            return f"""You are a Senior Research Planner.
-Your goal is to break down a complex user topic into a structured research plan.
-You must create a clear, step-by-step plan that covers different angles of the topic.
-Format your output as a Markdown list of steps.
+        if self.role == "expert_a":
+            return f"""You are Expert A, a Senior Analyst.
+Your goal is to provide a deep, foundational analysis of the user's topic.
+Structure your thinking clearly. Propose a solid initial framework or solution.
 Context: {context}"""
 
-        elif self.role == "researcher":
-            return f"""You are a Deep Researcher.
-Your goal is to execute a specific research step and provide detailed, in-depth analysis.
-Use your vast knowledge to provide specific facts, figures, and logical reasoning.
-Do not be superficial. Go deep.
+        elif self.role == "expert_b":
+            return f"""You are Expert B, a Critical Reviewer.
+Your goal is to find flaws, risks, and missed opportunities in Expert A's analysis.
+Be constructive but rigorous. Don't just agree; add value by challenging assumptions.
 Context: {context}"""
 
-        elif self.role == "writer":
-            return f"""You are a Senior Report Writer.
-Your goal is to synthesize multiple research findings into a cohesive, high-quality report.
-The report should be well-structured, easy to read, and provide actionable insights.
+        elif self.role == "expert_c":
+            return f"""You are Expert C, a Senior Strategist.
+Your goal is to synthesize the final output.
+Combine the structural strength of Expert A with the critical insights of Expert B.
+Produce a final, polished, comprehensive plan or report.
 Context: {context}"""
 
         else:
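A quick way to exercise the new role prompts from a REPL — a sketch only, with the module path assumed and the key as a placeholder (`LLMClient` construction follows app.py):

```python
from utils.llm_client import LLMClient
from research_agents import ResearchAgent  # module path assumed

client = LLMClient(provider="aihubmix", api_key="your_api_key_here",
                   base_url="https://aihubmix.com/v1", model="gpt-4o")
agent = ResearchAgent("expert_b", client)
print(agent.model_name)                            # -> "gpt-4o"
print(agent._get_system_prompt("B2B SaaS")[:80])   # Expert B critique persona
```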
app.py — 193 changed lines
@@ -66,23 +66,7 @@ st.markdown("""
 DEFAULT_API_KEY = os.getenv("AIHUBMIX_API_KEY", "")
 
-# Supported model list
-AVAILABLE_MODELS = {
-    "gpt-4o": "GPT-4o (recommended)",
-    "gpt-4o-mini": "GPT-4o Mini (fast)",
-    "gpt-4-turbo": "GPT-4 Turbo",
-    "gpt-3.5-turbo": "GPT-3.5 Turbo (budget)",
-    "claude-3-5-sonnet-20241022": "Claude 3.5 Sonnet",
-    "claude-3-opus-20240229": "Claude 3 Opus",
-    "claude-3-haiku-20240307": "Claude 3 Haiku (fast)",
-    "deepseek-chat": "DeepSeek Chat",
-    "deepseek-coder": "DeepSeek Coder",
-    "gemini-1.5-pro": "Gemini 1.5 Pro",
-    "gemini-1.5-flash": "Gemini 1.5 Flash",
-    "qwen-turbo": "Qwen Turbo",
-    "qwen-plus": "Qwen Plus",
-    "glm-4": "Zhipu GLM-4",
-    "moonshot-v1-8k": "Moonshot",
-}
+from config import AVAILABLE_MODELS, RESEARCH_MODEL_ROLES
 
 # Decision types
 DECISION_TYPES = {
@@ -221,17 +205,17 @@ with st.sidebar:
 # ==================== Main UI logic ====================
 
 if mode == "Deep Research":
-    st.title("🧪 Deep Research Mode")
-    st.markdown("*Deep research mode: plan -> research -> report*")
+    st.title("🧪 Multi-Model Council")
+    st.markdown("*Multi-model council: analysis (Expert A) -> critique (Expert B) -> decision (Expert C)*")
 
     # Input
-    research_topic = st.text_area("Research topic", placeholder="Enter the topic you want to research in depth...", height=100)
+    research_topic = st.text_area("Research/decision topic", placeholder="Enter the topic you want to research or decide on...", height=100)
     research_context = st.text_area("Additional context (optional)", placeholder="Any extra background information...", height=80)
 
-    generate_plan_btn = st.button("📝 Generate research plan", type="primary", disabled=not research_topic)
+    start_research_btn = st.button("🚀 Start multi-model collaboration", type="primary", disabled=not research_topic)
 
-    if generate_plan_btn and research_topic:
-        st.session_state.research_started = False
+    if start_research_btn and research_topic:
+        st.session_state.research_started = True
+        st.session_state.research_output = ""
         st.session_state.research_steps_output = []
@@ -239,84 +223,70 @@ if mode == "Deep Research":
         config_obj = ResearchConfig(
             topic=research_topic,
             context=research_context,
-            planner_model=roles_config['planner'],
-            researcher_model=roles_config['researcher'],
-            writer_model=roles_config['writer']
+            expert_a_model=roles_config['expert_a'],
+            expert_b_model=roles_config['expert_b'],
+            expert_c_model=roles_config['expert_c']
         )
         manager.create_agents(config_obj)
 
-        with st.spinner("Drafting the research plan..."):
-            plan_text = ""
-            for chunk in manager.generate_plan(research_topic, research_context):
-                plan_text += chunk
-            st.session_state.research_plan = plan_text
-
-    # Plan Review & Edit
-    if st.session_state.research_plan:
         st.divider()
-        st.subheader("📋 Confirm the research plan")
+        st.subheader("🧠 The council is thinking...")
 
-        edited_plan = st.text_area("Review and edit the plan (Markdown)", value=st.session_state.research_plan, height=300)
-        st.session_state.research_plan = edited_plan
+        # Collaborative Execution
+        current_step_name = ""
+        current_step_content = ""
+        step_placeholder = st.empty()
+        status_container = st.status("Initializing...", expanded=True)
 
-        start_research_btn = st.button("🚀 Start deep research", type="primary")
-
-        if start_research_btn:
-            st.session_state.research_started = True
-            st.session_state.research_steps_output = []  # Reset steps
+        try:
+            for event in manager.collaborate(research_topic, research_context):
+                if event["type"] == "step_start":
+                    current_step_name = event["step"]
+                    current_agent = event["agent"]
+                    current_model = event["model"]
+                    status_container.update(label=f"🔄 {current_step_name} [{current_agent}] ({current_model})", state="running")
+                    step_placeholder = st.empty()
+                    current_step_content = ""
 
+                elif event["type"] == "content":
+                    current_step_content += event["content"]
+                    step_placeholder.markdown(f"**Thinking...**\n\n{current_step_content}")
 
+                elif event["type"] == "step_end":
+                    # Save step result
+                    st.session_state.research_steps_output.append({
+                        "step": current_step_name,
+                        "output": event["output"]
+                    })
+                    status_container.write(f"### {current_step_name}\n{event['output']}")
+                    status_container.update(label=f"✅ {current_step_name} done", state="running")
 
-            # Parse plan lines to get steps (simple heuristic: lines starting with - or 1.)
-            steps = [line.strip() for line in edited_plan.split('\n') if line.strip().startswith(('-', '*', '1.', '2.', '3.', '4.', '5.'))]
-            if not steps:
-                steps = [edited_plan]  # Fallback if no list format
+            status_container.update(label="✅ All steps complete", state="complete", expanded=False)
 
-            manager = ResearchManager(api_key=api_key)
-            config_obj = ResearchConfig(
-                topic=research_topic,
-                context=research_context,
-                planner_model=roles_config['planner'],
-                researcher_model=roles_config['researcher'],
-                writer_model=roles_config['writer']
-            )
-            manager.create_agents(config_obj)
-
-            # Execute Steps
-            previous_findings = ""
-            st.divider()
-            st.subheader("🔍 Research in progress...")
-
-            step_progress = st.container()
-
-            for i, step in enumerate(steps):
-                with step_progress:
-                    with st.status(f"Researching: {step}", expanded=True):
-                        findings_text = ""
-                        placeholder = st.empty()
-                        for chunk in manager.execute_step(step, previous_findings):
-                            findings_text += chunk
-                            placeholder.markdown(findings_text)
-
-                st.session_state.research_steps_output.append(f"### {step}\n{findings_text}")
-                previous_findings += f"\n\nFinding for '{step}':\n{findings_text}"
-
-            # Final Report
-            st.divider()
-            st.subheader("📄 Generating the final report...")
-            report_placeholder = st.empty()
-            final_report = ""
-            for chunk in manager.generate_report(research_topic, previous_findings):
-                final_report += chunk
-                report_placeholder.markdown(final_report)
+            # The last step output is the final plan
+            if st.session_state.research_steps_output:
+                final_plan = st.session_state.research_steps_output[-1]["output"]
+                st.session_state.research_output = final_plan
+                st.success("✅ Comprehensive plan generated")
 
-            st.session_state.research_output = final_report
-            st.success("✅ Research complete")
 
+        except Exception as e:
+            st.error(f"An error occurred: {str(e)}")
+            import traceback
+            st.code(traceback.format_exc())
 
     # Show Final Report if available
     if st.session_state.research_output:
         st.divider()
-        st.subheader("📄 Final research report")
+        st.subheader("📄 Final comprehensive plan")
         st.markdown(st.session_state.research_output)
-        st.download_button("📥 Download report", st.session_state.research_output, "research_report.md")
+        st.download_button("📥 Download plan", st.session_state.research_output, "comprehensive_plan.md")
 
+        # Show breakdown history
+        with st.expander("View the full thinking process"):
+            for step in st.session_state.research_steps_output:
+                st.markdown(f"### {step['step']}")
+                st.markdown(step['output'])
+                st.divider()
 
 
 elif mode == "Debate Workshop":
@@ -386,6 +356,23 @@ elif mode == "Debate Workshop":
             ):
                 selected_agents.append(agent_id)
 
+    # Custom model configuration (advanced)
+    agent_model_map = {}
+    with st.expander("🛠️ Assign a model to each persona (optional)"):
+        for agent_id in selected_agents:
+            # Find agent name
+            agent_name = next((a['name'] for a in all_agents if a['id'] == agent_id), agent_id)
+            if agent_id in st.session_state.custom_agents:
+                agent_name = st.session_state.custom_agents[agent_id]['name']
+
+            agent_model = st.selectbox(
+                f"Model for {agent_name}",
+                options=list(AVAILABLE_MODELS.keys()),
+                index=list(AVAILABLE_MODELS.keys()).index(model) if model in AVAILABLE_MODELS else 0,
+                key=f"model_for_{agent_id}"
+            )
+            agent_model_map[agent_id] = agent_model
+
     # Persona count hint
     if len(selected_agents) < 2:
        st.warning("Please select at least 2 personas")
@@ -434,22 +421,25 @@ elif mode == "Debate Workshop":
         agent_profiles.AGENT_PROFILES.update(st.session_state.custom_agents)
 
         try:
-            # Initialize client and manager
+            # Provider selection was removed from the global sidebar in this
+            # refactor; Debate mode simply defaults to the "aihubmix" provider.
+            provider_val = "aihubmix"
 
+            # Initialize the default client
             llm_client = LLMClient(
                 provider="aihubmix",
                 api_key=api_key,
                 base_url="https://aihubmix.com/v1",
                 model=model
             )
 
+            # Initialize per-persona clients
+            agent_clients = {}
+            for ag_id, ag_model in agent_model_map.items():
+                if ag_model != model:  # Only create a new client if it differs from the default
+                    agent_clients[ag_id] = LLMClient(
+                        provider="aihubmix",
+                        api_key=api_key,
+                        base_url="https://aihubmix.com/v1",
+                        model=ag_model
+                    )
 
             debate_manager = DebateManager(llm_client)
 
             # Configure the debate
@@ -457,7 +447,8 @@ elif mode == "Debate Workshop":
                 topic=topic,
                 context=context,
                 agent_ids=selected_agents,
-                max_rounds=max_rounds
+                max_rounds=max_rounds,
+                agent_clients=agent_clients
             )
             debate_manager.setup_debate(debate_config)
@@ -474,7 +465,9 @@ elif mode == "Debate Workshop":
                     )
 
                 elif event["type"] == "speech_start":
-                    st.markdown(f"**{event['emoji']} {event['agent_name']}**")
+                    # Show the model name next to the speaker
+                    model_display = f" <span style='font-size:0.8em; color:gray'>({event.get('model_name', 'Unknown')})</span>"
+                    st.markdown(f"**{event['emoji']} {event['agent_name']}**{model_display}", unsafe_allow_html=True)
                     speech_placeholder = st.empty()
                     current_content = ""
config.py — 33 changed lines
@@ -18,26 +18,41 @@ AIHUBMIX_BASE_URL = "https://aihubmix.com/v1"
 DEFAULT_MODEL = "gpt-4o"  # A model supported by AIHubMix
 LLM_PROVIDER = "aihubmix"  # Use AIHubMix by default
 
+# Supported model list
+AVAILABLE_MODELS = {
+    "gpt-4o": "GPT-4o (OpenAI)",
+    "gpt-4o-mini": "GPT-4o Mini (OpenAI)",
+    "claude-3-5-sonnet-20241022": "Claude 3.5 Sonnet (Anthropic)",
+    "claude-3-opus-20240229": "Claude 3 Opus (Anthropic)",
+    "gemini-1.5-pro": "Gemini 1.5 Pro (Google)",
+    "gemini-1.5-flash": "Gemini 1.5 Flash (Google)",
+    "deepseek-chat": "DeepSeek V3 (DeepSeek)",
+    "deepseek-reasoner": "DeepSeek R1 (DeepSeek)",
+    "llama-3.3-70b-instruct": "Llama 3.3 70B (Meta)",
+    "qwen-2.5-72b-instruct": "Qwen 2.5 72B (Alibaba)",
+    "mistral-large-latest": "Mistral Large (Mistral)",
+}
+
 # Debate settings
 MAX_DEBATE_ROUNDS = 3  # Maximum number of debate rounds
 MAX_AGENTS = 6  # Maximum number of participating agents
 
 # Research-mode model role configuration
 RESEARCH_MODEL_ROLES = {
-    "planner": {
-        "name": "Planner",
+    "expert_a": {
+        "name": "Expert A (Analyst)",
         "default_model": "gpt-4o",
-        "description": "Breaks down the problem and builds the research plan"
+        "description": "Handles the initial analysis and proposes the core ideas and approach"
     },
-    "researcher": {
-        "name": "Researcher",
+    "expert_b": {
+        "name": "Expert B (Critique)",
         "default_model": "gemini-1.5-pro",
-        "description": "Executes the concrete research steps with in-depth analysis"
+        "description": "Provides critical analysis, pointing out potential problems and gaps"
     },
-    "writer": {
-        "name": "Writer",
+    "expert_c": {
+        "name": "Expert C (Synthesizer)",
         "default_model": "claude-3-5-sonnet-20241022",
-        "description": "Aggregates the findings and writes the final report"
+        "description": "Synthesizes all viewpoints into the final decision plan"
     }
 }
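For a quick look at the council composition at runtime, the role table above can be iterated directly. A tiny sketch using only the fields defined in this hunk:

```python
import config

# Print each council role with its default model and description.
for role_id, spec in config.RESEARCH_MODEL_ROLES.items():
    print(f"{spec['name']}: {spec['default_model']} - {spec['description']}")
```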
(2 binary files not shown)
@@ -17,6 +17,7 @@ class DebateConfig:
     context: str = ""
     agent_ids: List[str] = None
     max_rounds: int = 2
+    agent_clients: dict = None  # Map[agent_id, LLMClient]
 
 
 @dataclass
@@ -58,7 +59,12 @@ class DebateManager:
 
         # Create the participating agents
         for agent_id in debate_config.agent_ids:
-            agent = BaseAgent(agent_id, self.llm_client)
+            # Check if a specific client is provided in config, else use default
+            client = self.llm_client
+            if hasattr(debate_config, 'agent_clients') and debate_config.agent_clients and agent_id in debate_config.agent_clients:
+                client = debate_config.agent_clients[agent_id]
+
+            agent = BaseAgent(agent_id, client)
             self.agents.append(agent)
 
     def run_debate_stream(
@@ -106,6 +112,7 @@ class DebateManager:
                 "agent_id": agent.agent_id,
                 "agent_name": agent.name,
                 "emoji": agent.emoji,
+                "model_name": agent.model_name,
                 "round": round_num
             }
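Taken together, `agent_clients` in `DebateConfig` and the per-event `model_name` let each debater visibly run on its own model. A minimal sketch of building the map outside the UI, mirroring the loop app.py uses above (the agent ids and key are placeholders):

```python
from utils.llm_client import LLMClient

default_model = "gpt-4o"
api_key = "your_api_key_here"  # placeholder
agent_model_map = {"ceo": "claude-3-5-sonnet-20241022", "risk_expert": "gpt-4o"}  # hypothetical ids

# Only build a dedicated client for agents whose model differs from the default,
# matching the check in app.py.
agent_clients = {
    ag_id: LLMClient(provider="aihubmix", api_key=api_key,
                     base_url="https://aihubmix.com/v1", model=ag_model)
    for ag_id, ag_model in agent_model_map.items()
    if ag_model != default_model
}
```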
@@ -8,12 +8,12 @@ import config
 class ResearchConfig:
     topic: str
     context: str = ""
-    planner_model: str = "gpt-4o"
-    researcher_model: str = "gemini-1.5-pro"
-    writer_model: str = "claude-3-5-sonnet-20241022"
+    expert_a_model: str = "gpt-4o"
+    expert_b_model: str = "gemini-1.5-pro"
+    expert_c_model: str = "claude-3-5-sonnet-20241022"
 
 class ResearchManager:
-    """Manages the Deep Research workflow"""
+    """Manages the Multi-Model Council workflow"""
 
     def __init__(self, api_key: str, base_url: str = None, provider: str = "aihubmix"):
         self.api_key = api_key
@@ -31,21 +31,41 @@ class ResearchManager:
 
     def create_agents(self, config: ResearchConfig):
         """Initialize agents with specific models"""
-        self.agents["planner"] = ResearchAgent("planner", self._get_client(config.planner_model))
-        self.agents["researcher"] = ResearchAgent("researcher", self._get_client(config.researcher_model))
-        self.agents["writer"] = ResearchAgent("writer", self._get_client(config.writer_model))
+        self.agents["expert_a"] = ResearchAgent("expert_a", self._get_client(config.expert_a_model))
+        self.agents["expert_b"] = ResearchAgent("expert_b", self._get_client(config.expert_b_model))
+        self.agents["expert_c"] = ResearchAgent("expert_c", self._get_client(config.expert_c_model))
 
-    def generate_plan(self, topic: str, context: str) -> Generator[str, None, None]:
-        """Step 1: Generate Research Plan"""
-        prompt = f"Please create a comprehensive research plan for the topic: '{topic}'.\nBreak it down into 3-5 distinct, actionable steps."
-        yield from self.agents["planner"].generate(prompt, context)
+    def collaborate(self, topic: str, context: str) -> Generator[Dict[str, str], None, None]:
+        """
+        Execute the collaborative research process:
+        1. Expert A: Propose Analysis
+        2. Expert B: Critique
+        3. Expert C: Synthesis & Final Plan
+        """
+
+        # Step 1: Expert A Analysis
+        findings_a = ""
+        yield {"type": "step_start", "step": "Expert A Analysis", "agent": self.agents["expert_a"].name, "model": self.agents["expert_a"].model_name}
+        prompt_a = f"Please provide a comprehensive analysis and initial proposal for the topic: '{topic}'.\nContext: {context}"
+        for chunk in self.agents["expert_a"].generate(prompt_a, context):
+            findings_a += chunk
+            yield {"type": "content", "content": chunk}
+        yield {"type": "step_end", "output": findings_a}
 
-    def execute_step(self, step: str, previous_findings: str) -> Generator[str, None, None]:
-        """Step 2: Execute a single research step"""
-        prompt = f"Execute this research step: '{step}'.\nPrevious findings: {previous_findings}"
-        yield from self.agents["researcher"].generate(prompt)
+        # Step 2: Expert B Critique
+        findings_b = ""
+        yield {"type": "step_start", "step": "Expert B Critique", "agent": self.agents["expert_b"].name, "model": self.agents["expert_b"].model_name}
+        prompt_b = f"Review Expert A's proposal on '{topic}'. Critique it, find gaps, and suggest improvements.\nExpert A's Proposal:\n{findings_a}"
+        for chunk in self.agents["expert_b"].generate(prompt_b, context):
+            findings_b += chunk
+            yield {"type": "content", "content": chunk}
+        yield {"type": "step_end", "output": findings_b}
 
-    def generate_report(self, topic: str, all_findings: str) -> Generator[str, None, None]:
-        """Step 3: Generate Final Report"""
-        prompt = f"Write a final comprehensive report on '{topic}' based on these findings:\n{all_findings}"
-        yield from self.agents["writer"].generate(prompt)
+        # Step 3: Expert C Synthesis
+        findings_c = ""
+        yield {"type": "step_start", "step": "Expert C Synthesis", "agent": self.agents["expert_c"].name, "model": self.agents["expert_c"].model_name}
+        prompt_c = f"Synthesize a final comprehensive plan for '{topic}' based on Expert A's proposal and Expert B's critique.\nExpert A:\n{findings_a}\nExpert B:\n{findings_b}"
+        for chunk in self.agents["expert_c"].generate(prompt_c, context):
+            findings_c += chunk
+            yield {"type": "content", "content": chunk}
+        yield {"type": "step_end", "output": findings_c}