<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Agents on Ringi's Log</title>
    <link>https://lilinji.github.io/tags/%E6%99%BA%E8%83%BD%E4%BD%93/</link>
    <description>Recent content in Agents on Ringi's Log</description>
    <generator>Hugo -- 0.152.2</generator>
    <language>zh-cn</language>
    <lastBuildDate>Sat, 02 May 2026 00:00:00 +0800</lastBuildDate>
    <atom:link href="https://lilinji.github.io/tags/%E6%99%BA%E8%83%BD%E4%BD%93/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>Claude Subagents vs. Agent Teams: A Complete Breakdown</title>
      <link>https://lilinji.github.io/2026/05/claude-subagents-vs-agent-teams/</link>
      <pubDate>Sat, 02 May 2026 00:00:00 +0800</pubDate>
      <guid>https://lilinji.github.io/2026/05/claude-subagents-vs-agent-teams/</guid>
      <description>Most people reach for a multi-agent system as soon as a task feels complex, but that instinct is almost always wrong. The right question is: what kind of coordination does the task actually require? Claude offers two fundamentally different multi-agent paradigms: Sub-agents and Agent Teams.</description>
    </item>
    <item>
      <title>How Top AI Labs Build RL Agents in 2026 (Karpathy's System Prompt Learning Idea)</title>
      <link>https://lilinji.github.io/2026/04/top-ai-labs-rl-agents-2026/</link>
      <pubDate>Tue, 28 Apr 2026 00:00:00 +0800</pubDate>
      <guid>https://lilinji.github.io/2026/04/top-ai-labs-rl-agents-2026/</guid>
      <description>Anthropic, OpenAI, and DeepSeek are converging on the same core idea: using the system prompt as a reward function. This article fully traces the evolution of RL from RLHF to RULER, with code included.</description>
    </item>
  </channel>
</rss>