Skip to content

test: add tests for platform metadata and manager#5360

Open
whatevertogo wants to merge 2 commits intoAstrBotDevs:masterfrom
whatevertogo:test/platform-metadata
Open

test: add tests for platform metadata and manager#5360
whatevertogo wants to merge 2 commits intoAstrBotDevs:masterfrom
whatevertogo:test/platform-metadata

Conversation

@whatevertogo
Copy link
Contributor

@whatevertogo whatevertogo commented Feb 23, 2026

Add tests for platform metadata and enhance platform base/manager test coverage to improve reliability of platform infrastructure.

Modifications / 改动点

  • Added tests/unit/test_platform_metadata.py for metadata structure tests
  • Enhanced platform base tests in tests/unit/test_platform_base.py
  • Updated platform manager tests in tests/unit/test_platform_manager.py

Test Coverage:

  • Platform metadata validation

  • Platform base class functionality

  • Platform manager registration and discovery

  • This is NOT a breaking change. / 这不是一个破坏性变更。

Screenshots or Test Results / 运行截图或测试结果

# Verification: Platform metadata tests pass
$ pytest tests/unit/test_platform_metadata.py -v
...
collected 8 items

tests/unit/test_platform_metadata.py::TestPlatformMetadata::test_metadata_creation PASSED
tests/unit/test_platform_metadata.py::TestPlatformMetadata::test_platform_info PASSED
tests/unit/test_platform_metadata.py::TestPlatformMetadata::test_command_support PASSED
...

# Verification: Platform base tests pass
$ pytest tests/unit/test_platform_base.py -v
...
collected 12 items

# Verification: Platform manager tests pass
$ pytest tests/unit/test_platform_manager.py -v
...
collected 14 items

Checklist / 检查清单

  • 😊 如果 PR 中有新加入的功能,已经通过 Issue / 邮件等方式和作者讨论过。/ If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
  • 👀 我的更改经过了良好的测试,并已在上方提供了"验证步骤"和"运行截图"。/ My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 我确保没有引入新依赖库,或者引入了新依赖库的同时将其添加到了 requirements.txtpyproject.toml 文件相应位置。/ I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 我的更改没有引入恶意代码。/ My changes do not introduce malicious code.

Summary by Sourcery

为平台元数据、Platform 基类以及平台管理器/注册表添加单元测试以提升覆盖率。

Tests:

  • PlatformMetadata 引入全面测试,覆盖默认值、可选字段以及配置/i18n 元数据。
  • 添加一个具体的 Platform 测试实现,用于验证生命周期、错误处理、统计上报、webhook 行为以及消息辅助方法。
  • 添加针对平台适配器注册和反注册的测试,以确保注册表的一致性以及基于模块的清理逻辑。
  • 添加基于子进程的测试,用于关键的 PlatformManager 辅助方法,例如 ID 校验、统计聚合以及实例获取。
Original summary in English

Summary by Sourcery

Add unit tests to improve coverage for platform metadata, the Platform base class, and the platform manager/registry.

Tests:

  • Introduce comprehensive tests for PlatformMetadata covering defaults, optional fields, and configuration/i18n metadata.
  • Add a concrete Platform test implementation to validate lifecycle, error handling, stats reporting, webhook behavior, and messaging helpers.
  • Add tests for platform adapter registration and unregistration to ensure registry consistency and module-based cleanup logic.
  • Add subprocess-based tests for key PlatformManager helper methods such as ID validation, stats aggregation, and instance retrieval.

- Add platform metadata unit tests
- Enhance platform base tests
- Update platform manager tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings February 23, 2026 00:58
@auto-assign auto-assign bot requested review from Raven95676 and anka-afk February 23, 2026 00:58
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Feb 23, 2026
@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello @whatevertogo, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on bolstering the reliability and robustness of the core platform infrastructure by substantially increasing test coverage. It introduces new unit tests for platform metadata, enhances existing tests for the base platform class, and adds comprehensive validation for the platform manager's registration and discovery functionalities. These additions aim to ensure foundational components behave as expected, reducing potential bugs and improving overall system stability and maintainability.

Highlights

  • New Platform Metadata Tests: Added a dedicated test file, tests/unit/test_platform_metadata.py, to thoroughly validate the structure and behavior of the PlatformMetadata dataclass, ensuring correct handling of all its fields and default values.
  • Enhanced Platform Base Class Tests: Significantly expanded the test coverage for the Platform base class in tests/unit/test_platform_base.py, including tests for initialization, status transitions, error recording and clearing, webhook functionality, statistics retrieval, and event handling.
  • Improved Platform Manager Tests: Introduced new tests in tests/unit/test_platform_manager.py to cover platform registration and unregistration mechanisms, as well as the PlatformManager's internal helper functions for ID validation, sanitization, and comprehensive statistics aggregation.
Changelog
  • tests/unit/test_platform_base.py
    • Added comprehensive unit tests for the Platform base class.
    • Verified initialization, status changes (PENDING, RUNNING, ERROR, STOPPED), and started_at timestamp logic.
    • Tested error recording, last_error retrieval, and clear_errors functionality.
    • Included tests for unified_webhook configuration, get_stats output, commit_event queueing, terminate method, get_client default behavior, and send_by_session default implementation.
    • Validated PlatformStatus enum values.
  • tests/unit/test_platform_manager.py
    • Added tests for register_platform_adapter decorator, covering basic registration, config template handling, display name, logo path, streaming support, i18n resources, and config metadata.
    • Verified error handling for duplicate adapter registration.
    • Tested unregister_platform_adapters_by_module for correct adapter removal based on module path.
    • Confirmed consistency of platform_registry and platform_cls_map.
    • Included tests for PlatformManager helper functions like _is_valid_platform_id and _sanitize_platform_id.
    • Tested PlatformManager initialization, get_all_stats (empty and with mock platforms), and get_insts methods.
  • tests/unit/test_platform_metadata.py
    • Added unit tests for the PlatformMetadata dataclass.
    • Verified basic creation with required fields.
    • Confirmed default values for optional fields.
    • Tested creation with all fields populated, including default_config_tmpl, adapter_display_name, logo_path, support_streaming_message, support_proactive_message, module_path, i18n_resources, and config_metadata.
    • Ensured acceptance of empty strings for identity fields and non-standard i18n resources.
Activity
  • No specific activity (comments, reviews, or progress updates) has been recorded for this pull request since its creation.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@dosubot
Copy link

dosubot bot commented Feb 23, 2026

Related Documentation

Checked 1 published document(s) in 1 knowledge base(s). No updates required.

How did I do? Any feedback?  Join Discord

Copy link
Contributor

@sourcery-ai sourcery-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hey - 我发现了 1 个问题,并留下了一些整体性的反馈:

  • 目前 PlatformManager 的 helper 测试依赖于启动一个 Python 子进程并对 case 参数进行字符串化,这样的方式看起来既脆弱又偏慢;可以考虑重构当前的循环依赖导入链,这样这些逻辑就可以在同一进程内直接测试,而不需要通过 _run_python / _assert_platform_manager_case 这样的间接调用。
  • 有一些断言依赖于精确匹配中文错误信息的子串(例如:"已经注册过""未实现统一 Webhook 模式");如果这些错误信息将来被本地化或重写,测试就会失败,因此你可能更希望基于更稳定的信号进行断言(例如异常类型,或者更不依赖具体语言表述的片段)。
给 AI Agents 的提示词
Please address the comments from this code review:

## Overall Comments
- The PlatformManager helper tests relying on spawning a Python subprocess and stringifying the `case` argument look a bit brittle and slow; consider refactoring the circular import chain so these can be exercised directly in-process without `_run_python`/`_assert_platform_manager_case` indirection.
- Several assertions depend on exact Chinese error-message substrings (e.g., `"已经注册过"`, `"未实现统一 Webhook 模式"`); if these messages are ever localized or rephrased the tests will break, so you might want to assert on a more stable signal (e.g., exception type or a less language-specific fragment).

## Individual Comments

### Comment 1
<location> `tests/unit/test_platform_manager.py:332` </location>
<code_context>
+        # Manually set module path for testing
+        platform_registry[1].module_path = "plugins.other_plugin.adapter"
+
+        # Unregister by module prefix
+        unregistered = unregister_platform_adapters_by_module("plugins.test_plugin")
+
</code_context>

<issue_to_address>
**suggestion (testing):** Consider asserting that `platform_registry` is also updated when unregistering by module prefix.

In `test_unregister_by_module_prefix`, you verify `platform_cls_map` after calling `unregister_platform_adapters_by_module`. To fully cover the behavior, also assert that `platform_registry` no longer contains metadata for the removed adapter (e.g., filter by `name`/`id` for `adapter_to_remove` and expect nothing). This confirms registry and class map stay in sync on removal.

Suggested implementation:

```python
        # Manually set module path for testing
        platform_registry[1].module_path = "plugins.other_plugin.adapter"

        # Unregister by module prefix
        unregistered = unregister_platform_adapters_by_module("plugins.test_plugin")

        # Ensure the registry no longer contains metadata for the removed adapter
        remaining_registry_entries = [
            meta
            for meta in platform_registry.values()
            if getattr(meta, "name", None) == "adapter_to_remove"
        ]
        assert remaining_registry_entries == []

```

If the metadata object stored in `platform_registry` uses a different attribute than `name` for the adapter identifier (e.g., `adapter_name` or `id`), update the `getattr(meta, "name", None)` call to use the correct attribute so that the filter correctly targets `adapter_to_remove`.
</issue_to_address>

Sourcery 对开源项目是免费的——如果你觉得我们的评审有帮助,欢迎分享 ✨
帮我变得更有用!请在每条评论上点 👍 或 👎,我会根据你的反馈改进以后的评审。
Original comment in English

Hey - I've found 1 issue, and left some high level feedback:

  • The PlatformManager helper tests relying on spawning a Python subprocess and stringifying the case argument look a bit brittle and slow; consider refactoring the circular import chain so these can be exercised directly in-process without _run_python/_assert_platform_manager_case indirection.
  • Several assertions depend on exact Chinese error-message substrings (e.g., "已经注册过", "未实现统一 Webhook 模式"); if these messages are ever localized or rephrased the tests will break, so you might want to assert on a more stable signal (e.g., exception type or a less language-specific fragment).
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The PlatformManager helper tests relying on spawning a Python subprocess and stringifying the `case` argument look a bit brittle and slow; consider refactoring the circular import chain so these can be exercised directly in-process without `_run_python`/`_assert_platform_manager_case` indirection.
- Several assertions depend on exact Chinese error-message substrings (e.g., `"已经注册过"`, `"未实现统一 Webhook 模式"`); if these messages are ever localized or rephrased the tests will break, so you might want to assert on a more stable signal (e.g., exception type or a less language-specific fragment).

## Individual Comments

### Comment 1
<location> `tests/unit/test_platform_manager.py:332` </location>
<code_context>
+        # Manually set module path for testing
+        platform_registry[1].module_path = "plugins.other_plugin.adapter"
+
+        # Unregister by module prefix
+        unregistered = unregister_platform_adapters_by_module("plugins.test_plugin")
+
</code_context>

<issue_to_address>
**suggestion (testing):** Consider asserting that `platform_registry` is also updated when unregistering by module prefix.

In `test_unregister_by_module_prefix`, you verify `platform_cls_map` after calling `unregister_platform_adapters_by_module`. To fully cover the behavior, also assert that `platform_registry` no longer contains metadata for the removed adapter (e.g., filter by `name`/`id` for `adapter_to_remove` and expect nothing). This confirms registry and class map stay in sync on removal.

Suggested implementation:

```python
        # Manually set module path for testing
        platform_registry[1].module_path = "plugins.other_plugin.adapter"

        # Unregister by module prefix
        unregistered = unregister_platform_adapters_by_module("plugins.test_plugin")

        # Ensure the registry no longer contains metadata for the removed adapter
        remaining_registry_entries = [
            meta
            for meta in platform_registry.values()
            if getattr(meta, "name", None) == "adapter_to_remove"
        ]
        assert remaining_registry_entries == []

```

If the metadata object stored in `platform_registry` uses a different attribute than `name` for the adapter identifier (e.g., `adapter_name` or `id`), update the `getattr(meta, "name", None)` call to use the correct attribute so that the filter correctly targets `adapter_to_remove`.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

# Manually set module path for testing
platform_registry[1].module_path = "plugins.other_plugin.adapter"

# Unregister by module prefix
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion (testing): Consider asserting that platform_registry is also updated when unregistering by module prefix.

test_unregister_by_module_prefix 中,你在调用 unregister_platform_adapters_by_module 之后验证了 platform_cls_map。为了更全面地覆盖该行为,还可以断言 platform_registry 中不再包含被移除适配器的元数据(例如根据 adapter_to_removename / id 进行过滤,并期望结果为空)。这样可以确认在移除时注册表和类映射保持同步。

建议实现如下:

        # Manually set module path for testing
        platform_registry[1].module_path = "plugins.other_plugin.adapter"

        # Unregister by module prefix
        unregistered = unregister_platform_adapters_by_module("plugins.test_plugin")

        # Ensure the registry no longer contains metadata for the removed adapter
        remaining_registry_entries = [
            meta
            for meta in platform_registry.values()
            if getattr(meta, "name", None) == "adapter_to_remove"
        ]
        assert remaining_registry_entries == []

如果存储在 platform_registry 中的元数据对象使用的适配器标识字段不是 name(例如 adapter_nameid),请将 getattr(meta, "name", None) 调整为正确的属性名称,以便筛选逻辑能准确匹配 adapter_to_remove

Original comment in English

suggestion (testing): Consider asserting that platform_registry is also updated when unregistering by module prefix.

In test_unregister_by_module_prefix, you verify platform_cls_map after calling unregister_platform_adapters_by_module. To fully cover the behavior, also assert that platform_registry no longer contains metadata for the removed adapter (e.g., filter by name/id for adapter_to_remove and expect nothing). This confirms registry and class map stay in sync on removal.

Suggested implementation:

        # Manually set module path for testing
        platform_registry[1].module_path = "plugins.other_plugin.adapter"

        # Unregister by module prefix
        unregistered = unregister_platform_adapters_by_module("plugins.test_plugin")

        # Ensure the registry no longer contains metadata for the removed adapter
        remaining_registry_entries = [
            meta
            for meta in platform_registry.values()
            if getattr(meta, "name", None) == "adapter_to_remove"
        ]
        assert remaining_registry_entries == []

If the metadata object stored in platform_registry uses a different attribute than name for the adapter identifier (e.g., adapter_name or id), update the getattr(meta, "name", None) call to use the correct attribute so that the filter correctly targets adapter_to_remove.

@dosubot dosubot bot added the area:platform The bug / feature is about IM platform adapter, such as QQ, Lark, Telegram, WebChat and so on. label Feb 23, 2026
Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request significantly improves the test coverage for the platform infrastructure by adding comprehensive unit tests for PlatformMetadata, Platform base class, and PlatformManager. The tests are well-structured and cover a good range of functionality. My feedback focuses on a few areas to further enhance test quality and code clarity, such as strengthening an assertion in one test, removing a misleading comment, and consolidating a couple of redundant tests.

Comment on lines +331 to +335
with patch(
"astrbot.core.platform.platform.Metric.upload", new_callable=AsyncMock
):
# Should not raise any exception
await platform.send_by_session(mock_session, mock_message_chain)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The test currently only checks that send_by_session doesn't raise an exception. To make the test more robust, you should also assert that the Metric.upload method was called with the expected arguments. This ensures the default implementation's behavior is correctly verified.

Suggested change
with patch(
"astrbot.core.platform.platform.Metric.upload", new_callable=AsyncMock
):
# Should not raise any exception
await platform.send_by_session(mock_session, mock_message_chain)
with patch(
"astrbot.core.platform.platform.Metric.upload", new_callable=AsyncMock
) as mock_upload:
await platform.send_by_session(mock_session, mock_message_chain)
mock_upload.assert_awaited_once_with(
msg_event_tick=1, adapter_name="test_platform"
)

Comment on lines +382 to +388
# NOTE: The following tests are skipped due to circular import issues
# when importing PlatformManager from astrbot.core.platform.manager.
# This is a known issue that should be addressed in the future.
# The circular import chain is:
# manager.py -> star_handler -> star_tools -> api.platform -> star.register -> star_handler -> astr_agent_context -> context -> manager
#
# Skipping these tests as per task requirements to only record issues, not fix them.
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

This comment block is misleading. The tests for PlatformManager are not actually skipped; they are implemented using a subprocess workaround (_assert_platform_manager_case) to avoid the circular import issue. Please remove this comment block as it does not reflect the current state of the code.

Comment on lines +9 to +37
def test_platform_metadata_creation_basic(self):
"""Test creating PlatformMetadata with required fields."""
meta = PlatformMetadata(
name="test_platform",
description="A test platform",
id="test_platform_id",
)

assert meta.name == "test_platform"
assert meta.description == "A test platform"
assert meta.id == "test_platform_id"

def test_platform_metadata_default_values(self):
"""Test PlatformMetadata default values."""
meta = PlatformMetadata(
name="test_platform",
description="A test platform",
id="test_platform_id",
)

# Default values
assert meta.default_config_tmpl is None
assert meta.adapter_display_name is None
assert meta.logo_path is None
assert meta.support_streaming_message is True
assert meta.support_proactive_message is True
assert meta.module_path is None
assert meta.i18n_resources is None
assert meta.config_metadata is None
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The tests test_platform_metadata_creation_basic and test_platform_metadata_default_values both instantiate PlatformMetadata with the same arguments to test different aspects. You can merge them into a single test to improve conciseness and avoid redundant object creation.

    def test_platform_metadata_creation_and_defaults(self):
        """Test creating PlatformMetadata with required fields and checking defaults."""
        meta = PlatformMetadata(
            name="test_platform",
            description="A test platform",
            id="test_platform_id",
        )

        assert meta.name == "test_platform"
        assert meta.description == "A test platform"
        assert meta.id == "test_platform_id"

        # Default values
        assert meta.default_config_tmpl is None
        assert meta.adapter_display_name is None
        assert meta.logo_path is None
        assert meta.support_streaming_message is True
        assert meta.support_proactive_message is True
        assert meta.module_path is None
        assert meta.i18n_resources is None
        assert meta.config_metadata is None

Copy link
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Adds unit tests around the platform infrastructure to improve confidence in metadata, base Platform behavior, and registration/manager helpers.

Changes:

  • Added PlatformMetadata dataclass tests.
  • Added unit tests for Platform base class behaviors (status/errors/stats/webhook helpers).
  • Added tests for platform adapter registration/unregistration and some PlatformManager helper methods (via subprocess isolation).

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

File Description
tests/unit/test_platform_metadata.py Verifies PlatformMetadata construction and default/optional fields.
tests/unit/test_platform_base.py Covers Platform base behaviors: status lifecycle, error recording/clearing, stats, webhook helpers, and default method behavior.
tests/unit/test_platform_manager.py Tests platform adapter registration/unregistration globals and exercises PlatformManager helper/stats behaviors using subprocess isolation.

Comment on lines +20 to +28
def _run_python(code: str) -> subprocess.CompletedProcess[str]:
repo_root = Path(__file__).resolve().parents[2]
return subprocess.run(
[sys.executable, "-c", textwrap.dedent(code)],
cwd=repo_root,
capture_output=True,
text=True,
check=False,
)
Copy link

Copilot AI Feb 23, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The PlatformManager tests run via subprocess, but this introduces significant overhead (spawning a new interpreter per case) and makes failures harder to debug. Consider importing PlatformManager directly in-process and isolating side effects with monkeypatch/fixtures (or, if subprocess isolation is required, consolidate multiple cases into a single subprocess run or mark these as integration tests to keep unit test runtime low).

Copilot uses AI. Check for mistakes.
Comment on lines +382 to +388
# NOTE: The following tests are skipped due to circular import issues
# when importing PlatformManager from astrbot.core.platform.manager.
# This is a known issue that should be addressed in the future.
# The circular import chain is:
# manager.py -> star_handler -> star_tools -> api.platform -> star.register -> star_handler -> astr_agent_context -> context -> manager
#
# Skipping these tests as per task requirements to only record issues, not fix them.
Copy link

Copilot AI Feb 23, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This comment block says the following tests are "skipped due to circular import issues", but the tests below are not actually skipped—they run via _assert_platform_manager_case(...). Please update the comment to reflect the current approach (subprocess-based isolation), or add an explicit pytest.mark.skip/xfail if they truly should be skipped.

Suggested change
# NOTE: The following tests are skipped due to circular import issues
# when importing PlatformManager from astrbot.core.platform.manager.
# This is a known issue that should be addressed in the future.
# The circular import chain is:
# manager.py -> star_handler -> star_tools -> api.platform -> star.register -> star_handler -> astr_agent_context -> context -> manager
#
# Skipping these tests as per task requirements to only record issues, not fix them.
# NOTE: The following tests previously ran into circular import issues
# when importing PlatformManager directly from astrbot.core.platform.manager.
# To avoid this, they exercise PlatformManager behavior in a separate
# subprocess via `_assert_platform_manager_case(...)`, which imports
# PlatformManager in isolation and prevents circular imports in this process.
# The historical circular import chain was:
# manager.py -> star_handler -> star_tools -> api.platform -> star.register -> star_handler -> astr_agent_context -> context -> manager

Copilot uses AI. Check for mistakes.
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

area:platform The bug / feature is about IM platform adapter, such as QQ, Lark, Telegram, WebChat and so on. size:XXL This PR changes 1000+ lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants