Compare commits

...

121 Commits

Author SHA1 Message Date
abb864ec70 Fix unit test issues 2026-03-07 17:51:49 +08:00
b38dde1b70 Fix several typos; strengthen related logic 2026-03-07 17:50:35 +08:00
8f40572a38 Fix typos and finish the docs 2026-03-07 17:43:37 +08:00
230705f689 Finish the permission system 2026-03-07 17:35:59 +08:00
e605527900 Expand the README 2026-03-07 16:25:15 +08:00
9064b31fe9 Add a tool for displaying coverage 2026-03-07 16:21:49 +08:00
27e53c7acd Improve code coverage and provide a tool to display it 2026-03-07 16:17:14 +08:00
ca1db103b5 Unit tests pass now 2026-03-07 15:53:13 +08:00
7f1035ff43 Create the basic method for fetching permissions 2026-03-07 15:19:49 +08:00
5e0e39bfc3 Create the basic table schema 2026-03-07 13:52:16 +08:00
88861f4264 Fix 坏枪's never-run unit tests; introduce a unit-testing framework to the project (finally...) 2026-03-07 13:16:24 +08:00
a1c9f9bccb Add AGENTS.md for AI readers
All checks were successful
continuous-integration/drone/push Build is passing
2026-03-07 12:06:37 +08:00
f6601f807a Add krg expression sprite variants
All checks were successful
continuous-integration/drone/push Build is passing
2026-03-02 13:50:09 +08:00
f7cea196ec krgsay
All checks were successful
continuous-integration/drone/push Build is passing
2026-02-28 15:31:12 +08:00
d4826e9e8b Configure both AI features to use the default model 2026-02-28 13:50:52 +08:00
33934ef7b5 Allow replying to celeste without a prefix 2026-02-28 13:49:28 +08:00
f9f8ae4e67 Fix a bug where celeste overreacts 2026-02-28 12:42:23 +08:00
94db34037b Merge pull request 'Enhancement: add image rendering and text fallback for the man and textfx commands' (#54) from enhancement/man-and-textfx into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #54
2026-02-25 16:26:13 +08:00
df409a13a9 Lengthen the timeout a bit 2026-02-25 16:24:19 +08:00
34175e8c17 Widen error-catching scope; adjust how parameters are injected into logs 2026-02-25 16:20:44 +08:00
c66576e12b Merge pull request 'Feature: create startup notification' (#53) from feature/startup-notification into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #53
2026-02-25 16:13:01 +08:00
91769f93ae Add rendering of error messages as images 2026-02-25 16:11:23 +08:00
27841b8422 Add a rendering fallback for the man command 2026-02-25 16:11:11 +08:00
48282ceb6c Add startup notification 2026-02-25 15:08:23 +08:00
00c0202720 Merge pull request 'Feature: respond to more kinds of meow' (#52) from feature/nya-more into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #52
2026-02-25 13:57:36 +08:00
3ddf81e7de Fix a variable-name shadowing issue 2026-02-25 13:52:25 +08:00
ba15841836 Fix missing matching for the character 「喵」 2026-02-25 13:49:15 +08:00
014e9c9a71 Create more meow responses 2026-02-25 13:40:45 +08:00
32cabc9452 Merge pull request 'Add solar-term report feature' (#51) from feature/24-solar-terms-notification into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #51
2026-02-21 23:54:24 +08:00
19e83dea01 Account for loop boundary-condition risks 2026-02-21 23:51:03 +08:00
9210f85300 Build solar terms in statically instead of generating them live with an LLM 2026-02-21 23:45:01 +08:00
74052594c3 Add a solar-term query command 2026-02-21 23:36:42 +08:00
31ad8dac3e Add solar-term report feature 2026-02-21 22:43:09 +08:00
c46b88060b Adjust the 宾几人 feature; adjust bilibili fetch
All checks were successful
continuous-integration/drone/push Build is passing
2026-02-19 17:57:00 +08:00
02018cd11d Add the 宾几人 feature
All checks were successful
continuous-integration/drone/push Build is passing
2026-02-18 16:43:35 +08:00
d4cde42bdc Vibe Coding: several textfx issue updates
All checks were successful
continuous-integration/drone/push Build is passing
2026-02-16 19:36:24 +08:00
58ff8f02da Update the Celeste Classic feature
All checks were successful
continuous-integration/drone/push Build is passing
2026-02-02 19:06:24 +08:00
b32ddcaf38 Adjust PPI
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-18 12:45:58 +08:00
1eb7e62cfe Add Typst support
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-18 12:25:17 +08:00
c44e29a907 Add AI
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-12 23:12:28 +08:00
24457ff7cd Add docs
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-12 22:21:04 +08:00
0d36bea3ca morse
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-10 00:49:33 +08:00
bf8504d432 Forgot to add sort
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-10 00:44:10 +08:00
16a55ae69a Add lots and lots of crypto
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-10 00:39:45 +08:00
3adbd38d65 fix
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-09 23:42:34 +08:00
420630e35c Add text-processing features
All checks were successful
continuous-integration/drone/push Build is passing
2026-01-09 23:39:13 +08:00
36a564547c Don't reply
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-31 15:21:18 +08:00
eb8bf16346 Could we add this to the 此方 bot (by 蜡笔)
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-31 15:02:08 +08:00
67884f7133 Fix oracle-bone script skip-on-error
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-30 22:25:50 +08:00
f18d94670e Second-round simplified characters
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-29 22:11:27 +08:00
6e86a6987f Error messages
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-16 21:15:22 +08:00
9c9496efbd new
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-16 21:13:21 +08:00
770d7567fb Oracle bone script
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-16 16:13:55 +08:00
7026337a43 Oracle bone script 2025-12-16 15:40:18 +08:00
ef617e1c85 Remove the docs for 「黑白」
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-15 18:52:51 +08:00
bd71a8d75f Requiring an @ to use 黑白 is too cumbersome
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-15 18:51:27 +08:00
605407549b Fix shadow opacity and color reading
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-13 20:35:13 +08:00
5e01e086f2 Fix shape strokes 2025-12-13 20:22:13 +08:00
1f887aeaf6 Set up masks
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-13 18:37:15 +08:00
5de4b72a6b Convert many filters to RGBA to avoid issues
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-13 18:34:15 +08:00
1861cd4f1a Fix shadows
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-13 18:25:43 +08:00
9148073095 Improve shape strokes; add text-layer and blank-layer generation
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-10 22:45:33 +08:00
ef3404b096 Add the shapely library
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-10 19:34:13 +08:00
14feae943e Set up channels
Some checks failed
continuous-integration/drone/push Build is failing
2025-12-10 19:29:30 +08:00
1d763dfc3c Set up channels 2025-12-10 19:28:34 +08:00
a829f035b3 new fx
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-10 17:22:12 +08:00
9904653cc6 new fx 2025-12-10 17:17:52 +08:00
de04fcbec1 Merge branch 'master' of https://gitea.service.jazzwhom.top/mttu-developers/konabot
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-09 20:33:20 +08:00
70e3565e44 Docs 2025-12-09 20:32:45 +08:00
6b10c99c7a Full version 2025-12-09 20:21:36 +08:00
54fae88914 To be improved 2025-12-09 00:02:26 +08:00
cdfb822f42 Fix the typo 代办 → 待办 everywhere
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-07 23:11:21 +08:00
73aad89f57 Skip the step of accessing Baidu
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-04 22:23:11 +08:00
e1b5f9cfc9 Add multi-line matching
Some checks failed
continuous-integration/drone/push Build is failing
2025-12-04 22:16:57 +08:00
35f411fb3a GIF implementation and parameter passing for the reminder UI
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-04 16:20:09 +08:00
eed21e6223 new
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-03 22:38:50 +08:00
bf5c10b7a7 notice test
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-03 22:23:44 +08:00
274ca0fa9a First attempt at a UI 2025-12-03 22:00:44 +08:00
c72cdd6a6b New filters, new fixes
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-03 17:20:46 +08:00
16b0451133 Remove 黑白
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-03 13:32:11 +08:00
cb34813c4b Merge branch 'master' of ssh://gitea.service.jazzwhom.top:2221/mttu-developers/konabot 2025-12-03 13:22:09 +08:00
2de3be271e Latest-and-hottest blur
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-03 13:10:16 +08:00
f7d2168dac Latest and hottest 2025-12-03 12:25:39 +08:00
40be5ce335 Polish fx
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-02 14:45:07 +08:00
8e6131473d fximage 2025-12-02 12:17:11 +08:00
26e10be4ec Fix related docs 2025-11-28 17:10:18 +08:00
78bda5fc0a Add Cloud Shield (云盾)
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-28 16:59:58 +08:00
97658a6c56 Expand docs
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-28 16:54:29 +08:00
3fedc685a9 An initial-letter extraction feature nobody needs 2025-11-28 16:51:16 +08:00
d1a3e44c45 Adjust log levels and content
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-26 13:09:37 +08:00
f637778173 Finish the leaderboard part
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-26 13:02:26 +08:00
145bfedf67 Merge branch 'master' of ssh://gitea.service.jazzwhom.top:2221/mttu-developers/konabot
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-25 14:29:46 +08:00
61b9d733a5 Add the Alibaba Green Cloud Shield API 2025-11-25 14:29:26 +08:00
ae59c20e2f Gitignore pyrightconfig, so people on other IDEs can use the config file to set the virtualenv location, etc.
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-24 18:36:06 +08:00
0b7d21aeb0 Add one more example, so spans of hundreds of years can be handled (
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-22 01:22:42 +08:00
d6ede3e6cd Standardize time parsing
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-21 16:56:00 +08:00
07ace8e6e9 It was just a matter of adding a Y (
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-21 16:19:19 +08:00
6f08c22b5b The LLM wins!!!!!!
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-21 16:13:38 +08:00
3e5c1941c8 Refactor the ptimeparse module
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2025-11-21 06:03:28 +08:00
f6e7dfcd93 Air-conditioner peak; further optimize air-conditioner database mounting
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-19 16:24:24 +08:00
1233677eea Restore idiom-chain to in-memory storage; optimize the air conditioner to partial in-memory storage with an expiry, avoiding frequent database queries
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-19 11:04:13 +08:00
00bdb90e3c Merge pull request 'Hook the 此方 Bot up to a database' (#49) from database into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #49
2025-11-19 00:52:01 +08:00
988965451b Naughty AI, why did you commit the diff file 2025-11-19 00:47:24 +08:00
f6fadb7226 Qwen told me to change a few more things, so I did; strengthened the database-related parts 2025-11-19 00:44:44 +08:00
0d540eea4c I used AI to fix 坏枪's code! 2025-11-18 23:55:31 +08:00
f21da657db Hook up the database 2025-11-18 19:36:05 +08:00
a8a7b62f76 Fix an issue caused by playwright versions differing across sources
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-18 02:22:35 +08:00
789500842c wocao1
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-15 20:31:38 +08:00
2f22f11d57 Adjust GIF rendering strategy
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-15 20:16:42 +08:00
eff25435e3 Make MAN use 坏枪's Markdown processor
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2025-11-11 01:47:10 +08:00
df28fad697 Adjust messages 2025-11-11 01:24:29 +08:00
561f6981aa Answering quiz questions requires @-ing the bot
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-11 01:20:24 +08:00
2632215af9 Expand MAN
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-11 01:06:11 +08:00
bfde559892 Add a manageable subscription module and hook it into KonaPH 2025-11-11 00:53:17 +08:00
857f8c5955 Merge branch 'master' of ssh://gitea.service.jazzwhom.top:2221/mttu-developers/konabot 2025-11-10 22:12:33 +08:00
500053e630 More stable MarkDown and LaTeX generation!
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-10 21:59:45 +08:00
30cfb4cadd Add Justfile-related tooling to simplify project startup 2025-11-10 21:23:41 +08:00
e2f99af73b Install browser dependencies at the very front, so dependency updates rarely force a browser reinstall
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-10 05:00:51 +08:00
e09de9eeb6 Switch to uv instead of poetry for managing dependencies inside Docker
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-10 04:41:05 +08:00
4a3b49ce79 De Morgan's laws (
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-09 23:47:26 +08:00
03900f4416 Hook idiom-chain up to the LLM; hook up MarkDown and LaTeX
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2025-11-09 23:12:04 +08:00
160 changed files with 24807 additions and 2493 deletions


@@ -13,7 +13,7 @@ steps:
- name: submodules
image: alpine/git
commands:
- git submodule update --init --recursive
- git submodule update --init --recursive
- name: Build Docker image
image: plugins/docker:latest
privileged: true
@@ -30,7 +30,7 @@ steps:
volumes:
- name: docker-socket
path: /var/run/docker.sock
- name: Test plugin loading in the container
- name: Run several tests in the container
image: docker:dind
privileged: true
volumes:
@@ -38,6 +38,8 @@ steps:
path: /var/run/docker.sock
commands:
- docker run --rm gitea.service.jazzwhom.top/mttu-developers/konabot:nightly-${DRONE_COMMIT_SHA} python scripts/test_plugin_load.py
- docker run --rm gitea.service.jazzwhom.top/mttu-developers/konabot:nightly-${DRONE_COMMIT_SHA} python scripts/test_playwright.py
- docker run --rm gitea.service.jazzwhom.top/mttu-developers/konabot:nightly-${DRONE_COMMIT_SHA} python -m pytest --cov-report term-missing:skip-covered
- name: Send build results to ntfy
image: parrazam/drone-ntfy
when:
@@ -68,7 +70,7 @@ steps:
- name: submodules
image: alpine/git
commands:
- git submodule update --init --recursive
- git submodule update --init --recursive
- name: Build and push the release Docker image
image: plugins/docker:latest
privileged: true


@@ -1,4 +1,4 @@
ENVIRONMENT=dev
PORT=21333
DATABASE_PATH="./data/database.db"
ENABLE_CONSOLE=true

14
.gitignore vendored

@@ -1,4 +1,16 @@
# Basic data files and environment files
/.env
/data
/pyrightconfig.json
/pyrightconfig.toml
__pycache__
# Cache files
__pycache__
# Diff files that may be generated incidentally
/*.diff
# Code coverage reports
/.coverage
/.coverage.db
/htmlcov

3
.gitmodules vendored

@@ -1,3 +1,6 @@
[submodule "assets/lexicon/THUOCL"]
path = assets/lexicon/THUOCL
url = https://github.com/thunlp/THUOCL.git
[submodule "assets/oracle"]
path = assets/oracle
url = https://gitea.service.jazzwhom.top/mttu-developers/oracle-source.git

6
.sqls.yml Normal file

@@ -0,0 +1,6 @@
lowercaseKeywords: false
connections:
- driver: sqlite
dataSourceName: "./data/database.db"
- driver: sqlite
dataSourceName: "./data/perm.sqlite3"

188
AGENTS.md Normal file

@@ -0,0 +1,188 @@
# AGENTS.md
This file is aimed at two kinds of collaborators:
- Human friends writing code by hand
- AI agents assisting development in this repository
This project is primarily hand-written. Collaboration is welcome, but please understand the constraints and structure here before making changes.
## Project positioning
- This is a QQ bot project for entertainment, used in a private setting.
- Although it mainly serves a circle of acquaintances, code must still be written to an "untrusted input" standard.
- Do not assume message content, security boundaries, or call parameters are reliable just because the usage scenario is internal.
## Basic principles
### 1. Distrust user input by default
All input from chat messages, command arguments, platform events, and so on should be treated as untrusted.
When developing, pay attention to at least the following:
- Do not assume input types are correct; validate before use.
- Do not assume input lengths are reasonable; watch for overlong text, huge argument counts, and abnormally nested structures.
- Do not assume input content is safe; avoid concatenating it directly into file paths, SQL, shell arguments, HTML, or templates.
- Do not assume users will use commands as intended; bad input must fail gracefully.
- For any external request, file I/O, rendering, or execution-type logic, consider abuse risks first.
### 2. Prefer the existing style
- This project is maintained mostly by hand; changes should stay close to the existing conventions.
- Do not refactor at scale just to look "more modern" unless there is a clear benefit.
- When adding capabilities, prefer reusing existing common modules over reinventing the wheel.
### 3. Small steps, clear impact
- Prefer local, explicit, explainable changes.
- When modifying a plugin, avoid touching unrelated plugins in passing.
- Before adjusting a common module, confirm whether it would affect the behavior of many plugins.
## Repository structure
### `konabot/`
Core code directory.
#### `konabot/common/`
Common module directory.
- Holds reusable base capabilities, utility modules, and shared logic.
- If a piece of logic may be shared by multiple plugins, prefer putting it here.
- When modifying code here, pay extra attention to compatibility, since many plugins may depend on it.
#### `konabot/docs/`
Documentation directory used by the bot's internal docs system.
- This is the source of the documentation shown to users.
- Docs are triggered and displayed via the `man` command.
- Although doc files usually use the `.txt` suffix, their content may be written in a markdown style.
- Files with the `.md` suffix are ignored, so `.md` is better suited for extra notes meant only for repository maintainers.
- A doc's file name is the command name users query with; keep names concise, stable, and easy to understand.
Additional notes:
- `konabot/docs/user/` contains docs users look up directly.
- `konabot/docs/lib/` leans toward maintainer reference.
- `konabot/docs/concepts/` records concepts.
- `konabot/docs/sys/` holds system docs visible in specific scopes.
#### `konabot/plugins/`
Plugin directory.
- There are many plugins; this is where most of the project's features live.
- A plugin may be a single file or a folder.
- When adding or modifying a plugin, first look at how neighboring plugins are organized, then decide between a single file and a directory structure.
- If the logic has clearly outgrown what a single file can maintain, split it into a directory plugin; do not let one file pile up too large.
## Root-level docs
### `docs/`
The `docs/` directory at the repository root mainly records general-purpose module descriptions and development docs.
- The content here is aimed at development and maintenance.
- Good for common-module descriptions, integration notes, configuration notes, and development notes.
- Do not put docs meant to be shown to users via the `man` command here; that content belongs under `konabot/docs/`.
## Specific requirements for AI Agents
If you are an AI agent, follow these conventions:
### Before modifying
- Read the files you are about to modify and their surrounding context; do not guess purpose from file names alone.
- Decide whether the target logic belongs to a common module, user docs, or a specific plugin.
- If the requirement can be completed locally, do not widen the scope of the change.
### While modifying
- Prefer continuing the existing naming, directory structure, and coding style.
- Do not batch-format the whole project just because it is convenient.
- Do not rename lots of files, move directories, or replace the existing architecture on your own.
- When user input, paths, networking, databases, or rendering are involved, proactively add the necessary validation and defenses.
- If adding to `konabot/common/` or another module depended on in many places, prefer NoneBot2-style dependency injection rather than scattering global state or hard-coded dependencies across call sites.
- When writing docs, distinguish clearly between content for the `man` system and content for repository maintainers.
### After modifying
- Check whether the change accidentally breaks other plugins or common modules.
- If you added a user-visible feature, consider whether corresponding docs under `konabot/docs/` are needed.
- If you added or adjusted a common capability, consider whether the root-level `docs/` needs an update.
## Plugin development advice
- Keep a plugin self-contained first; do not prematurely extract business-specific logic into common modules.
- Once several plugins start repeating the same kind of logic, consider lifting it into `konabot/common/`.
- A plugin should give stable feedback on abnormal input rather than throwing hard-to-understand errors.
- If a plugin accesses external services, consider timeouts, failure degradation, and validation of returned content.
### Basic user-interaction writing advice
- Match messages with clear, convergent rules before entering the handling logic; do not start by catch-all parsing every input inside the handler.
- Extract the plain text and split command and arguments early in the handler, and give stable feedback for missing, invalid, or malformed arguments.
- If user input only allows a limited set of values, define the allowed set first, then normalize and validate against it.
- Keep output simple and direct; when one sentence can explain the problem, do not return cryptic stack traces or overly technical messages.
- For heavy operations such as rendering, network requests, or image generation, confirm the input is reasonable before running the expensive logic.
- If a plugin only does a single interaction, keep the handler short and split rendering, requests, and conversion into separate functions.
- Lean toward the `UniMessage` / `UniMsg` message abstraction for sending and receiving, rather than scattering platform details and text concatenation everywhere.
- Lean toward explicitly constructing and sending the reply message, rather than relying heavily on NoneBot2's native `.finish()` as the main output path, unless that scenario is genuinely simpler and clearer.
### How to depend on common capabilities
- When building a new common capability, design it as an injectable, replaceable, testable interface first.
- If a module may be depended on by several plugins in the future, prefer NoneBot2 dependency injection over making each caller maintain duplicated initialization.
- Do not let plugins depend on hidden global side effects unless truly necessary.
- If you use singletons, caches, or global managers, make their lifecycle, concurrency behavior, and shutdown timing explicit.
## Runtime environment and deployment constraints
This project runs in Docker by default; when changing features, remember the runtime is not a development machine that "has everything".
### Container environment
- The runtime base image is `python:3.13-slim`, not a full desktop Linux; many system libraries are absent by default.
- Runtime dependencies include Playwright Chromium, font libraries, graphics-related libraries, and some extra binary tools.
- The build stage and the runtime stage are separate; do not assume a system package installed in the builder is also available at runtime.
- Extra artifacts are currently placed into the image via multi-stage builds, e.g. `typst`.
### Docker-related requirements
- If a new Python dependency also needs Linux shared libraries, fonts, graphics libraries, build tools, or other system packages, you must check and add them to the `Dockerfile` accordingly.
- Do not settle for "it runs in my local virtualenv"; treat running in the container as part of the definition of done.
- If a new feature depends on system commands, shared libraries, browser capabilities, or fonts, state the reason explicitly in the commit message.
- `.dockerignore` currently excludes `/.env`, `/.git`, `/data`, and similar; do not rely on those files being copied into the image.
- For managing extra artifacts, read the root-level doc `docs/artifact.md` first; for binaries or external resources suited to unified management, lean toward reusing `konabot/common/artifact.py` rather than handling downloads and verification separately in each plugin.
### Running locally
- For local development see the `justfile`; the main entry point is currently `just watch`.
- If your change affects how the bot starts, how dependencies are prepared, or the run commands, remember to update the corresponding docs or scripts.
## Branching and collaboration
- The project is hosted on a personal Gitea instance: `https://gitea.service.jazzwhom.top/mttu-developers/konabot`
- When creating a Pull Request, prefer the `tea` CLI (`https://gitea.com/gitea/tea`)
- After a Pull Request is created, an automated bot currently does a first-pass review and the maintainer reviews manually; do not push for an immediate merge, and do not assume it will land on the main branch right away.
- If you are working on the repository itself rather than a fork, remind the user not to keep committing directly to the main branch; prefer feature branches.
- Do not merge changes into the main branch on your own unless the user explicitly asks.
## Documentation writing advice
### Docs for `man`
- Put them in the matching subdirectory of `konabot/docs/`.
- The file name maps directly to the user's query name.
- Keep the content concise; prioritize "what it does, how to use it, examples, caveats".
- Use the `.txt` suffix; the content may be written in a readable, near-markdown format.
### Docs for developers
- Put them in the repository root `docs/`.
- Mainly describe common modules, configuration methods, design notes, and maintenance experience.
- `.md` may be used.


@@ -1,8 +1,21 @@
FROM alpine:latest AS artifacts
RUN apk add --no-cache curl xz
WORKDIR /tmp
RUN mkdir -p /artifacts
RUN curl -L -o typst.tar.xz "https://github.com/typst/typst/releases/download/v0.14.2/typst-x86_64-unknown-linux-musl.tar.xz" \
&& tar -xJf typst.tar.xz \
&& mv typst-x86_64-unknown-linux-musl/typst /artifacts
RUN chmod -R +x /artifacts/
FROM python:3.13-slim AS base
ENV VIRTUAL_ENV=/app/.venv \
PATH="/app/.venv/bin:$PATH" \
PLAYWRIGHT_BROWSERS_PATH=0
PLAYWRIGHT_BROWSERS_PATH=/usr/lib/pw-browsers
# Install the base dependencies everything needs
RUN apt-get update && \
@@ -19,7 +32,6 @@ RUN apt-get update && \
&& rm -rf /var/lib/apt/lists/*
FROM base AS builder
# Install build dependencies
@@ -27,23 +39,19 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential cmake git \
&& rm -rf /var/lib/apt/lists/*
ENV POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_IN_PROJECT=1 \
POETRY_VIRTUALENVS_CREATE=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
RUN pip install --no-cache-dir uv
WORKDIR /app
RUN pip install --no-cache-dir poetry
COPY pyproject.toml poetry.lock ./
RUN python -m poetry install --no-root && rm -rf $POETRY_CACHE_DIR
RUN uv sync --no-install-project
FROM base AS runtime
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY --from=artifacts /artifacts/ /usr/local/bin/
WORKDIR /app


@@ -71,12 +71,16 @@ code .
See the [konabot-web configuration doc](/docs/konabot-web.md) for details.
#### Database configuration
This project uses SQLite as its database. The default database file is `./data/database.db`; set the `DATABASE_PATH` environment variable to use another location.
### Running
Start the bot manually from the command line:
```bash
poetry run watchfiles bot.main . --filter scripts.watch_filter.filter
poetry run just watch
```
If you do not want auto-reload and just want to run the bot, run directly:
@@ -91,3 +95,22 @@ poetry run python bot.py
- [Matchers](https://nonebot.dev/docs/tutorial/matcher)
- [Event handling](https://nonebot.dev/docs/tutorial/handler)
- [Alconna plugin](https://nonebot.dev/docs/best-practice/alconna/)
## Testing
This project uses pytest for automated testing; put your test code under `./tests`.
Run the tests from the command line:
```bash
poetry run just test
```
View the test coverage report in a browser:
```bash
poetry run just coverage
# This starts a web server on port :8000
# View the coverage report at http://localhost:8000
# Press Ctrl+C in the console to stop the web server
```

9856
assets/old_font/symtable.csv Normal file

File diff suppressed because it is too large

1
assets/oracle Submodule

Submodule assets/oracle added at 9f3c08c5d2

34
bot.py

@@ -10,6 +10,8 @@ from nonebot.adapters.onebot.v11 import Adapter as OnebotAdapter
from konabot.common.log import init_logger
from konabot.common.nb.exc import BotExceptionMessage
from konabot.common.path import LOG_PATH
from konabot.common.database import get_global_db_manager
dotenv.load_dotenv()
env = os.environ.get("ENVIRONMENT", "prod")
@@ -20,19 +22,25 @@ env_enable_minecraft = os.environ.get("ENABLE_MINECRAFT", "none")
def main():
if env.upper() == 'DEBUG' or env.upper() == 'DEV':
console_log_level = 'DEBUG'
if env.upper() == "DEBUG" or env.upper() == "DEV":
console_log_level = "DEBUG"
else:
console_log_level = 'INFO'
init_logger(LOG_PATH, [
BotExceptionMessage,
], console_log_level=console_log_level)
console_log_level = "INFO"
init_logger(
LOG_PATH,
[
BotExceptionMessage,
],
console_log_level=console_log_level,
)
nonebot.init()
driver = nonebot.get_driver()
if (env != "prod" and env != "test" and env_enable_console.upper() != "FALSE") or (env_enable_console.upper() == "TRUE"):
if (env != "prod" and env != "test" and env_enable_console.upper() != "FALSE") or (
env_enable_console.upper() == "TRUE"
):
driver.register_adapter(ConsoleAdapter)
if env_enable_qq.upper() == "TRUE":
@@ -48,7 +56,19 @@ def main():
nonebot.load_plugins("konabot/plugins")
nonebot.load_plugin("nonebot_plugin_analysis_bilibili")
from konabot.common import permsys
permsys.create_startup()
# Register a shutdown hook
@driver.on_shutdown
async def _():
# Close the global database manager
db_manager = get_global_db_manager()
await db_manager.close_all_connections()
nonebot.run()
if __name__ == "__main__":
main()

26
docs/artifact.md Normal file

@@ -0,0 +1,26 @@
# artifact module notes
`konabot/common/artifact.py` manages the extra artifacts the project depends on at runtime, especially binaries, external tools, and per-platform runtime resources.
## When to use it
- A plugin or common module depends on an extra downloaded executable or binary resource.
- The dependency differs by operating system or architecture.
- You want unified detection at startup, on-demand download, and hash verification.
If an extra artifact is better packed straight into the Docker image at build time, it can also be handled with a multi-stage build in the `Dockerfile`; but for resources that need per-platform management, lazy download, or unified verification in the runtime environment, prefer reusing `artifact.py`.
## Recommended practices
- When adding an extra artifact, first decide whether it belongs in the image build stage or under `artifact.py` management.
- If the resource is reused by several plugins or environments, lean toward managing it uniformly via `ArtifactDepends` and `register_artifacts(...)`.
- Provide a stable source for downloads and fill in the `sha256` checksum; do not verify only "whether it can be downloaded".
- Use `required_os` / `required_arch` to restrict platforms and avoid pointless downloads.
- When a proxy is needed, confirm its behavior is compatible with the current NoneBot2 configuration.
## Caveats
- Do not scatter "does the artifact exist" checks across multiple plugins, each with its own implementation.
- Do not skip hash verification unless the resource genuinely cannot provide a stable checksum and there is a clear reason.
- If a new capability depends not only on an extra artifact but also on Linux shared libraries, fonts, a browser, or system commands, the `Dockerfile` still needs to be checked and updated.
- If both the image build and runtime stages depend on the artifact, confirm availability in the builder and the runtime separately.
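The "verify the hash, not just the download" rule above boils down to something like this; a generic sketch, since the internals of the repo's `ArtifactDepends` may differ:

```python
import hashlib
import tempfile
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file exists and its SHA-256 digest matches
    the pinned value; a file that downloads fine but is corrupt must fail."""
    if not path.is_file():
        return False
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256

# Demonstration with a pinned checksum (here computed on the spot)
with tempfile.TemporaryDirectory() as td:
    blob = Path(td) / "typst.bin"
    blob.write_bytes(b"binary payload")
    pinned = hashlib.sha256(b"binary payload").hexdigest()
    ok_good = verify_artifact(blob, pinned)    # digest matches the pin
    ok_bad = verify_artifact(blob, "0" * 64)   # digest mismatch
print(ok_good, ok_bad)  # -> True False
```

A registry entry that stores only a URL passes the "can it download" test even when the mirror serves a tampered file; storing the digest alongside the URL is what makes the check meaningful.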

223
docs/database.md Normal file

@@ -0,0 +1,223 @@
# Database system usage guide
This document describes the asynchronous database system used in this project, including its architecture, usage, and best practices.
## Overview
The database system is built on the `aiosqlite` library and provides an asynchronous SQLite access interface. Key features:
1. **Asynchronous operation**: full async/await support, fitting the NoneBot2 framework
2. **Connection pooling**: a built-in connection pool to improve access performance
3. **Parameterized queries**: safe parameterized queries that prevent SQL injection
4. **SQL file support**: scripts in SQL files can be executed directly
5. **Type support**: path arguments may be `pathlib.Path` or `str`
## Core classes and methods
### The DatabaseManager class
`DatabaseManager` is the core class for database operations and provides the following main methods:
#### Initialization
```python
from konabot.common.database import DatabaseManager
from pathlib import Path
# Use the default database path
db = DatabaseManager()
# Specify a custom database path
db = DatabaseManager("./data/myapp.db")
db = DatabaseManager(Path("./data/myapp.db"))
```
#### Queries
```python
# Run a query and return the results
results = await db.query("SELECT * FROM users WHERE age > ?", (18,))
# Run a query from a SQL file
results = await db.query_by_sql_file("./sql/get_users.sql", (18,))
```
#### Non-query operations
```python
# Execute a non-query statement
await db.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("张三", "zhangsan@example.com"))
# Execute a SQL script (no parameters)
await db.execute_script("""
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT UNIQUE
);
INSERT INTO users (name, email) VALUES ('测试用户', 'test@example.com');
""")
# Execute a non-query statement from a SQL file
await db.execute_by_sql_file("./sql/create_tables.sql")
# Execute a SQL file with parameters
await db.execute_by_sql_file("./sql/insert_user.sql", ("张三", "zhangsan@example.com"))
# Execute the same statement with multiple parameter sets
await db.execute_many("INSERT INTO users (name, email) VALUES (?, ?)", [
    ("张三", "zhangsan@example.com"),
    ("李四", "lisi@example.com"),
    ("王五", "wangwu@example.com")
])
# Execute the statement from a SQL file with multiple parameter sets
await db.execute_many_values_by_sql_file("./sql/batch_insert.sql", [
    ("张三", "zhangsan@example.com"),
    ("李四", "lisi@example.com")
])
```
## SQL file handling
### Single-statement SQL files
```sql
-- insert_user.sql
INSERT INTO users (name, email) VALUES (?, ?);
```
```python
# Usage
await db.execute_by_sql_file("./sql/insert_user.sql", ("张三", "zhangsan@example.com"))
```
### Multi-statement SQL files
```sql
-- setup.sql
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT UNIQUE
);
CREATE TABLE IF NOT EXISTS profiles (
    user_id INTEGER,
    age INTEGER,
    FOREIGN KEY (user_id) REFERENCES users(id)
);
```
```python
# Usage
await db.execute_by_sql_file("./sql/setup.sql")
```
### Multi-statement SQL files with per-statement parameters
```sql
-- batch_operations.sql
INSERT INTO users (name, email) VALUES (?, ?);
INSERT INTO profiles (user_id, age) VALUES (?, ?);
```
```python
# Usage
await db.execute_by_sql_file("./sql/batch_operations.sql", [
    ("张三", "zhangsan@example.com"),  # parameters for the first statement
    (1, 25)  # parameters for the second statement
])
```
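The per-statement parameter matching above depends on splitting the file into statements first. A simplified, stdlib-only sketch of the idea follows; the real module uses `sqlparse`, so semicolons inside strings and comments are handled correctly, which this naive split does not do:

```python
import sqlite3

def execute_file_statements(conn, sql_text, params_list):
    """Split a multi-statement SQL string and execute each statement
    with its own parameter tuple (naive split on ';' for illustration)."""
    statements = [s.strip() for s in sql_text.split(";") if s.strip()]
    if len(statements) != len(params_list):
        raise ValueError("one parameter tuple per statement is required")
    for stmt, params in zip(statements, params_list):
        conn.execute(stmt, params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE profiles (user_id INTEGER, age INTEGER)")
sql_text = """
INSERT INTO users (id, name) VALUES (?, ?);
INSERT INTO profiles (user_id, age) VALUES (?, ?);
"""
execute_file_statements(conn, sql_text, [(1, "张三"), (1, 25)])
print(conn.execute("SELECT age FROM profiles WHERE user_id = 1").fetchone()[0])
```

The length check is the important part: a silent mismatch between statements and parameter tuples would execute some statements with the wrong values.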
## Best practices
### 1. Table design
```sql
-- Recommended table design
CREATE TABLE IF NOT EXISTS example_table (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
```
### 2. Organizing SQL files
Organize SQL files by functional module:
```
plugin/
├── sql/
│   ├── create_tables.sql
│   ├── insert_data.sql
│   ├── update_data.sql
│   └── query_data.sql
└── __init__.py
```
### 3. Error handling
```python
try:
    results = await db.query("SELECT * FROM users WHERE id = ?", (user_id,))
except Exception as e:
    logger.error(f"Database query failed: {e}")
    # handle the error case
```
### 4. Connection management
```python
# Initialize at application startup
db_manager = DatabaseManager()
# Clean up connections at application shutdown
async def shutdown():
    await db_manager.close_all_connections()
```
## Advanced features
### Connection pool configuration
```python
class DatabaseManager:
    def __init__(self, db_path: Optional[Union[str, Path]] = None):
        # Connection pool size
        self._pool_size = 5  # adjust as needed
```
### Transactions
```python
# Transactions work through the auto-commit behavior of execute()
await db.execute("BEGIN TRANSACTION")
try:
    await db.execute("INSERT INTO users (name) VALUES (?)", ("张三",))
    await db.execute("INSERT INTO profiles (user_id, age) VALUES (?, ?)", (1, 25))
    await db.execute("COMMIT")
except Exception:
    await db.execute("ROLLBACK")
    raise
```
## Caveats
1. **Async environment**: all database operations must run in an async context
2. **Parameter safety**: always use parameterized queries to avoid SQL injection
3. **Resource management**: make sure `close_all_connections()` is called at application shutdown
4. **SQL parsing**: the `sqlparse` library is used to parse SQL accurately, correctly handling semicolons inside strings and comments
5. **Error handling**: handle the exceptions that database operations may raise
## FAQ
### Q: How do I handle database constraint errors?
A: Make sure column names in SQL are quoted correctly; reserved words in particular need double quotes:
```sql
CREATE TABLE air_conditioner (
    id VARCHAR(128) PRIMARY KEY,
    "on" BOOLEAN NOT NULL, -- quote reserved words with double quotes
    temperature REAL NOT NULL
);
```
### Q: How do I match multiple statements with their parameters?
A: When a SQL file contains multiple statements, the parameters should be a list with one parameter tuple per statement:
```python
await db.execute_by_sql_file("./sql/batch.sql", [
    ("参数1", "参数2"),  # parameters for the first statement
    ("参数3", "参数4")  # parameters for the second statement
])
```
Following these guidelines and best practices lets you make full use of the project's asynchronous database system and build performant, safe database features.

235
docs/permsys.md Normal file

@@ -0,0 +1,235 @@
# Permission system `konabot.common.permsys`
This maintainer-facing document explains the responsibilities of the `konabot/common/permsys` module, its data model, the permission resolution rules, and the recommended way to integrate it from plugins.
## Module goals
`permsys` provides a simple, inheritable permission system that answers two questions:
1. Which subject an event corresponds to.
2. Whether that subject holds a given permission.
It suits scenarios such as the bot's internal feature toggles, admin permissions, and platform-level authorization.
The module currently consists of:
- `konabot/common/permsys/__init__.py`
  - Exposes `PermManager`, `DepPermManager`, and `require_permission`
  - Handles database initialization, startup migrations, and default superadmin grants
- `konabot/common/permsys/entity.py`
  - Defines `PermEntity`
  - Converts events into a queryable entity chain
- `konabot/common/permsys/repo.py`
  - Wraps SQLite reads and writes
- `konabot/common/permsys/migrates/`
  - Holds migration SQL
- `konabot/common/permsys/sql/`
  - Holds query and update SQL
## Core concepts
### 1. `PermEntity`
`PermEntity` is the smallest subject identifier in the permission system:
```python
PermEntity(platform: str, entity_type: str, external_id: str)
```
Examples:
- `PermEntity("sys", "global", "global")`
- `PermEntity("ob11", "group", "123456")`
- `PermEntity("ob11", "user", "987654")`
Where:
- `platform` is the source platform, e.g. `sys`, `ob11`, `discord`
- `entity_type` is the subject type, e.g. `global`, `group`, `user`
- `external_id` is the platform-side external identifier
### 2. Entity chains
Permission checks do not look at a single entity; they walk an "entity chain".
Take `get_entity_chain_of_entity()`: given a concrete entity, the returned chain is:
```python
[
    PermEntity(platform, entity_type, external_id),
    PermEntity(platform, "global", "global"),
    PermEntity("sys", "global", "global"),
]
```
This means permissions are read from the most specific subject first, then fall back to the platform-global entity, and finally to the system-global entity.
`get_entity_chain(event)` builds the chain automatically from the event type. For example:
- OneBot V11 group message: user -> group -> platform global -> system global
- OneBot V11 private chat: user -> platform global -> system global
- Discord channel message: user/channel/guild -> platform global -> system global
- Console: console user/channel -> platform global -> system global
Note: the concrete chain order and field naming in `entity.py` are authoritative as implemented; when changing them, evaluate whether existing permission inheritance would break.
### 3. Permission keys
Permission keys use a dotted structure, for example:
- `admin`
- `plugin.weather`
- `plugin.weather.use`
Checks automatically fall back by prefix. For `plugin.weather.use`, the lookup order is:
1. `plugin.weather.use`
2. `plugin.weather`
3. `plugin`
4. `*`
So `*` can be seen as the catch-all permission.
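The prefix fallback described above can be sketched in a few lines; a minimal illustration of the lookup-order idea, not the module's actual implementation:

```python
def candidate_keys(perm_key: str) -> list[str]:
    """Expand a dotted permission key into its fallback candidates,
    from most specific to least specific, ending with the catch-all '*'."""
    parts = perm_key.split(".")
    # e.g. "plugin.weather.use" -> "plugin.weather.use", "plugin.weather", "plugin"
    candidates = [".".join(parts[:i]) for i in range(len(parts), 0, -1)]
    candidates.append("*")
    return candidates

print(candidate_keys("plugin.weather.use"))
# -> ['plugin.weather.use', 'plugin.weather', 'plugin', '*']
```

This is also why stable, hierarchical key names matter: a grant on `plugin.weather` covers every key beneath it for free.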
## Permission resolution rules
The logic of `PermManager.check_has_permission_info()` can be summarized as:
1. Convert the input into an entity chain.
2. Expand the permission key level by level, appending `*` at the end.
3. Batch-query the database for explicit records across every entity on the chain and every candidate key.
4. Return the first matching record in "more specific entity first, more specific key first" order.
If no explicit record exists at all:
- `check_has_permission_info()` returns `None`
- `check_has_permission()` returns `False`
In other words, the system defaults to "deny unless explicitly granted".
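The selection order in steps 1-4 can be illustrated with a small self-contained sketch; the entity strings and the `records` dict here are hypothetical stand-ins, since the real module queries SQLite rather than a dict:

```python
def resolve(chain, candidates, records):
    """Pick the first explicit record, walking entities outward and,
    within each entity, permission keys from specific to broad.

    chain      -- entity chain, most specific entity first
    candidates -- candidate keys, most specific key first
    records    -- {(entity, key): True/False} explicit grants and denies
    """
    for entity in chain:          # a more specific entity wins
        for key in candidates:    # within it, a more specific key wins
            if (entity, key) in records:
                return records[(entity, key)]
    return None  # no explicit record: the caller treats this as deny

chain = ["ob11/user/987654", "ob11/global", "sys/global"]
candidates = ["plugin.weather.use", "plugin.weather", "plugin", "*"]
records = {("ob11/global", "plugin"): False, ("ob11/user/987654", "*"): True}
print(resolve(chain, candidates, records))  # user-level grant beats the platform deny
```

Note the consequence: an explicit user-level record always shadows platform-level and system-level records, even when the user-level match is on a broader key such as `*`.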
## Data storage
The module uses SQLite; the default database file is:
- `data/perm.sqlite3`
Migrations run at startup:
- `create_startup()` calls `execute_migration()` from the NoneBot startup event
Permission values are tri-state:
- `True`: explicitly allowed
- `False`: explicitly denied
- `None`: remove/clear the explicit setting at this layer, so the check falls back to the inheritance chain again
`update_perm_info()` in `repo.py` writes this tri-state value straight to the database.
## Superadmin injection
During startup, `create_startup()` reads `konabot.common.nb.is_admin.cfg.admin_qq_account` and writes, for each of those QQ accounts:
```python
PermEntity("ob11", "user", str(account)), "*", True
```
That is, the superadmins in the configuration directly hold every permission.
This is a baseline injected automatically at startup; it does not depend on manual grant commands.
### 1. 直接做权限检查
```python
from konabot.common.permsys import DepPermManager
async def handler(pm: DepPermManager, event):
ok = await pm.check_has_permission(event, "plugin.example.use")
if not ok:
return
```
适合需要在处理流程中动态决定权限键的场景。
### 2. 挂到 Rule 上做准入控制
```python
from nonebot_plugin_alconna import Alconna, on_alconna
from konabot.common.permsys import require_permission
cmd = on_alconna(
Alconna("example"),
rule=require_permission("plugin.example.use"),
)
```
适合命令入口明确、未通过时直接拦截的场景。
### 3. 更新权限
```python
from konabot.common.permsys import DepPermManager
from konabot.common.permsys.entity import PermEntity
await pm.update_permission(
PermEntity("ob11", "group", "123456"),
"plugin.example.use",
True,
)
```
建议只在专门的管理插件中开放写权限,避免普通功能插件到处分散改表。
## `perm_manage` 插件与本模块的关系
`konabot/plugins/perm_manage/__init__.py` 是本模块当前的管理入口,提供:
- `konaperm list`:列出实体链上已有的显式权限记录
- `konaperm get`:查看某个权限最终命中的记录
- `konaperm set`:写入 allow/deny/null
这个插件本身使用 `require_permission("admin")` 保护,因此只有拥有 `admin` 权限的主体才能管理权限。
## Integration advice
### Permission key naming
Use stable, extensible hierarchical key names:
- Recommended: `plugin.xxx` or `plugin.xxx.action`
- Not recommended: vague words or throwaway strings
Only then can the prefix-fallback mechanism be used for bulk grants.
### Input safety
Although this project is mostly internal, permission keys, entity types, and external IDs should still be treated as untrusted input:
- Do not concatenate chat input directly into SQL
- Do not let arbitrary users construct high-privilege writes at will
- Protect writable commands with at least a permission check and the necessary validation
### Change compatibility
All of the following changes can affect global permission behavior; evaluate carefully before modifying:
- Changing the entity chain order
- Changing the semantics of the catch-all key `*`
- Changing how `None` is handled
- Changing the startup superadmin injection logic
## Debugging tips
- First use `konaperm get ...` to confirm which layer a permission ultimately resolves to
- Then use `konaperm list ...` to see which explicit records exist on that entity chain
- If behavior looks wrong, check whether a higher-level entity or a broader permission key is matching first
## Related files
- `konabot/common/permsys/__init__.py`
- `konabot/common/permsys/entity.py`
- `konabot/common/permsys/repo.py`
- `konabot/plugins/perm_manage/__init__.py`

9
justfile Normal file

@ -0,0 +1,9 @@
watch:
poetry run watchfiles bot.main . --filter scripts.watch_filter.filter
test:
poetry run pytest --cov-report term-missing:skip-covered
coverage:
poetry run pytest --cov-report html
python -m http.server -d htmlcov


@ -0,0 +1,92 @@
import asyncio
import json
from alibabacloud_green20220302.client import Client as AlibabaGreenClient
from alibabacloud_green20220302.models import TextModerationPlusRequest
from alibabacloud_tea_openapi.models import Config as AlibabaTeaConfig
from loguru import logger
from pydantic import BaseModel
import nonebot
class AlibabaGreenPluginConfig(BaseModel):
module_aligreen_enable: bool = False
module_aligreen_access_key_id: str = ""
module_aligreen_access_key_secret: str = ""
module_aligreen_region_id: str = "cn-shenzhen"
module_aligreen_endpoint: str = "green-cip.cn-shenzhen.aliyuncs.com"
module_aligreen_service: str = "llm_query_moderation"
class AlibabaGreen:
_client: AlibabaGreenClient | None = None
_config: AlibabaGreenPluginConfig | None = None
@staticmethod
def get_client() -> AlibabaGreenClient:
assert AlibabaGreen._client is not None
return AlibabaGreen._client
@staticmethod
def get_config() -> AlibabaGreenPluginConfig:
assert AlibabaGreen._config is not None
return AlibabaGreen._config
@staticmethod
def init():
config = nonebot.get_plugin_config(AlibabaGreenPluginConfig)
AlibabaGreen._config = config
if not config.module_aligreen_enable:
logger.info("Alibaba content moderation is not enabled in this environment; skipping initialization")
return
AlibabaGreen._client = AlibabaGreenClient(AlibabaTeaConfig(
access_key_id=config.module_aligreen_access_key_id,
access_key_secret=config.module_aligreen_access_key_secret,
connect_timeout=10000,
read_timeout=3000,
region_id=config.module_aligreen_region_id,
endpoint=config.module_aligreen_endpoint,
))
@staticmethod
def _detect_sync(content: str) -> bool:
if len(content) == 0:
return True
if not AlibabaGreen.get_config().module_aligreen_enable:
logger.debug("Alibaba content moderation is not enabled in this environment; skipping the check")
return True
client = AlibabaGreen.get_client()
try:
response = client.text_moderation_plus(TextModerationPlusRequest(
service=AlibabaGreen.get_config().module_aligreen_service,
service_parameters=json.dumps({
"content": content,
}),
))
if response.status_code == 200:
result = response.body
logger.info(f"Content moderation API call succeeded: {result}")
risk_level: str = result.data.risk_level or "none"
if risk_level == "high":
return False
return True
logger.error(f"Content moderation API call failed: {response}")
return True
except Exception as e:
logger.error("Content moderation API call failed")
logger.exception(e)
return True
@staticmethod
async def detect(content: str) -> bool:
return await asyncio.to_thread(AlibabaGreen._detect_sync, content)
driver = nonebot.get_driver()
@driver.on_startup
async def _():
AlibabaGreen.init()

konabot/common/artifact.py Normal file

@ -0,0 +1,112 @@
import asyncio
import aiohttp
import hashlib
import platform
from dataclasses import dataclass
from pathlib import Path
import nonebot
from loguru import logger
from nonebot.adapters.discord.config import Config as DiscordConfig
from pydantic import BaseModel
@dataclass
class ArtifactDepends:
url: str
sha256: str
target: Path
required_os: str | None = None
"Example values: Windows, Linux, Darwin"
required_arch: str | None = None
"Example values: AMD64, x86_64, arm64"
use_proxy: bool = True
"Works around network issues, heh; uses the proxy configured for the Discord module"
def is_corresponding_platform(self) -> bool:
if self.required_os is not None:
if self.required_os.lower() != platform.system().lower():
return False
if self.required_arch is not None:
if self.required_arch.lower() != platform.machine().lower():
return False
return True
class Config(BaseModel):
prefetch_artifact: bool = False
"Whether to pre-download binary dependencies"
artifact_list = []
driver = nonebot.get_driver()
config = nonebot.get_plugin_config(Config)
@driver.on_startup
async def _():
if config.prefetch_artifact:
logger.info("Startup check: verifying that required binaries have been downloaded")
semaphore = asyncio.Semaphore(10)
async def _task(artifact: ArtifactDepends):
async with semaphore:
await ensure_artifact(artifact)
tasks: set[asyncio.Task] = set()
for a in artifact_list:
tasks.add(asyncio.create_task(_task(a)))
await asyncio.gather(*tasks, return_exceptions=False)
logger.info("Startup check finished")
async def download_artifact(artifact: ArtifactDepends):
proxy = None
if artifact.use_proxy:
discord_config = nonebot.get_plugin_config(DiscordConfig)
proxy = discord_config.discord_proxy
if proxy is not None:
logger.info(f"Downloading via proxy TARGET={artifact.target} PROXY={proxy}")
else:
logger.info(f"Downloading TARGET={artifact.target}")
async with aiohttp.ClientSession(proxy=proxy) as client:
result = await client.get(artifact.url)
if result.status != 200:
logger.warning(f"Binary downloaded, but note the server did not return 200 URL={artifact.url} TARGET={artifact.target} CODE={result.status}")
data = await result.read()
artifact.target.write_bytes(data)
if not platform.system().lower() == 'windows':
artifact.target.chmod(0o755)
logger.info(f"Download finished TARGET={artifact.target} URL={artifact.url}")
m = hashlib.sha256(artifact.target.read_bytes())
if m.hexdigest().lower() != artifact.sha256.lower():
logger.warning(f"Downloaded binary's sha256 does not match the expected value TARGET={artifact.target} REQUESTED={artifact.sha256} ACTUAL={m.hexdigest()}")
async def ensure_artifact(artifact: ArtifactDepends):
if not artifact.is_corresponding_platform():
return
if not artifact.target.exists():
logger.info(f"Binary dependency {artifact.target} does not exist")
if not artifact.target.parent.exists():
artifact.target.parent.mkdir(parents=True, exist_ok=True)
await download_artifact(artifact)
else:
m = hashlib.sha256(artifact.target.read_bytes())
if m.hexdigest().lower() != artifact.sha256.lower():
logger.info(f"Hash of binary dependency {artifact.target} does not match the expected hash; re-downloading")
artifact.target.unlink()
await download_artifact(artifact)
def register_artifacts(*artifacts: ArtifactDepends):
artifact_list.extend(artifacts)


@ -0,0 +1,231 @@
from contextlib import asynccontextmanager
import os
import asyncio
import sqlparse
from pathlib import Path
from typing import List, Dict, Any, Optional, Union, TYPE_CHECKING
import aiosqlite
if TYPE_CHECKING:
from . import DatabaseManager
# Global database manager instance
_global_db_manager: Optional["DatabaseManager"] = None
def get_global_db_manager() -> "DatabaseManager":
"""Get the global database manager instance"""
global _global_db_manager
if _global_db_manager is None:
from . import DatabaseManager
_global_db_manager = DatabaseManager()
return _global_db_manager
def close_global_db_manager() -> None:
"""Close the global database manager instance"""
global _global_db_manager
if _global_db_manager is not None:
# Note: close_all_connections should be awaited from an async context before discarding the manager
_global_db_manager = None
class DatabaseManager:
"""Async database manager"""
def __init__(self, db_path: Optional[Union[str, Path]] = None, pool_size: int = 5):
"""
Initialize the database manager
Args:
db_path: path to the database file (accepts str and Path)
pool_size: connection pool size
"""
if db_path is None:
self.db_path = os.environ.get("DATABASE_PATH", "./data/database.db")
else:
self.db_path = str(db_path) if isinstance(db_path, Path) else db_path
# Connection pool
self._connection_pool = []
self._pool_size = pool_size
self._lock = asyncio.Lock()
self._in_use = set()  # track connections currently in use
async def _get_connection(self) -> aiosqlite.Connection:
"""Get a connection from the pool"""
async with self._lock:
# Try to reuse an existing pooled connection
while self._connection_pool:
conn = self._connection_pool.pop()
# Check whether the connection is still valid
try:
await conn.execute("SELECT 1")
self._in_use.add(conn)
return conn
except Exception:
# The connection has gone stale; close it
try:
await conn.close()
except Exception:
pass
# Pool is empty; create a new connection
conn = await aiosqlite.connect(self.db_path)
await conn.execute("PRAGMA foreign_keys = ON")
self._in_use.add(conn)
return conn
async def _return_connection(self, conn: aiosqlite.Connection) -> None:
"""Return a connection to the pool"""
async with self._lock:
self._in_use.discard(conn)
if len(self._connection_pool) < self._pool_size:
self._connection_pool.append(conn)
else:
# Pool is full; close the connection outright
try:
await conn.close()
except Exception:
pass
@asynccontextmanager
async def get_conn(self):
conn = await self._get_connection()
try:
yield conn
finally:
# Return the connection even if the caller raised inside the context
await self._return_connection(conn)
async def query(
self, query: str, params: Optional[tuple] = None
) -> List[Dict[str, Any]]:
"""Execute a query and return the results"""
conn = await self._get_connection()
try:
cursor = await conn.execute(query, params or ())
columns = [description[0] for description in cursor.description]
rows = await cursor.fetchall()
results = [dict(zip(columns, row)) for row in rows]
await cursor.close()
return results
except Exception as e:
# Re-raise with context so the caller can handle it
raise Exception(f"Database query failed: {str(e)}") from e
finally:
await self._return_connection(conn)
async def query_by_sql_file(
self, file_path: Union[str, Path], params: Optional[tuple] = None
) -> List[Dict[str, Any]]:
"""Read a query from a SQL file and execute it"""
path = str(file_path) if isinstance(file_path, Path) else file_path
with open(path, "r", encoding="utf-8") as f:
query = f.read()
return await self.query(query, params)
async def execute(self, command: str, params: Optional[tuple] = None) -> None:
"""Execute a non-query statement"""
conn = await self._get_connection()
try:
await conn.execute(command, params or ())
await conn.commit()
except Exception as e:
# Re-raise with context so the caller can handle it
raise Exception(f"Database execution failed: {str(e)}") from e
finally:
await self._return_connection(conn)
async def execute_script(self, script: str) -> None:
"""Execute a SQL script"""
conn = await self._get_connection()
try:
await conn.executescript(script)
await conn.commit()
except Exception as e:
# Re-raise with context so the caller can handle it
raise Exception(f"Database script execution failed: {str(e)}") from e
finally:
await self._return_connection(conn)
def _parse_sql_statements(self, script: str) -> List[str]:
"""Parse a SQL script into individual statements"""
# Use sqlparse to split SQL statements accurately
parsed = sqlparse.split(script)
statements = []
for statement in parsed:
statement = statement.strip()
if statement:
statements.append(statement)
return statements
async def execute_by_sql_file(
self,
file_path: Union[str, Path],
params: Optional[Union[tuple, List[tuple]]] = None,
) -> None:
"""Read non-query statements from a SQL file and execute them"""
path = str(file_path) if isinstance(file_path, Path) else file_path
with open(path, "r", encoding="utf-8") as f:
script = f.read()
# A single tuple of params: run the whole script through execute
if params is not None and isinstance(params, tuple):
await self.execute(script, params)
# A list of params: execute each statement separately
elif params is not None and isinstance(params, list):
# Use sqlparse to split the SQL statements accurately
statements = self._parse_sql_statements(script)
if len(statements) != len(params):
raise ValueError(
f"Statement count ({len(statements)}) does not match parameter group count ({len(params)})"
)
for statement, stmt_params in zip(statements, params):
if statement:
await self.execute(statement, stmt_params)
# No params: use executescript
else:
await self.execute_script(script)
async def execute_many(self, command: str, seq_of_params: List[tuple]) -> None:
"""Execute one statement against multiple parameter sets"""
conn = await self._get_connection()
try:
await conn.executemany(command, seq_of_params)
await conn.commit()
except Exception as e:
# Re-raise with context so the caller can handle it
raise Exception(f"Database batch execution failed: {str(e)}") from e
finally:
await self._return_connection(conn)
async def execute_many_values_by_sql_file(
self, file_path: Union[str, Path], seq_of_params: List[tuple]
) -> None:
"""Read a single statement from a SQL file and execute it once per parameter set"""
path = str(file_path) if isinstance(file_path, Path) else file_path
with open(path, "r", encoding="utf-8") as f:
command = f.read()
await self.execute_many(command, seq_of_params)
async def close_all_connections(self) -> None:
"""Close all connections"""
async with self._lock:
# Close pooled connections
for conn in self._connection_pool:
try:
await conn.close()
except Exception:
pass
self._connection_pool.clear()
# Close connections that are currently in use
for conn in self._in_use.copy():
try:
await conn.close()
except Exception:
pass
self._in_use.clear()


@ -1,4 +1,4 @@
from typing import Any
from typing import Any, cast
import openai
from loguru import logger
@ -26,14 +26,14 @@ class LLMInfo(BaseModel):
async def chat(
self,
messages: list[ChatCompletionMessageParam],
messages: list[ChatCompletionMessageParam] | list[dict[str, Any]],
timeout: float | None = 30.0,
max_tokens: int | None = None,
**kwargs: Any,
) -> ChatCompletionMessage:
logger.info(f"Calling LLM: BASE_URL={self.base_url} MODEL_NAME={self.model_name}")
completion: ChatCompletion = await self.get_openai_client().chat.completions.create(
messages=messages,
messages=cast(Any, messages),
model=self.model_name,
max_tokens=max_tokens,
timeout=timeout,
@ -59,6 +59,9 @@ def get_llm(llm_model: str | None = None):
if llm_model is None:
llm_model = llm_config.default_llm
if llm_model not in llm_config.llms:
raise NotImplementedError("No LLM configured; this feature is unavailable")
if llm_config.default_llm in llm_config.llms:
logger.warning(f"[LLM] Requested LLM does not exist; falling back to the default model REQUIRED={llm_model}")
return llm_config.llms[llm_config.default_llm]
raise NotImplementedError("[LLM] No LLM configured; this feature is unavailable")
return llm_config.llms[llm_model]


@ -1,4 +1,5 @@
from io import BytesIO
from pathlib import Path
from typing import Annotated
import httpx
@ -19,15 +20,21 @@ from PIL import UnidentifiedImageError
from pydantic import BaseModel
from returns.result import Failure, Result, Success
from konabot.common.path import ASSETS_PATH
discordConfig = nonebot.get_plugin_config(DiscordConfig)
class ExtractImageConfig(BaseModel):
module_extract_image_no_download: bool = False
"Give up on downloading entirely; for unusual network environments where files cannot be fetched from the protocol endpoint"
"""
Give up on downloading entirely and just use a placeholder instead;
intended for unusual network environments where files cannot be
fetched from the protocol endpoint
"""
module_extract_image_target: str = './assets/img/other/boom.jpg'
"""
Which image to use as the placeholder
"""
module_config = nonebot.get_plugin_config(ExtractImageConfig)
@ -37,7 +44,7 @@ async def download_image_bytes(url: str, proxy: str | None = None) -> Result[byt
# if "/matcha/cache/" in url:
# url = url.replace('127.0.0.1', '10.126.126.101')
if module_config.module_extract_image_no_download:
return Success((ASSETS_PATH / "img" / "other" / "boom.jpg").read_bytes())
return Success(Path(module_config.module_extract_image_target).read_bytes())
logger.debug(f"Starting image download from {url}")
async with httpx.AsyncClient(proxy=proxy) as c:
try:
@ -70,15 +77,22 @@ def bytes_to_pil(raw_data: bytes | BytesIO) -> Result[PIL.Image.Image, str]:
return Failure("The image could not be read; there may be a network problem orz")
async def unimsg_img_to_pil(image: Image) -> Result[PIL.Image.Image, str]:
async def unimsg_img_to_bytes(image: Image) -> Result[bytes, str]:
if image.url is not None:
raw_result = await download_image_bytes(image.url)
elif image.raw is not None:
raw_result = Success(image.raw)
if isinstance(image.raw, bytes):
raw_result = Success(image.raw)
else:
raw_result = Success(image.raw.getvalue())
else:
return Failure("Image download failed due to an internal problem orz")
return raw_result.bind(bytes_to_pil)
return raw_result
async def unimsg_img_to_pil(image: Image) -> Result[PIL.Image.Image, str]:
return (await unimsg_img_to_bytes(image)).bind(bytes_to_pil)
async def extract_image_from_qq_message(
@ -86,7 +100,7 @@ async def extract_image_from_qq_message(
evt: OnebotV11MessageEvent,
bot: OnebotV11Bot,
allow_reply: bool = True,
) -> Result[PIL.Image.Image, str]:
) -> Result[bytes, str]:
if allow_reply and (reply := evt.reply) is not None:
return await extract_image_from_qq_message(
reply.message,
@ -118,18 +132,17 @@ async def extract_image_from_qq_message(
url = seg.data.get("url")
if url is None:
return Failure("Could not download the image; there may be a network problem")
data = await download_image_bytes(url)
return data.bind(bytes_to_pil)
return await download_image_bytes(url)
return Failure("Please include an image in the message, or quote a message that contains one")
async def extract_image_from_message(
async def extract_image_data_from_message(
msg: Message,
evt: Event,
bot: Bot,
allow_reply: bool = True,
) -> Result[PIL.Image.Image, str]:
) -> Result[bytes, str]:
if (
isinstance(bot, OnebotV11Bot)
and isinstance(msg, OnebotV11Message)
@ -145,18 +158,18 @@ async def extract_image_from_message(
if "image/" not in a.content_type:
continue
url = a.proxy_url
return (await download_image_bytes(url, discordConfig.discord_proxy)).bind(bytes_to_pil)
return await download_image_bytes(url, discordConfig.discord_proxy)
for seg in UniMessage.of(msg, bot):
logger.info(seg)
if isinstance(seg, Image):
return await unimsg_img_to_pil(seg)
return await unimsg_img_to_bytes(seg)
elif isinstance(seg, Reply) and allow_reply:
msg2 = seg.msg
logger.debug(f"Searching into the quoted message: {msg2}")
if msg2 is None or isinstance(msg2, str):
continue
return await extract_image_from_message(msg2, evt, bot, False)
return await extract_image_data_from_message(msg2, evt, bot, False)
elif isinstance(seg, RefNode) and allow_reply:
if isinstance(bot, DiscordBot):
return Failure("Fetching images via replies on Discord is not supported yet")
@ -165,12 +178,12 @@ async def extract_image_from_message(
return Failure("Please include an image in the message, or quote a message that contains one")
async def _ext_img(
async def _ext_img_data(
evt: Event,
bot: Bot,
matcher: Matcher,
) -> PIL.Image.Image | None:
match await extract_image_from_message(evt.get_message(), evt, bot):
) -> bytes | None:
match await extract_image_data_from_message(evt.get_message(), evt, bot):
case Success(img):
return img
case Failure(err):
@ -180,4 +193,35 @@ async def _ext_img(
assert False
PIL_Image = Annotated[PIL.Image.Image, nonebot.params.Depends(_ext_img)]
async def _ext_img(
evt: Event,
bot: Bot,
matcher: Matcher,
) -> PIL.Image.Image | None:
r = await _ext_img_data(evt, bot, matcher)
if r:
match bytes_to_pil(r):
case Success(img):
return img
case Failure(msg):
await matcher.send(await UniMessage.text(msg).export())
return None
async def _try_ext_img(
evt: Event,
bot: Bot,
matcher: Matcher,
) -> bytes | None:
match await extract_image_data_from_message(evt.get_message(), evt, bot):
case Success(img):
return img
case Failure(err):
# raise BotExceptionMessage(err)
# await matcher.send(await UniMessage().text(err).export())
return None
assert False
DepImageBytes = Annotated[bytes, nonebot.params.Depends(_ext_img_data)]
DepPILImage = Annotated[PIL.Image.Image, nonebot.params.Depends(_ext_img)]
DepImageBytesOrNone = Annotated[bytes | None, nonebot.params.Depends(_try_ext_img)]

konabot/common/pager.py Normal file

@ -0,0 +1,76 @@
from dataclasses import dataclass
from math import ceil
from typing import Any, Callable
from nonebot_plugin_alconna import UniMessage
@dataclass
class PagerQuery:
page_index: int
page_size: int
def apply[T](self, ls: list[T]) -> "PagerResult[T]":
if self.page_size <= 0:
return PagerResult(
success=False,
message="Page size must be greater than 0",
data=[],
page_count=-1,
query=self,
)
page_count = ceil(len(ls) / self.page_size)
if self.page_index <= 0:
return PagerResult(
success=False,
message="Page index must be greater than 0",
data=[],
page_count=page_count,
query=self,
)
data = ls[(self.page_index - 1) * self.page_size: self.page_index * self.page_size]
if len(data) > 0:
return PagerResult(
success=True,
message="",
data=data,
page_count=page_count,
query=self,
)
return PagerResult(
success=False,
message="The requested page exceeds the last page",
data=data,
page_count=page_count,
query=self,
)
@dataclass
class PagerResult[T]:
data: list[T]
success: bool
message: str
page_count: int
query: PagerQuery
def to_unimessage(
self,
formatter: Callable[[T], str | UniMessage[Any]] = str,
title: str = 'Query results',
list_indicator: str = '- ',
) -> UniMessage[Any]:
msg = UniMessage.text(f'===== {title} =====\n\n')
if not self.success:
msg = msg.text(f'⚠️ {self.message}\n')
else:
for obj in self.data:
msg = msg.text(list_indicator)
msg += formatter(obj)
msg += '\n'
msg = msg.text(f'\n===== Page {self.query.page_index} of {self.page_count} =====')
return msg


@ -5,8 +5,10 @@ FONTS_PATH = ASSETS_PATH / "fonts"
SRC_PATH = Path(__file__).resolve().parent.parent
DATA_PATH = SRC_PATH.parent / "data"
TMP_PATH = DATA_PATH / "tmp"
LOG_PATH = DATA_PATH / "logs"
CONFIG_PATH = DATA_PATH / "config"
BINARY_PATH = DATA_PATH / "bin"
DOCS_PATH = SRC_PATH / "docs"
DOCS_PATH_MAN1 = DOCS_PATH / "user"
@ -21,4 +23,6 @@ if not LOG_PATH.exists():
LOG_PATH.mkdir()
CONFIG_PATH.mkdir(exist_ok=True)
TMP_PATH.mkdir(exist_ok=True)
BINARY_PATH.mkdir(exist_ok=True)


@ -0,0 +1,108 @@
from typing import Annotated
import nonebot
from nonebot.adapters import Event
from nonebot.params import Depends
from nonebot.rule import Rule
from konabot.common.database import DatabaseManager
from konabot.common.pager import PagerQuery
from konabot.common.path import DATA_PATH
from konabot.common.permsys.entity import PermEntity, get_entity_chain
from konabot.common.permsys.migrates import execute_migration
from konabot.common.permsys.repo import PermRepo
db = DatabaseManager(DATA_PATH / "perm.sqlite3")
_EntityLike = Event | PermEntity | list[PermEntity]
async def _to_entity_chain(el: _EntityLike):
if isinstance(el, Event):
return await get_entity_chain(el) # pragma: no cover
if isinstance(el, PermEntity):
return [el]
return el
class PermManager:
def __init__(self, db: DatabaseManager) -> None:
self.db = db
async def check_has_permission_info(self, entities: _EntityLike, key: str):
entities = await _to_entity_chain(entities)
key = key.removesuffix("*").removesuffix(".")
key_split = key.split(".")
key_split = [s for s in key_split if len(s) > 0]
keys = [".".join(key_split[: i + 1]) for i in range(len(key_split))][::-1] + [
"*"
]
async with self.db.get_conn() as conn:
repo = PermRepo(conn)
data = await repo.get_perm_info_batch(entities, keys)
for entity in entities:
for k in keys:
p = data.get((entity, k))
if p is not None:
return (entity, k, p)
return None
async def check_has_permission(self, entities: _EntityLike, key: str) -> bool:
res = await self.check_has_permission_info(entities, key)
if res is None:
return False
return res[2]
async def update_permission(self, entity: PermEntity, key: str, perm: bool | None):
async with self.db.get_conn() as conn:
repo = PermRepo(conn)
await repo.update_perm_info(entity, key, perm)
async def list_permission(self, entities: _EntityLike, query: PagerQuery):
entities = await _to_entity_chain(entities)
async with self.db.get_conn() as conn:
repo = PermRepo(conn)
return await repo.list_perm_info_batch(entities, query)
def perm_manager(_db: DatabaseManager | None = None) -> PermManager: # pragma: no cover
if _db is None:
_db = db
return PermManager(_db)
def create_startup(): # pragma: no cover
from konabot.common.nb.is_admin import cfg
driver = nonebot.get_driver()
@driver.on_startup
async def _():
async with db.get_conn() as conn:
await execute_migration(conn)
pm = perm_manager(db)
for account in cfg.admin_qq_account:
# ^ These are the superadmins, defined via environment variables.
# Heheheh!!! Seize all the permissions!!!
await pm.update_permission(
PermEntity("ob11", "user", str(account)), "*", True
)
@driver.on_shutdown
async def _():
try:
await db.close_all_connections()
except Exception:
pass
DepPermManager = Annotated[PermManager, Depends(perm_manager)]
def require_permission(perm: str) -> Rule: # pragma: no cover
async def check_permission(event: Event, pm: DepPermManager) -> bool:
return await pm.check_has_permission(event, perm)
return Rule(check_permission)


@ -0,0 +1,69 @@
from dataclasses import dataclass
from nonebot.internal.adapter import Event
from nonebot.adapters.onebot.v11 import Event as OB11Event
from nonebot.adapters.onebot.v11.event import GroupMessageEvent as OB11GroupEvent
from nonebot.adapters.onebot.v11.event import PrivateMessageEvent as OB11PrivateEvent
from nonebot.adapters.discord.event import Event as DiscordEvent
from nonebot.adapters.discord.event import GuildMessageCreateEvent as DiscordGMEvent
from nonebot.adapters.discord.event import DirectMessageCreateEvent as DiscordDMEvent
from nonebot.adapters.minecraft.event import MessageEvent as MinecraftMessageEvent
from nonebot.adapters.console.event import MessageEvent as ConsoleEvent
@dataclass(frozen=True)
class PermEntity:
platform: str
entity_type: str
external_id: str
def get_entity_chain_of_entity(entity: PermEntity) -> list[PermEntity]:
return [
PermEntity("sys", "global", "global"),
PermEntity(entity.platform, "global", "global"),
entity,
][::-1]
async def get_entity_chain(event: Event) -> list[PermEntity]: # pragma: no cover
entities = [PermEntity("sys", "global", "global")]
if isinstance(event, OB11Event):
entities.append(PermEntity("ob11", "global", "global"))
if isinstance(event, OB11GroupEvent):
entities.append(PermEntity("ob11", "group", str(event.group_id)))
entities.append(PermEntity("ob11", "user", str(event.user_id)))
if isinstance(event, OB11PrivateEvent):
entities.append(PermEntity("ob11", "user", str(event.user_id)))
if isinstance(event, DiscordEvent):
entities.append(PermEntity("discord", "global", "global"))
if isinstance(event, DiscordGMEvent):
entities.append(PermEntity("discord", "guild", str(event.guild_id)))
entities.append(PermEntity("discord", "channel", str(event.channel_id)))
entities.append(PermEntity("discord", "user", str(event.user_id)))
if isinstance(event, DiscordDMEvent):
entities.append(PermEntity("discord", "channel", str(event.channel_id)))
entities.append(PermEntity("discord", "user", str(event.user_id)))
if isinstance(event, MinecraftMessageEvent):
entities.append(PermEntity("minecraft", "global", "global"))
entities.append(PermEntity("minecraft", "server", event.server_name))
player_uuid = event.player.uuid
if player_uuid is not None:
entities.append(PermEntity("minecraft", "player", player_uuid.hex))
if isinstance(event, ConsoleEvent):
entities.append(PermEntity("console", "global", "global"))
entities.append(PermEntity("console", "channel", event.channel.id))
entities.append(PermEntity("console", "user", event.user.id))
return entities[::-1]


@ -0,0 +1,81 @@
from dataclasses import dataclass
from pathlib import Path
import aiosqlite
from loguru import logger
from konabot.common.database import DatabaseManager
from konabot.common.path import DATA_PATH
PATH_THISFOLDER = Path(__file__).parent
SQL_CHECK_EXISTS = (PATH_THISFOLDER / "./check_migrate_version_exists.sql").read_text()
SQL_CREATE_TABLE = (PATH_THISFOLDER / "./create_migrate_version_table.sql").read_text()
SQL_GET_MIGRATE_VERSION = (PATH_THISFOLDER / "get_migrate_version.sql").read_text()
SQL_UPDATE_VERSION = (PATH_THISFOLDER / "./update_migrate_version.sql").read_text()
db = DatabaseManager(DATA_PATH / "perm.sqlite3")
@dataclass
class Migration:
upgrade: str | Path
downgrade: str | Path
def get_upgrade_script(self) -> str:
if isinstance(self.upgrade, Path):
return self.upgrade.read_text()
return self.upgrade
def get_downgrade_script(self) -> str:
if isinstance(self.downgrade, Path):
return self.downgrade.read_text()
return self.downgrade
migrations = [
Migration(
PATH_THISFOLDER / "./mu1_create_permsys_table.sql",
PATH_THISFOLDER / "./md1_remove_permsys_table.sql",
),
]
TARGET_VERSION = len(migrations)
async def get_current_version(conn: aiosqlite.Connection) -> int:
cursor = await conn.execute(SQL_CHECK_EXISTS)
count = await cursor.fetchone()
assert count is not None
if count[0] < 1:
logger.info("Permission system tables do not exist; creating them now")
await conn.executescript(SQL_CREATE_TABLE)
await conn.commit()
return 0
cursor = await conn.execute(SQL_GET_MIGRATE_VERSION)
row = await cursor.fetchone()
if row is None:
return 0
return row[0]
async def execute_migration(
conn: aiosqlite.Connection,
version: int = TARGET_VERSION,
migrations: list[Migration] = migrations,
):
now_version = await get_current_version(conn)
while now_version < version:
migration = migrations[now_version]
await conn.executescript(migration.get_upgrade_script())
now_version += 1
await conn.execute(SQL_UPDATE_VERSION, (now_version,))
await conn.commit()
while now_version > version:
migration = migrations[now_version - 1]
await conn.executescript(migration.get_downgrade_script())
now_version -= 1
await conn.execute(SQL_UPDATE_VERSION, (now_version,))
await conn.commit()


@ -0,0 +1,7 @@
SELECT
COUNT(*)
FROM
sqlite_master
WHERE
type = 'table'
AND name = 'migrate_version'


@ -0,0 +1,3 @@
CREATE TABLE migrate_version(version INT PRIMARY KEY);
INSERT INTO migrate_version(version)
VALUES(0);


@ -0,0 +1,4 @@
SELECT
version
FROM
migrate_version;


@ -0,0 +1,2 @@
DROP TABLE IF EXISTS perm_entity;
DROP TABLE IF EXISTS perm_info;


@ -0,0 +1,30 @@
CREATE TABLE perm_entity(
id INTEGER PRIMARY KEY AUTOINCREMENT,
platform TEXT NOT NULL,
entity_type TEXT NOT NULL,
external_id TEXT NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE UNIQUE INDEX idx_perm_entity_lookup
ON perm_entity(platform, entity_type, external_id);
CREATE TABLE perm_info(
entity_id INTEGER NOT NULL,
config_key TEXT NOT NULL,
value BOOLEAN,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
-- composite primary key
PRIMARY KEY (entity_id, config_key)
);
CREATE TRIGGER perm_entity_update AFTER UPDATE
ON perm_entity BEGIN
UPDATE perm_entity SET updated_at=CURRENT_TIMESTAMP WHERE id=old.id;
END;
CREATE TRIGGER perm_info_update AFTER UPDATE
ON perm_info BEGIN
UPDATE perm_info SET updated_at=CURRENT_TIMESTAMP WHERE entity_id=old.entity_id AND config_key=old.config_key;
END;


@ -0,0 +1,2 @@
UPDATE migrate_version
SET version = ?;


@ -0,0 +1,242 @@
from dataclasses import dataclass
import math
from pathlib import Path
import aiosqlite
from konabot.common.pager import PagerQuery, PagerResult
from .entity import PermEntity
def s(p: str):
"""Read the contents of a SQL file.
Args:
p: SQL file name (relative to the sql/ subdirectory next to this file).
Returns:
The SQL file's contents as a string.
"""
return (Path(__file__).parent / "./sql/" / p).read_text()
@dataclass
class PermRepo:
"""Permission entity repository; manages permission entities in the database.
Attributes:
conn: aiosqlite database connection.
"""
conn: aiosqlite.Connection
async def create_entity(self, entity: PermEntity) -> int:
"""Create a new permission entity and return its ID.
Args:
entity: the permission entity to create.
Returns:
The database ID of the newly created entity.
Raises:
AssertionError: if the entity ID cannot be retrieved after creation.
"""
await self.conn.execute(
s("create_entity.sql"),
(entity.platform, entity.entity_type, entity.external_id),
)
await self.conn.commit()
eid = await self._get_entity_id_or_none(entity)
assert eid is not None
return eid
async def _get_entity_id_or_none(self, entity: PermEntity) -> int | None:
"""Look up an entity ID, returning None if the entity does not exist.
Args:
entity: the permission entity to look up.
Returns:
The entity ID, or None if it does not exist.
"""
res = await self.conn.execute(
s("get_entity_id.sql"),
(entity.platform, entity.entity_type, entity.external_id),
)
row = await res.fetchone()
if row is None:
return None
return row[0]
async def get_entity_id(self, entity: PermEntity) -> int:
"""Get an entity's ID, creating the entity automatically if it does not exist.
Args:
entity: the permission entity.
Returns:
The entity's database ID.
"""
eid = await self._get_entity_id_or_none(entity)
if eid is None:
return await self.create_entity(entity)
return eid
async def get_perm_info(self, entity: PermEntity, config_key: str) -> bool | None:
"""Get an entity's permission configuration.
Args:
entity: the permission entity.
config_key: the configuration key.
Returns:
The configured value (True/False), or None if not set.
"""
eid = await self.get_entity_id(entity)
res = await self.conn.execute(
s("get_perm_info.sql"),
(eid, config_key),
)
row = await res.fetchone()
if row is None:
return None
return bool(row[0])
async def update_perm_info(
self, entity: PermEntity, config_key: str, value: bool | None
):
"""Update an entity's permission configuration.
Args:
entity: the permission entity.
config_key: the configuration key.
value: the value to set (True/False/None)
"""
eid = await self.get_entity_id(entity)
await self.conn.execute(s("update_perm_info.sql"), (eid, config_key, value))
await self.conn.commit()
async def get_entity_id_batch(
self, entities: list[PermEntity]
) -> dict[PermEntity, int]:
"""Batch-fetch entity_id values for a list of entities
Args:
entities: list of PermEntity
Returns:
A dict mapping each PermEntity to its ID
"""
await self.conn.executemany(
s("create_entity.sql"),
[(e.platform, e.entity_type, e.external_id) for e in entities],
)
await self.conn.commit()
val_placeholders = ", ".join(["(?, ?, ?)"] * len(entities))
params = []
for e in entities:
params.extend([e.platform, e.entity_type, e.external_id])
cursor = await self.conn.execute(
f"""
SELECT id, platform, entity_type, external_id
FROM perm_entity
WHERE (platform, entity_type, external_id) IN (VALUES {val_placeholders});
""",
params,
)
rows = await cursor.fetchall()
return {PermEntity(row[1], row[2], row[3]): row[0] for row in rows}
async def get_perm_info_batch(
self, entities: list[PermEntity], config_keys: list[str]
) -> dict[tuple[PermEntity, str], bool]:
"""Batch-fetch permission info
Args:
entities: list of PermEntity
config_keys: list of keys to query
Returns:
A dict keyed by (PermEntity, config_key) tuples with boolean values; all null values are filtered out
"""
id_to_entity = {
v: k for k, v in (await self.get_entity_id_batch(entities)).items()
}
placeholders1 = ", ".join("?" * len(id_to_entity))
placeholders2 = ", ".join("?" * len(config_keys))
sql = f"""
SELECT entity_id, config_key, value
FROM perm_info
WHERE entity_id IN ({placeholders1})
AND config_key IN ({placeholders2})
AND value IS NOT NULL;
"""
params = tuple(id_to_entity.keys()) + tuple(config_keys)
cursor = await self.conn.execute(sql, params)
rows = await cursor.fetchall()
return {(id_to_entity[row[0]], row[1]): bool(row[2]) for row in rows}
async def list_perm_info_batch(
self, entities: list[PermEntity], pager: PagerQuery
) -> PagerResult[tuple[PermEntity, str, bool]]:
"""List permission info for the given entities, paginated.
Args:
entities: A list of PermEntity objects.
pager: A PagerQuery object describing the requested page.
Returns:
A PagerResult of (PermEntity, config_key, value) tuples; null values are filtered out.
"""
entity_to_id = await self.get_entity_id_batch(entities)
id_to_entity = {v: k for k, v in entity_to_id.items()}
ordered_ids = [entity_to_id[e] for e in entities if e in entity_to_id]
placeholders = ", ".join("?" * len(ordered_ids))
order_by_cases = " ".join([f"WHEN ? THEN {i}" for i in range(len(ordered_ids))])
pagecount_sql = f"SELECT COUNT(*) FROM perm_info WHERE entity_id IN ({placeholders}) AND value IS NOT NULL;"
count_cursor = await self.conn.execute(pagecount_sql, tuple(ordered_ids))
total_count = (await count_cursor.fetchone() or (0,))[0]
sql = f"""
SELECT entity_id, config_key, value
FROM perm_info
WHERE entity_id IN ({placeholders})
AND value IS NOT NULL
ORDER BY
(CASE entity_id {order_by_cases} END) ASC,
config_key ASC
LIMIT ?
OFFSET ?;
"""
params = (
tuple(ordered_ids)
+ tuple(ordered_ids)
+ (
pager.page_size,
(pager.page_index - 1) * pager.page_size,
)
)
cursor = await self.conn.execute(sql, params)
rows = await cursor.fetchall()
return PagerResult(
data=[(id_to_entity[row[0]], row[1], bool(row[2])) for row in rows],
success=True,
message="",
page_count=math.ceil(total_count / pager.page_size),
query=pager,
)
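The row-value `IN (VALUES …)` lookup used by `get_entity_id_batch` can be exercised against plain `sqlite3` (row values need SQLite ≥ 3.15). The schema below is a minimal assumption for illustration; the project's real table is created by its own SQL files:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical minimal schema; the UNIQUE constraint makes INSERT OR IGNORE idempotent.
conn.execute(
    "CREATE TABLE perm_entity (id INTEGER PRIMARY KEY, platform TEXT, "
    "entity_type TEXT, external_id TEXT, UNIQUE(platform, entity_type, external_id))"
)
entities = [("qq", "user", "1001"), ("qq", "group", "2002")]
conn.executemany(
    "INSERT OR IGNORE INTO perm_entity(platform, entity_type, external_id) "
    "VALUES (?, ?, ?)",
    entities,
)
# One query resolves every requested (platform, entity_type, external_id) tuple.
placeholders = ", ".join(["(?, ?, ?)"] * len(entities))
params = [v for e in entities for v in e]
rows = conn.execute(
    f"SELECT id, platform, entity_type, external_id FROM perm_entity "
    f"WHERE (platform, entity_type, external_id) IN (VALUES {placeholders})",
    params,
).fetchall()
mapping = {(p, t, x): i for i, p, t, x in rows}
```

The flat `params` list lines up with the flattened `(?, ?, ?)` groups in order of appearance, which is how SQLite binds positional parameters.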


@ -0,0 +1,11 @@
INSERT
OR IGNORE INTO perm_entity(
platform,
entity_type,
external_id
)
VALUES(
?,
?,
?
);
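`INSERT OR IGNORE` makes entity creation idempotent: a conflicting row is skipped silently instead of raising. A quick sketch against in-memory `sqlite3` (the `UNIQUE` constraint is an assumption here; the real schema lives in the project's migration SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed schema: OR IGNORE only "ignores" when a uniqueness constraint conflicts.
conn.execute(
    "CREATE TABLE perm_entity (id INTEGER PRIMARY KEY, platform TEXT, "
    "entity_type TEXT, external_id TEXT, UNIQUE(platform, entity_type, external_id))"
)
sql = ("INSERT OR IGNORE INTO perm_entity(platform, entity_type, external_id) "
       "VALUES (?, ?, ?)")
first = conn.execute(sql, ("qq", "user", "1001")).rowcount   # row inserted
second = conn.execute(sql, ("qq", "user", "1001")).rowcount  # conflict ignored
count = conn.execute("SELECT COUNT(*) FROM perm_entity").fetchone()[0]
```

Running the same insert twice leaves exactly one row, which is why the batch code can fire `create_entity.sql` unconditionally before looking up IDs.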


@ -0,0 +1,8 @@
SELECT
id
FROM
perm_entity
WHERE
perm_entity.platform = ?
AND perm_entity.entity_type = ?
AND perm_entity.external_id = ?;


@ -0,0 +1,7 @@
SELECT
value
FROM
perm_info
WHERE
entity_id = ?
AND config_key = ?;


@ -0,0 +1,4 @@
INSERT INTO perm_info (entity_id, config_key, value)
VALUES (?, ?, ?)
ON CONFLICT(entity_id, config_key)
DO UPDATE SET value=excluded.value;
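The `ON CONFLICT … DO UPDATE` clause turns this insert into an upsert (SQLite ≥ 3.24). A minimal sketch, assuming `perm_info` carries a uniqueness constraint on `(entity_id, config_key)`, which the conflict target requires:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed schema: ON CONFLICT(entity_id, config_key) needs a matching UNIQUE constraint.
conn.execute(
    "CREATE TABLE perm_info (entity_id INTEGER, config_key TEXT, value INTEGER, "
    "UNIQUE(entity_id, config_key))"
)
sql = (
    "INSERT INTO perm_info (entity_id, config_key, value) VALUES (?, ?, ?) "
    "ON CONFLICT(entity_id, config_key) DO UPDATE SET value=excluded.value"
)
conn.execute(sql, (1, "can_use_ai", 1))
conn.execute(sql, (1, "can_use_ai", 0))  # same key: row is updated, not duplicated
row = conn.execute(
    "SELECT value FROM perm_info WHERE entity_id = 1 AND config_key = 'can_use_ai'"
).fetchone()
count = conn.execute("SELECT COUNT(*) FROM perm_info").fetchone()[0]
```

`excluded.value` refers to the value the rejected insert would have written, so the second call overwrites in place.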


@ -0,0 +1,3 @@
# Deprecated
坏枪 pulled off, with a plain LLM plus prompt engineering, the impressive feature that even the 200-yuan `qwen3-coder-plus` couldn't manage.


@ -0,0 +1,58 @@
"""
Professional time parsing module for Chinese and English time expressions.
This module provides a robust parser for natural language time expressions,
supporting both Chinese and English formats with proper whitespace handling.
"""
import datetime
from typing import Optional
from .expression import TimeExpression
def parse(text: str, now: Optional[datetime.datetime] = None) -> datetime.datetime:
"""
Parse a time expression and return a datetime object.
Args:
text: The time expression to parse
now: The reference time (defaults to current time)
Returns:
A datetime object representing the parsed time
Raises:
TokenUnhandledException: If the input cannot be parsed
"""
return TimeExpression.parse(text, now)
class Parser:
"""
Parser for time expressions with backward compatibility.
Maintains the original interface:
>>> parser = Parser()
>>> result = parser.parse("10分钟后")
"""
def __init__(self, now: Optional[datetime.datetime] = None):
self.now = now or datetime.datetime.now()
def parse(self, text: str) -> datetime.datetime:
"""
Parse a time expression and return a datetime object.
This maintains backward compatibility with the original interface.
Args:
text: The time expression to parse
Returns:
A datetime object representing the parsed time
Raises:
TokenUnhandledException: If the input cannot be parsed
"""
return TimeExpression.parse(text, self.now)


@ -0,0 +1,133 @@
"""
Chinese number parser for the time expression parser.
"""
import re
from typing import Tuple
class ChineseNumberParser:
"""Parser for Chinese numbers."""
def __init__(self):
self.digits = {"零": 0, "一": 1, "二": 2, "三": 3, "四": 4,
"五": 5, "六": 6, "七": 7, "八": 8, "九": 9}
self.units = {"十": 10, "百": 100, "千": 1000, "万": 10000, "亿": 100000000}
def digest(self, text: str) -> Tuple[str, int]:
"""
Parse a Chinese number from the beginning of text and return the rest and the parsed number.
Args:
text: Text that may start with a Chinese number
Returns:
Tuple of (remaining_text, parsed_number)
"""
if not text:
return text, 0
# Handle "两" at start
if text.startswith("两"):
# Check if "两" is followed by a time unit
# Look ahead to see if we have a valid pattern like "两小时", "两分钟", etc.
if len(text) >= 2:
# Check for time units that start with the second character
time_units = ["小时", "分钟", ""]
for unit in time_units:
if text[1:].startswith(unit):
# Return the text starting from the time unit, not after it
# The parser will handle the time unit in the next step
return text[1:], 2
# Check for single character time units
next_char = text[1]
if next_char in "时分秒":
return text[1:], 2
# Check for Chinese number units
if next_char in "十百千万亿":
# This will be handled by the normal parsing below
pass
# If "两" is at the end of string, treat it as standalone
elif len(text) == 1:
return "", 2
# Also accept "两" followed by whitespace and then time units
elif next_char.isspace():
# Check if after whitespace we have time units
rest_after_space = text[2:].lstrip()
for unit in time_units:
if rest_after_space.startswith(unit):
# Return the text starting from the time unit
space_len = len(text[2:]) - len(rest_after_space)
return text[2+space_len:], 2
# Check single character time units after whitespace
if rest_after_space and rest_after_space[0] in "时分秒":
return text[2:], 2
else:
# Just "两" by itself
return "", 2
s = "零一二三四五六七八九"
i = 0
while i < len(text) and text[i] in s + "十百千万亿":
i += 1
if i == 0:
return text, 0
num_str = text[:i]
rest = text[i:]
return rest, self.parse(num_str)
def parse(self, text: str) -> int:
"""
Parse a Chinese number string and return its integer value.
Args:
text: Chinese number string
Returns:
Integer value of the Chinese number
"""
if not text:
return 0
if text == "零":
return 0
if text == "两":
return 2
# Handle special case for "十"
if text == "十":
return 10
# Handle numbers with "亿"
if "亿" in text:
parts = text.split("亿", 1)
a, b = parts[0], parts[1]
return self.parse(a) * 100000000 + self.parse(b)
# Handle numbers with "万"
if "万" in text:
parts = text.split("万", 1)
a, b = parts[0], parts[1]
return self.parse(a) * 10000 + self.parse(b)
# Handle remaining numbers
result = 0
temp = 0
for char in text:
if char == "零":
continue
elif char == "两":
temp = 2
elif char in self.digits:
temp = self.digits[char]
elif char in self.units:
unit = self.units[char]
if unit == 10 and temp == 0:
# Special case for numbers like "十三"
temp = 1
result += temp * unit
temp = 0
result += temp
return result
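The unit-accumulation scheme above (digits buffered in `temp`, multiplied out when a unit arrives, with recursion on 万/亿) can be condensed into a standalone sketch. `parse_cn` is a hypothetical re-implementation for illustration only, and it omits the class's special handling of "两":

```python
def parse_cn(text: str) -> int:
    """Condensed sketch of the Chinese-numeral accumulation scheme above."""
    digits = {c: i for i, c in enumerate("零一二三四五六七八九")}
    units = {"十": 10, "百": 100, "千": 1000}
    # 亿/万 split the numeral into a high part and a low part, recursively.
    for big_char, big_val in (("亿", 10**8), ("万", 10**4)):
        if big_char in text:
            head, tail = text.split(big_char, 1)
            return parse_cn(head) * big_val + parse_cn(tail)
    result = temp = 0
    for ch in text:
        if ch == "零":
            continue
        if ch in digits:
            temp = digits[ch]
        elif ch in units:
            unit = units[ch]
            if unit == 10 and temp == 0:
                temp = 1  # "十三" -> 13: a leading 十 means 1 * 10
            result += temp * unit
            temp = 0
    return result + temp
```

The trailing `+ temp` is what picks up a bare final digit, as in "三百二十一".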


@ -0,0 +1,11 @@
class PTimeParseException(Exception):
...
class TokenUnhandledException(PTimeParseException):
...
class MultipleSpecificationException(PTimeParseException):
...
class OutOfRangeSpecificationException(PTimeParseException):
...


@ -0,0 +1,63 @@
"""
Main time expression parser class that integrates all components.
"""
import datetime
from typing import Optional
from .lexer import Lexer
from .parser import Parser
from .semantic import SemanticAnalyzer
from .ptime_ast import TimeExpressionNode
from .err import TokenUnhandledException
class TimeExpression:
"""Main class for parsing time expressions."""
def __init__(self, text: str, now: Optional[datetime.datetime] = None):
self.text = text.strip()
self.now = now or datetime.datetime.now()
if not self.text:
raise TokenUnhandledException("Empty input")
# Initialize components
self.lexer = Lexer(self.text, self.now)
self.parser = Parser(self.text, self.now)
self.semantic_analyzer = SemanticAnalyzer(self.now)
# Parse the expression
self.ast = self._parse()
def _parse(self) -> TimeExpressionNode:
"""Parse the time expression and return the AST."""
try:
return self.parser.parse()
except Exception as e:
raise TokenUnhandledException(f"Failed to parse '{self.text}': {e}") from e
def evaluate(self) -> datetime.datetime:
"""Evaluate the time expression and return the datetime."""
try:
return self.semantic_analyzer.evaluate(self.ast)
except Exception as e:
raise TokenUnhandledException(f"Failed to evaluate '{self.text}': {e}") from e
@classmethod
def parse(cls, text: str, now: Optional[datetime.datetime] = None) -> datetime.datetime:
"""
Parse a time expression and return a datetime object.
Args:
text: The time expression to parse
now: The reference time (defaults to current time)
Returns:
A datetime object representing the parsed time
Raises:
TokenUnhandledException: If the input cannot be parsed
"""
expression = cls(text, now)
return expression.evaluate()


@ -0,0 +1,225 @@
"""
Lexical analyzer for time expressions.
"""
import re
from typing import Iterator, Optional
import datetime
from .ptime_token import Token, TokenType
from .chinese_number import ChineseNumberParser
class Lexer:
"""Lexical analyzer for time expressions."""
def __init__(self, text: str, now: Optional[datetime.datetime] = None):
self.text = text
self.pos = 0
self.current_char = self.text[self.pos] if self.text else None
self.now = now or datetime.datetime.now()
self.chinese_parser = ChineseNumberParser()
# Define token patterns
self.token_patterns = [
# Whitespace
(r'^\s+', TokenType.WHITESPACE),
# Time separators
(r'^:', TokenType.TIME_SEPARATOR),
(r'^点', TokenType.TIME_SEPARATOR),
(r'^时', TokenType.TIME_SEPARATOR),
(r'^分', TokenType.TIME_SEPARATOR),
(r'^秒', TokenType.TIME_SEPARATOR),
# Special time markers
(r'^半', TokenType.HALF),
(r'^一刻', TokenType.QUARTER),
(r'^整', TokenType.ZHENG),
(r'^钟', TokenType.ZHONG),
# Period indicators (must come before relative time patterns to avoid conflicts)
(r'^(上午|早晨|早上|清晨|早(?!\d))', TokenType.PERIOD_AM),
(r'^(中午|下午|晚上|晚(?!\d)|凌晨|午夜)', TokenType.PERIOD_PM),
# Week scope (more specific patterns first)
(r'^本周', TokenType.WEEK_SCOPE_CURRENT),
(r'^上周', TokenType.WEEK_SCOPE_LAST),
(r'^下周', TokenType.WEEK_SCOPE_NEXT),
# Relative directions
(r'^(后|以后|之后)', TokenType.RELATIVE_DIRECTION_FORWARD),
(r'^(前|以前|之前)', TokenType.RELATIVE_DIRECTION_BACKWARD),
# Extended relative time
(r'^明年', TokenType.RELATIVE_NEXT),
(r'^去年', TokenType.RELATIVE_LAST),
(r'^今年', TokenType.RELATIVE_THIS),
(r'^下(?![午年月周])', TokenType.RELATIVE_NEXT),
(r'^(上|去)(?![午年月周])', TokenType.RELATIVE_LAST),
(r'^这', TokenType.RELATIVE_THIS),
(r'^本(?![周月年])', TokenType.RELATIVE_THIS), # Match "本" but not "本周", "本月", "本年"
# Week scope (fallback for standalone terms)
(r'^本', TokenType.WEEK_SCOPE_CURRENT),
(r'^上', TokenType.WEEK_SCOPE_LAST),
(r'^下(?![午年月周])', TokenType.WEEK_SCOPE_NEXT),
# Week days (order matters - longer patterns first)
(r'^周一', TokenType.WEEKDAY_MONDAY),
(r'^周二', TokenType.WEEKDAY_TUESDAY),
(r'^周三', TokenType.WEEKDAY_WEDNESDAY),
(r'^周四', TokenType.WEEKDAY_THURSDAY),
(r'^周五', TokenType.WEEKDAY_FRIDAY),
(r'^周六', TokenType.WEEKDAY_SATURDAY),
(r'^周日', TokenType.WEEKDAY_SUNDAY),
# Single character weekdays should be matched after numbers
# (r'^一', TokenType.WEEKDAY_MONDAY),
# (r'^二', TokenType.WEEKDAY_TUESDAY),
# (r'^三', TokenType.WEEKDAY_WEDNESDAY),
# (r'^四', TokenType.WEEKDAY_THURSDAY),
# (r'^五', TokenType.WEEKDAY_FRIDAY),
# (r'^六', TokenType.WEEKDAY_SATURDAY),
# (r'^日', TokenType.WEEKDAY_SUNDAY),
# Student-friendly time expressions
(r'^早(?=\d)', TokenType.EARLY_MORNING),
(r'^晚(?=\d)', TokenType.LATE_NIGHT),
# Relative today variants
(r'^今晚上', TokenType.RELATIVE_TODAY),
(r'^今晚', TokenType.RELATIVE_TODAY),
(r'^今早', TokenType.RELATIVE_TODAY),
(r'^今天早上', TokenType.RELATIVE_TODAY),
(r'^今天早晨', TokenType.RELATIVE_TODAY),
(r'^今天上午', TokenType.RELATIVE_TODAY),
(r'^今天下午', TokenType.RELATIVE_TODAY),
(r'^今天晚上', TokenType.RELATIVE_TODAY),
(r'^今天', TokenType.RELATIVE_TODAY),
# Relative days
(r'^明天', TokenType.RELATIVE_TOMORROW),
(r'^后天', TokenType.RELATIVE_DAY_AFTER_TOMORROW),
(r'^大后天', TokenType.RELATIVE_THREE_DAYS_AFTER_TOMORROW),
(r'^昨天', TokenType.RELATIVE_YESTERDAY),
(r'^前天', TokenType.RELATIVE_DAY_BEFORE_YESTERDAY),
(r'^大前天', TokenType.RELATIVE_THREE_DAYS_BEFORE_YESTERDAY),
# Digits
(r'^\d+', TokenType.INTEGER),
# Time units (must come after date separators to avoid conflicts)
(r'^年(?![月日号])', TokenType.YEAR),
(r'^月(?![日号])', TokenType.MONTH),
(r'^[日号](?![月年])', TokenType.DAY),
(r'^天', TokenType.DAY),
(r'^周', TokenType.WEEK),
(r'^小时', TokenType.HOUR),
(r'^分钟', TokenType.MINUTE),
(r'^秒', TokenType.SECOND),
# Date separators (fallback patterns)
(r'^年', TokenType.DATE_SEPARATOR),
(r'^月', TokenType.DATE_SEPARATOR),
(r'^[日号]', TokenType.DATE_SEPARATOR),
(r'^[-/]', TokenType.DATE_SEPARATOR),
]
def advance(self):
"""Advance the position pointer and set the current character."""
self.pos += 1
if self.pos >= len(self.text):
self.current_char = None
else:
self.current_char = self.text[self.pos]
def skip_whitespace(self):
"""Skip whitespace characters."""
while self.current_char is not None and self.current_char.isspace():
self.advance()
def integer(self) -> int:
"""Parse an integer from the input."""
result = ''
while self.current_char is not None and self.current_char.isdigit():
result += self.current_char
self.advance()
return int(result)
def chinese_number(self) -> int:
"""Parse a Chinese number from the input."""
# Find the longest prefix that can be parsed as a Chinese number
for i in range(len(self.text) - self.pos, 0, -1):
prefix = self.text[self.pos:self.pos + i]
try:
# Use digest to get both the remaining text and the parsed value
remaining, value = self.chinese_parser.digest(prefix)
# Check if we actually consumed part of the prefix
consumed_length = len(prefix) - len(remaining)
if consumed_length > 0:
# Advance position by the length of the consumed text
for _ in range(consumed_length):
self.advance()
return value
except ValueError:
continue
# If no Chinese number found, just return 0
return 0
def get_next_token(self) -> Token:
"""Lexical analyzer that breaks the sentence into tokens."""
while self.current_char is not None:
# Skip whitespace
if self.current_char.isspace():
self.skip_whitespace()
continue
# Try to match each pattern
text_remaining = self.text[self.pos:]
for pattern, token_type in self.token_patterns:
match = re.match(pattern, text_remaining)
if match:
value = match.group(0)
position = self.pos
# Advance position
for _ in range(len(value)):
self.advance()
# Special handling for some tokens
if token_type == TokenType.INTEGER:
value = int(value)
elif token_type == TokenType.RELATIVE_TODAY and value in [
"今早", "今天早上", "今天早晨", "今天上午"
]:
token_type = TokenType.PERIOD_AM
elif token_type == TokenType.RELATIVE_TODAY and value in [
"今晚上", "今天下午", "今天晚上"
]:
token_type = TokenType.PERIOD_PM
return Token(token_type, value, position)
# Try to parse Chinese numbers
chinese_start_pos = self.pos
try:
chinese_value = self.chinese_number()
if chinese_value > 0:
# We successfully parsed a Chinese number
return Token(TokenType.CHINESE_NUMBER, chinese_value, chinese_start_pos)
except ValueError:
pass
# If no pattern matches, skip the character and continue
self.advance()
# End of file
return Token(TokenType.EOF, None, self.pos)
def tokenize(self) -> Iterator[Token]:
"""Generate all tokens from the input."""
while True:
token = self.get_next_token()
yield token
if token.type == TokenType.EOF:
break
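`get_next_token` above is an ordered first-match tokenizer: patterns are tried top to bottom against the remaining text, so longer or more specific patterns must precede their prefixes. A minimal standalone sketch of that dispatch (the pattern set and token names here are illustrative, not the project's `TokenType`):

```python
import re

# Order matters: more specific patterns must come before their prefixes.
PATTERNS = [
    (r"\d+", "INTEGER"),
    (r"小时", "HOUR"),
    (r"[::]", "TIME_SEPARATOR"),
    (r"\s+", None),  # whitespace is consumed but not emitted
]

def tokenize(text: str):
    pos, tokens = 0, []
    while pos < len(text):
        for pattern, token_type in PATTERNS:
            match = re.match(pattern, text[pos:])
            if match:
                if token_type is not None:
                    tokens.append((token_type, match.group(0), pos))
                pos += len(match.group(0))
                break
        else:
            pos += 1  # no pattern matched: skip one char, like the lexer above
    return tokens
```

The `for/else` mirrors the lexer's fallback of advancing past an unrecognized character instead of raising.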


@ -0,0 +1,846 @@
"""
Parser for time expressions that builds an Abstract Syntax Tree (AST).
"""
from typing import Iterator, Optional, List
import datetime
from .ptime_token import Token, TokenType
from .ptime_ast import (
ASTNode, NumberNode, DateNode, TimeNode,
RelativeDateNode, RelativeTimeNode, WeekdayNode, TimeExpressionNode
)
from .lexer import Lexer
class ParserError(Exception):
"""Exception raised for parser errors."""
pass
class Parser:
"""Parser for time expressions that builds an AST."""
def __init__(self, text: str, now: Optional[datetime.datetime] = None):
self.lexer = Lexer(text, now)
self.tokens: List[Token] = list(self.lexer.tokenize())
self.pos = 0
self.now = now or datetime.datetime.now()
@property
def current_token(self) -> Token:
"""Get the current token."""
if self.pos < len(self.tokens):
return self.tokens[self.pos]
return Token(TokenType.EOF, None, len(self.tokens))
def eat(self, token_type: TokenType) -> Token:
"""Consume a token of the expected type."""
if self.current_token.type == token_type:
token = self.current_token
self.pos += 1
return token
else:
raise ParserError(
f"Expected token {token_type}, got {self.current_token.type} "
f"at position {self.current_token.position}"
)
def peek(self, offset: int = 1) -> Token:
"""Look ahead at the next token without consuming it."""
next_pos = self.pos + offset
if next_pos < len(self.tokens):
return self.tokens[next_pos]
return Token(TokenType.EOF, None, len(self.tokens))
def parse_number(self) -> NumberNode:
"""Parse a number (integer or Chinese number)."""
token = self.current_token
if token.type == TokenType.INTEGER:
self.eat(TokenType.INTEGER)
return NumberNode(value=token.value)
elif token.type == TokenType.CHINESE_NUMBER:
self.eat(TokenType.CHINESE_NUMBER)
return NumberNode(value=token.value)
else:
raise ParserError(
f"Expected number, got {token.type} at position {token.position}"
)
def parse_date(self) -> DateNode:
"""Parse a date specification."""
year_node = None
month_node = None
day_node = None
# Try YYYY-MM-DD or YYYY/MM/DD format
if (self.current_token.type == TokenType.INTEGER and
self.peek().type == TokenType.DATE_SEPARATOR and
self.peek().value in ['-', '/'] and
self.peek(2).type == TokenType.INTEGER and
self.peek(3).type == TokenType.DATE_SEPARATOR and
self.peek(3).value in ['-', '/'] and
self.peek(4).type == TokenType.INTEGER):
year_token = self.current_token
self.eat(TokenType.INTEGER)
separator1 = self.eat(TokenType.DATE_SEPARATOR).value
month_token = self.current_token
self.eat(TokenType.INTEGER)
separator2 = self.eat(TokenType.DATE_SEPARATOR).value
day_token = self.current_token
self.eat(TokenType.INTEGER)
year_node = NumberNode(value=year_token.value)
month_node = NumberNode(value=month_token.value)
day_node = NumberNode(value=day_token.value)
return DateNode(year=year_node, month=month_node, day=day_node)
# Try YYYY年MM月DD[日号] format
if (self.current_token.type == TokenType.INTEGER and
self.peek().type in [TokenType.DATE_SEPARATOR, TokenType.YEAR] and
self.peek(2).type == TokenType.INTEGER and
self.peek(3).type in [TokenType.DATE_SEPARATOR, TokenType.MONTH] and
self.peek(4).type == TokenType.INTEGER):
year_token = self.current_token
self.eat(TokenType.INTEGER)
self.eat(self.current_token.type) # 年 (could be DATE_SEPARATOR or YEAR)
month_token = self.current_token
self.eat(TokenType.INTEGER)
self.eat(self.current_token.type) # 月 (could be DATE_SEPARATOR or MONTH)
day_token = self.current_token
self.eat(TokenType.INTEGER)
# Optional 日 or 号
if self.current_token.type in [TokenType.DATE_SEPARATOR, TokenType.DAY]:
self.eat(self.current_token.type)
year_node = NumberNode(value=year_token.value)
month_node = NumberNode(value=month_token.value)
day_node = NumberNode(value=day_token.value)
return DateNode(year=year_node, month=month_node, day=day_node)
# Try MM月DD[日号] format (without year)
if (self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.peek().type in [TokenType.DATE_SEPARATOR, TokenType.MONTH] and
self.peek().value == '月' and
self.peek(2).type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]):
month_token = self.current_token
self.eat(month_token.type)
self.eat(self.current_token.type) # 月 (could be DATE_SEPARATOR or MONTH)
day_token = self.current_token
self.eat(day_token.type)
# Optional 日 or 号
if self.current_token.type in [TokenType.DATE_SEPARATOR, TokenType.DAY]:
self.eat(self.current_token.type)
month_node = NumberNode(value=month_token.value)
day_node = NumberNode(value=day_token.value)
return DateNode(year=None, month=month_node, day=day_node)
# Try Chinese MM月DD[日号] format
if (self.current_token.type == TokenType.CHINESE_NUMBER and
self.peek().type == TokenType.DATE_SEPARATOR and
self.peek().value == '月' and
self.peek(2).type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]):
month_token = self.current_token
self.eat(TokenType.CHINESE_NUMBER)
self.eat(TokenType.DATE_SEPARATOR) # 月
day_token = self.current_token
self.eat(day_token.type)
# Optional 日 or 号
if self.current_token.type == TokenType.DATE_SEPARATOR:
self.eat(TokenType.DATE_SEPARATOR)
month_node = NumberNode(value=month_token.value)
day_node = NumberNode(value=day_token.value)
return DateNode(year=None, month=month_node, day=day_node)
raise ParserError(
f"Unable to parse date at position {self.current_token.position}"
)
def parse_time(self) -> TimeNode:
"""Parse a time specification."""
hour_node = None
minute_node = None
second_node = None
is_24hour = False
period = None
# Try HH:MM format
if (self.current_token.type == TokenType.INTEGER and
self.peek().type == TokenType.TIME_SEPARATOR and
self.peek().value == ':'):
hour_token = self.current_token
self.eat(TokenType.INTEGER)
self.eat(TokenType.TIME_SEPARATOR) # :
minute_token = self.current_token
self.eat(TokenType.INTEGER)
hour_node = NumberNode(value=hour_token.value)
minute_node = NumberNode(value=minute_token.value)
is_24hour = True # HH:MM is always interpreted as 24-hour
# Optional :SS
if (self.current_token.type == TokenType.TIME_SEPARATOR and
self.peek().type == TokenType.INTEGER):
self.eat(TokenType.TIME_SEPARATOR) # :
second_token = self.current_token
self.eat(TokenType.INTEGER)
second_node = NumberNode(value=second_token.value)
return TimeNode(
hour=hour_node,
minute=minute_node,
second=second_node,
is_24hour=is_24hour,
period=period
)
# Try Chinese time format (X点X分)
# First check for period indicators
period = None
if self.current_token.type in [TokenType.PERIOD_AM, TokenType.PERIOD_PM]:
if self.current_token.type == TokenType.PERIOD_AM:
period = "AM"
else:
period = "PM"
self.eat(self.current_token.type)
if self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER, TokenType.EARLY_MORNING, TokenType.LATE_NIGHT]:
if self.current_token.type == TokenType.EARLY_MORNING:
self.eat(TokenType.EARLY_MORNING)
is_24hour = True
period = "AM"
# Expect a number next
if self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]:
hour_token = self.current_token
self.eat(hour_token.type)
hour_node = NumberNode(value=hour_token.value)
# "早八" should be interpreted as 08:00
# If hour is greater than 12, treat as 24-hour
if hour_node.value > 12:
is_24hour = True
period = None
else:
raise ParserError(
f"Expected number after '早', got {self.current_token.type} "
f"at position {self.current_token.position}"
)
elif self.current_token.type == TokenType.LATE_NIGHT:
self.eat(TokenType.LATE_NIGHT)
is_24hour = True
period = "PM"
# Expect a number next
if self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]:
hour_token = self.current_token
self.eat(hour_token.type)
hour_node = NumberNode(value=hour_token.value)
# "晚十" should be interpreted as 22:00
# Adjust hour to 24-hour format
if hour_node.value <= 12:
hour_node.value += 12
is_24hour = True
period = None
else:
raise ParserError(
f"Expected number after '晚', got {self.current_token.type} "
f"at position {self.current_token.position}"
)
else:
# Regular time parsing
hour_token = self.current_token
self.eat(hour_token.type)
# Check for 点 or 时
if self.current_token.type == TokenType.TIME_SEPARATOR:
separator = self.current_token.value
self.eat(TokenType.TIME_SEPARATOR)
if separator == '点':
is_24hour = False
elif separator == '时':
is_24hour = True
hour_node = NumberNode(value=hour_token.value)
# Optional minutes
if self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]:
minute_token = self.current_token
self.eat(minute_token.type)
# Optional 分
if self.current_token.type == TokenType.TIME_SEPARATOR and \
self.current_token.value == '分':
self.eat(TokenType.TIME_SEPARATOR)
minute_node = NumberNode(value=minute_token.value)
# Handle special markers
if self.current_token.type == TokenType.HALF:
self.eat(TokenType.HALF)
minute_node = NumberNode(value=30)
elif self.current_token.type == TokenType.QUARTER:
self.eat(TokenType.QUARTER)
minute_node = NumberNode(value=15)
elif self.current_token.type == TokenType.ZHENG:
self.eat(TokenType.ZHENG)
if minute_node is None:
minute_node = NumberNode(value=0)
# Optional 钟
if self.current_token.type == TokenType.ZHONG:
self.eat(TokenType.ZHONG)
else:
# If no separator, treat as hour-only time (like "三点")
hour_node = NumberNode(value=hour_token.value)
is_24hour = False
return TimeNode(
hour=hour_node,
minute=minute_node,
second=second_node,
is_24hour=is_24hour,
period=period
)
raise ParserError(
f"Unable to parse time at position {self.current_token.position}"
)
def parse_relative_date(self) -> RelativeDateNode:
"""Parse a relative date specification."""
years = 0
months = 0
weeks = 0
days = 0
# Handle today variants
if self.current_token.type == TokenType.RELATIVE_TODAY:
self.eat(TokenType.RELATIVE_TODAY)
days = 0
elif self.current_token.type == TokenType.RELATIVE_TOMORROW:
self.eat(TokenType.RELATIVE_TOMORROW)
days = 1
elif self.current_token.type == TokenType.RELATIVE_DAY_AFTER_TOMORROW:
self.eat(TokenType.RELATIVE_DAY_AFTER_TOMORROW)
days = 2
elif self.current_token.type == TokenType.RELATIVE_THREE_DAYS_AFTER_TOMORROW:
self.eat(TokenType.RELATIVE_THREE_DAYS_AFTER_TOMORROW)
days = 3
elif self.current_token.type == TokenType.RELATIVE_YESTERDAY:
self.eat(TokenType.RELATIVE_YESTERDAY)
days = -1
elif self.current_token.type == TokenType.RELATIVE_DAY_BEFORE_YESTERDAY:
self.eat(TokenType.RELATIVE_DAY_BEFORE_YESTERDAY)
days = -2
elif self.current_token.type == TokenType.RELATIVE_THREE_DAYS_BEFORE_YESTERDAY:
self.eat(TokenType.RELATIVE_THREE_DAYS_BEFORE_YESTERDAY)
days = -3
else:
# Check if this looks like an absolute date pattern before processing
# Look ahead to see if this matches absolute date patterns
is_likely_absolute_date = False
# Check for MM月DD[日号] patterns (like "6月20日")
if (self.pos + 2 < len(self.tokens) and
self.tokens[self.pos].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.tokens[self.pos + 1].type in [TokenType.DATE_SEPARATOR, TokenType.MONTH] and
self.tokens[self.pos + 1].value == '月' and
self.tokens[self.pos + 2].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]):
is_likely_absolute_date = True
if is_likely_absolute_date:
# This looks like an absolute date, skip relative date parsing
raise ParserError("Looks like absolute date format")
# Try to parse extended relative time expressions
# Handle patterns like "明年", "去年", "下个月", "上个月", etc.
original_pos = self.pos
try:
# Check for "今年", "明年", "去年"
if self.current_token.type == TokenType.RELATIVE_THIS and self.peek().type == TokenType.YEAR:
self.eat(TokenType.RELATIVE_THIS)
self.eat(TokenType.YEAR)
years = 0 # Current year
elif self.current_token.type == TokenType.RELATIVE_NEXT and self.peek().type == TokenType.YEAR:
self.eat(TokenType.RELATIVE_NEXT)
self.eat(TokenType.YEAR)
years = 1 # Next year
elif self.current_token.type == TokenType.RELATIVE_LAST and self.peek().type == TokenType.YEAR:
self.eat(TokenType.RELATIVE_LAST)
self.eat(TokenType.YEAR)
years = -1 # Last year
elif self.current_token.type == TokenType.RELATIVE_NEXT and self.current_token.value == "明年":
self.eat(TokenType.RELATIVE_NEXT)
years = 1 # Next year
# Check if there's a month after "明年"
if (self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.peek().type == TokenType.MONTH):
# Parse the month
month_node = self.parse_number()
self.eat(TokenType.MONTH) # Eat the "月" token
# Store the month in the months field as a special marker
# We'll handle this in semantic analysis
months = month_node.value - 100 # Use negative offset to indicate absolute month
elif self.current_token.type == TokenType.RELATIVE_LAST and self.current_token.value == "去年":
self.eat(TokenType.RELATIVE_LAST)
years = -1 # Last year
elif self.current_token.type == TokenType.RELATIVE_THIS and self.current_token.value == "今年":
self.eat(TokenType.RELATIVE_THIS)
years = 0 # Current year
# Check for "这个月", "下个月", "上个月"
elif self.current_token.type == TokenType.RELATIVE_THIS and self.peek().type == TokenType.MONTH:
self.eat(TokenType.RELATIVE_THIS)
self.eat(TokenType.MONTH)
months = 0 # Current month
elif self.current_token.type == TokenType.RELATIVE_NEXT and self.peek().type == TokenType.MONTH:
self.eat(TokenType.RELATIVE_NEXT)
self.eat(TokenType.MONTH)
months = 1 # Next month
# Handle patterns like "下个月五号"
if (self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.peek().type == TokenType.DAY):
# Parse the day
day_node = self.parse_number()
self.eat(TokenType.DAY) # Eat the "号" token
# Instead of adding days to the current date, we should set a specific day in the target month
# We'll handle this in semantic analysis by setting a flag or special value
days = 0 # Reset days - we'll handle the day differently
# Use a special marker to indicate we want a specific day in the target month
# For now, we'll just store the target day in the weeks field as a temporary solution
weeks = day_node.value # This is a hack - we'll fix this in semantic analysis
elif self.current_token.type == TokenType.RELATIVE_LAST and self.peek().type == TokenType.MONTH:
self.eat(TokenType.RELATIVE_LAST)
self.eat(TokenType.MONTH)
months = -1 # Last month
# Check for "下周", "上周"
elif self.current_token.type == TokenType.RELATIVE_NEXT and self.peek().type == TokenType.WEEK:
self.eat(TokenType.RELATIVE_NEXT)
self.eat(TokenType.WEEK)
weeks = 1 # Next week
elif self.current_token.type == TokenType.RELATIVE_LAST and self.peek().type == TokenType.WEEK:
self.eat(TokenType.RELATIVE_LAST)
self.eat(TokenType.WEEK)
weeks = -1 # Last week
# Handle more complex patterns like "X年后", "X个月后", etc.
elif self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]:
# Check if this is likely an absolute date format (e.g., "2025年11月21日")
# If the next token after the number is a date separator or date unit,
# and the number looks like a year (4 digits) or the pattern continues,
# it might be an absolute date. In that case, skip relative date parsing.
# Look ahead to see if this matches absolute date patterns
lookahead_pos = self.pos
is_likely_absolute_date = False
# Check for YYYY-MM-DD or YYYY/MM/DD patterns
if (lookahead_pos + 4 < len(self.tokens) and
self.tokens[lookahead_pos].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.tokens[lookahead_pos + 1].type in [TokenType.DATE_SEPARATOR, TokenType.YEAR] and
self.tokens[lookahead_pos + 1].value in ['-', '/', '年'] and
self.tokens[lookahead_pos + 2].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.tokens[lookahead_pos + 3].type in [TokenType.DATE_SEPARATOR, TokenType.MONTH] and
self.tokens[lookahead_pos + 3].value in ['-', '/', '月']):
is_likely_absolute_date = True
# Check for YYYY年MM月DD patterns
if (lookahead_pos + 4 < len(self.tokens) and
self.tokens[lookahead_pos].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.tokens[lookahead_pos + 1].type in [TokenType.DATE_SEPARATOR, TokenType.YEAR] and
self.tokens[lookahead_pos + 1].value == '年' and
self.tokens[lookahead_pos + 2].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.tokens[lookahead_pos + 3].type in [TokenType.DATE_SEPARATOR, TokenType.MONTH] and
self.tokens[lookahead_pos + 3].value == '月'):
is_likely_absolute_date = True
# Check for MM月DD[日号] patterns (like "6月20日")
if (self.pos + 2 < len(self.tokens) and
self.tokens[self.pos].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER] and
self.tokens[self.pos + 1].type in [TokenType.DATE_SEPARATOR, TokenType.MONTH] and
self.tokens[self.pos + 1].value == '月' and
self.tokens[self.pos + 2].type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]):
is_likely_absolute_date = True
if is_likely_absolute_date:
# This looks like an absolute date, skip relative date parsing
raise ParserError("Looks like absolute date format")
# Parse the number
number_node = self.parse_number()
number_value = number_node.value
print(f"DEBUG: Parsed number: {number_value}")
# Check the unit
if self.current_token.type == TokenType.YEAR:
self.eat(TokenType.YEAR)
years = number_value
elif self.current_token.type == TokenType.MONTH:
self.eat(TokenType.MONTH)
months = number_value
elif self.current_token.type == TokenType.WEEK:
self.eat(TokenType.WEEK)
weeks = number_value
elif self.current_token.type == TokenType.DAY:
self.eat(TokenType.DAY)
days = number_value
else:
raise ParserError(
f"Expected time unit, got {self.current_token.type} "
f"at position {self.current_token.position}"
)
# Check direction (前/后)
if self.current_token.type == TokenType.RELATIVE_DIRECTION_FORWARD:
self.eat(TokenType.RELATIVE_DIRECTION_FORWARD)
# Forward direction: values are already positive
elif self.current_token.type == TokenType.RELATIVE_DIRECTION_BACKWARD:
self.eat(TokenType.RELATIVE_DIRECTION_BACKWARD)
# Backward direction: negate the parsed values
years = -years
months = -months
weeks = -weeks
days = -days
except ParserError:
# Reset position if parsing failed
self.pos = original_pos
raise ParserError(
f"Expected relative date, got {self.current_token.type} "
f"at position {self.current_token.position}"
)
return RelativeDateNode(years=years, months=months, weeks=weeks, days=days)
def parse_weekday(self) -> WeekdayNode:
"""Parse a weekday specification."""
# Parse week scope (本, 上, 下)
scope = "current"
if self.current_token.type == TokenType.WEEK_SCOPE_CURRENT:
self.eat(TokenType.WEEK_SCOPE_CURRENT)
scope = "current"
elif self.current_token.type == TokenType.WEEK_SCOPE_LAST:
self.eat(TokenType.WEEK_SCOPE_LAST)
scope = "last"
elif self.current_token.type == TokenType.WEEK_SCOPE_NEXT:
self.eat(TokenType.WEEK_SCOPE_NEXT)
scope = "next"
# Parse weekday
weekday_map = {
TokenType.WEEKDAY_MONDAY: 0,
TokenType.WEEKDAY_TUESDAY: 1,
TokenType.WEEKDAY_WEDNESDAY: 2,
TokenType.WEEKDAY_THURSDAY: 3,
TokenType.WEEKDAY_FRIDAY: 4,
TokenType.WEEKDAY_SATURDAY: 5,
TokenType.WEEKDAY_SUNDAY: 6,
# CHINESE_NUMBER is listed only so the membership test below matches;
# its value (1=Monday ... 7=Sunday) is converted in the branch below
TokenType.CHINESE_NUMBER: None,
}
if self.current_token.type in weekday_map:
if self.current_token.type == TokenType.CHINESE_NUMBER:
# Handle numeric weekday (1=Monday, 2=Tuesday, etc.)
weekday_num = self.current_token.value
if 1 <= weekday_num <= 7:
weekday = weekday_num - 1 # Convert to 0-based index
self.eat(TokenType.CHINESE_NUMBER)
return WeekdayNode(weekday=weekday, scope=scope)
else:
raise ParserError(
f"Invalid weekday number: {weekday_num} "
f"at position {self.current_token.position}"
)
else:
weekday = weekday_map[self.current_token.type]
self.eat(self.current_token.type)
return WeekdayNode(weekday=weekday, scope=scope)
raise ParserError(
f"Expected weekday, got {self.current_token.type} "
f"at position {self.current_token.position}"
)
def parse_relative_time(self) -> RelativeTimeNode:
"""Parse a relative time specification."""
hours = 0.0
minutes = 0.0
seconds = 0.0
# Parse sequences of relative time expressions
while self.current_token.type in [
TokenType.INTEGER, TokenType.CHINESE_NUMBER,
TokenType.HALF, TokenType.QUARTER,
TokenType.RELATIVE_DIRECTION_FORWARD, TokenType.RELATIVE_DIRECTION_BACKWARD,
]:
# Handle 半小时
if self.current_token.type == TokenType.HALF:
self.eat(TokenType.HALF)
# Optional 个
if (self.current_token.type == TokenType.INTEGER and
self.current_token.value == "个"):
self.eat(TokenType.INTEGER)
# Optional 小时
if self.current_token.type == TokenType.HOUR:
self.eat(TokenType.HOUR)
hours += 0.5
# Check for direction
if self.current_token.type == TokenType.RELATIVE_DIRECTION_FORWARD:
self.eat(TokenType.RELATIVE_DIRECTION_FORWARD)
elif self.current_token.type == TokenType.RELATIVE_DIRECTION_BACKWARD:
self.eat(TokenType.RELATIVE_DIRECTION_BACKWARD)
hours -= 1.0  # undo the +0.5 above and apply -0.5, without negating earlier terms
continue
# Handle 一刻钟 (15 minutes)
if self.current_token.type == TokenType.QUARTER:
self.eat(TokenType.QUARTER)
# Optional 钟
if self.current_token.type == TokenType.ZHONG:
self.eat(TokenType.ZHONG)
minutes += 15
# Check for direction
if self.current_token.type == TokenType.RELATIVE_DIRECTION_FORWARD:
self.eat(TokenType.RELATIVE_DIRECTION_FORWARD)
elif self.current_token.type == TokenType.RELATIVE_DIRECTION_BACKWARD:
self.eat(TokenType.RELATIVE_DIRECTION_BACKWARD)
minutes -= 30  # undo the +15 above and apply -15, without negating earlier terms
continue
# Parse number if we have one
if self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]:
number_node = self.parse_number()
number_value = number_node.value
# Determine unit and direction
unit = None
direction = 1 # Forward by default
# Check for unit
if self.current_token.type == TokenType.HOUR:
self.eat(TokenType.HOUR)
# Optional 个
if (self.current_token.type == TokenType.INTEGER and
self.current_token.value == "个"):
self.eat(TokenType.INTEGER)
unit = "hour"
elif self.current_token.type == TokenType.MINUTE:
self.eat(TokenType.MINUTE)
unit = "minute"
elif self.current_token.type == TokenType.SECOND:
self.eat(TokenType.SECOND)
unit = "second"
elif self.current_token.type == TokenType.TIME_SEPARATOR:
# Handle "X点", "X分", "X秒" format
sep_value = self.current_token.value
self.eat(TokenType.TIME_SEPARATOR)
if sep_value == "点":
unit = "hour"
# Optional 钟
if self.current_token.type == TokenType.ZHONG:
self.eat(TokenType.ZHONG)
# If we have "X点" without a direction, this is likely an absolute time
# Check if there's a direction after
if not (self.current_token.type == TokenType.RELATIVE_DIRECTION_FORWARD or
self.current_token.type == TokenType.RELATIVE_DIRECTION_BACKWARD):
# This is probably an absolute time, not relative time
# Push back the number and break
break
elif sep_value == "分":
unit = "minute"
# Optional 钟
if self.current_token.type == TokenType.ZHONG:
self.eat(TokenType.ZHONG)
elif sep_value == "秒":
unit = "second"
else:
# If no unit specified, but we have a number followed by a direction,
# assume it's hours
if (self.current_token.type == TokenType.RELATIVE_DIRECTION_FORWARD or
self.current_token.type == TokenType.RELATIVE_DIRECTION_BACKWARD):
unit = "hour"
else:
# If no unit and no direction, this might not be a relative time expression
# Push the number back and break
# We can't easily push back, so let's break
break
# Check for direction (后/前)
if self.current_token.type == TokenType.RELATIVE_DIRECTION_FORWARD:
self.eat(TokenType.RELATIVE_DIRECTION_FORWARD)
direction = 1
elif self.current_token.type == TokenType.RELATIVE_DIRECTION_BACKWARD:
self.eat(TokenType.RELATIVE_DIRECTION_BACKWARD)
direction = -1
# Apply the value based on unit
if unit == "hour":
hours += number_value * direction
elif unit == "minute":
minutes += number_value * direction
elif unit == "second":
seconds += number_value * direction
continue
# If we still haven't handled the current token, break
break
return RelativeTimeNode(hours=hours, minutes=minutes, seconds=seconds)
def parse_time_expression(self) -> TimeExpressionNode:
"""Parse a complete time expression."""
date_node = None
time_node = None
relative_date_node = None
relative_time_node = None
weekday_node = None
# Parse different parts of the expression
while self.current_token.type != TokenType.EOF:
# Try to parse date first (absolute dates should take precedence)
if self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER]:
if date_node is None:
original_pos = self.pos
try:
date_node = self.parse_date()
continue
except ParserError:
# Reset position if parsing failed
self.pos = original_pos
pass
# Try to parse relative date
if self.current_token.type in [
TokenType.RELATIVE_TODAY, TokenType.RELATIVE_TOMORROW,
TokenType.RELATIVE_DAY_AFTER_TOMORROW, TokenType.RELATIVE_THREE_DAYS_AFTER_TOMORROW,
TokenType.RELATIVE_YESTERDAY, TokenType.RELATIVE_DAY_BEFORE_YESTERDAY,
TokenType.RELATIVE_THREE_DAYS_BEFORE_YESTERDAY,
TokenType.INTEGER, TokenType.CHINESE_NUMBER, # For patterns like "X年后", "X个月后", etc.
TokenType.RELATIVE_NEXT, TokenType.RELATIVE_LAST, TokenType.RELATIVE_THIS
]:
if relative_date_node is None:
original_pos = self.pos
try:
relative_date_node = self.parse_relative_date()
continue
except ParserError:
# Reset position if parsing failed
self.pos = original_pos
pass
# Try to parse relative time first (since it can have numbers)
if self.current_token.type in [
TokenType.INTEGER, TokenType.CHINESE_NUMBER,
TokenType.HALF, TokenType.QUARTER,
TokenType.RELATIVE_DIRECTION_FORWARD, TokenType.RELATIVE_DIRECTION_BACKWARD
]:
if relative_time_node is None:
original_pos = self.pos
try:
relative_time_node = self.parse_relative_time()
# Only continue if we actually parsed some relative time
if relative_time_node.hours != 0 or relative_time_node.minutes != 0 or relative_time_node.seconds != 0:
continue
else:
# If we didn't parse any relative time, reset position
self.pos = original_pos
except ParserError:
# Reset position if parsing failed
self.pos = original_pos
pass
# Try to parse time
if self.current_token.type in [TokenType.INTEGER, TokenType.CHINESE_NUMBER, TokenType.TIME_SEPARATOR, TokenType.PERIOD_AM, TokenType.PERIOD_PM]:
if time_node is None:
original_pos = self.pos
try:
time_node = self.parse_time()
continue
except ParserError:
# Reset position if parsing failed
self.pos = original_pos
pass
# Try to parse weekday
if self.current_token.type in [
TokenType.WEEK_SCOPE_CURRENT, TokenType.WEEK_SCOPE_LAST, TokenType.WEEK_SCOPE_NEXT,
TokenType.WEEKDAY_MONDAY, TokenType.WEEKDAY_TUESDAY, TokenType.WEEKDAY_WEDNESDAY,
TokenType.WEEKDAY_THURSDAY, TokenType.WEEKDAY_FRIDAY, TokenType.WEEKDAY_SATURDAY,
TokenType.WEEKDAY_SUNDAY
]:
if weekday_node is None:
original_pos = self.pos
try:
weekday_node = self.parse_weekday()
continue
except ParserError:
# Reset position if parsing failed
self.pos = original_pos
pass
# If we get here and couldn't parse anything, skip the token
self.pos += 1
return TimeExpressionNode(
date=date_node,
time=time_node,
relative_date=relative_date_node,
relative_time=relative_time_node,
weekday=weekday_node
)
def parse(self) -> TimeExpressionNode:
"""Parse the complete time expression and return the AST."""
return self.parse_time_expression()


@ -0,0 +1,71 @@
"""
Abstract Syntax Tree (AST) nodes for the time expression parser.
"""
from abc import ABC
from typing import Optional
from dataclasses import dataclass
@dataclass
class ASTNode(ABC):
"""Base class for all AST nodes."""
pass
@dataclass
class NumberNode(ASTNode):
"""Represents a numeric value."""
value: int
@dataclass
class DateNode(ASTNode):
"""Represents a date specification."""
year: Optional[ASTNode]
month: Optional[ASTNode]
day: Optional[ASTNode]
@dataclass
class TimeNode(ASTNode):
"""Represents a time specification."""
hour: Optional[ASTNode]
minute: Optional[ASTNode]
second: Optional[ASTNode]
is_24hour: bool = False
period: Optional[str] = None # AM or PM
@dataclass
class RelativeDateNode(ASTNode):
"""Represents a relative date specification."""
years: int = 0
months: int = 0
weeks: int = 0
days: int = 0
@dataclass
class RelativeTimeNode(ASTNode):
"""Represents a relative time specification."""
hours: float = 0.0
minutes: float = 0.0
seconds: float = 0.0
@dataclass
class WeekdayNode(ASTNode):
"""Represents a weekday specification."""
weekday: int # 0=Monday, 6=Sunday
scope: str # current, last, next
@dataclass
class TimeExpressionNode(ASTNode):
"""Represents a complete time expression."""
date: Optional[DateNode] = None
time: Optional[TimeNode] = None
relative_date: Optional[RelativeDateNode] = None
relative_time: Optional[RelativeTimeNode] = None
weekday: Optional[WeekdayNode] = None


@ -0,0 +1,95 @@
"""
Token definitions for the time parser.
"""
from enum import Enum
from typing import Union
from dataclasses import dataclass
class TokenType(Enum):
"""Types of tokens recognized by the lexer."""
# Numbers
INTEGER = "INTEGER"
CHINESE_NUMBER = "CHINESE_NUMBER"
# Time units
YEAR = "YEAR"
MONTH = "MONTH"
DAY = "DAY"
WEEK = "WEEK"
HOUR = "HOUR"
MINUTE = "MINUTE"
SECOND = "SECOND"
# Date separators
DATE_SEPARATOR = "DATE_SEPARATOR" # -, /, 年, 月, 日, 号
# Time separators
TIME_SEPARATOR = "TIME_SEPARATOR" # :, 点, 时, 分, 秒
# Period indicators
PERIOD_AM = "PERIOD_AM" # 上午, 早上, 早晨, etc.
PERIOD_PM = "PERIOD_PM" # 下午, 晚上, 中午, etc.
# Relative time
RELATIVE_TODAY = "RELATIVE_TODAY" # 今天, 今晚, 今早, etc.
RELATIVE_TOMORROW = "RELATIVE_TOMORROW" # 明天
RELATIVE_DAY_AFTER_TOMORROW = "RELATIVE_DAY_AFTER_TOMORROW" # 后天
RELATIVE_THREE_DAYS_AFTER_TOMORROW = "RELATIVE_THREE_DAYS_AFTER_TOMORROW" # 大后天
RELATIVE_YESTERDAY = "RELATIVE_YESTERDAY" # 昨天
RELATIVE_DAY_BEFORE_YESTERDAY = "RELATIVE_DAY_BEFORE_YESTERDAY" # 前天
RELATIVE_THREE_DAYS_BEFORE_YESTERDAY = "RELATIVE_THREE_DAYS_BEFORE_YESTERDAY" # 大前天
RELATIVE_DIRECTION_FORWARD = "RELATIVE_DIRECTION_FORWARD" # 后, 以后, 之后
RELATIVE_DIRECTION_BACKWARD = "RELATIVE_DIRECTION_BACKWARD" # 前, 以前, 之前
# Extended relative time
RELATIVE_NEXT = "RELATIVE_NEXT" # 下
RELATIVE_LAST = "RELATIVE_LAST" # 上, 去
RELATIVE_THIS = "RELATIVE_THIS" # 这, 本
# Week days
WEEKDAY_MONDAY = "WEEKDAY_MONDAY"
WEEKDAY_TUESDAY = "WEEKDAY_TUESDAY"
WEEKDAY_WEDNESDAY = "WEEKDAY_WEDNESDAY"
WEEKDAY_THURSDAY = "WEEKDAY_THURSDAY"
WEEKDAY_FRIDAY = "WEEKDAY_FRIDAY"
WEEKDAY_SATURDAY = "WEEKDAY_SATURDAY"
WEEKDAY_SUNDAY = "WEEKDAY_SUNDAY"
# Week scope
WEEK_SCOPE_CURRENT = "WEEK_SCOPE_CURRENT" # 本
WEEK_SCOPE_LAST = "WEEK_SCOPE_LAST" # 上
WEEK_SCOPE_NEXT = "WEEK_SCOPE_NEXT" # 下
# Special time markers
HALF = "HALF" # 半
QUARTER = "QUARTER" # 一刻
ZHENG = "ZHENG" # 整
ZHONG = "ZHONG" # 钟
# Student-friendly time expressions
EARLY_MORNING = "EARLY_MORNING" # 早X
LATE_NIGHT = "LATE_NIGHT" # 晚X
# Whitespace
WHITESPACE = "WHITESPACE"
# End of input
EOF = "EOF"
@dataclass
class Token:
"""Represents a single token from the lexer."""
type: TokenType
value: Union[str, int]
position: int
def __str__(self):
return f"Token({self.type.value}, {repr(self.value)}, {self.position})"
def __repr__(self):
return self.__str__()
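The `Token` dataclass above pairs a `TokenType` with its raw value and source position. A minimal self-contained sketch of how the lexer's output looks (only two enum members are re-declared here for illustration):

```python
from enum import Enum
from dataclasses import dataclass
from typing import Union

class TokenType(Enum):
    INTEGER = "INTEGER"
    DATE_SEPARATOR = "DATE_SEPARATOR"

@dataclass
class Token:
    type: TokenType
    value: Union[str, int]
    position: int

    def __str__(self):
        return f"Token({self.type.value}, {repr(self.value)}, {self.position})"

print(Token(TokenType.INTEGER, 2026, 0))  # Token(INTEGER, 2026, 0)
```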


@ -0,0 +1,369 @@
"""
Semantic analyzer for time expressions that evaluates the AST and produces datetime objects.
"""
import datetime
import calendar
from typing import Optional
from .ptime_ast import (
TimeExpressionNode, DateNode, TimeNode,
RelativeDateNode, RelativeTimeNode, WeekdayNode, NumberNode
)
from .err import TokenUnhandledException
class SemanticAnalyzer:
"""Semantic analyzer that evaluates time expression ASTs."""
def __init__(self, now: Optional[datetime.datetime] = None):
self.now = now or datetime.datetime.now()
def evaluate_number(self, node: NumberNode) -> int:
"""Evaluate a number node."""
return node.value
def evaluate_date(self, node: DateNode) -> datetime.date:
"""Evaluate a date node."""
year = self.now.year
month = 1
day = 1
if node.year is not None:
year = self.evaluate_number(node.year)
if node.month is not None:
month = self.evaluate_number(node.month)
if node.day is not None:
day = self.evaluate_number(node.day)
return datetime.date(year, month, day)
def evaluate_time(self, node: TimeNode) -> datetime.time:
"""Evaluate a time node."""
hour = 0
minute = 0
second = 0
if node.hour is not None:
hour = self.evaluate_number(node.hour)
if node.minute is not None:
minute = self.evaluate_number(node.minute)
if node.second is not None:
second = self.evaluate_number(node.second)
# Handle 24-hour vs 12-hour format
if not node.is_24hour and node.period is not None:
if node.period == "AM":
if hour == 12:
hour = 0
elif node.period == "PM":
if hour != 12 and hour <= 12:
hour += 12
# Validate time values
if not (0 <= hour <= 23):
raise TokenUnhandledException(f"Invalid hour: {hour}")
if not (0 <= minute <= 59):
raise TokenUnhandledException(f"Invalid minute: {minute}")
if not (0 <= second <= 59):
raise TokenUnhandledException(f"Invalid second: {second}")
return datetime.time(hour, minute, second)
def evaluate_relative_date(self, node: RelativeDateNode) -> datetime.timedelta:
"""Evaluate a relative date node."""
# Start with current time
result = self.now
# Special case: If weeks contains a target day (hacky way to pass target day info)
# This is for patterns like "下个月五号"
if node.weeks > 0 and node.weeks <= 31: # Valid day range
target_day = node.weeks
# Calculate the target month
if node.months != 0:
# Handle month arithmetic carefully
total_months = result.month + node.months - 1
new_year = result.year + total_months // 12
new_month = total_months % 12 + 1
# Handle day overflow (e.g., Jan 31 + 1 month = Feb 28/29)
max_day_in_target_month = calendar.monthrange(new_year, new_month)[1]
target_day = min(target_day, max_day_in_target_month)
try:
result = result.replace(year=new_year, month=new_month, day=target_day)
except ValueError:
# Handle edge cases
result = result.replace(year=new_year, month=new_month, day=max_day_in_target_month)
# Return the difference between the new date and the original date
return result - self.now
# Apply years
if node.years != 0:
# Handle year arithmetic carefully due to leap years
new_year = result.year + node.years
try:
result = result.replace(year=new_year)
except ValueError:
# Handle leap year edge case (Feb 29 -> Feb 28)
result = result.replace(year=new_year, month=2, day=28)
# Apply months
if node.months != 0:
# Check if this is a special marker for absolute month (negative offset)
if node.months < 0:
# This is an absolute month specification (e.g., from "明年五月")
absolute_month = node.months + 100
if 1 <= absolute_month <= 12:
result = result.replace(year=result.year, month=absolute_month, day=result.day)
else:
# Handle month arithmetic carefully
total_months = result.month + node.months - 1
new_year = result.year + total_months // 12
new_month = total_months % 12 + 1
# Handle day overflow (e.g., Jan 31 + 1 month = Feb 28/29)
new_day = min(result.day, calendar.monthrange(new_year, new_month)[1])
result = result.replace(year=new_year, month=new_month, day=new_day)
# Apply weeks and days
if node.weeks != 0 or node.days != 0:
delta_days = node.weeks * 7 + node.days
result = result + datetime.timedelta(days=delta_days)
return result - self.now
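The month-offset arithmetic in `evaluate_relative_date` can be exercised in isolation. This sketch reproduces the normalize-then-clamp steps above (the helper name `add_months` is illustrative, not part of the module):

```python
import calendar
import datetime

def add_months(d: datetime.date, months: int) -> datetime.date:
    # Normalize to a zero-based month count; floor division makes
    # negative offsets cross year boundaries correctly
    total_months = d.month + months - 1
    new_year = d.year + total_months // 12
    new_month = total_months % 12 + 1
    # Clamp the day so e.g. Jan 31 + 1 month lands on Feb 28/29
    new_day = min(d.day, calendar.monthrange(new_year, new_month)[1])
    return d.replace(year=new_year, month=new_month, day=new_day)

print(add_months(datetime.date(2026, 1, 31), 1))   # 2026-02-28
print(add_months(datetime.date(2026, 1, 15), -1))  # 2025-12-15
```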
def evaluate_relative_time(self, node: RelativeTimeNode) -> datetime.timedelta:
"""Evaluate a relative time node."""
# Convert all values to seconds for precise calculation
total_seconds = (
node.hours * 3600 +
node.minutes * 60 +
node.seconds
)
return datetime.timedelta(seconds=total_seconds)
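`evaluate_relative_time` is a plain unit conversion; a self-contained sketch (the node class is re-declared here so the example runs on its own):

```python
import datetime
from dataclasses import dataclass

@dataclass
class RelativeTimeNode:
    hours: float = 0.0
    minutes: float = 0.0
    seconds: float = 0.0

def to_timedelta(node: RelativeTimeNode) -> datetime.timedelta:
    # Convert everything to seconds so fractional hours (半小时 = 0.5h) stay exact
    return datetime.timedelta(
        seconds=node.hours * 3600 + node.minutes * 60 + node.seconds
    )

print(to_timedelta(RelativeTimeNode(hours=0.5)))  # 0:30:00 ("半小时后")
```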
def evaluate_weekday(self, node: WeekdayNode) -> datetime.timedelta:
"""Evaluate a weekday node."""
current_weekday = self.now.weekday() # 0=Monday, 6=Sunday
target_weekday = node.weekday
if node.scope == "current":
delta = target_weekday - current_weekday
elif node.scope == "last":
delta = target_weekday - current_weekday - 7
elif node.scope == "next":
delta = target_weekday - current_weekday + 7
else:
delta = target_weekday - current_weekday
return datetime.timedelta(days=delta)
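The scope arithmetic in `evaluate_weekday` can be checked standalone (the helper name is illustrative):

```python
import datetime

def weekday_offset(now: datetime.datetime, target: int, scope: str) -> datetime.timedelta:
    # target: 0=Monday ... 6=Sunday, same convention as datetime.weekday()
    delta = target - now.weekday()
    if scope == "last":
        delta -= 7
    elif scope == "next":
        delta += 7
    return datetime.timedelta(days=delta)

now = datetime.datetime(2026, 3, 4)  # a Wednesday
print(weekday_offset(now, 0, "next").days)  # 5 -> next Monday
```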
def infer_smart_time(self, hour: int, minute: int = 0, second: int = 0, base_time: Optional[datetime.datetime] = None) -> datetime.datetime:
"""
Smart time inference based on current time.
For example:
- If now is 14:30 and user says "3点", interpret as 15:00
- If now is 14:30 and user says "1点", interpret as next day 01:00
- If now is 8:00 and user says "3点", interpret as 15:00
- If now is 8:00 and user says "9点", interpret as 09:00
"""
# Use base_time if provided, otherwise use self.now
now = base_time if base_time is not None else self.now
# Handle 24-hour format directly (13-23)
if 13 <= hour <= 23:
candidate = now.replace(hour=hour, minute=minute, second=second, microsecond=0)
if candidate <= now:
candidate += datetime.timedelta(days=1)
return candidate
# Handle 12 (noon/midnight)
if hour == 12:
# For 12 specifically, we need to be more careful
# Try noon first
noon_candidate = now.replace(hour=12, minute=minute, second=second, microsecond=0)
midnight_candidate = now.replace(hour=0, minute=minute, second=second, microsecond=0)
# Special case: If it's afternoon or evening, "十二点" likely means next day midnight
if now.hour >= 12:
result = midnight_candidate + datetime.timedelta(days=1)
return result
# If noon is in the future and closer than midnight, use it
if noon_candidate > now and (midnight_candidate <= now or noon_candidate < midnight_candidate):
return noon_candidate
# If midnight is in the future, use it
elif midnight_candidate > now:
return midnight_candidate
# Both are in the past, use the closer one
elif noon_candidate > midnight_candidate:
return noon_candidate
# Otherwise use midnight next day
else:
result = midnight_candidate + datetime.timedelta(days=1)
return result
# Handle 1-11 (12-hour format)
if 1 <= hour <= 11:
# Calculate 12-hour format candidates
pm_hour = hour + 12
pm_candidate = now.replace(hour=pm_hour, minute=minute, second=second, microsecond=0)
am_candidate = now.replace(hour=hour, minute=minute, second=second, microsecond=0)
# Special case: If it's afternoon (12:00-18:00) and the hour is 1-6,
# user might mean either PM today or AM tomorrow.
# But if PM is in the future, that's more likely what they mean.
if 12 <= now.hour <= 18 and 1 <= hour <= 6:
if pm_candidate > now:
return pm_candidate
else:
# PM is in the past, so use AM tomorrow
result = am_candidate + datetime.timedelta(days=1)
return result
# Special case: If it's late evening (after 22:00) and user specifies early morning hours (1-5),
# user likely means next day early morning
if now.hour >= 22 and 1 <= hour <= 5:
result = am_candidate + datetime.timedelta(days=1)
return result
# Special case: In the morning (0-12:00)
if now.hour < 12:
# In the morning, for hours 1-11, generally prefer AM interpretation
# unless it's a very early hour that's much earlier than current time
# Only push to next day for very early hours (1-2) that are significantly earlier
if hour <= 2 and hour < now.hour and now.hour - hour >= 6:
# Very early morning hour that's significantly earlier, use next day
result = am_candidate + datetime.timedelta(days=1)
return result
else:
# For morning, generally prefer AM if it's in the future
if am_candidate > now:
return am_candidate
# If PM is in the future, use it
elif pm_candidate > now:
return pm_candidate
# Both are in the past, prefer AM if it's closer
elif am_candidate > pm_candidate:
return am_candidate
# Otherwise use PM next day
else:
result = pm_candidate + datetime.timedelta(days=1)
return result
else:
# General case: choose the one that's in the future and closer
if pm_candidate > now and (am_candidate <= now or pm_candidate < am_candidate):
return pm_candidate
elif am_candidate > now:
return am_candidate
# Both are in the past, use the closer one
elif pm_candidate > am_candidate:
return pm_candidate
# Otherwise use AM next day
else:
result = am_candidate + datetime.timedelta(days=1)
return result
# Handle 0 (midnight)
if hour == 0:
candidate = now.replace(hour=0, minute=minute, second=second, microsecond=0)
if candidate <= now:
candidate += datetime.timedelta(days=1)
return candidate
# Default case (should not happen with valid input)
candidate = now.replace(hour=hour, minute=minute, second=second, microsecond=0)
if candidate <= now:
candidate += datetime.timedelta(days=1)
return candidate
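The unambiguous 24-hour branches of `infer_smart_time` (hours 13-23 and 0) reduce to "roll to tomorrow if already past"; a self-contained sketch of just that rule:

```python
import datetime

def next_occurrence(now: datetime.datetime, hour: int, minute: int = 0) -> datetime.datetime:
    # Mirrors the 13-23 branch: take today's instance of the time,
    # rolling forward one day if it is already in the past
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += datetime.timedelta(days=1)
    return candidate

now = datetime.datetime(2026, 3, 7, 16, 0)
print(next_occurrence(now, 15))  # 2026-03-08 15:00:00
print(next_occurrence(now, 22))  # 2026-03-07 22:00:00
```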
def evaluate(self, node: TimeExpressionNode) -> datetime.datetime:
"""Evaluate a complete time expression node."""
result = self.now
# Apply relative date (should set time to 00:00:00 for dates)
if node.relative_date is not None:
delta = self.evaluate_relative_date(node.relative_date)
result = result + delta
# For relative dates like "今天", "明天", set time to 00:00:00
# But only for cases where we're dealing with days, not years/months
if (node.date is None and node.time is None and node.weekday is None and
node.relative_date.years == 0 and node.relative_date.months == 0):
result = result.replace(hour=0, minute=0, second=0, microsecond=0)
# Apply weekday
if node.weekday is not None:
delta = self.evaluate_weekday(node.weekday)
result = result + delta
# For weekdays, set time to 00:00:00
if node.date is None and node.time is None:
result = result.replace(hour=0, minute=0, second=0, microsecond=0)
# Apply relative time
if node.relative_time is not None:
delta = self.evaluate_relative_time(node.relative_time)
result = result + delta
# Apply absolute date
if node.date is not None:
date = self.evaluate_date(node.date)
result = result.replace(year=date.year, month=date.month, day=date.day)
# For absolute dates without time, set time to 00:00:00
if node.time is None:
result = result.replace(hour=0, minute=0, second=0, microsecond=0)
# Apply time
if node.time is not None:
time = self.evaluate_time(node.time)
# Handle explicit period or student-friendly expressions
if node.time.is_24hour or node.time.period is not None:
# Handle explicit period
if not node.time.is_24hour and node.time.period is not None:
hour = time.hour
minute = time.minute
second = time.second
skip_general_replacement = False
if node.time.period == "AM":
if hour == 12:
hour = 0
elif node.time.period == "PM":
# Special case: "晚上十二点" should be interpreted as next day 00:00
if hour == 12 and minute == 0 and second == 0:
# Move to next day at 00:00:00
result = result.replace(hour=0, minute=0, second=0, microsecond=0) + datetime.timedelta(days=1)
# Skip the general replacement since we've already handled it
skip_general_replacement = True
else:
# For other PM times, convert to 24-hour format
if hour < 12:
hour += 12
# Validate hour
if not (0 <= hour <= 23):
raise TokenUnhandledException(f"Invalid hour: {hour}")
# Only do general replacement if we haven't handled it specially
if not skip_general_replacement:
result = result.replace(hour=hour, minute=minute, second=second, microsecond=0)
else:
# Already in 24-hour format
result = result.replace(hour=time.hour, minute=time.minute, second=time.second, microsecond=0)
else:
# Use smart time inference for regular times
# But if we have an explicit date, treat the time as 24-hour format
if node.date is not None or node.relative_date is not None:
# For explicit dates, treat time as 24-hour format
result = result.replace(hour=time.hour, minute=time.minute or 0, second=time.second or 0, microsecond=0)
else:
# Use smart time inference for regular times
smart_time = self.infer_smart_time(time.hour, time.minute, time.second, base_time=result)
result = smart_time
return result


@ -0,0 +1,34 @@
from typing import Any
from loguru import logger
from nonebot_plugin_alconna import UniMessage
import playwright.async_api
from playwright.async_api import Page
from konabot.common.web_render import WebRenderer, konaweb
async def render_error_message(message: str) -> UniMessage[Any]:
"""
Render a text message as an error-report image.
Falls back to plain text if the web renderer is unreachable.
"""
async def page_function(page: Page):
await page.wait_for_function("typeof setContent === 'function'", timeout=3000)
await page.evaluate(
"""(message) => {return setContent(message);}""",
message,
)
try:
img_data = await WebRenderer.render(
url=konaweb("error_report"),
target="#main",
other_function=page_function,
)
return UniMessage.image(raw=img_data)
except (playwright.async_api.Error, ConnectionError) as e:
logger.warning("Failed to render the error image, falling back to plain text ERR={}", e)
return UniMessage.text(message)


@ -5,6 +5,7 @@ from pydantic import BaseModel
class Config(BaseModel):
module_web_render_weburl: str = "localhost:5173"
module_web_render_instance: str = ""
module_web_render_playwright_ws: str = ""
def get_instance_baseurl(self):
if self.module_web_render_instance:


@ -1,6 +1,7 @@
from abc import ABC, abstractmethod
import asyncio
import queue
from typing import Any, Callable, Coroutine
from typing import Any, Callable, Coroutine, Generic, TypeVar
from loguru import logger
from playwright.async_api import (
Page,
@ -8,9 +9,14 @@ from playwright.async_api import (
async_playwright,
Browser,
BrowserContext,
Error as PlaywrightError,
)
from .config import web_render_config
from playwright.async_api import ConsoleMessage, Page
T = TypeVar("T")
TFunction = Callable[[T], Coroutine[Any, Any, Any]]
PageFunction = Callable[[Page], Coroutine[Any, Any, Any]]
@ -22,23 +28,17 @@ class WebRenderer:
@classmethod
async def get_browser_instance(cls) -> "WebRendererInstance":
if cls.browser_pool.empty():
instance = await WebRendererInstance.create()
if web_render_config.module_web_render_playwright_ws:
instance = await RemotePlaywrightInstance.create(
web_render_config.module_web_render_playwright_ws
)
else:
instance = await LocalPlaywrightInstance.create()
cls.browser_pool.put(instance)
instance = cls.browser_pool.get()
cls.browser_pool.put(instance)
return instance
@classmethod
async def get_browser_context(cls) -> BrowserContext:
instance = await cls.get_browser_instance()
if id(instance) not in cls.context_pool:
context = await instance.browser.new_context()
cls.context_pool[id(instance)] = context
logger.debug(
f"Created new persistent browser context for WebRendererInstance {id(instance)}"
)
return cls.context_pool[id(instance)]
@classmethod
async def render(
cls,
@ -67,49 +67,6 @@ class WebRenderer:
url, target, params=params, other_function=other_function, timeout=timeout
)
@classmethod
async def render_persistent_page(
cls,
page_id: str,
url: str,
target: str,
params: dict = {},
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes:
"""
Render the given URL with a long-lived page and return a screenshot.
:param page_id: unique identifier for the page
:param url: target URL
:param target: render target, a CSS selector such as ".box" or "#main"
:param timeout: page-load timeout in seconds
:param params: URL query parameters
:param other_function: extra callback that receives the page object
:return: screenshot bytes
"""
logger.debug(
f"Requesting persistent render for page_id {page_id} at {url} targeting {target} with timeout {timeout}"
)
instance = await cls.get_browser_instance()
if page_id not in cls.page_pool:
context = await cls.get_browser_context()
page = await context.new_page()
cls.page_pool[page_id] = page
logger.debug(
f"Created new persistent page for page_id {page_id} using WebRendererInstance {id(instance)}"
)
page = cls.page_pool[page_id]
return await instance.render_with_page(
page,
url,
target,
params=params,
other_function=other_function,
timeout=timeout,
)
@classmethod
async def render_file(
cls,
@ -142,6 +99,75 @@ class WebRenderer:
timeout=timeout,
)
@classmethod
async def render_with_persistent_page(
cls,
page_id: str,
url: str,
target: str,
params: dict = {},
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes:
"""
Render using a long-lived page.
:param page_id: unique identifier for the page
:param url: target URL
:param target: render target, a CSS selector such as ".box" or "#main"
:param timeout: page-load timeout in seconds
:param params: URL query parameters
:param other_function: extra callback that receives the page object
:return: screenshot bytes
"""
instance = await cls.get_browser_instance()
logger.debug(
f"Using WebRendererInstance {id(instance)} to render with persistent page {page_id} targeting {target}"
)
return await instance.render_with_persistent_page(
page_id,
url,
target,
params=params,
other_function=other_function,
timeout=timeout,
)
@classmethod
async def get_persistent_page(cls, page_id: str, url: str) -> Page:
"""
Get a long-lived page; create, navigate, and cache a new one if it does not exist.
"""
if page_id in cls.page_pool:
return cls.page_pool[page_id]
async def on_console(msg: ConsoleMessage):
logger.debug(f"WEB CONSOLE {msg.text}")
instance = await cls.get_browser_instance()
# Remote and local instances create pages the same way
if isinstance(instance, (RemotePlaywrightInstance, LocalPlaywrightInstance)):
context = await instance.browser.new_context()
page = await context.new_page()
await page.goto(url)
cls.page_pool[page_id] = page
logger.debug(f"Created new persistent page for page_id {page_id}, navigated to {url}")
page.on('console', on_console)
return page
else:
raise NotImplementedError("Unsupported WebRendererInstance type")
@classmethod
async def close_persistent_page(cls, page_id: str) -> None:
"""
@ -156,38 +182,56 @@ class WebRenderer:
logger.debug(f"Closed and removed persistent page for page_id {page_id}")
class WebRendererInstance(ABC, Generic[T]):
@abstractmethod
async def render(
self,
url: str,
target: str,
index: int = 0,
params: dict[str, Any] | None = None,
other_function: TFunction | None = None,
timeout: int = 30,
) -> bytes: ...
@abstractmethod
async def render_file(
self,
file_path: str,
target: str,
index: int = 0,
params: dict[str, Any] | None = None,
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes: ...
@abstractmethod
async def render_with_persistent_page(
self,
page_id: str,
url: str,
target: str,
params: dict = {},
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes: ...
class PlaywrightInstance(WebRendererInstance[Page]):
def __init__(self) -> None:
super().__init__()
self.lock = asyncio.Lock()
@property
@abstractmethod
def browser(self) -> Browser: ...
async def render(
self,
url: str,
target: str,
index: int = 0,
params: dict[str, Any] | None = None,
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes:
@ -207,42 +251,41 @@ class WebRendererInstance:
context = await self.browser.new_context()
page = await context.new_page()
screenshot = await self.inner_render(
page, url, target, index, params or {}, other_function, timeout
)
await page.close()
await context.close()
return screenshot
async def render_with_page(
self,
page: Page,
url: str,
target: str,
index: int = 0,
params: dict = {},
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes:
async with self.lock:
screenshot = await self.inner_render(
page, url, target, index, params, other_function, timeout
)
return screenshot
async def render_file(
self,
file_path: str,
target: str,
index: int = 0,
params: dict[str, Any] | None = None,
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes:
file_path = "file:///" + str(file_path).replace("\\", "/")
return await self.render(
file_path, target, index, params or {}, other_function, timeout
)
async def render_with_persistent_page(
self,
page_id: str,
url: str,
target: str,
params: dict = {},
other_function: PageFunction | None = None,
timeout: int = 30,
) -> bytes:
page = await WebRenderer.get_persistent_page(page_id, url)
screenshot = await self.inner_render(
page, url, target, 0, params, other_function, timeout
)
return screenshot
async def inner_render(
self,
page: Page,
@ -276,6 +319,85 @@ class WebRendererInstance:
logger.debug("Screenshot taken successfully")
return screenshot
class LocalPlaywrightInstance(PlaywrightInstance):
def __init__(self):
self._playwright: Playwright | None = None
self._browser: Browser | None = None
super().__init__()
@property
def playwright(self) -> Playwright:
assert self._playwright is not None
return self._playwright
@property
def browser(self) -> Browser:
assert self._browser is not None
return self._browser
async def init(self):
self._playwright = await async_playwright().start()
self._browser = await self.playwright.chromium.launch(headless=True)
@classmethod
async def create(cls) -> "WebRendererInstance":
instance = cls()
await instance.init()
return instance
async def close(self):
await self.browser.close()
await self.playwright.stop()
class RemotePlaywrightInstance(PlaywrightInstance):
def __init__(self, ws_endpoint: str) -> None:
self._playwright: Playwright | None = None
self._browser: Browser | None = None
self._ws_endpoint = ws_endpoint
super().__init__()
@property
def playwright(self) -> Playwright:
assert self._playwright is not None, "Playwright must be initialized by calling init()."
return self._playwright
@property
def browser(self) -> Browser:
assert self._browser is not None, "Browser must be connected by calling init()."
return self._browser
async def init(self):
logger.info(f"尝试连接远程 Playwright 服务器: {self._ws_endpoint}")
self._playwright = await async_playwright().start()
try:
self._browser = await self.playwright.chromium.connect(
self._ws_endpoint
)
logger.info("成功连接到远程 Playwright 服务器。")
except PlaywrightError as e:
await self.playwright.stop()
raise ConnectionError(
f"无法连接到远程 Playwright 服务器 ({self._ws_endpoint}){e}"
) from e
@classmethod
async def create(cls, ws_endpoint: str) -> "RemotePlaywrightInstance":
"""
创建并初始化远程 Playwright 实例的工厂方法。
"""
instance = cls(ws_endpoint)
await instance.init()
return instance
async def close(self):
"""
断开与远程浏览器的连接并停止本地 Playwright 实例。
"""
if self._browser:
await self.browser.close()
if self._playwright:
await self.playwright.stop()
logger.info("已断开远程连接,本地 Playwright 实例已停止。")


@ -1,4 +1,4 @@
# 关于「中间答案」或者「提示」
在 KonaPH 中,当有人发送「提交答案 答案」时,会检查答案是否符合你设置的中间答案的 pattern。这个 pattern 可以有两种方式:


@ -1,2 +1,3 @@
# 关于罗文和洛温
AdoreLowen 希望和洛温阿特金森区分,所以最好就不要叫他洛温了!此方 BOT 会在一些群提醒叫错了的人。


@ -1,45 +1,48 @@
# 指令介绍
`is_admin` - 用于判断当前事件是否来自管理员的内部权限校验函数
# 格式
```python
from konabot.common.nb.is_admin import is_admin
from nonebot import on
from nonebot.adapters import Event
from loguru import logger

@on().handle()
async def _(event: Event):
    if is_admin(event):
        logger.info("管理员发送了消息")
```
# 说明
is_admin 是 Bot 内部用于权限控制的核心函数:根据事件来源(QQ、Discord、控制台)及插件配置,判断触发事件的用户或群组是否具有管理员权限。
支持的适配器与判定逻辑:
- OneBot V11(QQ):
  - 若用户 ID 在配置项 admin_qq_account 中,则视为管理员
  - 若为群聊消息,且群 ID 在配置项 admin_qq_group 中,则视为管理员
- Discord:
  - 若频道 ID 在配置项 admin_discord_channel 中,则视为管理员
  - 若用户 ID 在配置项 admin_discord_account 中,则视为管理员
- Console(控制台):
  - 所有控制台输入均默认视为管理员操作,自动返回 True
# 配置项(位于插件配置中)
- `ADMIN_QQ_GROUP`: `list[int]`
  - 允许的管理员 QQ 群 ID 列表
- `ADMIN_QQ_ACCOUNT`: `list[int]`
  - 允许的管理员 QQ 账号 ID 列表
- `ADMIN_DISCORD_CHANNEL`: `list[int]`
  - 允许的管理员 Discord 频道 ID 列表
- `ADMIN_DISCORD_ACCOUNT`: `list[int]`
  - 允许的管理员 Discord 用户 ID 列表
# 注意事项
- 若未在配置文件中设置任何管理员 ID,该函数对所有非控制台事件返回 False
- 控制台事件始终拥有管理员权限,便于本地调试与运维


@ -0,0 +1,212 @@
# 指令介绍
`konaperm` - 用于查看和修改 Bot 内部权限系统记录的管理员指令
## 权限要求
只有拥有 `admin` 权限的主体才能使用本指令。
## 格式
```text
konaperm list <platform> <entity_type> <external_id> [page]
konaperm get <platform> <entity_type> <external_id> <perm>
konaperm set <platform> <entity_type> <external_id> <perm> <val>
```
## 子命令说明
### `list`
列出指定对象及其继承链上的显式权限记录,按分页输出。
参数:
- `platform` 平台名,如 `ob11`、`discord`、`sys`
- `entity_type` 对象类型,如 `user`、`group`、`global`
- `external_id` 平台侧对象 ID全局对象通常写 `global`
- `page` 页码,可省略,默认 `1`
### `get`
查询某个对象对指定权限的最终判断结果,并说明它是从哪一层继承来的。
参数:
- `platform`
- `entity_type`
- `external_id`
- `perm` 权限键,如 `admin`、`plugin.xxx.use`
### `set`
为指定对象写入显式权限。
参数:
- `platform`
- `entity_type`
- `external_id`
- `perm` 权限键
- `val` 设置值
`val` 支持以下写法:
- 允许:`y` `yes` `allow` `true` `t`
- 拒绝:`n` `no` `deny` `false` `f`
- 清除:`null` `none`
其中:
- 允许 表示显式授予该权限
- 拒绝 表示显式禁止该权限
- 清除 表示删除该层的显式设置,重新回退到继承链判断
## 对象格式
本指令操作的对象由三段组成:
```text
<platform>.<entity_type>.<external_id>
```
例如:
- `ob11.user.123456789`
- `ob11.group.987654321`
- `sys.global.global`
## 当前支持的 `PermEntity` 值
以下内容按当前实现整理,便于手工查询和设置权限。
### `sys`
- `sys.global.global`
这是系统总兜底对象。
### `ob11`
- `ob11.global.global`
- `ob11.group.<group_id>`
- `ob11.user.<user_id>`
常见场景:
- 给整个 OneBot V11 平台统一授权:`ob11.global.global`
- 给某个 QQ 群授权:`ob11.group.群号`
- 给某个 QQ 用户授权:`ob11.user.QQ号`
### `discord`
- `discord.global.global`
- `discord.guild.<guild_id>`
- `discord.channel.<channel_id>`
- `discord.user.<user_id>`
常见场景:
- 给整个 Discord 平台统一授权:`discord.global.global`
- 给某个服务器授权:`discord.guild.服务器ID`
- 给某个频道授权:`discord.channel.频道ID`
- 给某个用户授权:`discord.user.用户ID`
### `minecraft`
- `minecraft.global.global`
- `minecraft.server.<server_name>`
- `minecraft.player.<player_uuid_hex>`
常见场景:
- 给整个 Minecraft 平台统一授权:`minecraft.global.global`
- 给某个服务器授权:`minecraft.server.服务器名`
- 给某个玩家授权:`minecraft.player.玩家UUID的hex`
### `console`
- `console.global.global`
- `console.channel.<channel_id>`
- `console.user.<user_id>`
### 快速参考
```text
sys.global.global
ob11.global.global
ob11.group.<group_id>
ob11.user.<user_id>
discord.global.global
discord.guild.<guild_id>
discord.channel.<channel_id>
discord.user.<user_id>
minecraft.global.global
minecraft.server.<server_name>
minecraft.player.<player_uuid_hex>
console.global.global
console.channel.<channel_id>
console.user.<user_id>
```
## 权限继承
权限不是只看当前对象,还会按继承链回退。
例如对 `ob11.user.123456` 查询时,通常会从更具体的对象一路回退到:
1. 当前用户
2. 平台全局对象
3. 系统全局对象
权限键本身也支持逐级回退。比如查询 `plugin.demo.use` 时,可能依次命中:
1. `plugin.demo.use`
2. `plugin.demo`
3. `plugin`
4. `*`
所以 `get` 返回的结果可能来自更宽泛的权限键,或更上层的继承对象。
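权限键的逐级回退顺序可以用下面这个极简的 Python 示意来理解(假设性实现,仅用于说明回退顺序,并非项目源码;函数名 `fallback_keys` 为本文虚构):

```python
def fallback_keys(perm: str) -> list[str]:
    """按从具体到宽泛的顺序,列出一个权限键的全部回退候选。"""
    parts = perm.split(".")
    keys = [".".join(parts[:i]) for i in range(len(parts), 0, -1)]
    keys.append("*")  # 最后兜底到通配权限
    return keys

# fallback_keys("plugin.demo.use")
# → ['plugin.demo.use', 'plugin.demo', 'plugin', '*']
```

实际判断时,再把这条权限键回退链叠加到对象的继承链上,逐层查找第一条显式记录。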
## 示例
```text
konaperm list ob11 user 123456
```
查看 `ob11.user.123456` 及其继承链上的权限记录第一页。
```text
konaperm get ob11 user 123456 admin
```
查看该用户最终是否拥有 `admin` 权限,以及命中来源。
```text
konaperm set ob11 user 123456 admin allow
```
显式授予该用户 `admin` 权限。
```text
konaperm set ob11 user 123456 admin deny
```
显式拒绝该用户 `admin` 权限。
```text
konaperm set ob11 user 123456 admin none
```
删除该用户这一层对 `admin` 的显式设置,恢复继承判断。
## 注意事项
- 这是系统级管理指令,误操作可能直接影响其他插件的权限控制。
- `list` 只列出显式记录;没有显示出来不代表最终一定无权限,可能是从上层继承。
- `get` 显示的是最终命中的结果,比 `list` 更适合排查“为什么有/没有某个权限”。
- 对 `admin` 或 `*` 这类高影响权限做修改前,建议先确认对象是否写对。


@ -1,4 +1,5 @@
# 指令介绍
`konaph` - KonaBot 的 PuzzleHunt 管理工具
详细介绍请直接输入 konaph 获取使用指引(该指令权限仅对部分人开放。如果你有权限的话才有响应。建议在此方 BOT 私聊使用该指令。)


@ -0,0 +1,4 @@
# 宾几人
查询 Bingo 有几个人。直接发送给 Bot 即可。


@ -0,0 +1,38 @@
# Celeste
爬山小游戏,移植自 Ccleste,是 Celeste Classic(即 PICO-8 版)。
使用 `wasdxc` 和数字进行操作。
## 操作说明
`wsad` 是上下左右摇杆方向,或者是方向键。`c` 是跳跃键,`x` 是冲刺键。
使用空格分隔每一个操作,每个操作持续一帧。如果后面跟着数字,则持续那么多帧。
### 例子 1
```
xc 180
```
按下 xc 一帧,然后空置 180 帧。
### 例子 2
```
d10 cd d10 xdw d20
```
向右走 10 帧,向右跳一帧,再继续按下右 10 帧,按下向右上冲刺一帧,再按下右 20 帧。
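上面的操作串可以按「按键 + 可选帧数」逐个 token 解析。下面是一个假设性的解析示意(非 Bot 源码,函数名为虚构):

```python
def parse_ops(script: str) -> list[tuple[str, int]]:
    """把 "d10 cd d10" 这样的操作串解析为 (按键组合, 持续帧数) 列表。"""
    ops = []
    for token in script.split():
        keys = token.rstrip("0123456789")   # 前半部分是按键组合
        digits = token[len(keys):]          # 结尾的数字是持续帧数
        ops.append((keys, int(digits) if digits else 1))
    return ops

# parse_ops("xc 180") → [('xc', 1), ('', 180)],其中空按键表示空置
```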
## 指令使用说明
直接说 `celeste` 会开启一个新的游戏。但是,你需要在后面跟有操作,才能够渲染 gif 图出来。
一个常见的开始操作是直接发送 `celeste xc 130`,即按下 xc 两个按键触发 PICO-8 版的开始游戏,然后等待 130 帧动画播放完毕。
对于一个已经存在而且时间不是非常久远的 gif 图,只要是由 bot 自己发送出来的,就可以在它的基础上继续游戏。回复这条消息,可以继续游戏。
一种很常见的技巧是回复一个已经存在的 gif 图 `celeste 1`,此时会空操作一帧并且渲染画面。你可以用这种方法查看一个 gif 图的游戏目前的状态。

konabot/docs/user/fx.txt Normal file

@ -0,0 +1,107 @@
## 指令介绍
`fx` - 用于对图片应用各种滤镜效果的指令
## 格式
```
fx [滤镜名称] <参数1> <参数2> ...
```
## 示例
- `fx 模糊`
- `fx 阈值 150`
- `fx 缩放 2.0`
- `fx 色彩 1.8`
- `fx 色键 rgb(0,255,0) 50`
## 可用滤镜列表
### 基础滤镜
* ```fx 轮廓```
* ```fx 锐化```
* ```fx 边缘增强```
* ```fx 浮雕```
* ```fx 查找边缘```
* ```fx 平滑```
* ```fx 暗角 <半径=1.5>```
* ```fx 发光 <强度=0.5> <模糊半径=15>```
* ```fx 噪点 <数量=0.05>```
* ```fx 素描```
* ```fx 阴影 <偏移量X=10> <偏移量Y=10> <模糊量=10> <不透明度=0.5> <阴影颜色=black>```
### 模糊滤镜
* ```fx 模糊 <半径=10>```
* ```fx 马赛克 <像素大小=10>```
* ```fx 径向模糊 <强度=3.0> <采样量=6>```
* ```fx 旋转模糊 <强度=30.0> <采样量=6>```
* ```fx 方向模糊 <角度=0.0> <距离=20> <采样量=6>```
* ```fx 缩放模糊 <强度=0.1> <采样量=6>```
* ```fx 边缘模糊 <半径=10.0>```
### 色彩处理滤镜
* ```fx 反色```
* ```fx 黑白```
* ```fx 阈值 <阈值=128>```
* ```fx 对比度 <因子=1.5>```
* ```fx 亮度 <因子=1.5>```
* ```fx 色彩 <因子=1.5>```
* ```fx 色调 <颜色="rgb(255,0,0)">```
* ```fx RGB分离 <偏移量=5>```
* ```fx 叠加颜色 <颜色列表=[rgb(255,0,0)|(0,0)+rgb(0,255,0)|(0,100)+rgb(0,0,255)|(50,100)]> <叠加模式=overlay>```
* ```fx 像素抖动 <最大偏移量=2>```
* ```fx 半调 <半径=5>```
* ```fx 描边 <半径=5> <颜色=black>```
* ```fx 形状描边 <半径=5> <颜色=black> <粗糙度=None>```
### 几何变换滤镜
* ```fx 平移 <x偏移量=10> <y偏移量=10>```
* ```fx 缩放 <比例X=1.5> <比例Y=None>```
* ```fx 旋转 <角度=45>```
* ```fx 透视变换 <变换矩阵>```
* ```fx 裁剪 <左=0> <上=0> <右=100> <下=100>(百分比)```
* ```fx 拓展边缘 <拓展量=10>```
* ```fx 波纹 <振幅=5> <波长=20>```
* ```fx 光学补偿 <数量=100> <反转=false>```
* ```fx 球面化 <强度=0.5>```
* ```fx 镜像 <角度=90>```
* ```fx 水平翻转```
* ```fx 垂直翻转```
* ```fx 复制 <目标位置=(100,100)> <缩放=1.0> <源区域=(0,0,100,100)>(百分比)```
### 特殊效果滤镜
* ```fx 设置通道 <通道=A>```
* 可用 R、G、B、A。
* ```fx 设置遮罩```
* ```fx 色键 <目标颜色="rgb(255,0,0)"> <容差=60>```
* ```fx 晃动 <最大偏移量=5> <运动模糊=False>```
* ```fx 动图 <帧率=10>```
### 多图像处理器
* ```fx 存入图像 <目标名称>```
* 目标名称是图像的代名词,图像最长可存 12 小时,如果公用容量满了图像也会被删除。
* 该项仅可于首项使用。
* ```fx 读取图像 <目标名称>```
* 该项仅可于首项使用。
* ```fx 暂存图像```
* 此项默认插入存储在暂存列表中第一张图像的后面。
* ```fx 交换图像 <交换项=2> <交换项=1>```
* ```fx 删除图像 <删除索引=1>```
* ```fx 选择图像 <目标索引=2>```
### 多图像混合
* ```fx 混合图像 <模式=normal> <alpha=0.5>```
* ```fx 覆盖图像```
### 生成类
* ```fx 覆加颜色 <颜色列表=[rgb(255,0,0)|(0,0)+rgb(0,255,0)|(0,100)+rgb(0,0,255)|(50,100)]>```
* ```fx 生成图层 <宽度=512> <高度=512>```
* ```fx 生成文本 <文本内容=请输入文本> <字体大小=32> <文字颜色=black> <字体文件=HarmonyOS_Sans_SC_Regular.ttf>```
## 颜色名称支持
- **格式**:颜色列表采用 ```[颜色|位置+颜色|位置+颜色|位置]``` 的格式,位置是形如```(x百分比,y百分比)```的元组。
- **基本颜色**:红、绿、蓝、黄、紫、黑、白、橙、粉、灰、青、靛、棕
- **修饰词**:浅、深、亮、暗(可组合使用,如`浅红`、`深蓝`
- **RGB格式**`rgb(255,0,0)`、`rgb(0,255,0)`、`(255,0,0)` 等
- **HEX格式**`#66ccff`等
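上述几种颜色写法的解析逻辑,可以用一个假设性的 Python 小函数来示意(非插件源码,仅覆盖 RGB 与 HEX 两类写法,不含中文颜色名):

```python
import re

def parse_color(s: str) -> tuple[int, int, int]:
    """解析 "rgb(r,g,b)"、"(r,g,b)" 与 "#rrggbb" 三种写法为 RGB 三元组。"""
    s = s.strip()
    if s.startswith("#") and len(s) == 7:
        # "#66ccff" → 每两位十六进制对应一个通道
        return tuple(int(s[i:i + 2], 16) for i in range(1, 7, 2))
    m = re.fullmatch(r"(?:rgb)?\((\d+),\s*(\d+),\s*(\d+)\)", s)
    if m:
        return tuple(int(g) for g in m.groups())
    raise ValueError(f"无法解析颜色: {s}")

# parse_color("#66ccff") → (102, 204, 255)
```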


@ -1,59 +1,83 @@
# giftool - 对 GIF 动图进行裁剪、抽帧等处理
## 格式
```bash
giftool [图片] [选项]
```
## 示例
- **回复一张 GIF 并发送:**
  ```bash
  giftool --ss 1.5 -t 2.0
  ```
  从 1.5 秒处开始,截取 2 秒长度的片段。
- ```bash
  giftool [图片] --ss 0:10 --to 0:15
  ```
  截取从 10 秒到 15 秒之间的片段(支持 `MM:SS` 或 `HH:MM:SS` 格式)。
- ```bash
  giftool [图片] --frames:v 10
  ```
  将整张 GIF 均匀抽帧,最终保留 10 帧。
- ```bash
  giftool [图片] --ss 2 --frames:v 5
  ```
  从第 2 秒开始截取,并将结果抽帧为 5 帧。
## 参数说明
### 图片(必需)
- 必须是 GIF 动图。
- 支持直接附带图片,或回复一条含 GIF 的消息后使用指令。
### `--ss <时间戳>`(可选)
- 指定开始时间(单位:秒),可使用以下格式:
- 纯数字(如 `1.5` 表示 1.5 秒)
- 分秒格式(如 `1:30` 表示 1 分 30 秒)
- 时分秒格式(如 `0:1:30` 表示 1 分 30 秒)
- 默认从开头开始0 秒)。
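这三种时间写法可以统一换算成秒。下面是一个假设性的换算示意(非插件源码):

```python
def parse_timestamp(s: str) -> float:
    """把 "1.5"、"1:30"、"0:1:30" 统一换算为秒数。"""
    seconds = 0.0
    for part in s.split(":"):
        seconds = seconds * 60 + float(part)  # 每多一段,前面的值按 60 进位
    return seconds

# parse_timestamp("1:30") → 90.0
```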
### `-t <持续时间>`(可选)
- 指定截取的持续时间(单位:秒),格式同 `--ss`。
- 与 `--ss` 配合使用:截取 `[ss, ss + t]` 区间。
- **不能与 `--to` 同时使用。**
### `--to <时间戳>`(可选)
- 指定结束时间(单位:秒),格式同 `--ss`。
- 与 `--ss` 配合使用:截取 `[ss, to]` 区间。
- **不能与 `-t` 同时使用。**
### `--frames:v <帧数>`(可选)
- 对截取后的片段进行均匀抽帧,保留指定数量的帧。
- 帧数必须为正整数(> 0
- 若原始帧数 ≤ 指定帧数,则保留全部帧。
### `--speed <速度>`(可选)
- 调整 GIF 图的速度。若为负数,则代表倒放。
## 使用方式
1. 发送指令前,请确保:
- 消息中附带一张 GIF 动图,**或**
- 回复一条包含 GIF 动图的消息后再发送指令。
2. 插件会自动:
- 解析 GIF 的每一帧及其持续时间duration
- 根据时间参数转换为帧索引进行裁剪
- 如指定抽帧,则对裁剪后的片段均匀采样
- 生成新的 GIF 并保持原始循环设置(`loop=0`


@ -0,0 +1,10 @@
# 指令介绍
根据文字生成 k8x12S
> 「现在还不知道k8x12S是什么的可以开除界隈籍了」—— Louis, 2025/12/31
## 使用指南
`k8x12S 安心をしてください`


@ -1,20 +1,33 @@
# 指令介绍
`man` - 用于展示此方 BOT 使用手册的指令
## 格式
```
man 文档类型
man [文档类型] <指令>
```
## 示例
- `man`
  查看所有有文档的指令清单
- `man 3`
  列举所有可读文档的库函数清单
- `man 喵`
  查看指令「喵」的使用说明
- `man 8 out`
  查看管理员指令「out」的使用说明
## 文档类型
文档类型用来区分同一指令在不同场景下的用法。你可以使用数字编号进行筛选。分为以下种类:
- **1** 用户态指令:用于日常使用的指令
- **3** 库函数指令:用于 Bot 开发用的函数查询
- **7** 概念指令:用于概念解释
- **8** 系统指令:仅管理员可用


@ -1,15 +1,16 @@
## 指令介绍
**`ntfy`** - 配置使用 [ntfy](https://ntfy.sh/) 来更好地为你通知此方 BOT 的待办事项。
## 指令示例
- **`ntfy 创建`**
  创建一个随机的 ntfy 订阅主题来提醒待办。此方 Bot 将会给你使用指引。你可以前往 [https://ntfy.sh/](https://ntfy.sh/) 官网下载 ntfy APP,或者使用网页版 ntfy。
- **`ntfy 创建 kagami-notice`**
  创建一个名称包含 `kagami-notice` 的 ntfy 订阅主题。
- **`ntfy 删除`**
  清除配置,不再使用 ntfy 向你发送通知。
## 另见
[`提醒我(1)`](#) [`查询提醒(1)`](#) [`删除提醒(1)`](#)


@ -1,21 +1,39 @@
# 指令介绍
`openssl rand` — 用于生成指定长度的加密安全随机数据。
## 格式
```bash
openssl rand <模式> <字节数>
```
## 示例
- ```bash
  openssl rand -hex 16
  ```
  生成 16 字节的十六进制随机数。
- ```bash
  openssl rand -base64 32
  ```
  生成 32 字节并以 Base64 编码输出的随机数据。
## 说明
该指令使用 Python 的 `secrets` 模块生成加密安全的随机字节,并支持以下输出格式:
- 十六进制(`-hex`)
- Base64 编码(`-base64`)
## 参数说明
### 模式(mode)
- `-hex`:以十六进制字符串形式输出随机数据
- `-base64`:以 Base64 编码字符串形式输出随机数据
### 字节数(num)
- 必须为正整数
- 最大支持 256 字节
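该指令的核心逻辑大致如下(假设性示意,非 Bot 源码;仅演示 `secrets` 随机字节与两种编码的用法):

```python
import base64
import secrets

def openssl_rand(mode: str, num: int) -> str:
    """生成 num 字节的加密安全随机数据,并按指定模式编码输出。"""
    if not 0 < num <= 256:
        raise ValueError("字节数必须为 1~256 的正整数")
    data = secrets.token_bytes(num)  # 加密安全的随机字节
    if mode == "-hex":
        return data.hex()
    if mode == "-base64":
        return base64.b64encode(data).decode("ascii")
    raise ValueError(f"未知模式: {mode}")

# openssl_rand("-hex", 16) 返回长度为 32 的十六进制字符串
```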


@ -1,47 +1,55 @@
# 指令介绍
`shadertool` - 使用 SkSL(Skia Shader Language)代码实时渲染并生成 GIF 动画
## 格式
```bash
shadertool [选项] <SkSL 代码>
```
## 示例
```bash
shadertool """
uniform float u_time;
uniform float2 u_resolution;

half4 main(float2 coord) {
    return half4(
        1.0,
        sin((coord.y / u_resolution.y + u_time) * 3.1415926 * 2) * 0.5 + 0.5,
        coord.x / u_resolution.x,
        1.0
    );
}
"""
```
## 参数说明
### SkSL 代码(必填)
- **类型**:字符串(建议用英文双引号包裹)
- **内容**:符合 SkSL 语法的片段着色器代码,必须包含 `main` 函数,并返回 `half4` 类型的颜色值
- **注意**:插件会自动去除代码首尾的单引号或双引号,便于命令行输入。
### `--width <整数>`(可选)
- **默认值**:`320`
- **作用**:输出 GIF 的宽度(像素),必须大于 0。
### `--height <整数>`(可选)
- **默认值**:`180`
- **作用**:输出 GIF 的高度(像素),必须大于 0。
### `--duration <浮点数>`(可选)
- **默认值**:`1.0`
- **作用**:动画总时长(秒),必须大于 0。
- **限制**:`duration × fps` 必须 ≥ 1 且 ≤ 100(即至少 1 帧,最多 100 帧)。
### `--fps <浮点数>`(可选)
- **默认值**:`15.0`
- **作用**:每秒帧数,控制动画流畅度,必须大于 0。
- **常见值**:
  - `10`:低配流畅
  - `15`:默认
  - `24` / `30`:电影/视频级流畅度
## 使用方式
直接在群聊或私聊中发送 `shadertool` 指令,附上合法的 SkSL 代码即可。


@ -0,0 +1,174 @@
# 文字处理机器人使用手册(小白友好版)
欢迎使用文字处理机器人!你不需要懂编程,只要会打字,就能用它完成各种文字操作——比如加密、解密、打乱顺序、进制转换、排版整理等。
---
## 一、基础演示
在 QQ 群里这样使用:
1. **直接输入命令**(适合短文本):
```
/textfx reverse 你好世界
```
→ 机器人回复:`界世好你`
2. **先发一段文字,再用命令处理它**(适合长文本):
- 先发送:`Hello, World!`
- 回复这条消息,输入:
```
/textfx b64 encode
```
→ 机器人返回:`SGVsbG8sIFdvcmxkIQ==`
> 命令可写为 `/textfx`、`/处理文字` 或 `/处理文本`。
> 若不回复消息,命令会处理当前行后面的文本。
---
## 二、流水线语法(超简单)
- 用 `|` 连接多个操作,前一个的输出自动作为后一个的输入。
- 用 `;` 分隔多条独立指令,它们各自产生输出,最终合并显示。
- 用 `>` 或 `>>` 把结果保存起来(见下文),被重定向的指令不会产生输出。
**例子**:把"HELLO"先反转,再转成摩斯电码:(转换为摩斯电码功能暂未实现)
```
textfx reverse HELLO | morse en
```
→ 输出:`--- .-.. .-.. . ....`
**例子**:多条指令各自输出:
```
textfx echo 你好; echo 世界
```
→ 输出:
```
你好
世界
```
**例子**:重定向的指令不输出,其余正常输出:
```
textfx echo 1; echo 2 > a; echo 3
```
→ 输出:
```
1
3
```
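上述 `;` 与 `|` 的语法可以用一个假设性的拆分示意来理解(非机器人源码,未处理 `>`/`>>` 重定向):

```python
def split_pipeline(cmdline: str) -> list[list[str]]:
    """先按 ';' 拆成独立指令,再把每条指令按 '|' 拆成流水线各段。"""
    return [
        [seg.strip() for seg in stmt.split("|")]
        for stmt in cmdline.split(";")
    ]

# split_pipeline("reverse HELLO | morse en") → [['reverse HELLO', 'morse en']]
```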
---
## 三、功能清单(含示例)
### reverse或 rev、反转
反转文字。
示例:`/textfx reverse 爱你一万年` → `年万一你爱`
### b64或 base64
Base64 编码或解码。
示例:`/textfx b64 encode 你好` → `5L2g5aW9`
示例:`/textfx b64 decode 5L2g5aW9` → `你好`
### caesar或 凯撒、rot
凯撒密码(仅对英文字母有效)。
示例:`/textfx caesar 3 ABC` → `DEF`
示例:`/textfx caesar -3 DEF` → `ABC`
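凯撒密码本身只是对字母做固定位移。一个假设性的实现示意如下(非机器人源码):

```python
def caesar(shift: int, text: str) -> str:
    """对英文字母循环位移 shift 位,其他字符原样保留。"""
    out = []
    for ch in text:
        if ch.isascii() and ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

# caesar(3, "ABC") → "DEF";caesar(-3, "DEF") → "ABC"
```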
### morse或 摩斯)
将摩斯电码解码为文字(支持英文和日文)。字符间用空格,单词间用 `/`。
示例:`/textfx morse en .... . .-.. .-.. ---` → `HELLO`
示例:`/textfx morse jp -... --.-- -.. --.. ..- ..` → `ハアホフウイ`
### baseconv或 进制转换)
在不同进制之间转换数字。
示例:`/textfx baseconv 2 10 1101` → `13`
示例:`/textfx baseconv 10 16 255` → `FF`
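进制转换的思路是「先按源进制解析为整数,再除基取余转成目标进制」。一个假设性的示意如下(非机器人源码;大于 10 的目标进制这里输出大写,与上面示例一致):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def baseconv(src: int, dst: int, value: str) -> str:
    """把 src 进制的字符串 value 转成 dst 进制的字符串。"""
    n = int(value, src)          # 先按源进制解析为整数
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, dst)    # 除基取余
        out.append(DIGITS[r])
    s = "".join(reversed(out))
    return s.upper() if dst > 10 else s

# baseconv(2, 10, "1101") → "13";baseconv(10, 16, "255") → "FF"
```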
### shuffle或 打乱)
随机打乱文字顺序。
示例:`/textfx shuffle abcdef` → `fcbade`(每次结果不同)
### sort或 排序)
将文字按字符顺序排列。
示例:`/textfx sort dbca` → `abcd`
### b64hex
在 Base64 和十六进制之间互转。
示例:`/textfx b64hex dec SGVsbG8=` → `48656c6c6f`
示例:`/textfx b64hex enc 48656c6c6f` → `SGVsbG8=`
### align或 format、排版
按指定格式分组排版文字。
示例:`/textfx align 2 4 0123456789abcdef` →
```
01 23 45 67
89 ab cd ef
```
### echo
输出指定文字。
示例:`/textfx echo 你好` → `你好`
### cat
读取并拼接缓存内容,类似 Unix cat 命令。
- 无参数时直接传递标准输入(管道输入或回复的消息)。
- 使用 `-` 代表标准输入,可与缓存名混合使用。
- 支持多个参数,按顺序拼接输出。
示例:
- 传递输入:`/textfx echo 你好 | cat` → `你好`
- 读取缓存:`/textfx cat mytext` → 输出 mytext 的内容
- 拼接多个缓存:`/textfx cat a b c` → 依次拼接缓存 a、b、c
- 混合标准输入和缓存:`/textfx echo 前缀 | cat - mytext` → 拼接标准输入与缓存 mytext
### 缓存操作(保存中间结果)
- 保存:`/textfx reverse 你好 > mytext`(不输出,存入 mytext
- 读取:`/textfx cat mytext` → `好你`
- 追加:`/textfx echo world >> mytext`
- 删除:`/textfx rm mytext`
> 缓存仅在当前对话中有效,重启后清空。
### replace或 替换、sed
替换文字(支持正则表达式)。
示例(普通):`/textfx replace 世界 宇宙 你好世界` → `你好宇宙`
示例(正则):`/textfx replace \d+ [数字] 我有123个苹果` → `我有[数字]个苹果`
### trim或 strip、去空格
去除文本首尾空白字符。
示例:`/textfx trim " 你好 "` → `你好`
示例:`/textfx echo " hello " | trim` → `hello`
### ltrim或 lstrip
去除文本左侧空白字符。
示例:`/textfx ltrim " 你好 "` → `你好 `
### rtrim或 rstrip
去除文本右侧空白字符。
示例:`/textfx rtrim " 你好 "` → ` 你好`
### squeeze或 压缩空白)
将连续的空白字符(空格、制表符)压缩为单个空格。
示例:`/textfx squeeze "你好   世界"` → `你好 世界`
### lines或 行处理)
按行处理文本,支持以下子命令:
- `lines trim` — 去除每行首尾空白
- `lines empty` — 去除所有空行
- `lines squeeze` — 将连续空行压缩为一行
示例:`/textfx echo " hello\n\n\n world " | lines trim` → `hello\n\n\nworld`
示例:`/textfx echo "a\n\n\nb" | lines squeeze` → `a\n\nb`
---
## 常见问题
- **没反应?** 可能内容被安全系统拦截,机器人会提示“内容被拦截”。
- **只支持纯文字**,暂不支持图片或文件。
- 命令拼错时,机器人会提示“不存在名为 xxx 的函数”,请检查名称。
快去试试吧!用法核心:**`/textfx` + 你的操作**


@ -0,0 +1,24 @@
# tqszm
引用一条消息,让此方帮你提取首字母。
例子:
```
John: 11-28 16:50:37
谁来总结一下今天的工作?
Jack: 11-28 16:50:55
[引用John的消息] @此方Bot tqszm
此方Bot: 11-28 16:50:56
slzjyxjtdgz
```
或者,你也可以直接以正常指令的方式调用:
```
@此方Bot 提取首字母 中山大学
> zsdx
```


@ -0,0 +1,4 @@
# Typst 渲染
只需使用 `typst ...` 就可以渲染 Typst 了


@ -1,41 +1,72 @@
# `ytpgif` 指令说明
## 功能简介
`ytpgif` 用于生成来回镜像翻转的仿 YTPMV(YouTube Poop Music Video)风格动图。
---
## 命令格式
```bash
ytpgif [倍速]
```
---
## 使用示例
- **默认倍速**
  ```bash
  ytpgif
  ```
  使用默认倍速(1.0)处理你发送或回复的图片,生成镜像动图。
- **指定倍速(较快)**
  ```bash
  ytpgif 2.5
  ```
  以 2.5 倍速处理图片,生成节奏更快的镜像动图。
- **指定倍速(较慢)**
  回复一张图片并发送:
  ```bash
  ytpgif 0.5
  ```
  以 0.5 倍速生成慢节奏的镜像动图。
---
## 参数说明
### `倍速`(可选)
- **类型**:浮点数
- **默认值**`1.0`
- **有效范围**:`0.1 ~ 20.0`
#### 作用:
- **静态图**:控制“原图 ↔ 镜像”切换的速度(值越大,切换越快)。
- **GIF 动图**:控制截取原始动图正向与反向片段的时长(值越大,截取的片段越长)。
---
## 使用方式
在发送指令前,请确保满足以下任一条件:
- 在消息中**直接附带一张图片**,或
- **回复一条包含图片的消息**后再发送指令。
插件将自动执行以下操作:
1. 下载并识别图片(支持静态图和 GIF 动图)。
2. 自动缩放图像,**最大边长不超过 256 像素**(保持宽高比)。
3. 根据图片类型处理:
- **静态图** → 生成“原图 ↔ 镜像”循环动图。
- **GIF 动图** → 截取开头一段正向播放 + 同一段镜像翻转播放,拼接成新动图。
4. **保留透明通道**(若原图含透明),否则转为 RGB 模式以避免颜色异常。
---
## 注意事项
⚠️ 以下情况可能导致处理失败或效果不佳:
- 图片过大、格式损坏或网络问题;
- 动图帧数过多或单帧持续时间过短;
- 输出 GIF 单段帧数超过 **500 帧**(系统将自动限制以防资源耗尽)。


@ -1,8 +1,9 @@
## 指令介绍
**删除提醒** - 删除在 [`查询提醒(1)`](查询提醒(1)) 中查到的提醒
## 指令示例
`删除提醒 1`
在查询提醒后,删除编号为 1 的提醒
## 另见
[`提醒我(1)`](提醒我(1)) [`查询提醒(1)`](查询提醒(1)) [`ntfy(1)`](ntfy(1))


@ -1,20 +1,24 @@
# 指令介绍
**卵总展示** - 让卵总举起你的图片
## 格式
```
<引用图片> 卵总展示 [选项]
卵总展示 [选项] <图片>
```
## 选项
- `--whiteness <number>` **白度**
  将原图进行指数变换,以调整它的白的程度,默认为 `0.0`。
- `--black-level <number>` **黑色等级**
  将原图减淡,数值越大越淡,范围 `0.0 ~ 1.0`,默认为 `0.2`。
- `--opacity <number>` **不透明度**
  将你的图片叠放在图片上的不透明度,默认为 `0.8`。
- `--saturation <number>` **饱和度**
  调整原图的饱和度,应大于 `0.0`,默认为 `0.85`。


@ -1,11 +1,16 @@
### 指令介绍
**发起投票** - 发起一个投票
### 格式
```
发起投票 <投票标题> <选项1> <选项2> ...
```
### 示例
`发起投票 这是一个投票 A B C`
发起标题为“这是一个投票”,选项为“A”、“B”、“C”的投票。
### 说明
- 投票各个选项之间用空格分隔。
- 选项数量必须为 **2 到 15 项**。
- 投票的默认有效期为 **24 小时**。


@ -1,2 +1,3 @@
# 指令介绍
喵 - 你发喵,此方就会回复喵


@ -1,12 +1,16 @@
## 指令介绍
**投票** - 参与已发起的投票
## 格式
```
投票 <投票ID/标题> <选项文本>
```
## 示例
- `投票 1 A`
  在 ID 为 1 的投票中,投给 “A”
- `投票 这是一个投票 B`
  在标题为 “这是一个投票” 的投票中,投给 “B”
## 说明
目前不支持单人多投,每个人只能投一项。


@ -1,15 +1,18 @@
## 指令介绍
**提醒我** - 在指定的时间提醒你待办事项的工具
## 使用示例
- `下午五点提醒我吃饭`
  创建一个下午五点的提醒,提醒你吃饭
- `两分钟后提醒我睡觉`
  创建一个相对于现在推迟 2 分钟的提醒,提醒你睡觉
- `2026年4月25日20点整提醒我生日快乐`
  创建一个指定日期和时间的提醒
## 另见
[`查询提醒(1)`](查询提醒) [`删除提醒(1)`](删除提醒) [`ntfy(1)`](ntfy)


@ -1,7 +1,13 @@
## 指令介绍
**摇数字** - 生成一个随机数字并发送
### 示例
```
摇数字
```
随机生成一个 1-6 的数字。
> 该指令不接受任何参数,直接调用即可。


@ -1,22 +1,33 @@
# 指令介绍
**摇骰子** - 用于生成随机数并以骰子图像形式展示的指令
## 格式
```
摇骰子 [最小值] [最大值]
```
## 示例
- `摇骰子`
随机生成一个 16 的数字,并显示对应的骰子图像
- `摇骰子 10`
生成 1 到 10 之间的随机整数
- `摇骰子 0.5`
生成 0 到 0.5 之间的随机小数
- `摇骰子 -5 5`
生成 -5 到 5 之间的随机数
## 说明
该指令支持以下几种调用方式:
- **不带参数**使用默认范围16生成随机数
- **仅指定一个参数 `f1`**
- 若 `f1 > 1`,则生成 `[1, f1]` 范围内的随机数
- 若 `0 < f1 ≤ 1`,则生成 `[0, f1]` 范围内的随机数
- 若 `f1 ≤ 0`,则生成 `[f1, 0]` 范围内的随机数
- **指定两个参数 `f1` 和 `f2`**:生成 `[f1, f2]` 范围内的随机数(顺序无关,内部会自动处理大小)
返回结果将以骰子样式的图像形式展示生成的随机数值。
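上面的参数规则可以归纳为下面这个假设性的范围判定示意(非插件源码,仅演示取值区间的确定方式):

```python
def dice_range(f1=None, f2=None):
    """按文档描述的规则,返回随机数的取值区间 (lo, hi)。"""
    if f1 is None:
        return 1, 6                      # 不带参数:默认 1~6
    if f2 is None:
        if f1 > 1:
            return 1, f1
        if f1 > 0:
            return 0, f1
        return f1, 0
    return min(f1, f2), max(f1, f2)      # 两个参数:顺序无关

# 随后可用 random.uniform(lo, hi) 在区间内取值
```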


@ -1,12 +1,22 @@
# 指令介绍
**查看投票** - 查看已发起的投票
## 格式
```
查看投票 <投票ID或标题>
```
## 示例
- `查看投票 1`
查看 ID 为 1 的投票
- `查看投票 这是一个投票`
查看标题为“这是一个投票”的投票
## 说明
- 投票进行中时,使用此命令可查看投票的各个选项;
- 投票结束后,可查看各选项的最终票数。


@ -1,9 +1,9 @@
# 指令介绍
**查询提醒** - 查询已经创建的提醒
## 指令格式
- `查询提醒`:查询提醒
- `查询提醒 2`:查询第二页提醒
## 另见
[提醒我(1)]()、[删除提醒(1)]()、[ntfy(1)]()


@ -1,8 +1,17 @@
## 指令介绍
**生成二维码** - 将文本内容转换为二维码
### 格式
```
生成二维码 <文本内容>
```
### 示例
```
生成二维码 嗨嗨嗨
```
生成扫描结果为“嗨嗨嗨”的二维码图片


@ -0,0 +1,30 @@
# 指令介绍
**订阅** - 收听此方 BOT 的自动消息发送。
---
## 格式
- `订阅 <频道名称>`
- `取消订阅 <频道名称>`
- `查询订阅 [页码]`
- `可用订阅 [页码]`
---
## 示例
- **`订阅 此方谜题`**
在当前的聊天上下文订阅「此方谜题」频道。此后会每天推送此方谜题(由 konaph(8) 管理的)。
- 如果你是私聊,则能够每天发送此方谜题到你的私聊;
- 如果在群聊中使用该指令,则会每天发送题目到这个群里面。
- **`取消订阅 此方谜题`**
取消订阅「此方谜题」频道。
- **`查询订阅`**
查询当前聊天上下文订阅的所有频道。
- **`可用订阅 2`**
查询所有可用的订阅的第二页。


@ -1,13 +1,20 @@
# 指令介绍
**雷达回波** - 用于获取指定地区的天气雷达回波图像。
## 格式
```
雷达回波 <地区>
```
## 示例
- `雷达回波 华南`:获取华南地区的天气雷达回波图
- `雷达回波 全国`:获取全国的天气雷达回波图
## 说明
该指令通过查询中国气象局 [https://www.nmc.cn/publish/radar/chinaall.html](https://www.nmc.cn/publish/radar/chinaall.html),获取指定地区的实时天气雷达回波图像。
支持的地区有:**全国**、**华北**、**东北**、**华东**、**华中**、**华南**、**西南**、**西北**。


@ -1,5 +0,0 @@
指令介绍
黑白 - 将图片经过一个黑白滤镜的处理
示例
引用一个带有图片的消息,或者消息本身携带图片,然后发送「黑白」即可


@ -0,0 +1,52 @@
from io import BytesIO
import base64
import re
from loguru import logger
from nonebot import on_message
from nonebot.rule import Rule
from konabot.common.apis.ali_content_safety import AlibabaGreen
from konabot.common.llm import get_llm
from konabot.common.longtask import DepLongTaskTarget
from konabot.common.nb.extract_image import DepPILImage
from konabot.common.nb.match_keyword import match_keyword
cmd = on_message(rule=Rule(match_keyword(re.compile(r"^千问识图\s*$"))))
@cmd.handle()
async def _(img: DepPILImage, target: DepLongTaskTarget):
# TODO: 该功能尚未写完,仍有 Bug 要修,暂时直接返回
return
jpeg_data = BytesIO()
if img.width > 2160:
img = img.resize((2160, img.height * 2160 // img.width))
if img.height > 2160:
img = img.resize((img.width * 2160 // img.height, 2160))
img = img.convert("RGB")
img.save(jpeg_data, format="jpeg", optimize=True, quality=85)
data_url = "data:image/jpeg;base64,"
data_url += base64.b64encode(jpeg_data.getvalue()).decode('ascii')
llm = get_llm("qwen3-vl-plus")
res = await llm.chat([
{ "role": "user", "content": [
{ "type": "image_url", "image_url": {
"url": data_url
} },
{ "type": "text", "text": "请你提取这张图片中的所有文字,并尽量按照原图的排版输出,不需要其他内容" },
] }
])
result = res.content
logger.info(res)
if result is None:
await target.send_message("提取失败:可能存在网络异常")
return
if not await AlibabaGreen.detect(result):
await target.send_message("提取失败:图片中可能存在一些不合适的内容")
return
await target.send_message(result, at=False)


@ -1,22 +1,29 @@
from io import BytesIO
from typing import Optional, Union
import cv2
import nonebot
from nonebot.adapters import Event as BaseEvent
from nonebot.adapters.console.event import MessageEvent as ConsoleMessageEvent
from nonebot.adapters.discord.event import MessageEvent as DiscordMessageEvent
from nonebot_plugin_alconna import Alconna, AlconnaMatcher, Args, UniMessage, on_alconna
from PIL import Image
import numpy as np
from konabot.common.database import DatabaseManager
from konabot.common.longtask import DepLongTaskTarget
from konabot.common.path import ASSETS_PATH
from konabot.common.web_render import WebRenderer
from konabot.plugins.air_conditioner.ac import AirConditioner, CrashType, generate_ac_image, wiggle_transform
from pathlib import Path
import random
import math
def get_ac(id: str) -> AirConditioner:
ac = AirConditioner.air_conditioners.get(id)
ROOT_PATH = Path(__file__).resolve().parent
# 创建全局数据库管理器实例
db_manager = DatabaseManager()
async def get_ac(id: str) -> AirConditioner:
ac = await AirConditioner.get_ac(id)
if ac is None:
ac = AirConditioner(id)
return ac
@ -43,14 +50,32 @@ async def send_ac_image(event: type[AlconnaMatcher], ac: AirConditioner):
ac_image = await generate_ac_image(ac)
await event.send(await UniMessage().image(raw=ac_image).export())
driver = nonebot.get_driver()
@driver.on_startup
async def register_startup_hook():
"""注册启动时需要执行的函数"""
# 初始化数据库表
await db_manager.execute_by_sql_file(
Path(__file__).resolve().parent / "sql" / "create_table.sql"
)
@driver.on_shutdown
async def register_shutdown_hook():
"""注册关闭时需要执行的函数"""
# 关闭所有数据库连接
await db_manager.close_all_connections()
evt = on_alconna(Alconna(
"群空调"
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(event: BaseEvent, target: DepLongTaskTarget):
async def _(target: DepLongTaskTarget):
id = target.channel_id
ac = get_ac(id)
ac = await get_ac(id)
await send_ac_image(evt, ac)
evt = on_alconna(Alconna(
@ -58,10 +83,10 @@ evt = on_alconna(Alconna(
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(event: BaseEvent, target: DepLongTaskTarget):
async def _(target: DepLongTaskTarget):
id = target.channel_id
ac = get_ac(id)
ac.on = True
ac = await get_ac(id)
await ac.update_ac(state=True)
await send_ac_image(evt, ac)
evt = on_alconna(Alconna(
@ -69,10 +94,10 @@ evt = on_alconna(Alconna(
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(event: BaseEvent, target: DepLongTaskTarget):
async def _(target: DepLongTaskTarget):
id = target.channel_id
ac = get_ac(id)
ac.on = False
ac = await get_ac(id)
await ac.update_ac(state=False)
await send_ac_image(evt, ac)
evt = on_alconna(Alconna(
@ -81,31 +106,29 @@ evt = on_alconna(Alconna(
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(event: BaseEvent, target: DepLongTaskTarget, temp: Optional[Union[int, float]] = 1):
async def _(target: DepLongTaskTarget, temp: Optional[Union[int, float]] = 1):
if temp is None:
temp = 1
if temp <= 0:
return
id = target.channel_id
ac = get_ac(id)
ac = await get_ac(id)
if not ac.on or ac.burnt or ac.frozen:
await send_ac_image(evt, ac)
return
ac.temperature += temp
if ac.temperature > 40:
# 根据温度随机出是否爆炸,40 度开始呈指数增长
possibility = -math.e ** ((40-ac.temperature) / 50) + 1
if random.random() < possibility:
# 打开爆炸图片
with open(ASSETS_PATH / "img" / "other" / "boom.jpg", "rb") as f:
output = BytesIO()
# 爆炸抖动
frames = wiggle_transform(np.array(Image.open(f)), intensity=5)
pil_frames = [Image.fromarray(frame) for frame in frames]
pil_frames[0].save(output, format="GIF", save_all=True, append_images=pil_frames[1:], loop=0, duration=35, disposal=2)
output.seek(0)
await evt.send(await UniMessage().image(raw=output).export())
ac.broke_ac(CrashType.BURNT)
await evt.send("太热啦,空调炸了!")
return
await ac.update_ac(temperature_delta=temp)
if ac.burnt:
# 打开爆炸图片
with open(ASSETS_PATH / "img" / "other" / "boom.jpg", "rb") as f:
output = BytesIO()
# 爆炸抖动
frames = wiggle_transform(np.array(Image.open(f)), intensity=5)
pil_frames = [Image.fromarray(frame) for frame in frames]
pil_frames[0].save(output, format="GIF", save_all=True, append_images=pil_frames[1:], loop=0, duration=35, disposal=2)
output.seek(0)
await evt.send(await UniMessage().image(raw=output).export())
await evt.send("太热啦,空调炸了!")
return
await send_ac_image(evt, ac)
evt = on_alconna(Alconna(
@ -114,20 +137,17 @@ evt = on_alconna(Alconna(
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(event: BaseEvent, target: DepLongTaskTarget, temp: Optional[Union[int, float]] = 1):
async def _(target: DepLongTaskTarget, temp: Optional[Union[int, float]] = 1):
if temp is None:
temp = 1
if temp <= 0:
return
id = target.channel_id
ac = get_ac(id)
ac = await get_ac(id)
if not ac.on or ac.burnt or ac.frozen:
await send_ac_image(evt, ac)
return
ac.temperature -= temp
if ac.temperature < 0:
# 根据温度随机出是否冻结,0 度开始呈指数增长
possibility = -math.e ** (ac.temperature / 50) + 1
if random.random() < possibility:
ac.broke_ac(CrashType.FROZEN)
await ac.update_ac(temperature_delta=-temp)
await send_ac_image(evt, ac)
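The crash chance used by the heat and cool handlers above follows `-math.e ** (delta / 50) + 1`: zero at the 40°C / 0°C boundary and approaching 1 exponentially beyond it. A standalone sketch of the same expression (the sample temperatures are illustrative):

```python
import math

def burn_probability(temperature: float) -> float:
    # Same expression as the heat handler: 0 at 40°C, tending to 1 as it gets hotter.
    return -math.e ** ((40 - temperature) / 50) + 1

for t in (41, 50, 75, 140):
    print(f"{t}°C: {burn_probability(t):.3f}")
```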
evt = on_alconna(Alconna(
@ -135,21 +155,34 @@ evt = on_alconna(Alconna(
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(event: BaseEvent, target: DepLongTaskTarget):
async def _(target: DepLongTaskTarget):
id = target.channel_id
ac = get_ac(id)
ac.change_ac()
ac = await get_ac(id)
await ac.change_ac()
await send_ac_image(evt, ac)
async def query_number_ranking(id: str) -> tuple[int, int]:
result = await db_manager.query_by_sql_file(
ROOT_PATH / "sql" / "query_crash_and_rank.sql",
(id,id)
)
if len(result) == 0:
return 0, 0
else:
# 将字典转换为值的元组
values = list(result[0].values())
return values[0], values[1]
evt = on_alconna(Alconna(
"空调炸炸排行榜",
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(event: BaseEvent, target: DepLongTaskTarget):
async def _(target: DepLongTaskTarget):
id = target.channel_id
ac = get_ac(id)
number, ranking = ac.get_crashes_and_ranking()
# ac = get_ac(id)
# number, ranking = ac.get_crashes_and_ranking()
number, ranking = await query_number_ranking(id)
params = {
"number": number,
"ranking": ranking
@ -159,4 +192,37 @@ async def _(event: BaseEvent, target: DepLongTaskTarget):
target=".box",
params=params
)
await evt.send(await UniMessage().image(raw=image).export())
await evt.send(await UniMessage().image(raw=image).export())
evt = on_alconna(Alconna(
"空调最高峰",
), use_cmd_start=True, use_cmd_sep=False, skip_for_unmatch=True)
@evt.handle()
async def _(target: DepLongTaskTarget):
result = await db_manager.query_by_sql_file(
ROOT_PATH / "sql" / "query_peak.sql"
)
if len(result) == 0:
await evt.send("没有空调记录!")
return
max_temp = result[0].get("max")
min_temp = result[0].get("min")
his_max = result[0].get("his_max")
his_min = result[0].get("his_min")
# 再从内存里的空调池中获取最高温度和最低温度
for ac in AirConditioner.InstancesPool.values():
if ac.on and not ac.burnt and not ac.frozen:
if max_temp is None or min_temp is None:
max_temp = ac.temperature
min_temp = ac.temperature
max_temp = max(max_temp, ac.temperature)
min_temp = min(min_temp, ac.temperature)
if max_temp is None or min_temp is None:
await evt.send("目前全部空调都被炸掉了!")
else:
await evt.send(f"全球在线空调最高温度为 {'%.1f' % max_temp}°C,最低温度为 {'%.1f' % min_temp}°C")
if his_max is not None and his_min is not None:
await evt.send(f"历史最高温度为 {'%.1f' % his_max}°C,最低温度为 {'%.1f' % his_min}°C\n(要进入历史记录,温度需至少保持 5 分钟)")

View File

@ -1,20 +1,193 @@
import asyncio
from enum import Enum
from io import BytesIO
import math
from pathlib import Path
import random
import signal
import time
import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from nonebot import logger
from konabot.common.database import DatabaseManager
from konabot.common.path import ASSETS_PATH, FONTS_PATH
from konabot.common.path import DATA_PATH
import nonebot
import json
ROOT_PATH = Path(__file__).resolve().parent
# 创建全局数据库管理器实例
db_manager = DatabaseManager()
class CrashType(Enum):
BURNT = 0
FROZEN = 1
driver = nonebot.get_driver()
@driver.on_startup
async def register_startup_hook():
await ac_manager.start_auto_save()
@driver.on_shutdown
async def register_shutdown_hook():
"""注册关闭时需要执行的函数"""
# 停止自动保存任务
if ac_manager:
await ac_manager.stop_auto_save()
class AirConditionerManager:
def __init__(self, save_interval: int = 300): # 默认5分钟保存一次
self.save_interval = save_interval
self._save_task = None
self._running = False
async def start_auto_save(self):
"""启动自动保存任务"""
self._running = True
self._save_task = asyncio.create_task(self._auto_save_loop())
logger.info(f"自动保存任务已启动,间隔: {self.save_interval} 秒")
async def stop_auto_save(self):
"""停止自动保存任务"""
if self._save_task:
self._running = False
self._save_task.cancel()
try:
await self._save_task
except asyncio.CancelledError:
pass
logger.info("自动保存任务已停止")
else:
logger.warning("没有正在运行的自动保存任务")
async def _auto_save_loop(self):
"""自动保存循环"""
while self._running:
try:
await asyncio.sleep(self.save_interval)
await self.save_all_instances()
except asyncio.CancelledError:
break
except Exception as e:
logger.error(f"定时保存失败: {e}")
async def save_all_instances(self):
"""保存所有实例到数据库"""
save_time = time.time()
to_remove = []
for ac_id, ac_instance in AirConditioner.InstancesPool.items():
try:
await db_manager.execute_by_sql_file(
ROOT_PATH / "sql" / "update_ac.sql",
[(ac_instance.on, ac_instance.temperature,
ac_instance.burnt, ac_instance.frozen, ac_id),(ac_id,)]
)
if save_time - ac_instance.instance_get_time >= 300:  # 5 分钟
to_remove.append(ac_id)
except Exception as e:
logger.error(f"保存空调 {ac_id} 失败: {e}")
logger.info(f"定时保存完成,共保存 {len(AirConditioner.InstancesPool)} 个空调实例")
# 删除时间过长实例
for ac_id in to_remove:
del AirConditioner.InstancesPool[ac_id]
logger.info(f"清理长期不活跃的空调实例完成,目前池内共有 {len(AirConditioner.InstancesPool)} 个实例")
ac_manager = AirConditionerManager(save_interval=300) # 5分钟
class AirConditioner:
air_conditioners: dict[str, "AirConditioner"] = {}
InstancesPool: dict[str, 'AirConditioner'] = {}
@classmethod
async def refresh_ac(cls, id: str):
cls.InstancesPool[id].instance_get_time = time.time()
@classmethod
async def storage_ac(cls, id: str, ac: 'AirConditioner'):
cls.InstancesPool[id] = ac
@classmethod
async def get_ac(cls, id: str) -> 'AirConditioner':
if id in cls.InstancesPool:
await cls.refresh_ac(id)
return cls.InstancesPool[id]
# 如果没有,那么从数据库重新实例化一个 AC 出来
result = await db_manager.query_by_sql_file(ROOT_PATH / "sql" / "query_ac.sql", (id,))
if len(result) == 0:
ac = await cls.create_ac(id)
return ac
ac_data = result[0]
ac = AirConditioner(id)
ac.on = bool(ac_data["on"])
ac.temperature = float(ac_data["temperature"])
ac.burnt = bool(ac_data["burnt"])
ac.frozen = bool(ac_data["frozen"])
await cls.storage_ac(id, ac)
return ac
@classmethod
async def create_ac(cls, id: str) -> 'AirConditioner':
ac = AirConditioner(id)
await db_manager.execute_by_sql_file(
ROOT_PATH / "sql" / "insert_ac.sql",
[(id, ac.on, ac.temperature, ac.burnt, ac.frozen),(id,)]
)
await cls.storage_ac(id, ac)
return ac
async def change_ac_temp(self, temperature_delta: float) -> None:
'''
改变空调的温度
:param temperature_delta: float 温度变化量
'''
changed_temp = self.temperature + temperature_delta
random_poss = random.random()
if temperature_delta < 0 and changed_temp < 0:
# 根据温度随机出是否冻结,0 度开始呈指数增长
possibility = -math.e ** (changed_temp / 50) + 1
if random_poss < possibility:
await self.broke_ac(CrashType.FROZEN)
elif temperature_delta > 0 and changed_temp > 40:
# 根据温度随机出是否烧坏,40 度开始呈指数增长
possibility = -math.e ** ((40-changed_temp) / 50) + 1
if random_poss < possibility:
await self.broke_ac(CrashType.BURNT)
self.temperature = changed_temp
async def update_ac(self, state: bool | None = None, temperature_delta: float | None = None, burnt: bool | None = None, frozen: bool | None = None) -> 'AirConditioner':
if state is not None:
self.on = state
if temperature_delta is not None:
await self.change_ac_temp(temperature_delta)
if burnt is not None:
self.burnt = burnt
if frozen is not None:
self.frozen = frozen
# await db_manager.execute_by_sql_file(
# ROOT_PATH / "sql" / "update_ac.sql",
# (self.on, self.temperature, self.burnt, self.frozen, self.id)
# )
return self
async def change_ac(self) -> 'AirConditioner':
self.on = False
self.temperature = 24
self.burnt = False
self.frozen = False
# await db_manager.execute_by_sql_file(
# ROOT_PATH / "sql" / "update_ac.sql",
# (self.on, self.temperature, self.burnt, self.frozen, self.id)
# )
return self
def __init__(self, id: str) -> None:
self.id = id
@ -22,45 +195,42 @@ class AirConditioner:
self.temperature = 24 # 默认温度
self.burnt = False
self.frozen = False
AirConditioner.air_conditioners[id] = self
def change_ac(self):
self.burnt = False
self.frozen = False
self.on = False
self.temperature = 24 # 重置为默认温度
self.instance_get_time = time.time()
def broke_ac(self, crash_type: CrashType):
async def broke_ac(self, crash_type: CrashType):
'''
让空调坏掉,并保存数据
让空调坏掉
:param crash_type: CrashType 枚举,表示空调坏掉的类型
'''
match crash_type:
case CrashType.BURNT:
self.burnt = True
await self.update_ac(burnt=True)
case CrashType.FROZEN:
self.frozen = True
self.save_crash_data(crash_type)
await self.update_ac(frozen=True)
await db_manager.execute_by_sql_file(
ROOT_PATH / "sql" / "insert_crash.sql",
(self.id, crash_type.value)
)
def save_crash_data(self, crash_type: CrashType):
'''
如果空调爆炸了,就往本地的 ac_crash_data.json 里该 id 的记录加一
'''
data_file = DATA_PATH / "ac_crash_data.json"
crash_data = {}
if data_file.exists():
with open(data_file, "r", encoding="utf-8") as f:
crash_data = json.load(f)
if self.id not in crash_data:
crash_data[self.id] = {"burnt": 0, "frozen": 0}
match crash_type:
case CrashType.BURNT:
crash_data[self.id]["burnt"] += 1
case CrashType.FROZEN:
crash_data[self.id]["frozen"] += 1
with open(data_file, "w", encoding="utf-8") as f:
json.dump(crash_data, f, ensure_ascii=False, indent=4)
# def save_crash_data(self, crash_type: CrashType):
# '''
# 如果空调爆炸了,就往本地的 ac_crash_data.json 里该 id 的记录加一
# '''
# data_file = DATA_PATH / "ac_crash_data.json"
# crash_data = {}
# if data_file.exists():
# with open(data_file, "r", encoding="utf-8") as f:
# crash_data = json.load(f)
# if self.id not in crash_data:
# crash_data[self.id] = {"burnt": 0, "frozen": 0}
# match crash_type:
# case CrashType.BURNT:
# crash_data[self.id]["burnt"] += 1
# case CrashType.FROZEN:
# crash_data[self.id]["frozen"] += 1
# with open(data_file, "w", encoding="utf-8") as f:
# json.dump(crash_data, f, ensure_ascii=False, indent=4)
def get_crashes_and_ranking(self) -> tuple[int, int]:
'''

View File

@ -0,0 +1,26 @@
-- 创建所有表
CREATE TABLE IF NOT EXISTS air_conditioner (
id VARCHAR(128) PRIMARY KEY,
"on" BOOLEAN NOT NULL,
temperature REAL NOT NULL,
burnt BOOLEAN NOT NULL,
frozen BOOLEAN NOT NULL
);
CREATE TABLE IF NOT EXISTS air_conditioner_log (
log_id INTEGER PRIMARY KEY AUTOINCREMENT,
log_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
id VARCHAR(128),
"on" BOOLEAN NOT NULL,
temperature REAL NOT NULL,
burnt BOOLEAN NOT NULL,
frozen BOOLEAN NOT NULL,
FOREIGN KEY (id) REFERENCES air_conditioner(id)
);
CREATE TABLE IF NOT EXISTS air_conditioner_crash_log (
id VARCHAR(128) NOT NULL,
crash_type INT NOT NULL,
timestamp DATETIME NOT NULL,
FOREIGN KEY (id) REFERENCES air_conditioner(id)
);

View File

@ -0,0 +1,8 @@
-- 插入一台新空调
INSERT INTO air_conditioner (id, "on", temperature, burnt, frozen)
VALUES (?, ?, ?, ?, ?);
-- 使用返回的数据插入日志
INSERT INTO air_conditioner_log (id, "on", temperature, burnt, frozen)
SELECT id, "on", temperature, burnt, frozen
FROM air_conditioner
WHERE id = ?;
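The two statements above each take their own parameter tuple, matching how `create_ac` calls `execute_by_sql_file` with `[(id, ...), (id,)]`. A stdlib `sqlite3` sketch of the same insert-then-log sequence (the DDL is abbreviated from create_table.sql):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE air_conditioner '
            '(id TEXT PRIMARY KEY, "on" INT, temperature REAL, burnt INT, frozen INT)')
con.execute('CREATE TABLE air_conditioner_log '
            '(id TEXT, "on" INT, temperature REAL, burnt INT, frozen INT)')

statements = [
    ('INSERT INTO air_conditioner (id, "on", temperature, burnt, frozen) '
     'VALUES (?, ?, ?, ?, ?)', ("g1", False, 24.0, False, False)),
    ('INSERT INTO air_conditioner_log (id, "on", temperature, burnt, frozen) '
     'SELECT id, "on", temperature, burnt, frozen FROM air_conditioner WHERE id = ?',
     ("g1",)),
]
# One parameter tuple per statement, mirroring execute_by_sql_file's call shape.
for sql, params in statements:
    con.execute(sql, params)

print(con.execute("SELECT temperature FROM air_conditioner_log").fetchone())  # (24.0,)
```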

View File

@ -0,0 +1,3 @@
-- 插入一条空调爆炸记录
INSERT INTO air_conditioner_crash_log (id, crash_type, timestamp)
VALUES (?, ?, CURRENT_TIMESTAMP);

View File

@ -0,0 +1,4 @@
-- 查询空调状态
SELECT *
FROM air_conditioner
WHERE id = ?;

View File

@ -0,0 +1,23 @@
-- 从 air_conditioner_crash_log 表中获取指定 id 损坏的次数以及损坏次数的排名
SELECT crash_count, crash_rank
FROM (
SELECT id,
COUNT(*) AS crash_count,
RANK() OVER (ORDER BY COUNT(*) DESC) AS crash_rank
FROM air_conditioner_crash_log
GROUP BY id
) AS ranked_data
WHERE id = ?
-- 如果该 id 没有损坏记录,则返回 0 次损坏和对应的最后一名
UNION
SELECT 0 AS crash_count,
(SELECT COUNT(DISTINCT id) + 1 FROM air_conditioner_crash_log) AS crash_rank
FROM (
SELECT DISTINCT id
FROM air_conditioner_crash_log
) AS ranked_data
WHERE NOT EXISTS (
SELECT 1
FROM air_conditioner_crash_log
WHERE id = ?
);
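The fallback branch of this query is easiest to see with data. A stdlib `sqlite3` run of the same SQL, inlined here so the demo is self-contained (requires SQLite 3.25+ for `RANK()`; the sample ids are illustrative):

```python
import sqlite3

QUERY = """
SELECT crash_count, crash_rank
FROM (
    SELECT id,
           COUNT(*) AS crash_count,
           RANK() OVER (ORDER BY COUNT(*) DESC) AS crash_rank
    FROM air_conditioner_crash_log
    GROUP BY id
) AS ranked_data
WHERE id = ?
UNION
SELECT 0 AS crash_count,
       (SELECT COUNT(DISTINCT id) + 1 FROM air_conditioner_crash_log) AS crash_rank
FROM (SELECT DISTINCT id FROM air_conditioner_crash_log) AS ranked_data
WHERE NOT EXISTS (
    SELECT 1 FROM air_conditioner_crash_log WHERE id = ?
)
"""

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE air_conditioner_crash_log (id TEXT, crash_type INT, timestamp TEXT)")
con.executemany(
    "INSERT INTO air_conditioner_crash_log VALUES (?, 0, 'now')",
    [("a",), ("a",), ("a",), ("b",)],
)
print(con.execute(QUERY, ("a", "a")).fetchall())  # [(3, 1)] — 3 crashes, rank 1
print(con.execute(QUERY, ("z", "z")).fetchall())  # [(0, 3)] — no record: last place
```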

View File

@ -0,0 +1,13 @@
-- 查询目前所有空调中的最高温度与最低温度与历史最高低温
SELECT
(SELECT MAX(temperature) FROM air_conditioner
WHERE "on" = TRUE AND NOT frozen AND NOT burnt) AS max,
(SELECT MIN(temperature) FROM air_conditioner
WHERE "on" = TRUE AND NOT frozen AND NOT burnt) AS min,
(SELECT MAX(temperature) FROM air_conditioner_log
WHERE "on" = TRUE AND NOT frozen AND NOT burnt) AS his_max,
(SELECT MIN(temperature) FROM air_conditioner_log
WHERE "on" = TRUE AND NOT frozen AND NOT burnt) AS his_min;

View File

@ -0,0 +1,10 @@
-- 更新空调状态
UPDATE air_conditioner
SET "on" = ?, temperature = ?, burnt = ?, frozen = ?
WHERE id = ?;
-- 插入日志记录(从更新后的数据获取)
INSERT INTO air_conditioner_log (id, "on", temperature, burnt, frozen)
SELECT id, "on", temperature, burnt, frozen
FROM air_conditioner
WHERE id = ?;

View File

@ -1,39 +1,41 @@
import re
from nonebot import on_message
from nonebot import get_plugin_config, on_message
from nonebot_plugin_alconna import Reference, Reply, UniMsg
from nonebot.adapters import Event
from nonebot.adapters.onebot.v11.event import GroupMessageEvent as OB11GroupEvent
from pydantic import BaseModel
matcher_fix = on_message()
class Config(BaseModel):
bilifetch_enabled_groups: list[int] = []
config = get_plugin_config(Config)
pattern = (
r"^(?:(?:av|cv)\d+|BV[a-zA-Z0-9]{10})|"
r"(?:b23\.tv|bili(?:22|23|33|2233)\.cn|\.bilibili\.com|QQ小程序(?:&amp;#93;|&#93;|\])哔哩哔哩).{0,500}"
)
@matcher_fix.handle()
async def _(msg: UniMsg, event: Event):
def _rule(msg: UniMsg, evt: Event) -> bool:
if isinstance(evt, OB11GroupEvent):
if evt.group_id not in config.bilifetch_enabled_groups:
return False
to_search = msg.exclude(Reply, Reference).dump(json=True)
to_search2 = msg.exclude(Reply, Reference).extract_plain_text()
if not re.search(pattern, to_search) and not re.search(pattern, to_search2):
return
return False
return True
matcher_fix = on_message(rule=_rule)
@matcher_fix.handle()
async def _(event: Event):
from nonebot_plugin_analysis_bilibili import handle_analysis
await handle_analysis(event)
# b_url: str
# b_page: str | None
# b_time: str | None
#
# from nonebot_plugin_analysis_bilibili.analysis_bilibili import extract as bilibili_extract
#
# b_url, b_page, b_time = bilibili_extract(to_search)
# if b_url is None:
# return
#
# await matcher_fix.send(await UniMessage().text(b_url).export())

View File

@ -0,0 +1,154 @@
from pathlib import Path
import subprocess
import tempfile
from typing import Any
from loguru import logger
from nonebot import on_message
from pydantic import BaseModel
from nonebot.adapters import Event, Bot
from nonebot_plugin_alconna import UniMessage, UniMsg
from nonebot.adapters.onebot.v11.event import MessageEvent as OB11MessageEvent
from konabot.common.artifact import ArtifactDepends, ensure_artifact, register_artifacts
from konabot.common.data_man import DataManager
from konabot.common.path import BINARY_PATH, DATA_PATH
arti_ccleste_wrap_linux = ArtifactDepends(
url="https://github.com/Passthem-desu/pt-ccleste-wrap/releases/download/v0.1.5/ccleste-wrap",
sha256="ba4118c6465d1ca1547cdd1bd11c6b9e6a6a98ea8967b55485aeb6b77bb7e921",
target=BINARY_PATH / "ccleste-wrap",
required_os="Linux",
required_arch="x86_64",
)
arti_ccleste_wrap_windows = ArtifactDepends(
url="https://github.com/Passthem-desu/pt-ccleste-wrap/releases/download/v0.1.5/ccleste-wrap.exe",
sha256="7df382486a452485cdcf2115eabd7f772339ece470ab344074dc163fc7981feb",
target=BINARY_PATH / "ccleste-wrap.exe",
required_os="Windows",
required_arch="AMD64",
)
register_artifacts(arti_ccleste_wrap_linux)
register_artifacts(arti_ccleste_wrap_windows)
class CelesteStatus(BaseModel):
records: dict[str, str] = {}
celeste_status = DataManager(CelesteStatus, DATA_PATH / "celeste-status.json")
# ↓ 这里的 Type Hinting 是为了能 fit 进去 set[str | tuple[str, ...]]
aliases: set[Any] = {"celeste", "蔚蓝", "爬山", "鳌太线"}
ALLOW_CHARS = "wasdxc0123456789 \t\n\r"
async def get_prev(evt: Event, bot: Bot) -> str | None:
prev = None
if isinstance(evt, OB11MessageEvent):
if evt.reply is not None:
prev = f"QQ:{bot.self_id}:" + str(evt.reply.message_id)
else:
for seg in evt.get_message():
if seg.type == 'reply':
msgid = seg.get('id')
prev = f"QQ:{bot.self_id}:" + str(msgid)
if prev is not None:
async with celeste_status.get_data() as data:
prev = data.records.get(prev)
return prev
async def match_celeste(evt: Event, bot: Bot, msg: UniMsg) -> bool:
prev = await get_prev(evt, bot)
text = msg.extract_plain_text().strip()
if any(text.startswith(a) for a in aliases):
return True
if prev is not None:
return True
return False
# cmd = on_command(cmd="celeste", aliases=aliases)
cmd = on_message(rule=match_celeste)
@cmd.handle()
async def _(msg: UniMsg, evt: Event, bot: Bot):
prev = await get_prev(evt, bot)
actions = msg.extract_plain_text().strip()
for alias in aliases:
actions = actions.removeprefix(alias)
actions = actions.strip()
if len(actions) == 0:
return
if any((c not in ALLOW_CHARS) for c in actions):
return
await ensure_artifact(arti_ccleste_wrap_linux)
await ensure_artifact(arti_ccleste_wrap_windows)
bin: Path | None = None
for arti in (
arti_ccleste_wrap_linux,
arti_ccleste_wrap_windows,
):
if not arti.is_corresponding_platform():
continue
bin = arti.target
if not bin.exists():
continue
break
if bin is None:
logger.warning("Celeste 模块没有找到该系统需要的二进制文件")
return
if prev is not None:
prev_append = ["-p", prev]
else:
prev_append = []
try:
with tempfile.TemporaryDirectory() as _tempdir:
tempdir = Path(_tempdir)
gif_path = tempdir / "render.gif"
cmd_celeste = [
bin,
"-a",
actions,
"-o",
gif_path,
] + prev_append
logger.info(f"执行指令调用 celeste: CMD={cmd_celeste}")
res = subprocess.run(cmd_celeste, timeout=5, capture_output=True)
if res.returncode != 0:
logger.warning(f"渲染 Celeste 时的输出不是 0 CODE={res.returncode} STDOUT={res.stdout} STDERR={res.stderr}")
await UniMessage.text(f"渲染 Celeste 时出错啦!下面是输出:\n\n{res.stdout.decode()}{res.stderr.decode()}").send(evt, bot, at_sender=True)
return
if not gif_path.exists():
logger.warning("没有找到 Celeste 渲染的文件")
await UniMessage.text("渲染 Celeste 时出错啦!").send(evt, bot, at_sender=True)
return
gif_data = gif_path.read_bytes()
except TimeoutError:
logger.warning("在渲染 Celeste 时超时了")
await UniMessage("渲染 Celeste 时超时了!请检查你的操作清单,不能太长").send(evt, bot, at_sender=True)
return
receipt = await UniMessage.image(raw=gif_data).send(evt, bot)
async with celeste_status.get_data() as data:
if prev:
actions = prev + "\n" + actions
if isinstance(evt, OB11MessageEvent):
for _msgid in receipt.msg_ids:
msgid = _msgid["message_id"]
data.records[f"QQ:{bot.self_id}:{msgid}"] = actions
else:
for msgid in receipt.msg_ids:
data.records[f"DISCORD:{bot.self_id}:{msgid}"] = actions

View File

@ -0,0 +1,277 @@
import asyncio as asynkio
from io import BytesIO
from inspect import signature
import random
from konabot.common.longtask import DepLongTaskTarget
from konabot.common.nb.exc import BotExceptionMessage
from konabot.common.nb.extract_image import DepImageBytesOrNone
from nonebot.adapters import Event as BaseEvent
from nonebot import on_message, logger
from nonebot_plugin_alconna import (
UniMessage,
UniMsg
)
from konabot.plugins.fx_process.fx_handle import ImageFilterStorage
from konabot.plugins.fx_process.fx_manager import ImageFilterManager
from PIL import Image, ImageSequence
from konabot.plugins.fx_process.types import FilterItem, ImageRequireSignal, ImagesListRequireSignal, SenderInfo, StoredInfo
def try_convert_type(param_type, input_param, sender_info: SenderInfo | None = None) -> tuple[bool, object]:
converted_value = None
try:
if param_type is float:
converted_value = float(input_param)
elif param_type is int:
converted_value = int(input_param)
elif param_type is bool:
converted_value = input_param.lower() in ['true', '1', 'yes', '是', '真']
elif param_type is Image.Image:
converted_value = ImageRequireSignal()
return False, converted_value
elif param_type is SenderInfo:
converted_value = sender_info
return False, converted_value
elif param_type == list[Image.Image]:
converted_value = ImagesListRequireSignal()
return False, converted_value
elif param_type is str:
if input_param is None:
return False, None
converted_value = str(input_param)
else:
return False, None
except Exception:
return False, None
return True, converted_value
def prase_input_args(input_str: str, sender_info: SenderInfo | None = None) -> list[FilterItem]:
# 按分号或换行符分割参数
args = []
for part in input_str.replace('\n', ';').split(';'):
part = part.strip()
if not part:
continue
split_part = part.split()
filter_name = split_part[0]
if not ImageFilterManager.has_filter(filter_name):
continue
filter_func = ImageFilterManager.get_filter(filter_name)
input_filter_args = split_part[1:]
# 获取函数最大参数数量
sig = signature(filter_func)
max_params = len(sig.parameters) - 1 # 减去第一个参数 image
# 从 args 提取参数,并转换为适当类型
func_args = []
for i in range(0, max_params):
# 尝试将参数转换为函数签名中对应的类型
param = list(sig.parameters.values())[i + 1]
param_type = param.annotation
# 根据函数所需要的参数,从输入参数中提取,如果不匹配就使用默认值,将当前参数递交给下一个循环
input_param = input_filter_args[0] if len(input_filter_args) > 0 else None
state, converted_param = try_convert_type(param_type, input_param, sender_info)
if state:
input_filter_args.pop(0)
if converted_param is None and param.default != param.empty:
converted_param = param.default
func_args.append(converted_param)
args.append(FilterItem(name=filter_name,filter=filter_func, args=func_args))
return args
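The signature-driven conversion above can be seen in isolation; `blur` and `coerce_args` below are hypothetical stand-ins for a registered filter and a trimmed-down version of the parser:

```python
from inspect import signature

def blur(image, radius: int = 2, strength: float = 1.0):
    # Hypothetical filter: the first parameter is always the image.
    return f"blur(r={radius}, s={strength})"

def coerce_args(func, tokens: list[str]) -> list:
    # Convert each token to the annotated type; on a mismatch the token is
    # left for the next parameter and the default is used, as in the parser above.
    args = []
    for param in list(signature(func).parameters.values())[1:]:
        if tokens:
            try:
                args.append(param.annotation(tokens[0]))
                tokens.pop(0)
                continue
            except (TypeError, ValueError):
                pass
        args.append(param.default)
    return args

print(blur(None, *coerce_args(blur, ["5", "0.5"])))  # blur(r=5, s=0.5)
print(blur(None, *coerce_args(blur, [])))            # blur(r=2, s=1.0)
```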
def handle_filters_to_image(images: list[Image.Image], filters: list[FilterItem]) -> Image.Image:
for filter_item in filters:
logger.debug(f"{filter_item}")
filter_func = filter_item.filter
func_args = filter_item.args
# 检测参数中是否有 ImageRequireSignal如果有则传入对应数量的图像列表
if any(isinstance(arg, ImageRequireSignal) for arg in func_args):
# 替换 ImageRequireSignal 为 images 对应索引的图像
actual_args = []
img_signal_count = 1 # 从 images[1] 开始取图像
for arg in func_args:
if isinstance(arg, ImageRequireSignal):
if img_signal_count >= len(images):
raise BotExceptionMessage("图像数量不足,无法满足滤镜需求!")
actual_args.append(images[img_signal_count])
img_signal_count += 1
else:
actual_args.append(arg)
func_args = actual_args
# 检测参数中是否有 ImagesListRequireSignal如果有则传入整个图像列表
if any(isinstance(arg, ImagesListRequireSignal) for arg in func_args):
actual_args = []
for arg in func_args:
if isinstance(arg, ImagesListRequireSignal):
actual_args.append(images)
else:
actual_args.append(arg)
func_args = actual_args
logger.debug(f"Applying filter: {filter_item.name} with args: {func_args}")
images[0] = filter_func(images[0], *func_args)
return images[0]
def copy_images_by_index(images: list[Image.Image], index: int) -> list[Image.Image]:
# 将导入图像列表复制为新的图像列表,如果是动图,那么就找到对应索引下的帧
new_images = []
for img in images:
if getattr(img, "is_animated", False):
frames = img.n_frames
frame_idx = index % frames
img.seek(frame_idx)
new_images.append(img.copy())
else:
new_images.append(img.copy())
return new_images
def generate_image(images: list[Image.Image], filters: list[FilterItem]) -> None:
# 处理位于最前面的生成类滤镜
while filters and filters[0].name.strip() in ImageFilterManager.generate_filter_map:
gen_filter = filters.pop(0)
gen_func = gen_filter.filter
func_args = gen_filter.args[1:] # 去掉第一个 list 参数
gen_func(None, images, *func_args)
def save_or_load_image(images: list[Image.Image], filters: list[FilterItem], sender_info: SenderInfo) -> StoredInfo | None:
stored_info = None
# 处理位于最前面的“读取图像”、“存入图像”
if not filters:
return
while filters and filters[0].name.strip() in ["读取图像", "存入图像"]:
if filters[0].name.strip() == "读取图像":
load_filter = filters.pop(0)
path = load_filter.args[0] if load_filter.args else ""
ImageFilterStorage.load_image(None, path, images, sender_info)
elif filters[0].name.strip() == "存入图像":
store_filter = filters.pop(0)
name = store_filter.args[0] if store_filter.args and store_filter.args[0] else str(random.randint(10000, 99999))
stored_info = ImageFilterStorage.store_image(images[0], name, sender_info)
# 将剩下的“读取图像”或“存入图像”参数全部删除,避免后续非法操作
filters[:] = [f for f in filters if f.name.strip() not in ["读取图像", "存入图像"]]
return stored_info
async def apply_filters_to_images(images: list[Image.Image], filters: list[FilterItem], sender_info: SenderInfo) -> BytesIO | StoredInfo:
# 先处理存取图像、生成图像的操作
stored_info = save_or_load_image(images, filters, sender_info)
generate_image(images, filters)
if stored_info and len(filters) <= 0:
return stored_info
if len(images) <= 0:
raise BotExceptionMessage("没有可处理的图像!")
# 检测是否需要将静态图视作动图处理
frozen_to_move = any(
filter_item.name == "动图"
for filter_item in filters
)
static_fps = 10
# 找到动图参数 fps
if frozen_to_move:
for filter_item in filters:
if filter_item.name == "动图" and filter_item.args:
try:
static_fps = int(filter_item.args[0])
except Exception:
static_fps = 10
break
# 如果 image 是动图,则逐帧处理
img = images[0]
logger.debug("开始图像处理")
output = BytesIO()
if getattr(img, "is_animated", False) or frozen_to_move:
frames = []
append_images = []
if getattr(img, "is_animated", False):
logger.debug("处理动图帧")
else:
# 将静态图视作单帧动图处理,拷贝 10 帧
logger.debug("处理静态图为多帧动图")
append_images = [img.copy() for _ in range(10)]
img.info['duration'] = int(1000 / static_fps)
async def process_single_frame(frame_images: list[Image.Image], frame_idx: int) -> Image.Image:
"""处理单帧的异步函数"""
logger.debug(f"开始处理帧 {frame_idx}")
result = await asynkio.to_thread(handle_filters_to_image, frame_images, filters)
logger.debug(f"完成处理帧 {frame_idx}")
return result
# 并发处理所有帧
tasks = []
all_frames = []
for i, frame in enumerate(list(ImageSequence.Iterator(img)) + append_images):
all_frames.append(frame.copy())
images_copy = copy_images_by_index(images, i)
task = process_single_frame(images_copy, i)
tasks.append(task)
frames = await asynkio.gather(*tasks, return_exceptions=True)
# 检查是否有处理失败的帧
for i, result in enumerate(frames):
if isinstance(result, Exception):
logger.error(f"{i} 处理失败: {result}")
# 使用原始帧作为回退
frames[i] = all_frames[i]
logger.debug("保存动图")
frames[0].save(
output,
format="GIF",
save_all=True,
append_images=frames[1:],
loop=img.info.get("loop", 0),
disposal=img.info.get("disposal", 2),
duration=img.info.get("duration", 100),
)
logger.debug("Animated image saved")
else:
img = handle_filters_to_image(images=images, filters=filters)
img.save(output, format="PNG")
logger.debug("Image processing completed")
output.seek(0)
return output
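The frame pipeline above fans each frame out to a worker thread and gathers the results; for the failed-frame fallback to receive exception objects instead of raising, `gather` must run with `return_exceptions=True`. A standalone sketch of the pattern (`transform_frame` is a hypothetical stand-in for a per-frame filter pass):

```python
import asyncio

def transform_frame(i: int) -> int:
    # Stand-in for a CPU-bound per-frame filter pass.
    if i == 3:
        raise ValueError("bad frame")
    return i * i

async def main() -> list:
    # return_exceptions=True keeps one failed frame from cancelling the rest,
    # so the caller can substitute a fallback value for that frame.
    results = await asyncio.gather(
        *(asyncio.to_thread(transform_frame, i) for i in range(5)),
        return_exceptions=True,
    )
    return [0 if isinstance(r, Exception) else r for r in results]

print(asyncio.run(main()))  # [0, 1, 4, 0, 16]
```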
def is_fx_mentioned(evt: BaseEvent, msg: UniMsg) -> bool:
txt = msg.extract_plain_text()
if "fx" not in txt[:3].lower():
return False
return True
fx_on = on_message(rule=is_fx_mentioned)
@fx_on.handle()
async def _(msg: UniMsg, event: BaseEvent, target: DepLongTaskTarget, image_data: DepImageBytesOrNone):
preload_imgs = []
# 提取图像
try:
preload_imgs.append(Image.open(BytesIO(image_data)))
except Exception:
logger.info("No image found in message for FX processing.")
args = msg.extract_plain_text().split()
if len(args) < 2:
return
sender_info = SenderInfo(
group_id=target.channel_id,
qq_id=target.target_id
)
filters = prase_input_args(msg.extract_plain_text()[2:], sender_info=sender_info)
# if not filters:
# return
output = await apply_filters_to_images(preload_imgs, filters, sender_info)
if isinstance(output, StoredInfo):
await fx_on.send(await UniMessage().text(f"图像已存为「{output.name}」!").export())
elif isinstance(output, BytesIO):
await fx_on.send(await UniMessage().image(raw=output).export())

View File

@ -0,0 +1,50 @@
from typing import Optional
from PIL import ImageColor
class ColorHandle:
color_name_map = {
"红": (255, 0, 0),
"绿": (0, 255, 0),
"蓝": (0, 0, 255),
"黄": (255, 255, 0),
"紫": (128, 0, 128),
"黑": (0, 0, 0),
"白": (255, 255, 255),
"橙": (255, 165, 0),
"粉": (255, 192, 203),
"灰": (128, 128, 128),
"青": (0, 255, 255),
"靛": (75, 0, 130),
"棕": (165, 42, 42),
"浅灰": (200, 200, 200),
"深灰": (50, 50, 50),
"浅黄": (255, 255, 224),
"墨绿": (47, 79, 79),
}
    @staticmethod
    def set_or_blend_color(ori_color: Optional[tuple], target_color: tuple) -> tuple:
        # No base color yet: just take the target color
        if ori_color is None:
            return target_color
        # Otherwise blend by taking the channel-wise average
        return tuple((o + t) // 2 for o, t in zip(ori_color, target_color))
    @staticmethod
    def parse_color(color_str: str) -> tuple:
        # A bare "(r,g,b)" tuple: prefix it with "rgb" for ImageColor
        if color_str.startswith('(') and color_str.endswith(')'):
            color_str = 'rgb' + color_str
        try:
            return ImageColor.getrgb(color_str)
        except ValueError:
            pass
        base_color = None
        color_str = color_str.replace('色', '')
        for name, rgb in ColorHandle.color_name_map.items():
            if name in color_str:
                base_color = ColorHandle.set_or_blend_color(base_color, rgb)
        if base_color is not None:
            return base_color
        return (255, 255, 255)  # default to white
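The blend rule above — take the target color outright when no base is set, otherwise average channel-wise — can be sketched standalone. This is an illustrative re-implementation for clarity, not an import from the plugin:

```python
from typing import Optional, Tuple

Color = Tuple[int, int, int]

def set_or_blend_color(ori: Optional[Color], target: Color) -> Color:
    """Return target if no base color is set yet, else the channel-wise average."""
    if ori is None:
        return target
    return tuple((o + t) // 2 for o, t in zip(ori, target))

# Blending red into an unset base yields red; blending blue afterwards averages.
base = set_or_blend_color(None, (255, 0, 0))
mixed = set_or_blend_color(base, (0, 0, 255))
```

Because the blend always averages against the accumulated result, matching several color names folds them in one at a time rather than averaging all of them at once.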

File diff suppressed because it is too large


@@ -0,0 +1,87 @@
from typing import Optional
from konabot.plugins.fx_process.fx_handle import ImageFilterEmpty, ImageFilterImplement, ImageFilterStorage
class ImageFilterManager:
filter_map = {
"模糊": ImageFilterImplement.apply_blur,
"马赛克": ImageFilterImplement.apply_mosaic,
"轮廓": ImageFilterImplement.apply_contour,
"锐化": ImageFilterImplement.apply_sharpen,
"边缘增强": ImageFilterImplement.apply_edge_enhance,
"浮雕": ImageFilterImplement.apply_emboss,
"查找边缘": ImageFilterImplement.apply_find_edges,
"平滑": ImageFilterImplement.apply_smooth,
"反色": ImageFilterImplement.apply_invert,
"黑白": ImageFilterImplement.apply_black_white,
"阈值": ImageFilterImplement.apply_threshold,
"对比度": ImageFilterImplement.apply_contrast,
"亮度": ImageFilterImplement.apply_brightness,
"色彩": ImageFilterImplement.apply_color,
"色调": ImageFilterImplement.apply_to_color,
"缩放": ImageFilterImplement.apply_resize,
"波纹": ImageFilterImplement.apply_wave,
"色键": ImageFilterImplement.apply_color_key,
"暗角": ImageFilterImplement.apply_vignette,
"发光": ImageFilterImplement.apply_glow,
"RGB分离": ImageFilterImplement.apply_rgb_split,
"光学补偿": ImageFilterImplement.apply_optical_compensation,
"球面化": ImageFilterImplement.apply_spherize,
"旋转": ImageFilterImplement.apply_rotate,
"透视变换": ImageFilterImplement.apply_perspective_transform,
"裁剪": ImageFilterImplement.apply_crop,
"噪点": ImageFilterImplement.apply_noise,
"平移": ImageFilterImplement.apply_translate,
"拓展边缘": ImageFilterImplement.apply_expand_edges,
"素描": ImageFilterImplement.apply_sketch,
"叠加颜色": ImageFilterImplement.apply_gradient_overlay,
"阴影": ImageFilterImplement.apply_shadow,
"径向模糊": ImageFilterImplement.apply_radial_blur,
"旋转模糊": ImageFilterImplement.apply_spin_blur,
"方向模糊": ImageFilterImplement.apply_directional_blur,
"边缘模糊": ImageFilterImplement.apply_focus_blur,
"缩放模糊": ImageFilterImplement.apply_zoom_blur,
"镜像": ImageFilterImplement.apply_mirror_half,
"水平翻转": ImageFilterImplement.apply_flip_horizontal,
"垂直翻转": ImageFilterImplement.apply_flip_vertical,
"复制": ImageFilterImplement.copy_area,
"晃动": ImageFilterImplement.apply_random_wiggle,
"动图": ImageFilterEmpty.empty_filter_param,
"像素抖动": ImageFilterImplement.apply_pixel_jitter,
"描边": ImageFilterImplement.apply_stroke,
"形状描边": ImageFilterImplement.apply_shape_stroke,
"半调": ImageFilterImplement.apply_halftone,
"设置通道": ImageFilterImplement.apply_set_channel,
"设置遮罩": ImageFilterImplement.apply_set_mask,
        # Image storage operations
"存入图像": ImageFilterStorage.store_image,
"读取图像": ImageFilterStorage.load_image,
"暂存图像": ImageFilterStorage.temp_store_image,
"交换图像": ImageFilterStorage.swap_image_index,
"删除图像": ImageFilterStorage.delete_image_by_index,
"选择图像": ImageFilterStorage.select_image_by_index,
        # Multi-image operations
"混合图像": ImageFilterImplement.apply_blend,
"覆盖图像": ImageFilterImplement.apply_overlay,
        # Generative
"覆加颜色": ImageFilterImplement.generate_solid,
}
generate_filter_map = {
"生成图层": ImageFilterImplement.generate_empty,
"生成文本": ImageFilterImplement.generate_text
}
    @classmethod
    def get_filter(cls, name: str) -> Optional[callable]:
        # The primary map wins; fall back to the generator map, else None
        return cls.filter_map.get(name) or cls.generate_filter_map.get(name)
@classmethod
def has_filter(cls, name: str) -> bool:
return name in cls.filter_map or name in cls.generate_filter_map
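`ImageFilterManager` is a plain name-to-callable registry with a two-tier lookup. A minimal standalone sketch of the same dispatch pattern, with hypothetical toy filters standing in for the plugin's real implementations:

```python
from typing import Callable, Dict, Optional

# Hypothetical toy filters (operate on a single channel value for illustration)
def invert(v: int) -> int:
    return 255 - v

def identity(v: int) -> int:
    return v

# Two-tier registry mirroring filter_map / generate_filter_map
filter_map: Dict[str, Callable[[int], int]] = {"invert": invert}
generate_filter_map: Dict[str, Callable[[int], int]] = {"identity": identity}

def get_filter(name: str) -> Optional[Callable[[int], int]]:
    # The primary map wins; fall back to the generator map, else None
    return filter_map.get(name) or generate_filter_map.get(name)

def has_filter(name: str) -> bool:
    return name in filter_map or name in generate_filter_map
```

Keeping generators in a second map lets the caller treat them differently (they create images rather than transform them) while sharing one lookup path.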


@@ -0,0 +1,344 @@
import re
from typing import List, Tuple, Dict, Optional

from PIL import Image, ImageDraw

from konabot.plugins.fx_process.color_handle import ColorHandle

# numpy is optional here: fall back to pure-PIL gradients when it is missing.
# (An unconditional `import numpy as np` would make the has_numpy fallback dead code.)
try:
    import numpy as np
    HAS_NUMPY = True
except ImportError:
    HAS_NUMPY = False

class GradientGenerator:
    """Gradient generator."""
    def __init__(self):
        self.has_numpy = HAS_NUMPY
    def parse_color_list(self, color_list_str: str) -> List[Dict]:
        """Parse a gradient color-list string.

        Args:
            color_list_str: e.g. '[rgb(255,0,0)|(0,0)+rgb(0,255,0)|(0,100)+rgb(0,0,255)|(50,100)]'

        Returns:
            list: dicts carrying each node's color and position
        """
color_nodes = []
color_list_str = color_list_str.strip('[]').strip()
matches = color_list_str.split('+')
for single_str in matches:
color_str = single_str.split('|')[0]
pos_str = single_str.split('|')[1] if '|' in single_str else '0,0'
color = ColorHandle.parse_color(color_str.strip())
try:
pos_str = pos_str.replace('(', '').replace(')', '')
x_str, y_str = pos_str.split(',')
x_percent = float(x_str.strip().replace('%', ''))
y_percent = float(y_str.strip().replace('%', ''))
x_percent = max(0, min(100, x_percent))
y_percent = max(0, min(100, y_percent))
            except (ValueError, IndexError):
                x_percent = 0
                y_percent = 0
color_nodes.append({
'color': color,
'position': (x_percent / 100.0, y_percent / 100.0)
})
if not color_nodes:
color_nodes = [
{'color': (255, 0, 0), 'position': (0, 0)},
{'color': (0, 0, 255), 'position': (1, 1)}
]
return color_nodes
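`parse_color_list` accepts nodes of the form `color|(x%,y%)` joined by `+`. A standalone sketch of that node grammar, leaving color strings unparsed (the real code hands them to `ColorHandle.parse_color`):

```python
from typing import List, Tuple

def parse_nodes(s: str) -> List[Tuple[str, Tuple[float, float]]]:
    """Parse '[color|(x%,y%)+color|(x%,y%)]' into (color_str, (x, y)) pairs.

    Positions are clamped to [0, 100] and scaled to [0, 1]; a missing or
    malformed position defaults to (0, 0)."""
    nodes = []
    for part in s.strip('[]').split('+'):
        color_str, _, pos = part.partition('|')
        pos = pos.replace('(', '').replace(')', '') or '0,0'
        try:
            x, y = (float(v.strip().replace('%', '')) for v in pos.split(','))
        except ValueError:
            x, y = 0.0, 0.0
        x = max(0.0, min(100.0, x))
        y = max(0.0, min(100.0, y))
        nodes.append((color_str.strip(), (x / 100.0, y / 100.0)))
    return nodes
```

Note the splitting order matters: `+` separates nodes before `|` separates color from position, so commas inside `rgb(...)` never interfere.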
    def create_gradient(self, width: int, height: int, color_nodes: List[Dict]) -> Image.Image:
        """Create a gradient image.

        Args:
            width: image width
            height: image height
            color_nodes: list of color nodes

        Returns:
            Image.Image: the gradient image
        """
if len(color_nodes) == 1:
return Image.new('RGB', (width, height), color_nodes[0]['color'])
elif len(color_nodes) == 2:
return self._create_linear_gradient(width, height, color_nodes)
else:
return self._create_radial_gradient(width, height, color_nodes)
def _create_linear_gradient(self, width: int, height: int, color_nodes: List[Dict]) -> Image.Image:
"""创建线性渐变"""
color1 = color_nodes[0]['color']
color2 = color_nodes[1]['color']
pos1 = color_nodes[0]['position']
pos2 = color_nodes[1]['position']
if self.has_numpy:
return self._create_linear_gradient_numpy(width, height, color1, color2, pos1, pos2)
else:
return self._create_linear_gradient_pil(width, height, color1, color2, pos1, pos2)
def _create_linear_gradient_numpy(self, width: int, height: int,
color1: Tuple, color2: Tuple,
pos1: Tuple, pos2: Tuple) -> Image.Image:
"""使用numpy创建线性渐变"""
# 创建坐标网格
x = np.linspace(0, 1, width)
y = np.linspace(0, 1, height)
xx, yy = np.meshgrid(x, y)
# 计算渐变方向
dx = pos2[0] - pos1[0]
dy = pos2[1] - pos1[1]
length_sq = dx * dx + dy * dy
if length_sq > 0:
# 计算投影参数
t = ((xx - pos1[0]) * dx + (yy - pos1[1]) * dy) / length_sq
t = np.clip(t, 0, 1)
else:
t = np.zeros_like(xx)
# 插值颜色
r = color1[0] + (color2[0] - color1[0]) * t
g = color1[1] + (color2[1] - color1[1]) * t
b = color1[2] + (color2[2] - color1[2]) * t
# 创建图像
gradient_array = np.stack([r, g, b], axis=-1).astype(np.uint8)
return Image.fromarray(gradient_array)
def _create_linear_gradient_pil(self, width: int, height: int,
color1: Tuple, color2: Tuple,
pos1: Tuple, pos2: Tuple) -> Image.Image:
"""使用PIL创建线性渐变没有numpy时使用"""
gradient = Image.new('RGB', (width, height))
draw = ImageDraw.Draw(gradient)
        # Determine the gradient direction
        if abs(pos1[0] - pos2[0]) < 0.01:  # vertical gradient
y1 = int(pos1[1] * (height - 1))
y2 = int(pos2[1] * (height - 1))
if y2 < y1:
y1, y2 = y2, y1
color1, color2 = color2, color1
if y2 > y1:
for y in range(height):
if y <= y1:
fill_color = color1
elif y >= y2:
fill_color = color2
else:
ratio = (y - y1) / (y2 - y1)
r = int(color1[0] + (color2[0] - color1[0]) * ratio)
g = int(color1[1] + (color2[1] - color1[1]) * ratio)
b = int(color1[2] + (color2[2] - color1[2]) * ratio)
fill_color = (r, g, b)
draw.line([(0, y), (width, y)], fill=fill_color)
else:
draw.rectangle([0, 0, width, height], fill=color1)
        elif abs(pos1[1] - pos2[1]) < 0.01:  # horizontal gradient
x1 = int(pos1[0] * (width - 1))
x2 = int(pos2[0] * (width - 1))
if x2 < x1:
x1, x2 = x2, x1
color1, color2 = color2, color1
if x2 > x1:
for x in range(width):
if x <= x1:
fill_color = color1
elif x >= x2:
fill_color = color2
else:
ratio = (x - x1) / (x2 - x1)
r = int(color1[0] + (color2[0] - color1[0]) * ratio)
g = int(color1[1] + (color2[1] - color1[1]) * ratio)
b = int(color1[2] + (color2[2] - color1[2]) * ratio)
fill_color = (r, g, b)
draw.line([(x, 0), (x, height)], fill=fill_color)
else:
draw.rectangle([0, 0, width, height], fill=color1)
        else:  # diagonal gradient (simplified as top-left to bottom-right)
for y in range(height):
for x in range(width):
distance = (x/width + y/height) / 2
r = int(color1[0] + (color2[0] - color1[0]) * distance)
g = int(color1[1] + (color2[1] - color1[1]) * distance)
b = int(color1[2] + (color2[2] - color1[2]) * distance)
draw.point((x, y), fill=(r, g, b))
return gradient
def _create_radial_gradient(self, width: int, height: int, color_nodes: List[Dict]) -> Image.Image:
"""创建径向渐变"""
if self.has_numpy and len(color_nodes) > 2:
return self._create_radial_gradient_numpy(width, height, color_nodes)
else:
return self._create_simple_gradient(width, height, color_nodes)
def _create_radial_gradient_numpy(self, width: int, height: int, color_nodes: List[Dict]) -> Image.Image:
"""使用numpy创建径向渐变多色"""
# 创建坐标网格
x = np.linspace(0, 1, width)
y = np.linspace(0, 1, height)
xx, yy = np.meshgrid(x, y)
        # Collect node colors and positions
        positions = np.array([node['position'] for node in color_nodes])
        colors = np.array([node['color'] for node in color_nodes])
        # Distance from every pixel to every node
        distances = np.sqrt((xx[:, :, np.newaxis] - positions[np.newaxis, np.newaxis, :, 0]) ** 2 +
                            (yy[:, :, np.newaxis] - positions[np.newaxis, np.newaxis, :, 1]) ** 2)
        # Find the two nearest nodes per pixel
        sorted_indices = np.argsort(distances, axis=2)
        nearest_idx = sorted_indices[:, :, 0]
        second_idx = sorted_indices[:, :, 1]
        # Look up their colors
        nearest_colors = colors[nearest_idx]
        second_colors = colors[second_idx]
        # Take their distances and turn them into weights
        nearest_dist = np.take_along_axis(distances, np.expand_dims(nearest_idx, axis=2), axis=2)[:, :, 0]
        second_dist = np.take_along_axis(distances, np.expand_dims(second_idx, axis=2), axis=2)[:, :, 0]
        total_dist = nearest_dist + second_dist
        mask = total_dist > 0
        weight1 = np.zeros_like(nearest_dist)
        weight1[mask] = second_dist[mask] / total_dist[mask]
        weight2 = 1 - weight1
        # Interpolate the channels
r = nearest_colors[:, :, 0] * weight1 + second_colors[:, :, 0] * weight2
g = nearest_colors[:, :, 1] * weight1 + second_colors[:, :, 1] * weight2
b = nearest_colors[:, :, 2] * weight1 + second_colors[:, :, 2] * weight2
gradient_array = np.stack([r, g, b], axis=-1).astype(np.uint8)
return Image.fromarray(gradient_array)
def _create_simple_gradient(self, width: int, height: int, color_nodes: List[Dict]) -> Image.Image:
"""创建简化渐变没有numpy或多色时使用"""
gradient = Image.new('RGB', (width, height))
draw = ImageDraw.Draw(gradient)
if len(color_nodes) >= 2:
            # Build a simple gradient from the first and last colors
color1 = color_nodes[0]['color']
color2 = color_nodes[-1]['color']
            # Inspect how the nodes are distributed
x_positions = [node['position'][0] for node in color_nodes]
y_positions = [node['position'][1] for node in color_nodes]
if all(abs(x - x_positions[0]) < 0.01 for x in x_positions):
                # vertical gradient
for y in range(height):
ratio = y / (height - 1) if height > 1 else 0
r = int(color1[0] + (color2[0] - color1[0]) * ratio)
g = int(color1[1] + (color2[1] - color1[1]) * ratio)
b = int(color1[2] + (color2[2] - color1[2]) * ratio)
draw.line([(0, y), (width, y)], fill=(r, g, b))
else:
                # horizontal gradient
for x in range(width):
ratio = x / (width - 1) if width > 1 else 0
r = int(color1[0] + (color2[0] - color1[0]) * ratio)
g = int(color1[1] + (color2[1] - color1[1]) * ratio)
b = int(color1[2] + (color2[2] - color1[2]) * ratio)
draw.line([(x, 0), (x, height)], fill=(r, g, b))
else:
            # solid color
draw.rectangle([0, 0, width, height], fill=color_nodes[0]['color'])
return gradient
    def create_simple_gradient(self, width: int, height: int,
                               start_color: Tuple, end_color: Tuple,
                               direction: str = 'vertical') -> Image.Image:
        """Create a simple two-color gradient.

        Args:
            width: image width
            height: image height
            start_color: starting color
            end_color: ending color
            direction: gradient direction, 'vertical', 'horizontal', or 'diagonal'

        Returns:
            Image.Image: the gradient image
        """
if direction == 'vertical':
return self._create_vertical_gradient(width, height, start_color, end_color)
elif direction == 'horizontal':
return self._create_horizontal_gradient(width, height, start_color, end_color)
else: # diagonal
return self._create_diagonal_gradient(width, height, start_color, end_color)
def _create_vertical_gradient(self, width: int, height: int,
color1: Tuple, color2: Tuple) -> Image.Image:
"""创建垂直渐变"""
gradient = Image.new('RGB', (width, height))
draw = ImageDraw.Draw(gradient)
for y in range(height):
ratio = y / (height - 1) if height > 1 else 0
r = int(color1[0] + (color2[0] - color1[0]) * ratio)
g = int(color1[1] + (color2[1] - color1[1]) * ratio)
b = int(color1[2] + (color2[2] - color1[2]) * ratio)
draw.line([(0, y), (width, y)], fill=(r, g, b))
return gradient
def _create_horizontal_gradient(self, width: int, height: int,
color1: Tuple, color2: Tuple) -> Image.Image:
"""创建水平渐变"""
gradient = Image.new('RGB', (width, height))
draw = ImageDraw.Draw(gradient)
for x in range(width):
ratio = x / (width - 1) if width > 1 else 0
r = int(color1[0] + (color2[0] - color1[0]) * ratio)
g = int(color1[1] + (color2[1] - color1[1]) * ratio)
b = int(color1[2] + (color2[2] - color1[2]) * ratio)
draw.line([(x, 0), (x, height)], fill=(r, g, b))
return gradient
def _create_diagonal_gradient(self, width: int, height: int,
color1: Tuple, color2: Tuple) -> Image.Image:
"""创建对角渐变"""
if self.has_numpy:
return self._create_diagonal_gradient_numpy(width, height, color1, color2)
else:
            return self._create_horizontal_gradient(width, height, color1, color2)  # degrade to a horizontal gradient
def _create_diagonal_gradient_numpy(self, width: int, height: int,
color1: Tuple, color2: Tuple) -> Image.Image:
"""使用numpy创建对角渐变"""
x = np.linspace(0, 1, width)
y = np.linspace(0, 1, height)
xx, yy = np.meshgrid(x, y)
distance = (xx + yy) / 2.0
r = color1[0] + (color2[0] - color1[0]) * distance
g = color1[1] + (color2[1] - color1[1]) * distance
b = color1[2] + (color2[2] - color1[2]) * distance
gradient_array = np.stack([r, g, b], axis=-1).astype(np.uint8)
return Image.fromarray(gradient_array)
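The numpy linear gradient above projects each pixel onto the axis between the two node positions, clamps the projection parameter to [0, 1], and interpolates the channels. For a single pixel the math reduces to this pure-Python sketch:

```python
from typing import Tuple

def gradient_t(px: float, py: float,
               pos1: Tuple[float, float], pos2: Tuple[float, float]) -> float:
    """Projection parameter of pixel (px, py) onto the pos1 -> pos2 axis, clamped to [0, 1]."""
    dx, dy = pos2[0] - pos1[0], pos2[1] - pos1[1]
    length_sq = dx * dx + dy * dy
    if length_sq == 0:
        # Coincident nodes: the whole image takes the first color
        return 0.0
    t = ((px - pos1[0]) * dx + (py - pos1[1]) * dy) / length_sq
    return max(0.0, min(1.0, t))

def lerp_color(c1: tuple, c2: tuple, t: float) -> tuple:
    """Per-channel linear interpolation between two RGB colors."""
    return tuple(int(a + (b - a) * t) for a, b in zip(c1, c2))
```

The numpy version computes exactly this `t` for every pixel at once via the meshgrid; clamping is what keeps pixels "before" the first node and "after" the second node at the endpoint colors.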


@@ -0,0 +1,182 @@
import asyncio
from dataclasses import dataclass
from hashlib import md5
import time
from nonebot import logger
from nonebot_plugin_apscheduler import driver
from konabot.common.path import DATA_PATH
import os
from PIL import Image
from io import BytesIO
IMAGE_PATH = DATA_PATH / "temp" / "images"
@dataclass
class ImageResource:
filename: str
expire: int
@dataclass
class StorageImage:
name: str
    resources: dict[str, dict[str, ImageResource]]  # {group_id: {qq_id: ImageResource}}
class ImageStorager:
images_pool: dict[str,StorageImage] = {}
    max_storage: int = 10 * 1024 * 1024  # max size of a single stored image: 10 MB
    max_image_count: int = 200  # max number of stored images
@staticmethod
def init():
if not IMAGE_PATH.exists():
IMAGE_PATH.mkdir(parents=True, exist_ok=True)
@staticmethod
def delete_path_image(name: str):
resource_path = IMAGE_PATH / name
if resource_path.exists():
os.remove(resource_path)
@staticmethod
async def clear_all_image():
        # Remove every image file under the temp directory
for file in os.listdir(IMAGE_PATH):
file_path = IMAGE_PATH / file
if file_path.is_file():
os.remove(file_path)
@classmethod
async def clear_expire_image(cls):
        # Drop expired images; keep survivors in a list, and if they exceed the cap evict the earliest-expiring
remaining_images = []
current_time = time.time()
for name, storage_image in list(ImageStorager.images_pool.items()):
for group_id, resources in list(storage_image.resources.items()):
for qq_id, resource in list(resources.items()):
if resource.expire < current_time:
del storage_image.resources[group_id][qq_id]
cls.delete_path_image(name)
else:
remaining_images.append((name, group_id, qq_id, resource.expire))
if not storage_image.resources:
del ImageStorager.images_pool[name]
        # Over the cap: sort by expire time and evict the earliest-expiring images
        if len(remaining_images) > ImageStorager.max_image_count:
            remaining_images.sort(key=lambda x: x[3])  # ascending expire time
            to_delete = len(remaining_images) - ImageStorager.max_image_count
            for i in range(to_delete):
                name, group_id, qq_id, _ = remaining_images[i]
                del ImageStorager.images_pool[name].resources[group_id][qq_id]
                cls.delete_path_image(name)
logger.info("过期图片清理完成")
@classmethod
def _add_to_pool(cls, filename: str, name: str, group_id: str, qq_id: str, expire: int = 36000):
expire_time = time.time() + expire
if name not in cls.images_pool:
cls.images_pool[name] = StorageImage(name=name,resources={})
if group_id not in cls.images_pool[name].resources:
cls.images_pool[name].resources[group_id] = {}
cls.images_pool[name].resources[group_id][qq_id] = ImageResource(filename=filename, expire=expire_time)
logger.debug(f"{cls.images_pool}")
@classmethod
def save_image(cls, image: bytes, name: str, group_id: str, qq_id: str) -> None:
"""
以哈希值命名保存图片,并返回图片资源信息
"""
# 检测图像大小,不得超过 10 MB
if len(image) > cls.max_storage:
raise ValueError("图片大小超过 10 MB 限制")
hash_name = md5(image).hexdigest()
ext = os.path.splitext(name)[1]
file_name = f"{hash_name}{ext}"
full_path = IMAGE_PATH / file_name
with open(full_path, "wb") as f:
f.write(image)
        # Register the file in images_pool
logger.debug(f"Image saved: {file_name} for group {group_id}, qq {qq_id}")
cls._add_to_pool(file_name, name, group_id, qq_id)
@classmethod
def save_image_by_pil(cls, image: Image.Image, name: str, group_id: str, qq_id: str) -> None:
"""
以哈希值命名保存图片,并返回图片资源信息
"""
img_byte_arr = BytesIO()
# 如果图片是动图,保存为 GIF 格式
if getattr(image, "is_animated", False):
image.save(img_byte_arr, format="GIF", save_all=True, loop=0)
else:
image.save(img_byte_arr, format=image.format or "PNG")
img_bytes = img_byte_arr.getvalue()
cls.save_image(img_bytes, name, group_id, qq_id)
@classmethod
    def load_image(cls, name: str, group_id: str, qq_id: str) -> "Image.Image | None":
logger.debug(f"Loading image: {name} for group {group_id}, qq {qq_id}")
if name not in cls.images_pool:
logger.debug(f"Image {name} not found in pool")
return None
if group_id not in cls.images_pool[name].resources:
logger.debug(f"No resources for group {group_id} in image {name}")
return None
        # Look up the sender's resource; if absent, fall back to the first resource in the same group
if qq_id not in cls.images_pool[name].resources[group_id]:
first_qq_id = next(iter(cls.images_pool[name].resources[group_id]))
qq_id = first_qq_id
resource = cls.images_pool[name].resources[group_id][qq_id]
resource_path = IMAGE_PATH / resource.filename
logger.debug(f"Image path: {resource_path}")
return Image.open(resource_path)
class ImageStoragerManager:
    def __init__(self, interval: int = 300):  # runs every 5 minutes by default
self.interval = interval
self._clear_task = None
self._running = False
async def start_auto_clear(self):
"""启动自动任务"""
# 先清理一次
await ImageStorager.clear_all_image()
self._running = True
self._clear_task = asyncio.create_task(self._auto_clear_loop())
logger.info(f"自动清理任务已启动,间隔: {self.interval}")
async def stop_auto_clear(self):
"""停止自动清理任务"""
if self._clear_task:
self._running = False
self._clear_task.cancel()
try:
await self._clear_task
except asyncio.CancelledError:
pass
logger.info("自动清理任务已停止")
else:
logger.warning("没有正在运行的自动清理任务")
async def _auto_clear_loop(self):
"""自动清理循环"""
while self._running:
try:
await asyncio.sleep(self.interval)
await ImageStorager.clear_expire_image()
except asyncio.CancelledError:
break
except Exception as e:
logger.error(f"定时清理失败: {e}")
image_manager = ImageStoragerManager(interval=300)  # clear every 5 minutes
@driver.on_startup
async def init_image_storage():
ImageStorager.init()
    # Start the periodic task that clears expired images
await image_manager.start_auto_clear()
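`clear_expire_image` combines time-based expiry with a count cap that evicts the earliest-expiring survivors. The same policy on a flat `{name: expire_time}` dict, as an illustrative sketch with hypothetical data:

```python
import time
from typing import Dict, Optional

def clear_expired(pool: Dict[str, float], max_count: int,
                  now: Optional[float] = None) -> Dict[str, float]:
    """Drop expired entries, then enforce max_count by evicting the earliest-expiring."""
    now = time.time() if now is None else now
    alive = {name: exp for name, exp in pool.items() if exp >= now}
    if len(alive) > max_count:
        # Sort ascending by expire time and keep only the longest-lived entries
        survivors = sorted(alive.items(), key=lambda kv: kv[1])[len(alive) - max_count:]
        alive = dict(survivors)
    return alive

# Hypothetical pool: expire timestamps in seconds
pool = {"a": 100.0, "b": 300.0, "c": 200.0, "d": 50.0}
```

Evicting by earliest expiry (rather than, say, least-recently-used) keeps the policy cheap: no access tracking is needed, only the timestamps already stored for expiry.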


@@ -0,0 +1,125 @@
import cv2
from nonebot import logger
import numpy as np
from shapely.geometry import Polygon
from shapely.ops import unary_union
def fix_with_shapely(contours: list) -> list:
    """
    Repair complex self-intersections with Shapely and merge the contours into one.
    """
fixed_polygons = []
for contour in contours:
        # Reshape the contour into an (N, 2) point array
contour_array = contour.reshape(-1, 2)
        # Convert to a Shapely polygon; buffer(0) repairs invalid geometry
polygon = Polygon(contour_array)
if not polygon.is_valid:
polygon = polygon.buffer(0)
fixed_polygons.append(polygon)
    # Merge all contours into a single polygon
if len(fixed_polygons) >= 1:
merged_polygon = unary_union(fixed_polygons)
        if merged_polygon.geom_type == 'Polygon':
            merged_points = np.array(merged_polygon.exterior.coords, dtype=np.int32)
        elif merged_polygon.geom_type == 'MultiPolygon':
            # Keep only the largest component
            largest = max(merged_polygon.geoms, key=lambda p: p.area)
            merged_points = np.array(largest.exterior.coords, dtype=np.int32)
        else:
            # Degenerate result (line or point): nothing usable remains
            logger.warning(f"Unexpected geometry type after union: {merged_polygon.geom_type}")
            return [np.array([], dtype=np.int32).reshape(0, 1, 2)]
        return [merged_points.reshape(-1, 1, 2)]
else:
logger.warning("No valid contours found after fixing with Shapely.")
return [np.array([], dtype=np.int32).reshape(0, 1, 2)]
def expand_contours(contours, stroke_width):
    """
    Expand contours outward by the given width.

    Args:
        contours: list of OpenCV contours
        stroke_width: expansion width in pixels

    Returns:
        the expanded contour list
    """
expanded_contours = []
for cnt in contours:
        # Flatten the contour into an (N, 2) point list
        points = cnt.reshape(-1, 2).astype(np.float32)
        n = len(points)
        if n < 3:
            continue  # a polygon needs at least three points
expanded_points = []
        for i in range(n):
            # Current point and its two neighbours
            p_curr = points[i]
            p_prev = points[(i - 1) % n]
            p_next = points[(i + 1) % n]
            # Vectors along the two incident edges
            v1 = p_curr - p_prev  # previous edge: prev -> curr
            v2 = p_next - p_curr  # next edge: curr -> next
            # Edge lengths
            norm1 = np.linalg.norm(v1)
            norm2 = np.linalg.norm(v2)
            if norm1 == 0 or norm2 == 0:
                # Degenerate edge: push straight along the surviving edge's normal
                edge_dir = np.array([0, 0])
                if norm1 > 0:
                    edge_dir = v1 / norm1
                elif norm2 > 0:
                    edge_dir = v2 / norm2
                normal = np.array([-edge_dir[1], edge_dir[0]])
                expanded_point = p_curr + normal * stroke_width
            else:
                # Unit edge directions
                v1_norm = v1 / norm1
                v2_norm = v2 / norm2
                # Unit normals of the two edges (pointing outward)
                n1 = np.array([-v1_norm[1], v1_norm[0]])
                n2 = np.array([-v2_norm[1], v2_norm[0]])
                # Bisector direction (sum of the two normals)
                bisector = n1 + n2
                bisector_norm = np.linalg.norm(bisector)
                if bisector_norm == 0:
                    # The edges double back on themselves: fall back to one normal
                    expanded_point = p_curr + n1 * stroke_width
                else:
                    bisector_normalized = bisector / bisector_norm
                    # Offset distance along the bisector (miter join)
                    cos_angle = np.dot(v1_norm, v2_norm)
                    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
                    if abs(np.pi - angle) < 1e-6:
                        # Near-reversal spike: clamp to the plain stroke width
                        offset_distance = stroke_width
                    else:
                        # Miter length for a turn of `angle` between edge directions.
                        # (stroke_width / sin(angle / 2) diverges for straight runs,
                        # where angle is 0; cos is the correct factor here.)
                        offset_distance = stroke_width / np.cos(angle / 2)
                    expanded_point = p_curr + bisector_normalized * offset_distance
expanded_points.append(expanded_point)
        # Convert the expanded points back to integer coordinates
expanded_cnt = np.array(expanded_points, dtype=np.float32).reshape(-1, 1, 2)
expanded_contours.append(expanded_cnt.astype(np.int32))
expanded_contours = fix_with_shapely(expanded_contours)
return expanded_contours
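Expanding a single corner offsets the vertex along the bisector of the two edge normals by the miter length `w / cos(angle / 2)`, where `angle` is the turn between the edge directions. A one-corner sketch using only the `math` module (no OpenCV or numpy):

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def offset_corner(p_prev: Point, p_curr: Point, p_next: Point, w: float) -> Point:
    """Offset p_curr by w along the bisector of its two edge normals (miter join)."""
    v1 = (p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    v2 = (p_next[0] - p_curr[0], p_next[1] - p_curr[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    u1 = (v1[0] / n1, v1[1] / n1)
    u2 = (v2[0] / n2, v2[1] / n2)
    # Left normals of the two edges (same convention as n1/n2 in expand_contours)
    l1 = (-u1[1], u1[0])
    l2 = (-u2[1], u2[0])
    bx, by = l1[0] + l2[0], l1[1] + l2[1]
    bn = math.hypot(bx, by)
    if bn < 1e-12:
        # Edges double back on themselves: fall back to one normal
        return (p_curr[0] + l1[0] * w, p_curr[1] + l1[1] * w)
    cos_phi = u1[0] * u2[0] + u1[1] * u2[1]
    phi = math.acos(max(-1.0, min(1.0, cos_phi)))
    # Miter length: w / cos(phi / 2); equals w on straight runs (phi = 0)
    d = w / math.cos(phi / 2)
    return (p_curr[0] + bx / bn * d, p_curr[1] + by / bn * d)
```

On a right-angle corner the miter length is `w * sqrt(2)`, landing the offset vertex diagonally off the corner; on a straight run it degenerates to a plain perpendicular offset of `w`.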

Some files were not shown because too many files have changed in this diff