
*: support logging session connect attrs to slow query log (#66617) #67323

Open
ti-chi-bot wants to merge 2 commits into pingcap:release-8.5 from ti-chi-bot:cherry-pick-66617-to-release-8.5

Conversation


@ti-chi-bot ti-chi-bot commented Mar 26, 2026

This is an automated cherry-pick of #66617

What problem does this PR solve?

Issue Number: close #66616

Problem Summary:
In a Kubernetes deployment, container IPs change frequently. The slow query log currently records only the client IP, which is insufficient to trace the real source or application behind a SQL statement. Recording SESSION_CONNECT_ATTRS in the slow log provides critical client metadata (such as app_name, _os, and _client_name) injected during the connection handshake.

What changed and how does it work?

This PR logs SESSION_CONNECT_ATTRS in the slow query log and exposes them through system tables:

  1. Slow Query Log Enhancement:

    • Added a Session_connect_attrs field in JSON format to the slow query file (pkg/sessionctx/variable/slow_log.go).
    • Added a new SESSION_CONNECT_ATTRS JSON column to both the SLOW_QUERY and CLUSTER_SLOW_QUERY system tables (pkg/infoschema/tables.go).
    • Updated the slow log parser to correctly parse and deserialize this JSON field (pkg/executor/slow_query.go).
  2. System Variables:

    • Converted performance_schema_session_connect_attrs_size from a noop variable to a functional GLOBAL variable (pkg/sessionctx/variable/sysvar.go), controlling the maximum total byte size of attributes per connection (default: 4096).
  3. Status Variables & Truncation Logic:

    • Implemented a two-tier restriction approach during the handshake: a hard limit of 1 MiB to reject unusually large data, and a soft limit driven by performance_schema_session_connect_attrs_size (pkg/server/internal/parse/parse.go).
    • Added two new GLOBAL status variables: Performance_schema_session_connect_attrs_longest_seen and Performance_schema_session_connect_attrs_lost to monitor attribute size and truncation stats.
    • When the attribute list is truncated, attributes that exceed the limit are discarded and a _truncated attribute is injected to record the number of discarded bytes.
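The soft-limit behavior in item 3 can be sketched as follows. This is a minimal illustration: applyAttrsPolicy is a hypothetical helper name, and the exact byte accounting is an assumption, not the actual code in pkg/server/internal/parse/parse.go.

```go
package main

import (
	"fmt"
	"strconv"
)

// applyAttrsPolicy keeps attribute pairs, in order, until adding one would
// exceed the soft limit in bytes; the rest are discarded, and a "_truncated"
// attribute records how many bytes were dropped. A negative limit means no
// soft limit. Hypothetical sketch, not the PR's implementation.
func applyAttrsPolicy(pairs [][2]string, limit int) map[string]string {
	attrs := make(map[string]string, len(pairs))
	used, lost := 0, 0
	for _, kv := range pairs {
		size := len(kv[0]) + len(kv[1])
		if limit >= 0 && used+size > limit {
			lost += size
			continue
		}
		used += size
		attrs[kv[0]] = kv[1]
	}
	if lost > 0 {
		attrs["_truncated"] = strconv.Itoa(lost)
	}
	return attrs
}

func main() {
	// Mirrors the review's test case: limit 5, pairs "ab":"cd" and "ef":"gh".
	got := applyAttrsPolicy([][2]string{{"ab", "cd"}, {"ef", "gh"}}, 5)
	fmt.Println(got["ab"], got["_truncated"]) // cd 4
}
```

With limit 5, the first pair (4 bytes) fits, the second (4 bytes) does not, so it is dropped and `_truncated` is set to `"4"`, matching the truncation test quoted later in this thread.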

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)

Environment and connection settings

  • TiDB start mode: unistore
  • TiDB listen: 127.0.0.1:4000
  • TiDB status: 127.0.0.1:10080
  • Slow log file: /tmp/tidb-slow-attrs-demo.log
  • Client driver: mysql-connector-python 9.6.0
  • Client connection config:
    • host: 127.0.0.1
    • port: 4000
    • user: root
    • conn_attrs (normal case): {"app_name":"slowlog_demo","client_case":"attrs_normal"}
    • conn_attrs (truncation case): {"app_name":"slowlog_demo","client_case":"attrs_trunc","extra":"x"*256}
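For context, the conn_attrs above travel in the MySQL handshake as a sequence of length-encoded strings, `[len][key][len][value]` per pair. A minimal sketch of that encoding, handling only the single-byte length form (strings shorter than 251 bytes); encodeAttrs is a hypothetical name, not the server's code:

```go
package main

import (
	"bytes"
	"fmt"
)

// encodeAttrs builds a connect-attrs payload as [len][key][len][value]
// pairs. Only the single-byte length-encoded form (strings shorter than
// 251 bytes) is handled; this is an illustration, not the server's parser.
func encodeAttrs(pairs [][2]string) []byte {
	var buf bytes.Buffer
	for _, kv := range pairs {
		for _, s := range kv {
			buf.WriteByte(byte(len(s))) // single-byte length prefix
			buf.WriteString(s)
		}
	}
	return buf.Bytes()
}

func main() {
	payload := encodeAttrs([][2]string{{"app_name", "slowlog_demo"}})
	// 1 + len("app_name") + 1 + len("slowlog_demo") = 22 bytes
	fmt.Println(len(payload))
}
```

The truncation tests quoted later in the review build their payloads in exactly this shape before prefixing the total length.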

Variables used in test

set global tidb_slow_log_threshold=0;
set global performance_schema_session_connect_attrs_size=65536;
set global performance_schema_session_connect_attrs_size=0;

Test SQL

select /*slowlog_attr_normal*/ 1;
select /*slowlog_attr_trunc*/ 2;

Virtual table query SQL

select query, cast(Session_connect_attrs as char) as Session_connect_attrs
from information_schema.slow_query
where query like 'select /*slowlog_attr_normal*/%' or query like 'select /*slowlog_attr_trunc*/%'
order by time desc;

select query, cast(Session_connect_attrs as char) as Session_connect_attrs
from information_schema.cluster_slow_query
where query like 'select /*slowlog_attr_normal*/%' or query like 'select /*slowlog_attr_trunc*/%'
order by time desc;

Observed output: information_schema.slow_query

+-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| query                             | Session_connect_attrs                                                                                                                                                                                                                                                    |
+-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| select /*slowlog_attr_trunc*/ 2;  | {"_truncated": "478"}                                                                                                                                                                                                                                                    |
| select /*slowlog_attr_normal*/ 1; | {"_client_name": "libmysql", "_client_version": "9.6.0", "_connector_license": "GPL-2.0", "_connector_name": "mysql-connector-python", "_connector_version": "9.6.0", "_os": "macos15", "_pid": "46202", "_platform": "arm64", "_source_host": "bogon", "app_name": "slowlog_demo", "client_case": "attrs_normal"} |
+-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Observed output: information_schema.cluster_slow_query

+-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| query                             | Session_connect_attrs                                                                                                                                                                                                                                                    |
+-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| select /*slowlog_attr_trunc*/ 2;  | {"_truncated": "478"}                                                                                                                                                                                                                                                    |
| select /*slowlog_attr_normal*/ 1; | {"_client_name": "libmysql", "_client_version": "9.6.0", "_connector_license": "GPL-2.0", "_connector_name": "mysql-connector-python", "_connector_version": "9.6.0", "_os": "macos15", "_pid": "46202", "_platform": "arm64", "_source_host": "bogon", "app_name": "slowlog_demo", "client_case": "attrs_normal"} |
+-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  • No need to test
    • I checked and no code files have been changed.
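The Session_connect_attrs JSON shown in the outputs above round-trips through the slow-log parser described in the change list. A minimal sketch of deserializing such a line; the `# Session_connect_attrs:` prefix matches the field added by the PR, but parseAttrsLine itself is a hypothetical helper:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parseAttrsLine strips the slow-log field marker and unmarshals the JSON
// payload into a string map. Hypothetical helper for illustration; the real
// parsing lives in pkg/executor/slow_query.go.
func parseAttrsLine(line string) (map[string]string, error) {
	const prefix = "# Session_connect_attrs: "
	attrs := map[string]string{}
	payload := strings.TrimPrefix(line, prefix)
	if err := json.Unmarshal([]byte(payload), &attrs); err != nil {
		return nil, err
	}
	return attrs, nil
}

func main() {
	attrs, err := parseAttrsLine(`# Session_connect_attrs: {"_truncated": "478"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(attrs["_truncated"]) // 478
}
```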

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Please refer to Release Notes Language Style Guide to write a quality release note.

Improve observability of slow queries by logging client connection attributes (Session_connect_attrs) in slow query logs and exposing them in information_schema.slow_query and information_schema.cluster_slow_query; performance_schema_session_connect_attrs_size now controls attribute truncation behavior, with truncated bytes recorded via "_truncated".

Summary by CodeRabbit

  • New Features
    • Session connection attributes are now captured, recorded in slow query logs, and exposed in information_schema slow-query tables.
    • New global setting performance_schema_session_connect_attrs_size (default 4KB, range -1 to 64KB) controls attribute size, with automatic truncation and a _truncated indicator.
  • Tests
    • Added coverage and integration tests validating capture, truncation, logging, and queryability of session connect attributes.

@ti-chi-bot ti-chi-bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. ok-to-test Indicates a PR is ready to be tested. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. type/cherry-pick-for-release-8.5 This PR is cherry-picked to release-8.5 from a source PR. labels Mar 26, 2026
@ti-chi-bot
Member Author

@jiong-nba This PR has conflicts, so I have put it on hold.
Please resolve them or ask others to resolve them, then comment /unhold to remove the hold label.


ti-chi-bot Bot commented Mar 26, 2026

@ti-chi-bot: If you want to know how to resolve it, please read the guide in the TiDB Dev Guide.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.


coderabbitai Bot commented Mar 26, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: e749e248-a93f-4805-aa8c-b2ab193b6370

📥 Commits

Reviewing files that changed from the base of the PR and between 6053171 and e4f7563.

📒 Files selected for processing (1)
  • pkg/executor/infoschema_reader_test.go
✅ Files skipped from review due to trivial changes (1)
  • pkg/executor/infoschema_reader_test.go

📝 Walkthrough

Walkthrough

This PR records MySQL SESSION_CONNECT_ATTRS in slow query logs and information schema, adds parsing, truncation and size-limit policies for handshake connect-attributes, exposes sysvars/metrics to control/report limits, and adds extensive tests and test utilities validating formatting, parsing, truncation, and cluster-table exposure.

Changes

Cohort / File(s) Summary
Schema & Slow-log constants
pkg/infoschema/tables.go, pkg/sessionctx/variable/slow_log.go
Added Session_connect_attrs slow-log field constant, appended column (JSON) to SLOW_QUERY schema, and extended SlowQueryLogItems to carry session connect attributes.
Slow-log formatting & executor
pkg/executor/adapter_slow_log.go, pkg/executor/slow_query.go
Set slow-log items' SessionConnectAttrs from session connection info; parseLog now recognizes Session_connect_attrs and DB string field and routes values into row columns (JSON parsing for non-empty attrs).
Handshake parsing & policies
pkg/server/internal/parse/parse.go, pkg/server/internal/parse/BUILD.bazel, pkg/server/conn_test.go, pkg/server/internal/parse/parse_test.go
Implemented connection-attrs decoding pipeline, 1 MiB hard cap, truncation policy with injected _truncated, metrics updates, warnings for non-standard underscore keys, defensive bounds checks, and added tests for malformed, truncation, and warning behaviors.
Globals & sysvars
pkg/sessionctx/variable/tidb_vars.go, pkg/sessionctx/variable/sysvar.go, pkg/sessionctx/variable/statusvar.go, pkg/sessionctx/variable/noop.go, pkg/sessionctx/variable/session_test.go, pkg/sessionctx/variable/variable_test.go
Added DefConnectAttrsSize and atomic globals (ConnectAttrsSize, ConnectAttrsLongestSeen, ConnectAttrsLost), new global sysvar performance_schema_session_connect_attrs_size with bounds, status vars wired to atomics, removed noop entry, and tests for sysvar behavior and slow-log formatting.
Infoschema & cluster-table tests
pkg/infoschema/test/clustertablestest/cluster_tables_test.go, pkg/infoschema/test/clustertablestest/tables_test.go, pkg/infoschema/test/clustertablestest/BUILD.bazel
Added tests verifying cluster_slow_query exposes Session_connect_attrs JSON and updated expected rows/column ordinals to account for the new column.
Slow query SQL & executor tests
pkg/executor/slow_query_sql_test.go, pkg/executor/slow_query_test.go, pkg/executor/cluster_table_test.go, pkg/executor/BUILD.bazel
Added tests writing custom slow-log files, verifying Session_connect_attrs presence, JSON extraction, truncated/missing handling; adjusted expected slow-query column values; added test dependency on testutil.
Test utilities
pkg/testkit/testutil/require.go
Added shared default Session_connect_attrs JSON fixture and helpers to generate slow-log line and assert expected keys/values.
Misc test expectation
pkg/executor/infoschema_reader_test.go
Updated a test expected count (5017 → 5019) for information_schema.columns.

Sequence Diagram

sequenceDiagram
    participant Client
    participant Server
    participant Parser
    participant Executor
    participant Logger
    participant Reader

    Client->>Server: Handshake (may include ClientConnectAttrs)
    Server->>Parser: decodeConnAttrs(data)
    Parser->>Parser: applyConnAttrsPolicyAndMetrics (limit, truncate, warn)
    Parser-->>Server: attrs map (+ warningsText)
    Server->>Executor: store attrs in SessionVars.ConnectionInfo.Attributes

    Executor->>Logger: SetSlowLogItems(... SessionConnectAttrs)
    Logger->>Logger: serialize JSON "# Session_connect_attrs: ..."
    Logger-->>Logger: write slow log file

    Reader->>Logger: read slow log entries
    Reader->>Executor: parseLog() finds Session_connect_attrs field
    Executor->>Parser: parse JSON/BinaryJSON -> column value
    Reader-->>Client: exposed via information_schema.slow_query / cluster_slow_query
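The Logger's serialize step in the diagram can be sketched as below. formatAttrsField is a hypothetical name, though the `# Session_connect_attrs:` marker matches the field the PR adds; note that encoding/json sorts map keys, consistent with the alphabetical key order seen in the observed output.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// formatAttrsField serializes the attribute map as JSON behind the slow-log
// field marker. encoding/json emits map keys in sorted order, which matches
// the sorted attrs in the observed slow_query output. Illustrative sketch only.
func formatAttrsField(attrs map[string]string) (string, error) {
	b, err := json.Marshal(attrs)
	if err != nil {
		return "", err
	}
	return "# Session_connect_attrs: " + string(b), nil
}

func main() {
	line, err := formatAttrsField(map[string]string{
		"client_case": "attrs_normal",
		"app_name":    "slowlog_demo",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(line)
}
```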

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • PR #66617: Overlapping modifications touching slow-log/session-connect-attrs handling, parsing, and related tests.

Suggested labels

approved, lgtm

Suggested reviewers

  • yibin87
  • D3Hunter
  • yudongusa
  • nolouch

Poem

🐰 I hopped through handshakes, tiny and spry,
I packed app_name and client in JSON byte-by-byte.
When too big I nibbled—left _truncated behind,
Now slow logs tell where the queries unwind.
Cheers from a rabbit, hopping logs in the night.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 15.79% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Title check ✅ Passed The title 'support logging session connect attrs to slow query log' clearly and specifically describes the main change: adding SESSION_CONNECT_ATTRS logging to slow query logs.
Description check ✅ Passed The PR description is comprehensive, includes the linked issue number (close #66616), explains the problem (container IPs change; need client metadata for traceability), details all changes made across multiple packages, includes test validation steps, and provides a clear release note.
Linked Issues check ✅ Passed The PR fully addresses issue #66616 by implementing SESSION_CONNECT_ATTRS logging in slow query logs with truncation handling, system variables, and status variables to track attribute sizes.
Out of Scope Changes check ✅ Passed All changes are directly related to the stated objective of logging SESSION_CONNECT_ATTRS in slow query logs. A minor column count update in infoschema_reader_test.go reflects the addition of the Session_connect_attrs column.



Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (2)
pkg/testkit/testutil/require.go (1)

92-100: Mark this assertion helper with t.Helper().

Without t.Helper(), failures are attributed to pkg/testkit/testutil/require.go instead of the calling test, which makes these new assertions harder to debug.

Suggested change
 func RequireContainsDefaultSessionConnectAttrs(t testing.TB, attrsText string) {
+	t.Helper()
 	require.Contains(t, attrsText, `"_client_name"`)
 	require.Contains(t, attrsText, `"Go-MySQL-Driver"`)
 	require.Contains(t, attrsText, `"_os"`)
 	require.Contains(t, attrsText, `"linux"`)
 	require.Contains(t, attrsText, `"app_name"`)
 	require.Contains(t, attrsText, `"test_app"`)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/testkit/testutil/require.go` around lines 92 - 100, The helper
RequireContainsDefaultSessionConnectAttrs should call t.Helper() so failures are
reported at the caller; update the function
RequireContainsDefaultSessionConnectAttrs(t testing.TB, attrsText string) to
invoke t.Helper() as the first statement, then keep the existing
require.Contains assertions (references:
RequireContainsDefaultSessionConnectAttrs and parameter t) unchanged.
pkg/sessionctx/variable/tests/variable_test.go (1)

121-146: Use t.Cleanup for temporary sysvar registration.

Both tests mutate the global sysvar registry and only unregister on the happy path. Any earlier require failure leaves the registry polluted for later tests. Apply the same pattern to both mynewsysvar and newinstancesysvar.

♻️ Suggested cleanup pattern
 variable.RegisterSysVar(&sv)
+t.Cleanup(func() {
+	variable.UnregisterSysVar("mynewsysvar")
+})
 require.Len(t, variable.GetSysVars(), count+1)
@@
-// Test unregistration restores previous count
-variable.UnregisterSysVar("mynewsysvar")
 require.Equal(t, len(variable.GetSysVars()), count)

Based on learnings, **/*_test.go : Keep test changes minimal and deterministic; avoid broad golden/testdata churn unless required.

Also applies to: 578-603

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/sessionctx/variable/tests/variable_test.go` around lines 121 - 146, The
test registers a global sysvar via variable.RegisterSysVar(&sv) (and similarly
for newinstancesysvar) but only calls variable.UnregisterSysVar on the happy
path, which can leak state on failures; change the test to call t.Cleanup to
unregister the sysvar immediately after RegisterSysVar (e.g., call
t.Cleanup(func(){ variable.UnregisterSysVar("mynewsysvar") })) so the registry
is always restored even if require assertions fail; do this for both
"mynewsysvar" and "newinstancesysvar" and keep the rest of the assertions
(GetSysVars, GetSysVar, Validate, SetSessionFromHook) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/executor/cluster_table_test.go`:
- Around line 35-39: Remove the leftover git conflict markers and restore both
imports so compilation succeeds: delete the <<<<<<</=======/>>>>>>> lines and
include both "github.com/pingcap/tidb/pkg/util" and
"github.com/pingcap/tidb/pkg/testkit/testutil" in the import block (no alias
needed), ensuring references like util.ProcessInfo used by createRPCServer
remain valid and testkit/testutil helpers stay available.

In `@pkg/executor/slow_query_test.go`:
- Around line 181-187: Remove the Git conflict markers and duplicate blocks in
the expected record string constants in slow_query_test.go and keep the merged
expectation that includes the new Session_connect_attrs column (the variant
containing the extra comma and the "null" token), i.e., delete the <<<<<<<,
=======, and >>>>>>> sections and ensure the test record string used by the test
functions (the expected record literal around the sample lines and the second
occurrence noted at the other location) matches the merged form with
Session_connect_attrs present.

In `@pkg/executor/slow_query.go`:
- Around line 706-714: The parseLog function contains Git conflict markers
around the warnings-handling block; remove the unresolved markers and retain the
added handlers for Session_connect_attrs and DB so the branch after warnings
parsing continues with the new checks. Specifically, in parseLog, delete the
<<<<<<<, =======, and >>>>>>> lines and ensure the else-if chain includes the
blocks that check strings.HasPrefix(line,
variable.SlowLogSessionConnectAttrs+variable.SlowLogSpaceMarkStr) (calling
e.setColumnValue with variable.SlowLogSessionConnectAttrs) and
strings.HasPrefix(line, variable.SlowLogDBStr+variable.SlowLogSpaceMarkStr)
(calling e.setColumnValue with variable.SlowLogDBStr) so the function compiles
and the new columns are parsed.

In `@pkg/infoschema/test/clustertablestest/cluster_tables_test.go`:
- Around line 53-57: Remove the leftover merge markers and restore both imports:
keep "github.com/pingcap/tidb/pkg/util" (used for util.ProcessInfo and
util.SessionManager) and add "github.com/pingcap/tidb/pkg/testkit/testutil";
ensure the import block no longer contains <<<<<<</=======/>>>>>>> and that
usages of util.ProcessInfo/util.SessionManager and testutil remain unchanged and
compile.

In `@pkg/server/conn_test.go`:
- Around line 2359-2718: Add the missing import for package vardef so the tests
referencing vardef.ConnectAttrsSize, vardef.ConnectAttrsLongestSeen, and
vardef.ConnectAttrsLost compile; update the import block in
pkg/server/conn_test.go to include
"github.com/pingcap/tidb/pkg/sessionctx/vardef" (or the module path used in this
repo) alongside the existing imports so the TestParseHandshakeAttrsTruncation
cases can access the vardef symbols.

In `@pkg/sessionctx/variable/statusvar.go`:
- Around line 132-144: Remove the merge conflict markers in
pkg/sessionctx/variable/statusvar.go, add the missing import
"pkg/sessionctx/vardef", and ensure the two new status keys
(Performance_schema_session_connect_attrs_longest_seen and
Performance_schema_session_connect_attrs_lost, or their vardef constants) are
present in the defaultStatus map with appropriate scope (vardef.ScopeGlobal) and
initial int64(0) values; keep the existing Ssl_* entries using
vardef.ScopeGlobal | vardef.ScopeSession so GetScope() can safely dereference
defaultStatus[status].Scope without risking a nil lookup.

In `@pkg/sessionctx/variable/tests/variable_test.go`:
- Around line 1-13: The file pkg/sessionctx/variable/tests/variable_test.go has
an outdated license header (2022); replace the header with the standard TiDB
Apache-2.0 license header used by nearby Go files (copy from an existing *.go
file in the repo) and update the copyright year(s) to the current year or range
as used elsewhere so the header matches project conventions.

---

Nitpick comments:
In `@pkg/sessionctx/variable/tests/variable_test.go`:
- Around line 121-146: The test registers a global sysvar via
variable.RegisterSysVar(&sv) (and similarly for newinstancesysvar) but only
calls variable.UnregisterSysVar on the happy path, which can leak state on
failures; change the test to call t.Cleanup to unregister the sysvar immediately
after RegisterSysVar (e.g., call t.Cleanup(func(){
variable.UnregisterSysVar("mynewsysvar") })) so the registry is always restored
even if require assertions fail; do this for both "mynewsysvar" and
"newinstancesysvar" and keep the rest of the assertions (GetSysVars, GetSysVar,
Validate, SetSessionFromHook) unchanged.

In `@pkg/testkit/testutil/require.go`:
- Around line 92-100: The helper RequireContainsDefaultSessionConnectAttrs
should call t.Helper() so failures are reported at the caller; update the
function RequireContainsDefaultSessionConnectAttrs(t testing.TB, attrsText
string) to invoke t.Helper() as the first statement, then keep the existing
require.Contains assertions (references:
RequireContainsDefaultSessionConnectAttrs and parameter t) unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 696c8d69-3804-4ca1-bf80-4bea77b30e20

📥 Commits

Reviewing files that changed from the base of the PR and between 4fae156 and 1356485.

📒 Files selected for processing (24)
  • pkg/executor/BUILD.bazel
  • pkg/executor/adapter_slow_log.go
  • pkg/executor/cluster_table_test.go
  • pkg/executor/slow_query.go
  • pkg/executor/slow_query_sql_test.go
  • pkg/executor/slow_query_test.go
  • pkg/infoschema/tables.go
  • pkg/infoschema/test/clustertablestest/BUILD.bazel
  • pkg/infoschema/test/clustertablestest/cluster_tables_test.go
  • pkg/infoschema/test/clustertablestest/tables_test.go
  • pkg/server/conn_test.go
  • pkg/server/internal/parse/BUILD.bazel
  • pkg/server/internal/parse/parse.go
  • pkg/server/internal/parse/parse_test.go
  • pkg/sessionctx/vardef/sysvar.go
  • pkg/sessionctx/vardef/tidb_vars.go
  • pkg/sessionctx/variable/noop.go
  • pkg/sessionctx/variable/session_test.go
  • pkg/sessionctx/variable/slow_log.go
  • pkg/sessionctx/variable/statusvar.go
  • pkg/sessionctx/variable/sysvar.go
  • pkg/sessionctx/variable/tests/BUILD.bazel
  • pkg/sessionctx/variable/tests/variable_test.go
  • pkg/testkit/testutil/require.go

Comment thread pkg/executor/cluster_table_test.go Outdated
Comment thread pkg/executor/slow_query_test.go Outdated
Comment thread pkg/executor/slow_query.go Outdated
Comment thread pkg/infoschema/test/clustertablestest/cluster_tables_test.go Outdated
Comment thread pkg/server/conn_test.go
Comment on lines +2359 to +2718
func TestParseHandshakeAttrsTruncation(t *testing.T) {
// Save and restore global atomic counters.
origSize := vardef.ConnectAttrsSize.Load()
origLongest := vardef.ConnectAttrsLongestSeen.Load()
origLost := vardef.ConnectAttrsLost.Load()
defer func() {
vardef.ConnectAttrsSize.Store(origSize)
vardef.ConnectAttrsLongestSeen.Store(origLongest)
vardef.ConnectAttrsLost.Store(origLost)
}()

t.Run("exceeds limit truncation", func(t *testing.T) {
vardef.ConnectAttrsSize.Store(5) // very small limit
vardef.ConnectAttrsLongestSeen.Store(0)
vardef.ConnectAttrsLost.Store(0)

// Construct payload
// Capability: ClientProtocol41 | ClientConnectAtts
var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

var buf bytes.Buffer
// Header (32 bytes)
binary.Write(&buf, binary.LittleEndian, clientCap)
binary.Write(&buf, binary.LittleEndian, uint32(0)) // MaxPacketSize
buf.WriteByte(0) // Collation
buf.Write(make([]byte, 23)) // Reserved

// Body
buf.WriteString("root")
buf.WriteByte(0) // User null-term

buf.WriteByte(0) // Auth null-term

// Attrs: "ab":"cd" (4), "ef":"gh" (4). Total = 8. Limit = 5.
attrsBuf := bytes.NewBuffer(nil)
// K1
attrsBuf.WriteByte(2)
attrsBuf.WriteString("ab")
// V1
attrsBuf.WriteByte(2)
attrsBuf.WriteString("cd")
// K2
attrsBuf.WriteByte(2)
attrsBuf.WriteString("ef")
// V2
attrsBuf.WriteByte(2)
attrsBuf.WriteString("gh")

attrsBytes := attrsBuf.Bytes()
buf.WriteByte(byte(len(attrsBytes))) // length-encoded prefix: 12, the attrs payload length (prefix byte not counted)
// ParseLengthEncodedInt returns this value, and HandshakeResponseBody uses it
// as the number of attribute bytes to consume; those bytes are then passed to
// parseAttrs, which expects `[len][str][len][str]` pairs, so attrsBytes is
// written as-is.
buf.Write(attrsBytes)

data := buf.Bytes()

var p handshake.Response41
offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
require.NoError(t, err)

err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
require.NoError(t, err)

// Check truncation
require.Len(t, p.Attrs, 2)
require.Equal(t, "cd", p.Attrs["ab"])
val, ok := p.Attrs["_truncated"]
require.True(t, ok)
require.Equal(t, "4", val)

require.Equal(t, int64(1), vardef.ConnectAttrsLost.Load())
require.Equal(t, int64(8), vardef.ConnectAttrsLongestSeen.Load()) // 4+4=8
})

t.Run("limit 0 disables collection", func(t *testing.T) {
vardef.ConnectAttrsSize.Store(0)
vardef.ConnectAttrsLongestSeen.Store(0)
vardef.ConnectAttrsLost.Store(0)

var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts
var buf bytes.Buffer
binary.Write(&buf, binary.LittleEndian, clientCap)
binary.Write(&buf, binary.LittleEndian, uint32(0))
buf.WriteByte(0)
buf.Write(make([]byte, 23))
buf.WriteString("root")
buf.WriteByte(0)
buf.WriteByte(0)

attrsBuf := bytes.NewBuffer(nil)
attrsBuf.WriteByte(2)
attrsBuf.WriteString("ab")
attrsBuf.WriteByte(2)
attrsBuf.WriteString("cd")
attrsBytes := attrsBuf.Bytes()
buf.WriteByte(byte(len(attrsBytes)))
buf.Write(attrsBytes)
data := buf.Bytes()

var p handshake.Response41
offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
require.NoError(t, err)

err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
require.NoError(t, err)

require.Len(t, p.Attrs, 0)
require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
require.Equal(t, int64(0), vardef.ConnectAttrsLongestSeen.Load())
})

t.Run("limit 65536 acceptance", func(t *testing.T) {
vardef.ConnectAttrsSize.Store(65536)
vardef.ConnectAttrsLongestSeen.Store(0)
vardef.ConnectAttrsLost.Store(0)

var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts
var buf bytes.Buffer
binary.Write(&buf, binary.LittleEndian, clientCap)
binary.Write(&buf, binary.LittleEndian, uint32(0))
buf.WriteByte(0)
buf.Write(make([]byte, 23))
buf.WriteString("root")
buf.WriteByte(0)
buf.WriteByte(0)

attrsBuf := bytes.NewBuffer(nil)
attrsBuf.WriteByte(2)
attrsBuf.WriteString("ab")
attrsBuf.WriteByte(2)
attrsBuf.WriteString("cd")
attrsBytes := attrsBuf.Bytes()
buf.WriteByte(byte(len(attrsBytes)))
buf.Write(attrsBytes)
data := buf.Bytes()

var p handshake.Response41
offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
require.NoError(t, err)

err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
require.NoError(t, err)

// 65536 should easily accept everything
require.Len(t, p.Attrs, 1)
require.Equal(t, "cd", p.Attrs["ab"])

require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
require.Equal(t, int64(4), vardef.ConnectAttrsLongestSeen.Load())
})

t.Run("limit 1MiB rejection", func(t *testing.T) {
// Construct payload declaring > 1 MiB of attributes.
var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

var buf bytes.Buffer
// Header (32 bytes)
binary.Write(&buf, binary.LittleEndian, clientCap)
binary.Write(&buf, binary.LittleEndian, uint32(0))
buf.WriteByte(0)
buf.Write(make([]byte, 23))
// Body
buf.WriteString("root")
buf.WriteByte(0)
buf.WriteByte(0)

// Encode a length-encoded integer of 1<<20 + 1 (1 MiB + 1 byte).
// 0xfd prefix = 3-byte little-endian integer (range 65536–16777215).
// 1048577 = 0x100001 → LE bytes: 0x01, 0x00, 0x10.
buf.WriteByte(0xfd)
buf.WriteByte(0x01)
buf.WriteByte(0x00)
buf.WriteByte(0x10)

// The 1 MiB check fires before the bounds-check on
// data[offset : offset+num], so no panic is expected.

data := buf.Bytes()
var p handshake.Response41
offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
require.NoError(t, err)

err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
require.Error(t, err)
require.Contains(t, err.Error(), "connection refused: session connection attributes exceed the 1 MiB hard limit")
})

t.Run("no limit", func(t *testing.T) {
vardef.ConnectAttrsSize.Store(-1) // -1 means no soft limit (internally capped at 64 KiB)
vardef.ConnectAttrsLongestSeen.Store(0)
vardef.ConnectAttrsLost.Store(0)

var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

var buf bytes.Buffer
// Header (32 bytes)
binary.Write(&buf, binary.LittleEndian, clientCap)
binary.Write(&buf, binary.LittleEndian, uint32(0))
buf.WriteByte(0)
buf.Write(make([]byte, 23))
// Body
buf.WriteString("root")
buf.WriteByte(0)
buf.WriteByte(0)

// Attrs: "ab":"cd", "ef":"gh". Total = 8.
attrsBuf := bytes.NewBuffer(nil)
attrsBuf.WriteByte(2)
attrsBuf.WriteString("ab")
attrsBuf.WriteByte(2)
attrsBuf.WriteString("cd")
attrsBuf.WriteByte(2)
attrsBuf.WriteString("ef")
attrsBuf.WriteByte(2)
attrsBuf.WriteString("gh")

attrsBytes := attrsBuf.Bytes()
buf.WriteByte(byte(len(attrsBytes)))
buf.Write(attrsBytes)

data := buf.Bytes()
var p handshake.Response41
offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
require.NoError(t, err)

err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
require.NoError(t, err)

// All attrs should be accepted, no truncation.
require.Len(t, p.Attrs, 2)
require.Equal(t, "cd", p.Attrs["ab"])
require.Equal(t, "gh", p.Attrs["ef"])
_, hasTruncated := p.Attrs["_truncated"]
require.False(t, hasTruncated)

require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
require.Equal(t, int64(8), vardef.ConnectAttrsLongestSeen.Load())
})

t.Run("longest_seen not updated for large payloads", func(t *testing.T) {
// Attrs >= 64KB should NOT update LongestSeen.
vardef.ConnectAttrsSize.Store(-1) // Limit mapped to 65536 max internally
vardef.ConnectAttrsLongestSeen.Store(100)
vardef.ConnectAttrsLost.Store(0)

var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

var buf bytes.Buffer
binary.Write(&buf, binary.LittleEndian, clientCap)
binary.Write(&buf, binary.LittleEndian, uint32(0))
buf.WriteByte(0)
buf.Write(make([]byte, 23))
buf.WriteString("root")
buf.WriteByte(0)
buf.WriteByte(0)

// Build attrs: one key "k" with a 70000-byte value (total > 64KB).
attrsBuf := bytes.NewBuffer(nil)
attrsBuf.WriteByte(1) // key length = 1
attrsBuf.WriteByte('k')
// value length = 70000 → needs 3-byte length-encoded int: 0xfd + 3 LE bytes
valLen := 70000
attrsBuf.WriteByte(0xfd)
attrsBuf.WriteByte(byte(valLen))
attrsBuf.WriteByte(byte(valLen >> 8))
attrsBuf.WriteByte(byte(valLen >> 16))
attrsBuf.Write(make([]byte, valLen)) // value payload

attrsBytes := attrsBuf.Bytes()
// Encode overall attrs length as 3-byte length-encoded int.
attrsLen := len(attrsBytes)
buf.WriteByte(0xfd)
buf.WriteByte(byte(attrsLen))
buf.WriteByte(byte(attrsLen >> 8))
buf.WriteByte(byte(attrsLen >> 16))
buf.Write(attrsBytes)

data := buf.Bytes()
var p handshake.Response41
offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
require.NoError(t, err)

err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
require.NoError(t, err)

// Attrs are truncated (totalSize=70001 > effectiveLimit=65536, because
// limit=-1 maps to effectiveLimit=65536 internally), but LongestSeen
// should NOT be updated because totalSize >= 64KB.
require.Equal(t, int64(100), vardef.ConnectAttrsLongestSeen.Load(),
"LongestSeen should not be updated for payloads >= 64KB")
})

t.Run("underscore-prefixed attrs include reserved key", func(t *testing.T) {
// Keep underscore-prefixed attributes from clients for MySQL parity and
// observability. The client-provided "_truncated" is retained when no
// truncation occurs; if truncation happens, server may overwrite it.
vardef.ConnectAttrsSize.Store(-1) // Limit mapped to 65536 max internally
vardef.ConnectAttrsLongestSeen.Store(0)
vardef.ConnectAttrsLost.Store(0)

var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

var buf bytes.Buffer
binary.Write(&buf, binary.LittleEndian, clientCap)
binary.Write(&buf, binary.LittleEndian, uint32(0))
buf.WriteByte(0)
buf.Write(make([]byte, 23))
buf.WriteString("root")
buf.WriteByte(0)
buf.WriteByte(0)

// Attrs:
// "_client_name":"Go-MySQL-Driver" → must be kept
// "_os":"linux" → must be kept
// "_program_name":"mysql" → must be kept
// "_custom":"val" → must be kept
// "_truncated":"fake" → kept (may be overwritten by server on truncation)
// "app_name":"myapp" → must be kept
attrsBuf := bytes.NewBuffer(nil)
for _, kv := range [][2]string{
{"_client_name", "Go-MySQL-Driver"},
{"_os", "linux"},
{"_program_name", "mysql"},
{"_custom", "val"},
{"_truncated", "fake"},
{"app_name", "myapp"},
} {
attrsBuf.WriteByte(byte(len(kv[0])))
attrsBuf.WriteString(kv[0])
attrsBuf.WriteByte(byte(len(kv[1])))
attrsBuf.WriteString(kv[1])
}

attrsBytes := attrsBuf.Bytes()
buf.WriteByte(byte(len(attrsBytes)))
buf.Write(attrsBytes)

data := buf.Bytes()
var p handshake.Response41
offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
require.NoError(t, err)

err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
require.NoError(t, err)

// All keys are kept.
require.Len(t, p.Attrs, 6)
require.Equal(t, "Go-MySQL-Driver", p.Attrs["_client_name"])
require.Equal(t, "linux", p.Attrs["_os"])
require.Equal(t, "mysql", p.Attrs["_program_name"])
require.Equal(t, "val", p.Attrs["_custom"])
require.Equal(t, "fake", p.Attrs["_truncated"])
require.Equal(t, "myapp", p.Attrs["app_name"])

require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
})
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

sed -n '17,64p' pkg/server/conn_test.go
rg -n '\bvardef\.' pkg/server/conn_test.go

Repository: pingcap/tidb

Length of output: 3557


🏁 Script executed:

fd -t f vardef.go pkg/sessionctx/

Repository: pingcap/tidb

Length of output: 38


🏁 Script executed:

rg -l "ConnectAttrsSize|ConnectAttrsLongestSeen|ConnectAttrsLost" pkg/sessionctx/

Repository: pingcap/tidb

Length of output: 280


🏁 Script executed:

rg -n "ConnectAttrsSize|ConnectAttrsLongestSeen|ConnectAttrsLost" pkg/sessionctx/vardef/

Repository: pingcap/tidb

Length of output: 1229


🏁 Script executed:

grep -A 2 "var ConnectAttrs" pkg/sessionctx/vardef/*.go

Repository: pingcap/tidb

Length of output: 38


Import pkg/sessionctx/vardef before adding these truncation tests.

The test code references vardef.ConnectAttrsSize, vardef.ConnectAttrsLongestSeen, and vardef.ConnectAttrsLost, but the import block does not include github.com/pingcap/tidb/pkg/sessionctx/vardef, so the file will not compile. Add the import to resolve the compilation error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/server/conn_test.go` around lines 2359 - 2718, Add the missing import for
package vardef so the tests referencing vardef.ConnectAttrsSize,
vardef.ConnectAttrsLongestSeen, and vardef.ConnectAttrsLost compile; update the
import block in pkg/server/conn_test.go to include
"github.com/pingcap/tidb/pkg/sessionctx/vardef" (or the module path used in this
repo) alongside the existing imports so the TestParseHandshakeAttrsTruncation
cases can access the vardef symbols.

Comment thread pkg/sessionctx/variable/statusvar.go Outdated
Comment on lines +132 to +144
<<<<<<< HEAD
"Ssl_cipher": {ScopeGlobal | ScopeSession, ""},
"Ssl_cipher_list": {ScopeGlobal | ScopeSession, ""},
"Ssl_verify_mode": {ScopeGlobal | ScopeSession, 0},
"Ssl_version": {ScopeGlobal | ScopeSession, ""},
=======
"Ssl_cipher": {vardef.ScopeGlobal | vardef.ScopeSession, ""},
"Ssl_cipher_list": {vardef.ScopeGlobal | vardef.ScopeSession, ""},
"Ssl_verify_mode": {vardef.ScopeGlobal | vardef.ScopeSession, 0},
"Ssl_version": {vardef.ScopeGlobal | vardef.ScopeSession, ""},
"Performance_schema_session_connect_attrs_longest_seen": {vardef.ScopeGlobal, int64(0)},
"Performance_schema_session_connect_attrs_lost": {vardef.ScopeGlobal, int64(0)},
>>>>>>> dffbd3e60fe (*: support logging session connect attrs to slow query log (#66617))

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
rg -n '^(<<<<<<<|=======|>>>>>>>)|Performance_schema_session_connect_attrs_(longest_seen|lost)|\bvardef\.' \
  pkg/sessionctx/variable/statusvar.go
rg -n 'pkg/sessionctx/vardef' pkg/sessionctx/variable/statusvar.go

Repository: pingcap/tidb

Length of output: 864


🏁 Script executed:

head -30 pkg/sessionctx/variable/statusvar.go | cat -n

Repository: pingcap/tidb

Length of output: 1154


🏁 Script executed:

grep -n "^import\|^[[:space:]]*\"pkg/sessionctx" pkg/sessionctx/variable/statusvar.go | head -20

Repository: pingcap/tidb

Length of output: 67


🏁 Script executed:

rg -n "defaultStatus\[" pkg/sessionctx/variable/statusvar.go -A 2 | head -30

Repository: pingcap/tidb

Length of output: 106


🏁 Script executed:

rg -n "func.*GetScope" pkg/sessionctx/variable/statusvar.go -A 10

Repository: pingcap/tidb

Length of output: 407


🏁 Script executed:

rg -n "var defaultStatus" pkg/sessionctx/variable/statusvar.go -A 5

Repository: pingcap/tidb

Length of output: 350


🏁 Script executed:

sed -n '60,160p' pkg/sessionctx/variable/statusvar.go | cat -n

Repository: pingcap/tidb

Length of output: 4104


🏁 Script executed:

rg -n "import.*vardef|from.*vardef" pkg/sessionctx/variable/statusvar.go

Repository: pingcap/tidb

Length of output: 38


🏁 Script executed:

wc -l pkg/sessionctx/variable/statusvar.go

Repository: pingcap/tidb

Length of output: 96


🏁 Script executed:

tail -50 pkg/sessionctx/variable/statusvar.go | cat -n

Repository: pingcap/tidb

Length of output: 2174


Resolve merge conflict, add missing vardef import, and ensure new status keys are in defaultStatus map.

Lines 72–86 contain unresolved merge conflict markers that prevent compilation. Additionally, the file uses vardef.ConnectAttrsLongestSeen and vardef.ConnectAttrsLost on lines 162–163 but does not import pkg/sessionctx/vardef.

More critically, lines 162–163 unconditionally add two status variable names to the returned map, but GetScope() (line 92) directly dereferences defaultStatus[status].Scope without nil checks. If the merge is resolved by dropping the two new keys from defaultStatus, calling GetScope() on those names will panic. Ensure the final merge includes both new keys in the defaultStatus map from the incoming branch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/sessionctx/variable/statusvar.go` around lines 132 - 144, Remove the
merge conflict markers in pkg/sessionctx/variable/statusvar.go, add the missing
import "pkg/sessionctx/vardef", and ensure the two new status keys
(Performance_schema_session_connect_attrs_longest_seen and
Performance_schema_session_connect_attrs_lost, or their vardef constants) are
present in the defaultStatus map with appropriate scope (vardef.ScopeGlobal) and
initial int64(0) values; keep the existing Ssl_* entries using
vardef.ScopeGlobal | vardef.ScopeSession so GetScope() can safely dereference
defaultStatus[status].Scope without risking a nil lookup.
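The panic scenario this comment warns about can be illustrated in isolation. The sketch below uses hypothetical stand-ins for `defaultStatus`, `statusValue`, and the scope flags (not the real TiDB types); with pointer map values, a missing key makes `defaultStatus[name].Scope` a nil dereference, and the comma-ok lookup is the defensive form:

```go
package main

import "fmt"

type scopeFlag uint8

const (
	scopeGlobal scopeFlag = 1 << iota
	scopeSession
)

type statusValue struct {
	Scope scopeFlag
	Value any
}

// Hypothetical, minimal stand-in for the defaultStatus map in
// pkg/sessionctx/variable/statusvar.go. Pointer values mean a missing
// key yields nil, so defaultStatus[name].Scope would panic.
var defaultStatus = map[string]*statusValue{
	"Ssl_version": {scopeGlobal | scopeSession, ""},
	"Performance_schema_session_connect_attrs_lost": {scopeGlobal, int64(0)},
}

// getScope checks the comma-ok result instead of dereferencing the map
// entry directly, so callers see a boolean instead of a nil-pointer panic
// when a status name was dropped from the map during the merge.
func getScope(name string) (scopeFlag, bool) {
	v, ok := defaultStatus[name]
	if !ok {
		return 0, false
	}
	return v.Scope, true
}

func main() {
	s, ok := getScope("Ssl_version")
	fmt.Println(s, ok)
	_, ok = getScope("Performance_schema_session_connect_attrs_longest_seen")
	fmt.Println(ok) // false until the key is added to defaultStatus
}
```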

Comment on lines +1 to +13
// Copyright 2022 PingCAP, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

⚠️ Potential issue | 🟡 Minor

Update the license header year for this new file.

This file is new in the PR, so the header should be copied from a nearby file with the year adjusted as needed instead of keeping 2022.

As per coding guidelines (`**/*.go`): for new source files (for example `*.go`), include the standard TiDB license header (copyright + Apache 2.0) by copying from a nearby file and updating the year if needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/sessionctx/variable/tests/variable_test.go` around lines 1 - 13, The file
pkg/sessionctx/variable/tests/variable_test.go has an outdated license header
(2022); replace the header with the standard TiDB Apache-2.0 license header used
by nearby Go files (copy from an existing *.go file in the repo) and update the
copyright year(s) to the current year or range as used elsewhere so the header
matches project conventions.

@ti-chi-bot ti-chi-bot Bot added the sig/planner SIG: Planner label Mar 30, 2026
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
pkg/ddl/backfilling_operators.go (1)

528-560: Consider moving idxResult declaration inside the loop.

The idxResult variable is declared at line 528 but only used within the loop (lines 550, 555). Moving it inside the loop improves scoping and makes the variable's lifetime clearer.

✨ Suggested scope refinement
-	var idxResult IndexRecordChunk
 	err := wrapInBeginRollback(w.se, func(startTS uint64) error {
 		// ...
 		for !done {
 			srcChk := w.getChunk()
 			done, err = fetchTableScanResult(w.ctx, w.copCtx.GetBase(), rs, srcChk)
 			if err != nil || w.ctx.Err() != nil {
 				w.recycleChunk(srcChk)
 				terror.Call(rs.Close)
 				return err
 			}
-			idxResult = IndexRecordChunk{ID: task.ID, Chunk: srcChk, Done: done}
+			idxResult := IndexRecordChunk{ID: task.ID, Chunk: srcChk, Done: done}
 			if w.cpOp != nil {
 				w.cpOp.UpdateChunk(task.ID, srcChk.NumRows(), done)
 			}
 			w.totalCount.Add(int64(srcChk.NumRows()))
 			sender(idxResult)
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/ddl/backfilling_operators.go` around lines 528 - 560, The idxResult
variable is declared outside the fetch loop but only used inside it; move its
declaration into the loop body where it's assigned and sent to improve scope and
lifetime. In the closure passed to wrapInBeginRollback (the anonymous func with
startTS uint64) locate idxResult = IndexRecordChunk{...} and change it so you
create a new local idxResult inside the for !done { ... } block just before
assigning/sending it (refer to functions/getters: getChunk,
fetchTableScanResult, sender, and methods cpOp.UpdateChunk and totalCount.Add to
ensure no other references rely on the outer variable).
pkg/planner/core/rule_eliminate_projection.go (1)

191-207: Avoid full-map prefiltering here; it adds avoidable optimizer overhead.

On Lines 193-204, every node scans all replace entries against all child schema columns before replacement. Since column replacement already no-ops for missing keys, this extra filtering is redundant and can noticeably increase planning cost on large plans.

♻️ Proposed simplification
-	// Filter replace map first to make sure the replaced columns
-	// exist in the child output.
-	newReplace := make(map[string]*expression.Column, len(replace))
-	for code, expr := range replace {
-childloop:
-		for _, child := range p.Children() {
-			for _, schemaCol := range child.Schema().Columns {
-				if schemaCol.Equal(nil, expr) {
-					newReplace[code] = expr
-					break childloop
-				}
-			}
-		}
-	}
-	if len(newReplace) > 0 {
-		p.ReplaceExprColumns(newReplace)
-	}
+	if len(replace) > 0 {
+		p.ReplaceExprColumns(replace)
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/planner/core/rule_eliminate_projection.go` around lines 191 - 207, The
prefilter loop building newReplace is redundant and costly because
ReplaceExprColumns already ignores missing keys; remove the newReplace
construction and instead directly call p.ReplaceExprColumns(replace) when
replace is non-empty (check len(replace) > 0), eliminating the nested scans over
p.Children() and child.Schema().Columns; update any references to newReplace to
use replace and keep p.Children() and p.ReplaceExprColumns(...) as the focal
symbols.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/server/internal/parse/parse.go`:
- Around line 198-208: The current fast-path in parseAttrs skips
applyConnAttrsPolicyAndMetrics when variable.ConnectAttrsSize.Load() == 0,
preventing truncation markers and metrics; remove that early return and always
call applyConnAttrsPolicyAndMetrics with the loaded size (even when zero). Keep
decodeConnAttrs(data) and its error handling as-is, then call
applyConnAttrsPolicyAndMetrics(decoded, variable.ConnectAttrsSize.Load()) to get
attrs and warningsText and return them; this ensures the truncation policy and
metrics updating run for the zero-size "drop everything" configuration.
- Around line 174-177: The code currently preserves a client-supplied
"_truncated" key (constant reservedConnAttrTruncated) which lets clients forge
TiDB-injected metadata; update the connection-attribute parsing/merging logic to
drop any client-provided entry whose key equals reservedConnAttrTruncated before
merging into server-side attributes so only the server can set that key, and
apply the same filter in the other parsing path that handles connection
attributes (the secondary block around where connection attributes are
parsed/merged — ensure both code paths remove reservedConnAttrTruncated from
client input).

In `@tests/realtikvtest/addindextest3/ingest_test.go`:
- Around line 474-476: The code sets ingest.ForceSyncFlagForTest then calls
tk.MustExec and clears the flag directly, which can leak the flag if MustExec
aborts; wrap the DDL call in a small closure so you can immediately defer
restoring ingest.ForceSyncFlagForTest to false (i.e., set
ForceSyncFlagForTest.Store(true) inside the closure and call defer
ForceSyncFlagForTest.Store(false) before invoking tk.MustExec("alter table t add
index idx_test(b);")) so the flag is always restored even on failure; locate the
call site using the symbol ingest.ForceSyncFlagForTest and the tk.MustExec call
shown.

---

Nitpick comments:
In `@pkg/ddl/backfilling_operators.go`:
- Around line 528-560: The idxResult variable is declared outside the fetch loop
but only used inside it; move its declaration into the loop body where it's
assigned and sent to improve scope and lifetime. In the closure passed to
wrapInBeginRollback (the anonymous func with startTS uint64) locate idxResult =
IndexRecordChunk{...} and change it so you create a new local idxResult inside
the for !done { ... } block just before assigning/sending it (refer to
functions/getters: getChunk, fetchTableScanResult, sender, and methods
cpOp.UpdateChunk and totalCount.Add to ensure no other references rely on the
outer variable).

In `@pkg/planner/core/rule_eliminate_projection.go`:
- Around line 191-207: The prefilter loop building newReplace is redundant and
costly because ReplaceExprColumns already ignores missing keys; remove the
newReplace construction and instead directly call p.ReplaceExprColumns(replace)
when replace is non-empty (check len(replace) > 0), eliminating the nested scans
over p.Children() and child.Schema().Columns; update any references to
newReplace to use replace and keep p.Children() and p.ReplaceExprColumns(...) as
the focal symbols.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: f1500afc-3bcd-43c2-96d3-e77b8fea6cbd

📥 Commits

Reviewing files that changed from the base of the PR and between 1356485 and 3c09e39.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (31)
  • DEPS.bzl
  • go.mod
  • pkg/ddl/backfilling_operators.go
  • pkg/ddl/index_cop.go
  • pkg/executor/adapter_slow_log.go
  • pkg/executor/cluster_table_test.go
  • pkg/executor/infoschema_reader_test.go
  • pkg/executor/slow_query.go
  • pkg/executor/slow_query_test.go
  • pkg/infoschema/test/clustertablestest/cluster_tables_test.go
  • pkg/infoschema/test/clustertablestest/tables_test.go
  • pkg/lightning/backend/local/local.go
  • pkg/owner/manager.go
  • pkg/planner/core/casetest/rule/BUILD.bazel
  • pkg/planner/core/casetest/rule/rule_eliminate_projection_test.go
  • pkg/planner/core/rule_eliminate_projection.go
  • pkg/server/conn_test.go
  • pkg/server/internal/parse/BUILD.bazel
  • pkg/server/internal/parse/parse.go
  • pkg/server/internal/parse/parse_test.go
  • pkg/sessionctx/variable/noop.go
  • pkg/sessionctx/variable/session_test.go
  • pkg/sessionctx/variable/slow_log.go
  • pkg/sessionctx/variable/statusvar.go
  • pkg/sessionctx/variable/sysvar.go
  • pkg/sessionctx/variable/tidb_vars.go
  • pkg/sessionctx/variable/variable_test.go
  • pkg/util/dbterror/ddl_terror.go
  • tests/realtikvtest/addindextest1/disttask_test.go
  • tests/realtikvtest/addindextest3/ingest_test.go
  • tests/realtikvtest/pessimistictest/pessimistic_test.go
💤 Files with no reviewable changes (5)
  • pkg/util/dbterror/ddl_terror.go
  • pkg/planner/core/casetest/rule/rule_eliminate_projection_test.go
  • pkg/executor/infoschema_reader_test.go
  • tests/realtikvtest/pessimistictest/pessimistic_test.go
  • pkg/sessionctx/variable/noop.go
✅ Files skipped from review due to trivial changes (6)
  • pkg/infoschema/test/clustertablestest/tables_test.go
  • pkg/planner/core/casetest/rule/BUILD.bazel
  • pkg/sessionctx/variable/variable_test.go
  • go.mod
  • pkg/server/internal/parse/parse_test.go
  • pkg/server/conn_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
  • pkg/executor/adapter_slow_log.go
  • pkg/server/internal/parse/BUILD.bazel
  • pkg/executor/cluster_table_test.go
  • pkg/sessionctx/variable/sysvar.go
  • pkg/sessionctx/variable/slow_log.go

Comment on lines +174 to +177
// reservedConnAttrTruncated is injected by TiDB when connection attributes
// are truncated. A client-provided key with the same name may be overwritten
// when truncation happens.
const reservedConnAttrTruncated = "_truncated"

⚠️ Potential issue | 🟠 Major

Don't let clients forge the reserved _truncated metadata.

_truncated is documented as TiDB-injected, but a client-supplied key is currently preserved whenever no server-side truncation happens. That makes the slow log and SLOW_QUERY output ambiguous, because consumers can no longer tell whether _truncated came from TiDB or the client.

💡 Suggested fix
 		if totalSize > effectiveLimit {
 			if !truncated {
 				truncated = true
 				variable.ConnectAttrsLost.Add(1)
 			}
 			continue
 		}
 		if !truncated {
-			attrs[item.key] = item.value
+			if item.key != reservedConnAttrTruncated {
+				attrs[item.key] = item.value
+			}
 			acceptedSize += kvSize
 		}
 	}

Also applies to: 262-277

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/server/internal/parse/parse.go` around lines 174 - 177, The code
currently preserves a client-supplied "_truncated" key (constant
reservedConnAttrTruncated) which lets clients forge TiDB-injected metadata;
update the connection-attribute parsing/merging logic to drop any
client-provided entry whose key equals reservedConnAttrTruncated before merging
into server-side attributes so only the server can set that key, and apply the
same filter in the other parsing path that handles connection attributes (the
secondary block around where connection attributes are parsed/merged — ensure
both code paths remove reservedConnAttrTruncated from client input).

Comment thread pkg/server/internal/parse/parse.go
Comment on lines +474 to +476
ingest.ForceSyncFlagForTest.Store(true)
defer ingest.ForceSyncFlagForTest.Store(oldForceSyncFlag)
tk.MustExec("alter table t add index idx_test(b);")
ingest.ForceSyncFlagForTest.Store(false)

⚠️ Potential issue | 🟡 Minor

Restore ForceSyncFlagForTest with a scoped defer.

This is package-global state. If tk.MustExec("alter table ...") aborts, Line 476 never runs and the flag leaks into the second call and later tests. Wrap just the DDL in a small closure so the restore still happens on failure without leaving the flag enabled for the subsequent update.

Suggested fix
-	ingest.ForceSyncFlagForTest.Store(true)
-	tk.MustExec("alter table t add index idx_test(b);")
-	ingest.ForceSyncFlagForTest.Store(false)
+	func() {
+		prev := ingest.ForceSyncFlagForTest.Load()
+		ingest.ForceSyncFlagForTest.Store(true)
+		defer func() { ingest.ForceSyncFlagForTest.Store(prev) }()
+		tk.MustExec("alter table t add index idx_test(b);")
+	}()
As per coding guidelines, "Keep test changes minimal and deterministic; avoid broad golden/testdata churn unless required."
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-	ingest.ForceSyncFlagForTest.Store(true)
-	defer ingest.ForceSyncFlagForTest.Store(oldForceSyncFlag)
-	tk.MustExec("alter table t add index idx_test(b);")
-	ingest.ForceSyncFlagForTest.Store(false)
+	func() {
+		prev := ingest.ForceSyncFlagForTest.Load()
+		ingest.ForceSyncFlagForTest.Store(true)
+		defer func() { ingest.ForceSyncFlagForTest.Store(prev) }()
+		tk.MustExec("alter table t add index idx_test(b);")
+	}()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/realtikvtest/addindextest3/ingest_test.go` around lines 474 - 476, The
code sets ingest.ForceSyncFlagForTest then calls tk.MustExec and clears the flag
directly, which can leak the flag if MustExec aborts; wrap the DDL call in a
small closure so you can immediately defer restoring ingest.ForceSyncFlagForTest
to false (i.e., set ForceSyncFlagForTest.Store(true) inside the closure and call
defer ForceSyncFlagForTest.Store(false) before invoking tk.MustExec("alter table
t add index idx_test(b);")) so the flag is always restored even on failure;
locate the call site using the symbol ingest.ForceSyncFlagForTest and the
tk.MustExec call shown.

@jiong-nba jiong-nba force-pushed the cherry-pick-66617-to-release-8.5 branch from 3c09e39 to 6053171 Compare March 30, 2026 07:11
@codecov

codecov Bot commented Mar 30, 2026

Codecov Report

❌ Patch coverage is 90.34483% with 14 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (release-8.5@abb50b1). Learn more about missing BASE report.

Additional details and impacted files
@@               Coverage Diff                @@
##             release-8.5     #67323   +/-   ##
================================================
  Coverage               ?   55.5072%           
================================================
  Files                  ?       1823           
  Lines                  ?     655042           
  Branches               ?          0           
================================================
  Hits                   ?     363596           
  Misses                 ?     264444           
  Partials               ?      27002           
Flag Coverage Δ
integration 39.1875% <6.8965%> (?)
unit 64.9158% <90.3448%> (?)

Flags with carried forward coverage won't be shown.

Components Coverage Δ
dumpling 52.8824% <0.0000%> (?)
parser ∅ <0.0000%> (?)
br 62.7887% <0.0000%> (?)

@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (2)
pkg/server/internal/parse/parse.go (2)

198-208: ⚠️ Potential issue | 🟠 Major

Keep the truncation policy active when the limit is 0.

The fast path on Line 199 skips decoding and applyConnAttrsPolicyAndMetrics(), so a supported “drop all attrs” configuration never increments Performance_schema_session_connect_attrs_lost, never updates ..._longest_seen, and never emits the server-side _truncated marker. Downstream, slow logs and system tables will silently omit Session_connect_attrs instead of showing truncation metadata.

💡 Suggested fix
 func parseAttrs(data []byte) (map[string]string, string, error) {
-	if variable.ConnectAttrsSize.Load() == 0 {
-		return map[string]string{}, "", nil
-	}
-
 	decoded, err := decodeConnAttrs(data)
 	if err != nil {
 		return map[string]string{}, "", err
 	}
 	attrs, warningsText := applyConnAttrsPolicyAndMetrics(decoded, variable.ConnectAttrsSize.Load())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/server/internal/parse/parse.go` around lines 198 - 208, The early
fast-return in parseAttrs when variable.ConnectAttrsSize.Load() == 0 skips
decodeConnAttrs and applyConnAttrsPolicyAndMetrics so truncation metrics and
markers are never recorded; change parseAttrs to still run the normal path (call
decodeConnAttrs and then applyConnAttrsPolicyAndMetrics with the loaded size,
even if it's 0) or explicitly invoke applyConnAttrsPolicyAndMetrics with an
empty/decoded attrs payload and size 0 so
Performance_schema_session_connect_attrs_lost, ..._longest_seen and the
server-side _truncated marker are properly updated; locate parseAttrs,
decodeConnAttrs, and applyConnAttrsPolicyAndMetrics to implement this behavior.

252-277: ⚠️ Potential issue | 🟠 Major

Don't let clients forge the reserved _truncated metadata.

When no server-side truncation happens, the merge loop still preserves a client-supplied _truncated entry. That makes the slow log and SLOW_QUERY output ambiguous, because consumers can no longer tell whether _truncated was injected by TiDB or spoofed by the client.

💡 Suggested fix
 	for _, item := range decoded.items {
 		kvSize := int64(len(item.key)) + int64(len(item.value))
 		totalSize += kvSize
 		if totalSize > effectiveLimit {
 			if !truncated {
 				truncated = true
 				variable.ConnectAttrsLost.Add(1)
 			}
 			continue
 		}
 		if !truncated {
-			attrs[item.key] = item.value
+			if item.key != reservedConnAttrTruncated {
+				attrs[item.key] = item.value
+			}
 			acceptedSize += kvSize
 		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/server/internal/parse/parse.go` around lines 252 - 277, The code
currently preserves a client-supplied reservedConnAttrTruncated entry when no
server truncation happened; update the merge logic in the loop that iterates
decoded.items (and/or the post-merge handling of attrs) to ignore or remove any
item whose key equals reservedConnAttrTruncated so clients cannot inject
`_truncated`, i.e. skip adding that key during the attrs[item.key] = item.value
step (and ensure after the loop that if truncated == false there is no
attrs[reservedConnAttrTruncated] entry). Ensure references to decoded.items,
attrs, truncated, acceptedSize and reservedConnAttrTruncated are used to locate
and modify the logic.
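The anti-forgery part of the fix can be isolated into a small sketch. This assumes a hypothetical `sanitizeClientAttrs` helper; only the `reservedConnAttrTruncated` identifier comes from the review, the rest is illustrative. Client-supplied copies of the reserved key are dropped unconditionally, and the marker reappears only when the server itself decided to truncate.

```go
package main

import "fmt"

// reservedConnAttrTruncated is the server-reserved marker key named in the review.
const reservedConnAttrTruncated = "_truncated"

// sanitizeClientAttrs drops any client-supplied copy of the reserved key,
// then lets only the server re-add it when truncation actually happened.
func sanitizeClientAttrs(attrs map[string]string, serverTruncated bool) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if k == reservedConnAttrTruncated {
			continue // never trust a client-sent _truncated
		}
		out[k] = v
	}
	if serverTruncated {
		out[reservedConnAttrTruncated] = "1" // only the server sets the marker
	}
	return out
}

func main() {
	forged := map[string]string{"app_name": "svc", "_truncated": "1"}
	fmt.Println(sanitizeClientAttrs(forged, false)) // forged marker is gone
}
```

This keeps `SLOW_QUERY` consumers able to trust that `_truncated` always means server-side truncation.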

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 40922d9a-4d62-42bb-9a01-79803a22f933

📥 Commits

Reviewing files that changed from the base of the PR and between 3c09e39 and 6053171.

📒 Files selected for processing (22)
  • pkg/executor/BUILD.bazel
  • pkg/executor/adapter_slow_log.go
  • pkg/executor/cluster_table_test.go
  • pkg/executor/slow_query.go
  • pkg/executor/slow_query_sql_test.go
  • pkg/executor/slow_query_test.go
  • pkg/infoschema/tables.go
  • pkg/infoschema/test/clustertablestest/BUILD.bazel
  • pkg/infoschema/test/clustertablestest/cluster_tables_test.go
  • pkg/infoschema/test/clustertablestest/tables_test.go
  • pkg/server/conn_test.go
  • pkg/server/internal/parse/BUILD.bazel
  • pkg/server/internal/parse/parse.go
  • pkg/server/internal/parse/parse_test.go
  • pkg/sessionctx/variable/noop.go
  • pkg/sessionctx/variable/session_test.go
  • pkg/sessionctx/variable/slow_log.go
  • pkg/sessionctx/variable/statusvar.go
  • pkg/sessionctx/variable/sysvar.go
  • pkg/sessionctx/variable/tidb_vars.go
  • pkg/sessionctx/variable/variable_test.go
  • pkg/testkit/testutil/require.go
💤 Files with no reviewable changes (1)
  • pkg/sessionctx/variable/noop.go
✅ Files skipped from review due to trivial changes (6)
  • pkg/executor/BUILD.bazel
  • pkg/sessionctx/variable/variable_test.go
  • pkg/testkit/testutil/require.go
  • pkg/executor/cluster_table_test.go
  • pkg/sessionctx/variable/statusvar.go
  • pkg/server/conn_test.go
🚧 Files skipped from review as they are similar to previous changes (8)
  • pkg/infoschema/test/clustertablestest/BUILD.bazel
  • pkg/server/internal/parse/BUILD.bazel
  • pkg/infoschema/tables.go
  • pkg/executor/slow_query_test.go
  • pkg/sessionctx/variable/slow_log.go
  • pkg/server/internal/parse/parse_test.go
  • pkg/sessionctx/variable/sysvar.go
  • pkg/infoschema/test/clustertablestest/cluster_tables_test.go

@jiong-nba
Contributor

/retest

1 similar comment
@jiong-nba
Contributor

/retest

@jiong-nba
Contributor

/test fast_test_tiprow_for_release

1 similar comment
@jiong-nba
Contributor

/test fast_test_tiprow_for_release

@ti-chi-bot

ti-chi-bot Bot commented Mar 30, 2026

@jiong-nba: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

/test build
/test check-dev
/test check-dev2
/test mysql-test
/test pull-br-integration-test
/test pull-unit-test-ddlv1
/test unit-test

The following commands are available to trigger optional jobs:

/test pull-check-deps
/test pull-common-test
/test pull-e2e-test
/test pull-integration-common-test
/test pull-integration-copr-test
/test pull-integration-ddl-test
/test pull-integration-e2e-test
/test pull-integration-jdbc-test
/test pull-integration-mysql-test
/test pull-integration-nodejs-test
/test pull-integration-python-orm-test
/test pull-integration-tidb-tools-test
/test pull-lightning-integration-test
/test pull-mysql-client-test
/test pull-sqllogic-test

Use /test all to run the following jobs that were automatically triggered:

pingcap/tidb/release-8.5/pull_br_integration_test
pingcap/tidb/release-8.5/pull_build
pingcap/tidb/release-8.5/pull_check
pingcap/tidb/release-8.5/pull_check2
pingcap/tidb/release-8.5/pull_mysql_test
pingcap/tidb/release-8.5/pull_unit_test
pull-check-deps
Details

In response to this:

/test fast_test_tiprow_for_release

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@jiong-nba
Contributor

/test unit-test

@tiprow

tiprow Bot commented Mar 30, 2026

@jiong-nba: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

/test fast_test_tiprow_for_release

Use /test all to run all jobs.

Details

In response to this:

/test unit-test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@jiong-nba
Contributor

/retest

2 similar comments
@jiong-nba
Contributor

/retest

@jiong-nba
Contributor

/retest

@D3Hunter
Contributor

/approve

Contributor

@yibin87 yibin87 left a comment


LGTM

@ti-chi-bot ti-chi-bot Bot added the needs-1-more-lgtm Indicates a PR needs 1 more LGTM. label Mar 31, 2026
@jiong-nba
Contributor

/retest

@ti-chi-bot

ti-chi-bot Bot commented Apr 1, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: D3Hunter, nolouch, yibin87
Once this PR has been reviewed and has the lgtm label, please assign yudongusa for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot Bot added lgtm and removed needs-1-more-lgtm Indicates a PR needs 1 more LGTM. labels Apr 1, 2026
@ti-chi-bot

ti-chi-bot Bot commented Apr 1, 2026

[LGTM Timeline notifier]

Timeline:

  • 2026-03-31 01:21:34.256411017 +0000 UTC m=+228099.461771074: ☑️ agreed by yibin87.
  • 2026-04-01 06:58:38.771998674 +0000 UTC m=+334723.977358731: ☑️ agreed by nolouch.

@ti-chi-bot ti-chi-bot Bot added cherry-pick-approved Cherry pick PR approved by release team. and removed do-not-merge/cherry-pick-not-approved labels Apr 14, 2026

Labels

  • cherry-pick-approved: Cherry pick PR approved by release team.
  • do-not-merge/hold: Indicates that a PR should not merge because someone has issued a /hold command.
  • lgtm
  • ok-to-test: Indicates a PR is ready to be tested.
  • release-note: Denotes a PR that will be considered when it comes time to generate release notes.
  • sig/planner: SIG: Planner
  • size/XXL: Denotes a PR that changes 1000+ lines, ignoring generated files.
  • type/cherry-pick-for-release-8.5: This PR is cherry-picked to release-8.5 from a source PR.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants