*: support logging session connect attrs to slow query log (#66617) #67323
ti-chi-bot wants to merge 2 commits into pingcap:release-8.5
Conversation
@jiong-nba This PR has conflicts, I have put it on hold.

@ti-chi-bot: If you want to know how to resolve it, please read the guide in the TiDB Dev Guide. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)

📝 Walkthrough
This PR records MySQL SESSION_CONNECT_ATTRS in slow query logs and the information schema, adds parsing, truncation, and size-limit policies for handshake connect attributes, exposes sysvars and metrics to control and report the limits, and adds extensive tests and test utilities validating formatting, parsing, truncation, and cluster-table exposure.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant Parser
    participant Executor
    participant Logger
    participant Reader
    Client->>Server: Handshake (may include ClientConnectAttrs)
    Server->>Parser: decodeConnAttrs(data)
    Parser->>Parser: applyConnAttrsPolicyAndMetrics (limit, truncate, warn)
    Parser-->>Server: attrs map (+ warningsText)
    Server->>Executor: store attrs in SessionVars.ConnectionInfo.Attributes
    Executor->>Logger: SetSlowLogItems(... SessionConnectAttrs)
    Logger->>Logger: serialize JSON "# Session_connect_attrs: ..."
    Logger-->>Logger: write slow log file
    Reader->>Logger: read slow log entries
    Reader->>Executor: parseLog() finds Session_connect_attrs field
    Executor->>Parser: parse JSON/BinaryJSON -> column value
    Reader-->>Client: exposed via information_schema.slow_query / cluster_slow_query
```
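The truncation policy in the parse step can be illustrated with a small, self-contained sketch. This is a hypothetical illustration, not TiDB's actual `decodeConnAttrs`/`applyConnAttrsPolicyAndMetrics` code: it parses `[len][key][len][val]` pairs (single-byte lengths only), enforces a byte budget over key+value payloads, and records dropped bytes under a `"_truncated"` key, mirroring the behavior the test below asserts (limit 5 keeps `"ab":"cd"` and reports `"_truncated":"4"`).

```go
package main

import (
	"fmt"
	"strconv"
)

// decodeConnAttrs is a simplified sketch of connect-attrs parsing with a
// size-limit policy. limit == 0 disables collection; limit < 0 means no
// limit; otherwise pairs whose cumulative key+value bytes exceed the limit
// are dropped and their size is reported under "_truncated".
func decodeConnAttrs(data []byte, limit int) map[string]string {
	attrs := make(map[string]string)
	if limit == 0 {
		return attrs // collection disabled entirely
	}
	pos, used, lost := 0, 0, 0
	readStr := func() (string, bool) {
		if pos >= len(data) {
			return "", false
		}
		n := int(data[pos]) // single-byte length; the real protocol uses length-encoded ints
		pos++
		if pos+n > len(data) {
			return "", false
		}
		s := string(data[pos : pos+n])
		pos += n
		return s, true
	}
	for pos < len(data) {
		key, ok := readStr()
		if !ok {
			break
		}
		val, ok := readStr()
		if !ok {
			break
		}
		size := len(key) + len(val)
		if limit > 0 && used+size > limit {
			lost += size // drop the pair but keep counting lost bytes
			continue
		}
		used += size
		attrs[key] = val
	}
	if lost > 0 {
		attrs["_truncated"] = strconv.Itoa(lost)
	}
	return attrs
}

func main() {
	// "ab":"cd", "ef":"gh" with single-byte length prefixes.
	data := []byte{2, 'a', 'b', 2, 'c', 'd', 2, 'e', 'f', 2, 'g', 'h'}
	fmt.Println(decodeConnAttrs(data, -1)) // negative limit: keep everything
	fmt.Println(decodeConnAttrs(data, 5))  // budget 5: second pair is dropped
}
```

With a budget of 5 bytes, the first pair (4 bytes of payload) fits and the second does not, so the map contains `"ab":"cd"` plus `"_truncated":"4"` — the same shape the "exceeds limit truncation" subtest below checks.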
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

⚠️ Warning: there were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.
🔧 golangci-lint (2.11.4)
Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
Actionable comments posted: 7
🧹 Nitpick comments (2)
pkg/testkit/testutil/require.go (1)
92-100: Mark this assertion helper with `t.Helper()`.
Without `t.Helper()`, failures are attributed to pkg/testkit/testutil/require.go instead of the calling test, which makes these new assertions harder to debug.

Suggested change:

```diff
 func RequireContainsDefaultSessionConnectAttrs(t testing.TB, attrsText string) {
+	t.Helper()
 	require.Contains(t, attrsText, `"_client_name"`)
 	require.Contains(t, attrsText, `"Go-MySQL-Driver"`)
 	require.Contains(t, attrsText, `"_os"`)
 	require.Contains(t, attrsText, `"linux"`)
 	require.Contains(t, attrsText, `"app_name"`)
 	require.Contains(t, attrsText, `"test_app"`)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/testkit/testutil/require.go` around lines 92-100: the helper RequireContainsDefaultSessionConnectAttrs should call t.Helper() so failures are reported at the caller; update the function RequireContainsDefaultSessionConnectAttrs(t testing.TB, attrsText string) to invoke t.Helper() as the first statement, then keep the existing require.Contains assertions unchanged.

pkg/sessionctx/variable/tests/variable_test.go (1)
121-146: Use `t.Cleanup` for temporary sysvar registration.
Both tests mutate the global sysvar registry and only unregister on the happy path. Any earlier `require` failure leaves the registry polluted for later tests. Apply the same pattern to both `mynewsysvar` and `newinstancesysvar`.

♻️ Suggested cleanup pattern:

```diff
 variable.RegisterSysVar(&sv)
+t.Cleanup(func() {
+	variable.UnregisterSysVar("mynewsysvar")
+})
 require.Len(t, variable.GetSysVars(), count+1)
@@
-// Test unregistration restores previous count
-variable.UnregisterSysVar("mynewsysvar")
 require.Equal(t, len(variable.GetSysVars()), count)
```

Based on learnings: `**/*_test.go`: Keep test changes minimal and deterministic; avoid broad golden/testdata churn unless required.

Also applies to: 578-603

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/sessionctx/variable/tests/variable_test.go` around lines 121-146: the test registers a global sysvar via variable.RegisterSysVar(&sv) (and similarly for newinstancesysvar) but only calls variable.UnregisterSysVar on the happy path, which can leak state on failures; change the test to call t.Cleanup to unregister the sysvar immediately after RegisterSysVar (e.g., t.Cleanup(func() { variable.UnregisterSysVar("mynewsysvar") })) so the registry is always restored even if require assertions fail; do this for both "mynewsysvar" and "newinstancesysvar" and keep the rest of the assertions (GetSysVars, GetSysVar, Validate, SetSessionFromHook) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/executor/cluster_table_test.go`:
- Around line 35-39: Remove the leftover git conflict markers and restore both
imports so compilation succeeds: delete the <<<<<<</=======/>>>>>>> lines and
include both "github.com/pingcap/tidb/pkg/util" and
"github.com/pingcap/tidb/pkg/testkit/testutil" in the import block (no alias
needed), ensuring references like util.ProcessInfo used by createRPCServer
remain valid and testkit/testutil helpers stay available.
In `@pkg/executor/slow_query_test.go`:
- Around line 181-187: Remove the Git conflict markers and duplicate blocks in
the expected record string constants in slow_query_test.go and keep the merged
expectation that includes the new Session_connect_attrs column (the variant
containing the extra comma and the "null" token), i.e., delete the <<<<<<<,
=======, and >>>>>>> sections and ensure the test record string used by the test
functions (the expected record literal around the sample lines and the second
occurrence noted at the other location) matches the merged form with
Session_connect_attrs present.
In `@pkg/executor/slow_query.go`:
- Around line 706-714: The parseLog function contains Git conflict markers
around the warnings-handling block; remove the unresolved markers and retain the
added handlers for Session_connect_attrs and DB so the branch after warnings
parsing continues with the new checks. Specifically, in parseLog, delete the
<<<<<<<, =======, and >>>>>>> lines and ensure the else-if chain includes the
blocks that check strings.HasPrefix(line,
variable.SlowLogSessionConnectAttrs+variable.SlowLogSpaceMarkStr) (calling
e.setColumnValue with variable.SlowLogSessionConnectAttrs) and
strings.HasPrefix(line, variable.SlowLogDBStr+variable.SlowLogSpaceMarkStr)
(calling e.setColumnValue with variable.SlowLogDBStr) so the function compiles
and the new columns are parsed.
In `@pkg/infoschema/test/clustertablestest/cluster_tables_test.go`:
- Around line 53-57: Remove the leftover merge markers and restore both imports:
keep "github.com/pingcap/tidb/pkg/util" (used for util.ProcessInfo and
util.SessionManager) and add "github.com/pingcap/tidb/pkg/testkit/testutil";
ensure the import block no longer contains <<<<<<</=======/>>>>>>> and that
usages of util.ProcessInfo/util.SessionManager and testutil remain unchanged and
compile.
In `@pkg/server/conn_test.go`:
- Around line 2359-2718: Add the missing import for package vardef so the tests
referencing vardef.ConnectAttrsSize, vardef.ConnectAttrsLongestSeen, and
vardef.ConnectAttrsLost compile; update the import block in
pkg/server/conn_test.go to include
"github.com/pingcap/tidb/pkg/sessionctx/vardef" (or the module path used in this
repo) alongside the existing imports so the TestParseHandshakeAttrsTruncation
cases can access the vardef symbols.
In `@pkg/sessionctx/variable/statusvar.go`:
- Around line 132-144: Remove the merge conflict markers in
pkg/sessionctx/variable/statusvar.go, add the missing import
"pkg/sessionctx/vardef", and ensure the two new status keys
(Performance_schema_session_connect_attrs_longest_seen and
Performance_schema_session_connect_attrs_lost, or their vardef constants) are
present in the defaultStatus map with appropriate scope (vardef.ScopeGlobal) and
initial int64(0) values; keep the existing Ssl_* entries using
vardef.ScopeGlobal | vardef.ScopeSession so GetScope() can safely dereference
defaultStatus[status].Scope without risking a nil lookup.
In `@pkg/sessionctx/variable/tests/variable_test.go`:
- Around line 1-13: The file pkg/sessionctx/variable/tests/variable_test.go has
an outdated license header (2022); replace the header with the standard TiDB
Apache-2.0 license header used by nearby Go files (copy from an existing *.go
file in the repo) and update the copyright year(s) to the current year or range
as used elsewhere so the header matches project conventions.
---
Nitpick comments:
In `@pkg/sessionctx/variable/tests/variable_test.go`:
- Around line 121-146: The test registers a global sysvar via
variable.RegisterSysVar(&sv) (and similarly for newinstancesysvar) but only
calls variable.UnregisterSysVar on the happy path, which can leak state on
failures; change the test to call t.Cleanup to unregister the sysvar immediately
after RegisterSysVar (e.g., call t.Cleanup(func(){
variable.UnregisterSysVar("mynewsysvar") })) so the registry is always restored
even if require assertions fail; do this for both "mynewsysvar" and
"newinstancesysvar" and keep the rest of the assertions (GetSysVars, GetSysVar,
Validate, SetSessionFromHook) unchanged.
In `@pkg/testkit/testutil/require.go`:
- Around line 92-100: The helper RequireContainsDefaultSessionConnectAttrs
should call t.Helper() so failures are reported at the caller; update the
function RequireContainsDefaultSessionConnectAttrs(t testing.TB, attrsText
string) to invoke t.Helper() as the first statement, then keep the existing
require.Contains assertions (references:
RequireContainsDefaultSessionConnectAttrs and parameter t) unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 696c8d69-3804-4ca1-bf80-4bea77b30e20
📒 Files selected for processing (24)
- pkg/executor/BUILD.bazel
- pkg/executor/adapter_slow_log.go
- pkg/executor/cluster_table_test.go
- pkg/executor/slow_query.go
- pkg/executor/slow_query_sql_test.go
- pkg/executor/slow_query_test.go
- pkg/infoschema/tables.go
- pkg/infoschema/test/clustertablestest/BUILD.bazel
- pkg/infoschema/test/clustertablestest/cluster_tables_test.go
- pkg/infoschema/test/clustertablestest/tables_test.go
- pkg/server/conn_test.go
- pkg/server/internal/parse/BUILD.bazel
- pkg/server/internal/parse/parse.go
- pkg/server/internal/parse/parse_test.go
- pkg/sessionctx/vardef/sysvar.go
- pkg/sessionctx/vardef/tidb_vars.go
- pkg/sessionctx/variable/noop.go
- pkg/sessionctx/variable/session_test.go
- pkg/sessionctx/variable/slow_log.go
- pkg/sessionctx/variable/statusvar.go
- pkg/sessionctx/variable/sysvar.go
- pkg/sessionctx/variable/tests/BUILD.bazel
- pkg/sessionctx/variable/tests/variable_test.go
- pkg/testkit/testutil/require.go
```go
func TestParseHandshakeAttrsTruncation(t *testing.T) {
	// Save and restore global atomic counters.
	origSize := vardef.ConnectAttrsSize.Load()
	origLongest := vardef.ConnectAttrsLongestSeen.Load()
	origLost := vardef.ConnectAttrsLost.Load()
	defer func() {
		vardef.ConnectAttrsSize.Store(origSize)
		vardef.ConnectAttrsLongestSeen.Store(origLongest)
		vardef.ConnectAttrsLost.Store(origLost)
	}()

	t.Run("exceeds limit truncation", func(t *testing.T) {
		vardef.ConnectAttrsSize.Store(5) // very small limit
		vardef.ConnectAttrsLongestSeen.Store(0)
		vardef.ConnectAttrsLost.Store(0)

		// Construct payload.
		// Capability: ClientProtocol41 | ClientConnectAtts
		var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

		var buf bytes.Buffer
		// Header (32 bytes)
		binary.Write(&buf, binary.LittleEndian, clientCap)
		binary.Write(&buf, binary.LittleEndian, uint32(0)) // MaxPacketSize
		buf.WriteByte(0)            // Collation
		buf.Write(make([]byte, 23)) // Reserved

		// Body
		buf.WriteString("root")
		buf.WriteByte(0) // User null-term

		buf.WriteByte(0) // Auth null-term

		// Attrs: "ab":"cd" (4), "ef":"gh" (4). Total = 8. Limit = 5.
		attrsBuf := bytes.NewBuffer(nil)
		// K1
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("ab")
		// V1
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("cd")
		// K2
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("ef")
		// V2
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("gh")

		attrsBytes := attrsBuf.Bytes()
		buf.WriteByte(byte(len(attrsBytes))) // 12 bytes: attrs payload length
		// ParseLengthEncodedInt parses the integer and returns its value.
		// HandshakeResponseBody uses this value as the length of the *bytes*
		// to consume for attributes; those bytes are then passed to
		// parseAttrs, which expects `[len][str][len][str]`.
		// So `attrsBytes` is correct.
		buf.Write(attrsBytes)

		data := buf.Bytes()

		var p handshake.Response41
		offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
		require.NoError(t, err)

		err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
		require.NoError(t, err)

		// Check truncation
		require.Len(t, p.Attrs, 2)
		require.Equal(t, "cd", p.Attrs["ab"])
		val, ok := p.Attrs["_truncated"]
		require.True(t, ok)
		require.Equal(t, "4", val)

		require.Equal(t, int64(1), vardef.ConnectAttrsLost.Load())
		require.Equal(t, int64(8), vardef.ConnectAttrsLongestSeen.Load()) // 4+4=8
	})

	t.Run("limit 0 disables collection", func(t *testing.T) {
		vardef.ConnectAttrsSize.Store(0)
		vardef.ConnectAttrsLongestSeen.Store(0)
		vardef.ConnectAttrsLost.Store(0)

		var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts
		var buf bytes.Buffer
		binary.Write(&buf, binary.LittleEndian, clientCap)
		binary.Write(&buf, binary.LittleEndian, uint32(0))
		buf.WriteByte(0)
		buf.Write(make([]byte, 23))
		buf.WriteString("root")
		buf.WriteByte(0)
		buf.WriteByte(0)

		attrsBuf := bytes.NewBuffer(nil)
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("ab")
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("cd")
		attrsBytes := attrsBuf.Bytes()
		buf.WriteByte(byte(len(attrsBytes)))
		buf.Write(attrsBytes)
		data := buf.Bytes()

		var p handshake.Response41
		offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
		require.NoError(t, err)

		err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
		require.NoError(t, err)

		require.Len(t, p.Attrs, 0)
		require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
		require.Equal(t, int64(0), vardef.ConnectAttrsLongestSeen.Load())
	})

	t.Run("limit 65536 acceptance", func(t *testing.T) {
		vardef.ConnectAttrsSize.Store(65536)
		vardef.ConnectAttrsLongestSeen.Store(0)
		vardef.ConnectAttrsLost.Store(0)

		var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts
		var buf bytes.Buffer
		binary.Write(&buf, binary.LittleEndian, clientCap)
		binary.Write(&buf, binary.LittleEndian, uint32(0))
		buf.WriteByte(0)
		buf.Write(make([]byte, 23))
		buf.WriteString("root")
		buf.WriteByte(0)
		buf.WriteByte(0)

		attrsBuf := bytes.NewBuffer(nil)
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("ab")
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("cd")
		attrsBytes := attrsBuf.Bytes()
		buf.WriteByte(byte(len(attrsBytes)))
		buf.Write(attrsBytes)
		data := buf.Bytes()

		var p handshake.Response41
		offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
		require.NoError(t, err)

		err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
		require.NoError(t, err)

		// 65536 should easily accept everything
		require.Len(t, p.Attrs, 1)
		require.Equal(t, "cd", p.Attrs["ab"])

		require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
		require.Equal(t, int64(4), vardef.ConnectAttrsLongestSeen.Load())
	})

	t.Run("limit 1MiB rejection", func(t *testing.T) {
		// Construct payload declaring > 1 MiB of attributes.
		var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

		var buf bytes.Buffer
		// Header (32 bytes)
		binary.Write(&buf, binary.LittleEndian, clientCap)
		binary.Write(&buf, binary.LittleEndian, uint32(0))
		buf.WriteByte(0)
		buf.Write(make([]byte, 23))
		// Body
		buf.WriteString("root")
		buf.WriteByte(0)
		buf.WriteByte(0)

		// Encode a length-encoded integer of 1<<20 + 1 (1 MiB + 1 byte).
		// 0xfd prefix = 3-byte little-endian integer (range 65536–16777215).
		// 1048577 = 0x100001 → LE bytes: 0x01, 0x00, 0x10.
		buf.WriteByte(0xfd)
		buf.WriteByte(0x01)
		buf.WriteByte(0x00)
		buf.WriteByte(0x10)

		// The 1 MiB check fires before the bounds-check on
		// data[offset : offset+num], so no panic is expected.

		data := buf.Bytes()
		var p handshake.Response41
		offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
		require.NoError(t, err)

		err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
		require.Error(t, err)
		require.Contains(t, err.Error(), "connection refused: session connection attributes exceed the 1 MiB hard limit")
	})

	t.Run("no limit", func(t *testing.T) {
		vardef.ConnectAttrsSize.Store(-1) // -1 means no limit up to 64KB
		vardef.ConnectAttrsLongestSeen.Store(0)
		vardef.ConnectAttrsLost.Store(0)

		var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

		var buf bytes.Buffer
		// Header (32 bytes)
		binary.Write(&buf, binary.LittleEndian, clientCap)
		binary.Write(&buf, binary.LittleEndian, uint32(0))
		buf.WriteByte(0)
		buf.Write(make([]byte, 23))
		// Body
		buf.WriteString("root")
		buf.WriteByte(0)
		buf.WriteByte(0)

		// Attrs: "ab":"cd", "ef":"gh". Total = 8.
		attrsBuf := bytes.NewBuffer(nil)
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("ab")
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("cd")
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("ef")
		attrsBuf.WriteByte(2)
		attrsBuf.WriteString("gh")

		attrsBytes := attrsBuf.Bytes()
		buf.WriteByte(byte(len(attrsBytes)))
		buf.Write(attrsBytes)

		data := buf.Bytes()
		var p handshake.Response41
		offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
		require.NoError(t, err)

		err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
		require.NoError(t, err)

		// All attrs should be accepted, no truncation.
		require.Len(t, p.Attrs, 2)
		require.Equal(t, "cd", p.Attrs["ab"])
		require.Equal(t, "gh", p.Attrs["ef"])
		_, hasTruncated := p.Attrs["_truncated"]
		require.False(t, hasTruncated)

		require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
		require.Equal(t, int64(8), vardef.ConnectAttrsLongestSeen.Load())
	})

	t.Run("longest_seen not updated for large payloads", func(t *testing.T) {
		// Attrs >= 64KB should NOT update LongestSeen.
		vardef.ConnectAttrsSize.Store(-1) // Limit mapped to 65536 max internally
		vardef.ConnectAttrsLongestSeen.Store(100)
		vardef.ConnectAttrsLost.Store(0)

		var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

		var buf bytes.Buffer
		binary.Write(&buf, binary.LittleEndian, clientCap)
		binary.Write(&buf, binary.LittleEndian, uint32(0))
		buf.WriteByte(0)
		buf.Write(make([]byte, 23))
		buf.WriteString("root")
		buf.WriteByte(0)
		buf.WriteByte(0)

		// Build attrs: one key "k" with a 70000-byte value (total > 64KB).
		attrsBuf := bytes.NewBuffer(nil)
		attrsBuf.WriteByte(1) // key length = 1
		attrsBuf.WriteByte('k')
		// value length = 70000 → needs 3-byte length-encoded int: 0xfd + 3 LE bytes
		valLen := 70000
		attrsBuf.WriteByte(0xfd)
		attrsBuf.WriteByte(byte(valLen))
		attrsBuf.WriteByte(byte(valLen >> 8))
		attrsBuf.WriteByte(byte(valLen >> 16))
		attrsBuf.Write(make([]byte, valLen)) // value payload

		attrsBytes := attrsBuf.Bytes()
		// Encode overall attrs length as 3-byte length-encoded int.
		attrsLen := len(attrsBytes)
		buf.WriteByte(0xfd)
		buf.WriteByte(byte(attrsLen))
		buf.WriteByte(byte(attrsLen >> 8))
		buf.WriteByte(byte(attrsLen >> 16))
		buf.Write(attrsBytes)

		data := buf.Bytes()
		var p handshake.Response41
		offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
		require.NoError(t, err)

		err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
		require.NoError(t, err)

		// Attrs are truncated (totalSize=70001 > effectiveLimit=65536, because
		// limit=-1 maps to effectiveLimit=65536 internally), but LongestSeen
		// should NOT be updated because totalSize >= 64KB.
		require.Equal(t, int64(100), vardef.ConnectAttrsLongestSeen.Load(),
			"LongestSeen should not be updated for payloads >= 64KB")
	})

	t.Run("underscore-prefixed attrs include reserved key", func(t *testing.T) {
		// Keep underscore-prefixed attributes from clients for MySQL parity and
		// observability. The client-provided "_truncated" is retained when no
		// truncation occurs; if truncation happens, the server may overwrite it.
		vardef.ConnectAttrsSize.Store(-1) // Limit mapped to 65536 max internally
		vardef.ConnectAttrsLongestSeen.Store(0)
		vardef.ConnectAttrsLost.Store(0)

		var clientCap uint32 = mysql.ClientProtocol41 | mysql.ClientConnectAtts

		var buf bytes.Buffer
		binary.Write(&buf, binary.LittleEndian, clientCap)
		binary.Write(&buf, binary.LittleEndian, uint32(0))
		buf.WriteByte(0)
		buf.Write(make([]byte, 23))
		buf.WriteString("root")
		buf.WriteByte(0)
		buf.WriteByte(0)

		// Attrs:
		//   "_client_name":"Go-MySQL-Driver" → must be kept
		//   "_os":"linux"                    → must be kept
		//   "_program_name":"mysql"          → must be kept
		//   "_custom":"val"                  → must be kept
		//   "_truncated":"fake"              → kept (may be overwritten by server on truncation)
		//   "app_name":"myapp"               → must be kept
		attrsBuf := bytes.NewBuffer(nil)
		for _, kv := range [][2]string{
			{"_client_name", "Go-MySQL-Driver"},
			{"_os", "linux"},
			{"_program_name", "mysql"},
			{"_custom", "val"},
			{"_truncated", "fake"},
			{"app_name", "myapp"},
		} {
			attrsBuf.WriteByte(byte(len(kv[0])))
			attrsBuf.WriteString(kv[0])
			attrsBuf.WriteByte(byte(len(kv[1])))
			attrsBuf.WriteString(kv[1])
		}

		attrsBytes := attrsBuf.Bytes()
		buf.WriteByte(byte(len(attrsBytes)))
		buf.Write(attrsBytes)

		data := buf.Bytes()
		var p handshake.Response41
		offset, err := parse.HandshakeResponseHeader(context.Background(), &p, data)
		require.NoError(t, err)

		err = parse.HandshakeResponseBody(context.Background(), &p, data, offset)
		require.NoError(t, err)

		// All keys are kept.
		require.Len(t, p.Attrs, 6)
		require.Equal(t, "Go-MySQL-Driver", p.Attrs["_client_name"])
		require.Equal(t, "linux", p.Attrs["_os"])
		require.Equal(t, "mysql", p.Attrs["_program_name"])
		require.Equal(t, "val", p.Attrs["_custom"])
		require.Equal(t, "fake", p.Attrs["_truncated"])
		require.Equal(t, "myapp", p.Attrs["app_name"])

		require.Equal(t, int64(0), vardef.ConnectAttrsLost.Load())
	})
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
sed -n '17,64p' pkg/server/conn_test.go
rg -n '\bvardef\.' pkg/server/conn_test.go
```

Repository: pingcap/tidb
Length of output: 3557

🏁 Script executed:

```bash
fd -t f vardef.go pkg/sessionctx/
```

Repository: pingcap/tidb
Length of output: 38

🏁 Script executed:

```bash
rg -l "ConnectAttrsSize|ConnectAttrsLongestSeen|ConnectAttrsLost" pkg/sessionctx/
```

Repository: pingcap/tidb
Length of output: 280

🏁 Script executed:

```bash
rg -n "ConnectAttrsSize|ConnectAttrsLongestSeen|ConnectAttrsLost" pkg/sessionctx/vardef/
```

Repository: pingcap/tidb
Length of output: 1229

🏁 Script executed:

```bash
grep -A 2 "var ConnectAttrs" pkg/sessionctx/vardef/*.go
```

Repository: pingcap/tidb
Length of output: 38
Import pkg/sessionctx/vardef before adding these truncation tests.
The test code references vardef.ConnectAttrsSize, vardef.ConnectAttrsLongestSeen, and vardef.ConnectAttrsLost, but the import block does not include github.com/pingcap/tidb/pkg/sessionctx/vardef, so the file will not compile. Add the import to resolve the compilation error.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/server/conn_test.go` around lines 2359 - 2718, Add the missing import for
package vardef so the tests referencing vardef.ConnectAttrsSize,
vardef.ConnectAttrsLongestSeen, and vardef.ConnectAttrsLost compile; update the
import block in pkg/server/conn_test.go to include
"github.com/pingcap/tidb/pkg/sessionctx/vardef" (or the module path used in this
repo) alongside the existing imports so the TestParseHandshakeAttrsTruncation
cases can access the vardef symbols.
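For background on the 0xfd bytes the "limit 1MiB rejection" test constructs, here is a minimal sketch of MySQL's length-encoded integer wire rule. The helper name `readLenEncInt` is hypothetical, not TiDB's actual `ParseLengthEncodedInt`; it implements only the standard prefix scheme (values below 0xfb are one byte; 0xfc, 0xfd, and 0xfe announce 2-, 3-, and 8-byte little-endian integers).

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readLenEncInt decodes a MySQL length-encoded integer from the front of b,
// returning the value and the number of bytes consumed.
//	< 0xfb → 1-byte value
//	0xfc   → 2-byte little-endian value follows
//	0xfd   → 3-byte little-endian value follows
//	0xfe   → 8-byte little-endian value follows
func readLenEncInt(b []byte) (val uint64, n int) {
	switch {
	case b[0] < 0xfb:
		return uint64(b[0]), 1
	case b[0] == 0xfc:
		return uint64(binary.LittleEndian.Uint16(b[1:3])), 3
	case b[0] == 0xfd:
		return uint64(b[1]) | uint64(b[2])<<8 | uint64(b[3])<<16, 4
	case b[0] == 0xfe:
		return binary.LittleEndian.Uint64(b[1:9]), 9
	}
	return 0, 0 // 0xfb (NULL) and 0xff (error header) carry no integer value
}

func main() {
	// The rejection test writes 0xfd, 0x01, 0x00, 0x10,
	// which decodes to 0x100001 = 1048577 = 1 MiB + 1.
	v, n := readLenEncInt([]byte{0xfd, 0x01, 0x00, 0x10})
	fmt.Println(v, n)
}
```

This confirms the test's comment: the declared attrs length of 1048577 bytes trips the 1 MiB hard limit before any attribute bytes are read.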
```go
<<<<<<< HEAD
	"Ssl_cipher":      {ScopeGlobal | ScopeSession, ""},
	"Ssl_cipher_list": {ScopeGlobal | ScopeSession, ""},
	"Ssl_verify_mode": {ScopeGlobal | ScopeSession, 0},
	"Ssl_version":     {ScopeGlobal | ScopeSession, ""},
=======
	"Ssl_cipher":      {vardef.ScopeGlobal | vardef.ScopeSession, ""},
	"Ssl_cipher_list": {vardef.ScopeGlobal | vardef.ScopeSession, ""},
	"Ssl_verify_mode": {vardef.ScopeGlobal | vardef.ScopeSession, 0},
	"Ssl_version":     {vardef.ScopeGlobal | vardef.ScopeSession, ""},
	"Performance_schema_session_connect_attrs_longest_seen": {vardef.ScopeGlobal, int64(0)},
	"Performance_schema_session_connect_attrs_lost":         {vardef.ScopeGlobal, int64(0)},
>>>>>>> dffbd3e60fe (*: support logging session connect attrs to slow query log (#66617))
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
rg -n '^(<<<<<<<|=======|>>>>>>>)|Performance_schema_session_connect_attrs_(longest_seen|lost)|\bvardef\.' \
    pkg/sessionctx/variable/statusvar.go
rg -n 'pkg/sessionctx/vardef' pkg/sessionctx/variable/statusvar.go
```

Repository: pingcap/tidb
Length of output: 864

🏁 Script executed:

```bash
head -30 pkg/sessionctx/variable/statusvar.go | cat -n
```

Repository: pingcap/tidb
Length of output: 1154

🏁 Script executed:

```bash
grep -n "^import\|^[[:space:]]*\"pkg/sessionctx" pkg/sessionctx/variable/statusvar.go | head -20
```

Repository: pingcap/tidb
Length of output: 67

🏁 Script executed:

```bash
rg -n "defaultStatus\[" pkg/sessionctx/variable/statusvar.go -A 2 | head -30
```

Repository: pingcap/tidb
Length of output: 106

🏁 Script executed:

```bash
rg -n "func.*GetScope" pkg/sessionctx/variable/statusvar.go -A 10
```

Repository: pingcap/tidb
Length of output: 407

🏁 Script executed:

```bash
rg -n "var defaultStatus" pkg/sessionctx/variable/statusvar.go -A 5
```

Repository: pingcap/tidb
Length of output: 350

🏁 Script executed:

```bash
sed -n '60,160p' pkg/sessionctx/variable/statusvar.go | cat -n
```

Repository: pingcap/tidb
Length of output: 4104

🏁 Script executed:

```bash
rg -n "import.*vardef|from.*vardef" pkg/sessionctx/variable/statusvar.go
```

Repository: pingcap/tidb
Length of output: 38

🏁 Script executed:

```bash
wc -l pkg/sessionctx/variable/statusvar.go
```

Repository: pingcap/tidb
Length of output: 96

🏁 Script executed:

```bash
tail -50 pkg/sessionctx/variable/statusvar.go | cat -n
```

Repository: pingcap/tidb
Length of output: 2174
Resolve merge conflict, add missing vardef import, and ensure new status keys are in defaultStatus map.
Lines 72–86 contain unresolved merge conflict markers that prevent compilation. Additionally, the file uses vardef.ConnectAttrsLongestSeen and vardef.ConnectAttrsLost on lines 162–163 but does not import pkg/sessionctx/vardef.
More critically, lines 162–163 unconditionally add two status variable names to the returned map, but GetScope() (line 92) directly dereferences defaultStatus[status].Scope without nil checks. If the merge is resolved by dropping the two new keys from defaultStatus, calling GetScope() on those names will panic. Ensure the final merge includes both new keys in the defaultStatus map from the incoming branch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/sessionctx/variable/statusvar.go` around lines 132 - 144, Remove the
merge conflict markers in pkg/sessionctx/variable/statusvar.go, add the missing
import "pkg/sessionctx/vardef", and ensure the two new status keys
(Performance_schema_session_connect_attrs_longest_seen and
Performance_schema_session_connect_attrs_lost, or their vardef constants) are
present in the defaultStatus map with appropriate scope (vardef.ScopeGlobal) and
initial int64(0) values; keep the existing Ssl_* entries using
vardef.ScopeGlobal | vardef.ScopeSession so GetScope() can safely dereference
defaultStatus[status].Scope without risking a nil lookup.
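The GetScope hazard described above can be shown with a small stand-alone sketch. The names mirror statusvar.go, but the types are simplified stand-ins I invented for illustration, not the real TiDB definitions: the point is that a map lookup on a missing status name yields a nil pointer, so GetScope should either guarantee the key exists in defaultStatus or guard the dereference.

```go
package main

import "fmt"

// ScopeFlag is a simplified stand-in for the real vardef scope type.
type ScopeFlag int

const (
	ScopeGlobal ScopeFlag = 1 << iota
	ScopeSession
)

type statusVal struct {
	Scope ScopeFlag
	Value any
}

// defaultStatus sketches the map from the review: the merge must keep both
// Performance_schema_* keys or lookups on those names return nil.
var defaultStatus = map[string]*statusVal{
	"Ssl_cipher": {ScopeGlobal | ScopeSession, ""},
	"Performance_schema_session_connect_attrs_lost": {ScopeGlobal, int64(0)},
}

// GetScope with a defensive lookup: if a status name is absent from the map,
// return a default scope instead of dereferencing a nil entry and panicking.
func GetScope(status string) ScopeFlag {
	if v, ok := defaultStatus[status]; ok {
		return v.Scope
	}
	return ScopeGlobal | ScopeSession
}

func main() {
	fmt.Println(GetScope("Performance_schema_session_connect_attrs_lost"))
	fmt.Println(GetScope("not_registered")) // safe even for unknown names
}
```

Whether to guard inside GetScope or to guarantee map completeness is a design choice; the review asks for the latter, and the guard here only illustrates the failure mode being avoided.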
```go
// Copyright 2022 PingCAP, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
```
Update the license header year for this new file.
This file is new in the PR, so the header should be copied from a nearby file with the year adjusted as needed instead of keeping 2022.
As per coding guidelines (**/*.go): for new source files (for example *.go), include the standard TiDB license header (copyright + Apache 2.0) by copying from a nearby file and updating the year if needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/sessionctx/variable/tests/variable_test.go` around lines 1 - 13, The file
pkg/sessionctx/variable/tests/variable_test.go has an outdated license header
(2022); replace the header with the standard TiDB Apache-2.0 license header used
by nearby Go files (copy from an existing *.go file in the repo) and update the
copyright year(s) to the current year or range as used elsewhere so the header
matches project conventions.
Actionable comments posted: 3
🧹 Nitpick comments (2)
pkg/ddl/backfilling_operators.go (1)
528-560: Consider moving the idxResult declaration inside the loop.
The idxResult variable is declared at line 528 but only used within the loop (lines 550, 555). Moving it inside the loop improves scoping and makes the variable's lifetime clearer.
✨ Suggested scope refinement

-	var idxResult IndexRecordChunk
 	err := wrapInBeginRollback(w.se, func(startTS uint64) error {
 		// ...
 		for !done {
 			srcChk := w.getChunk()
 			done, err = fetchTableScanResult(w.ctx, w.copCtx.GetBase(), rs, srcChk)
 			if err != nil || w.ctx.Err() != nil {
 				w.recycleChunk(srcChk)
 				terror.Call(rs.Close)
 				return err
 			}
-			idxResult = IndexRecordChunk{ID: task.ID, Chunk: srcChk, Done: done}
+			idxResult := IndexRecordChunk{ID: task.ID, Chunk: srcChk, Done: done}
 			if w.cpOp != nil {
 				w.cpOp.UpdateChunk(task.ID, srcChk.NumRows(), done)
 			}
 			w.totalCount.Add(int64(srcChk.NumRows()))
 			sender(idxResult)
 		}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/ddl/backfilling_operators.go` around lines 528 - 560, the idxResult variable is declared outside the fetch loop but only used inside it; move its declaration into the loop body where it's assigned and sent to improve scope and lifetime. In the closure passed to wrapInBeginRollback (the anonymous func with startTS uint64) locate idxResult = IndexRecordChunk{...} and change it so you create a new local idxResult inside the for !done { ... } block just before assigning/sending it (refer to functions/getters: getChunk, fetchTableScanResult, sender, and methods cpOp.UpdateChunk and totalCount.Add to ensure no other references rely on the outer variable).
pkg/planner/core/rule_eliminate_projection.go (1)
191-207: Avoid full-map prefiltering here; it adds avoidable optimizer overhead.
On Lines 193-204, every node scans all replace entries against all child schema columns before replacement. Since column replacement already no-ops for missing keys, this extra filtering is redundant and can noticeably increase planning cost on large plans.
♻️ Proposed simplification
-	// Filter replace map first to make sure the replaced columns
-	// exist in the child output.
-	newReplace := make(map[string]*expression.Column, len(replace))
-	for code, expr := range replace {
-childloop:
-		for _, child := range p.Children() {
-			for _, schemaCol := range child.Schema().Columns {
-				if schemaCol.Equal(nil, expr) {
-					newReplace[code] = expr
-					break childloop
-				}
-			}
-		}
-	}
-	if len(newReplace) > 0 {
-		p.ReplaceExprColumns(newReplace)
-	}
+	if len(replace) > 0 {
+		p.ReplaceExprColumns(replace)
+	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/planner/core/rule_eliminate_projection.go` around lines 191 - 207, The prefilter loop building newReplace is redundant and costly because ReplaceExprColumns already ignores missing keys; remove the newReplace construction and instead directly call p.ReplaceExprColumns(replace) when replace is non-empty (check len(replace) > 0), eliminating the nested scans over p.Children() and child.Schema().Columns; update any references to newReplace to use replace and keep p.Children() and p.ReplaceExprColumns(...) as the focal symbols.
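The no-op property this simplification relies on (columns with no entry in the replace map are left untouched) can be shown with a toy stand-in; the type and function below are illustrative, not the planner's actual expression.Column API.

```go
package main

import "fmt"

// column is a toy stand-in for an expression column; only the name matters here.
type column struct{ name string }

// replaceExprColumns mimics the relevant semantics of the planner's
// ReplaceExprColumns: entries in replace with no matching column are
// silently ignored, so callers do not need to prefilter the map.
func replaceExprColumns(exprs []*column, replace map[string]*column) {
	for i, c := range exprs {
		if nc, ok := replace[c.name]; ok {
			exprs[i] = nc
		}
	}
}

func main() {
	exprs := []*column{{"a"}, {"b"}}
	replace := map[string]*column{
		"a":       {"a_renamed"},
		"missing": {"never_used"}, // ignored: the no-op the review relies on
	}
	replaceExprColumns(exprs, replace)
	fmt.Println(exprs[0].name, exprs[1].name) // a_renamed b
}
```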
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/server/internal/parse/parse.go`:
- Around line 198-208: The current fast-path in parseAttrs skips
applyConnAttrsPolicyAndMetrics when variable.ConnectAttrsSize.Load() == 0,
preventing truncation markers and metrics; remove that early return and always
call applyConnAttrsPolicyAndMetrics with the loaded size (even when zero). Keep
decodeConnAttrs(data) and its error handling as-is, then call
applyConnAttrsPolicyAndMetrics(decoded, variable.ConnectAttrsSize.Load()) to get
attrs and warningsText and return them; this ensures the truncation policy and
metrics updating run for the zero-size "drop everything" configuration.
- Around line 174-177: The code currently preserves a client-supplied
"_truncated" key (constant reservedConnAttrTruncated) which lets clients forge
TiDB-injected metadata; update the connection-attribute parsing/merging logic to
drop any client-provided entry whose key equals reservedConnAttrTruncated before
merging into server-side attributes so only the server can set that key, and
apply the same filter in the other parsing path that handles connection
attributes (the secondary block around where connection attributes are
parsed/merged — ensure both code paths remove reservedConnAttrTruncated from
client input).
In `@tests/realtikvtest/addindextest3/ingest_test.go`:
- Around line 474-476: The code sets ingest.ForceSyncFlagForTest then calls
tk.MustExec and clears the flag directly, which can leak the flag if MustExec
aborts; wrap the DDL call in a small closure so you can immediately defer
restoring ingest.ForceSyncFlagForTest to false (i.e., set
ForceSyncFlagForTest.Store(true) inside the closure and call defer
ForceSyncFlagForTest.Store(false) before invoking tk.MustExec("alter table t add
index idx_test(b);")) so the flag is always restored even on failure; locate the
call site using the symbol ingest.ForceSyncFlagForTest and the tk.MustExec call
shown.
---
Nitpick comments:
In `@pkg/ddl/backfilling_operators.go`:
- Around line 528-560: The idxResult variable is declared outside the fetch loop
but only used inside it; move its declaration into the loop body where it's
assigned and sent to improve scope and lifetime. In the closure passed to
wrapInBeginRollback (the anonymous func with startTS uint64) locate idxResult =
IndexRecordChunk{...} and change it so you create a new local idxResult inside
the for !done { ... } block just before assigning/sending it (refer to
functions/getters: getChunk, fetchTableScanResult, sender, and methods
cpOp.UpdateChunk and totalCount.Add to ensure no other references rely on the
outer variable).
In `@pkg/planner/core/rule_eliminate_projection.go`:
- Around line 191-207: The prefilter loop building newReplace is redundant and
costly because ReplaceExprColumns already ignores missing keys; remove the
newReplace construction and instead directly call p.ReplaceExprColumns(replace)
when replace is non-empty (check len(replace) > 0), eliminating the nested scans
over p.Children() and child.Schema().Columns; update any references to
newReplace to use replace and keep p.Children() and p.ReplaceExprColumns(...) as
the focal symbols.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: f1500afc-3bcd-43c2-96d3-e77b8fea6cbd
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (31)
- DEPS.bzl
- go.mod
- pkg/ddl/backfilling_operators.go
- pkg/ddl/index_cop.go
- pkg/executor/adapter_slow_log.go
- pkg/executor/cluster_table_test.go
- pkg/executor/infoschema_reader_test.go
- pkg/executor/slow_query.go
- pkg/executor/slow_query_test.go
- pkg/infoschema/test/clustertablestest/cluster_tables_test.go
- pkg/infoschema/test/clustertablestest/tables_test.go
- pkg/lightning/backend/local/local.go
- pkg/owner/manager.go
- pkg/planner/core/casetest/rule/BUILD.bazel
- pkg/planner/core/casetest/rule/rule_eliminate_projection_test.go
- pkg/planner/core/rule_eliminate_projection.go
- pkg/server/conn_test.go
- pkg/server/internal/parse/BUILD.bazel
- pkg/server/internal/parse/parse.go
- pkg/server/internal/parse/parse_test.go
- pkg/sessionctx/variable/noop.go
- pkg/sessionctx/variable/session_test.go
- pkg/sessionctx/variable/slow_log.go
- pkg/sessionctx/variable/statusvar.go
- pkg/sessionctx/variable/sysvar.go
- pkg/sessionctx/variable/tidb_vars.go
- pkg/sessionctx/variable/variable_test.go
- pkg/util/dbterror/ddl_terror.go
- tests/realtikvtest/addindextest1/disttask_test.go
- tests/realtikvtest/addindextest3/ingest_test.go
- tests/realtikvtest/pessimistictest/pessimistic_test.go
💤 Files with no reviewable changes (5)
- pkg/util/dbterror/ddl_terror.go
- pkg/planner/core/casetest/rule/rule_eliminate_projection_test.go
- pkg/executor/infoschema_reader_test.go
- tests/realtikvtest/pessimistictest/pessimistic_test.go
- pkg/sessionctx/variable/noop.go
✅ Files skipped from review due to trivial changes (6)
- pkg/infoschema/test/clustertablestest/tables_test.go
- pkg/planner/core/casetest/rule/BUILD.bazel
- pkg/sessionctx/variable/variable_test.go
- go.mod
- pkg/server/internal/parse/parse_test.go
- pkg/server/conn_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
- pkg/executor/adapter_slow_log.go
- pkg/server/internal/parse/BUILD.bazel
- pkg/executor/cluster_table_test.go
- pkg/sessionctx/variable/sysvar.go
- pkg/sessionctx/variable/slow_log.go
// reservedConnAttrTruncated is injected by TiDB when connection attributes
// are truncated. A client-provided key with the same name may be overwritten
// when truncation happens.
const reservedConnAttrTruncated = "_truncated"
Don't let clients forge the reserved _truncated metadata.
_truncated is documented as TiDB-injected, but a client-supplied key is currently preserved whenever no server-side truncation happens. That makes the slow log and SLOW_QUERY output ambiguous, because consumers can no longer tell whether _truncated came from TiDB or the client.
💡 Suggested fix
if totalSize > effectiveLimit {
if !truncated {
truncated = true
variable.ConnectAttrsLost.Add(1)
}
continue
}
if !truncated {
- attrs[item.key] = item.value
+ if item.key != reservedConnAttrTruncated {
+ attrs[item.key] = item.value
+ }
acceptedSize += kvSize
}
}
Also applies to: 262-277
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/server/internal/parse/parse.go` around lines 174 - 177, The code
currently preserves a client-supplied "_truncated" key (constant
reservedConnAttrTruncated) which lets clients forge TiDB-injected metadata;
update the connection-attribute parsing/merging logic to drop any
client-provided entry whose key equals reservedConnAttrTruncated before merging
into server-side attributes so only the server can set that key, and apply the
same filter in the other parsing path that handles connection attributes (the
secondary block around where connection attributes are parsed/merged — ensure
both code paths remove reservedConnAttrTruncated from client input).
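The filtering the comment asks for is small enough to sketch on its own. The sketch assumes a plain map of decoded client attributes; the function name is mine, not TiDB's actual code.

```go
package main

import "fmt"

const reservedConnAttrTruncated = "_truncated"

// sanitizeClientAttrs drops any client-provided key that collides with the
// server-reserved "_truncated" marker before merging, so the marker seen in
// slow logs can only have been injected by the server.
func sanitizeClientAttrs(client map[string]string) map[string]string {
	out := make(map[string]string, len(client))
	for k, v := range client {
		if k == reservedConnAttrTruncated {
			continue // clients must not be able to forge server metadata
		}
		out[k] = v
	}
	return out
}

func main() {
	got := sanitizeClientAttrs(map[string]string{
		"app_name":   "demo",
		"_truncated": "9999", // forged by the client; must be dropped
	})
	fmt.Println(len(got), got["app_name"]) // 1 demo
}
```

Applying the same filter in both parsing paths keeps the meaning of `_truncated` unambiguous for slow-log consumers.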
ingest.ForceSyncFlagForTest.Store(true)
defer ingest.ForceSyncFlagForTest.Store(oldForceSyncFlag)
tk.MustExec("alter table t add index idx_test(b);")
ingest.ForceSyncFlagForTest.Store(false)
Restore ForceSyncFlagForTest with a scoped defer.
This is package-global state. If tk.MustExec("alter table ...") aborts, Line 476 never runs and the flag leaks into the second call and later tests. Wrap just the DDL in a small closure so the restore still happens on failure without leaving the flag enabled for the subsequent update.
Suggested fix
- ingest.ForceSyncFlagForTest.Store(true)
- tk.MustExec("alter table t add index idx_test(b);")
- ingest.ForceSyncFlagForTest.Store(false)
+ func() {
+ prev := ingest.ForceSyncFlagForTest.Load()
+ ingest.ForceSyncFlagForTest.Store(true)
+ defer func() { ingest.ForceSyncFlagForTest.Store(prev) }()
+ tk.MustExec("alter table t add index idx_test(b);")
+ }()

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-ingest.ForceSyncFlagForTest.Store(true)
-defer ingest.ForceSyncFlagForTest.Store(oldForceSyncFlag)
-tk.MustExec("alter table t add index idx_test(b);")
-ingest.ForceSyncFlagForTest.Store(false)
+func() {
+	prev := ingest.ForceSyncFlagForTest.Load()
+	ingest.ForceSyncFlagForTest.Store(true)
+	defer func() { ingest.ForceSyncFlagForTest.Store(prev) }()
+	tk.MustExec("alter table t add index idx_test(b);")
+}()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/realtikvtest/addindextest3/ingest_test.go` around lines 474 - 476, The
code sets ingest.ForceSyncFlagForTest then calls tk.MustExec and clears the flag
directly, which can leak the flag if MustExec aborts; wrap the DDL call in a
small closure so you can immediately defer restoring ingest.ForceSyncFlagForTest
to false (i.e., set ForceSyncFlagForTest.Store(true) inside the closure and call
defer ForceSyncFlagForTest.Store(false) before invoking tk.MustExec("alter table
t add index idx_test(b);")) so the flag is always restored even on failure;
locate the call site using the symbol ingest.ForceSyncFlagForTest and the
tk.MustExec call shown.
Force-pushed from 3c09e39 to 6053171.
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## release-8.5 #67323 +/- ##
================================================
Coverage ? 55.5072%
================================================
Files ? 1823
Lines ? 655042
Branches ? 0
================================================
Hits ? 363596
Misses ? 264444
Partials ? 27002
Flags with carried forward coverage won't be shown.
♻️ Duplicate comments (2)
pkg/server/internal/parse/parse.go (2)
198-208: ⚠️ Potential issue | 🟠 Major
Keep the truncation policy active when the limit is 0.
The fast path on Line 199 skips decoding and applyConnAttrsPolicyAndMetrics(), so a supported "drop all attrs" configuration never increments Performance_schema_session_connect_attrs_lost, never updates ..._longest_seen, and never emits the server-side _truncated marker. Downstream, slow logs and system tables will silently omit Session_connect_attrs instead of showing truncation metadata.
💡 Suggested fix
 func parseAttrs(data []byte) (map[string]string, string, error) {
-	if variable.ConnectAttrsSize.Load() == 0 {
-		return map[string]string{}, "", nil
-	}
-
 	decoded, err := decodeConnAttrs(data)
 	if err != nil {
 		return map[string]string{}, "", err
 	}
 	attrs, warningsText := applyConnAttrsPolicyAndMetrics(decoded, variable.ConnectAttrsSize.Load())

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/server/internal/parse/parse.go` around lines 198 - 208, The early fast-return in parseAttrs when variable.ConnectAttrsSize.Load() == 0 skips decodeConnAttrs and applyConnAttrsPolicyAndMetrics so truncation metrics and markers are never recorded; change parseAttrs to still run the normal path (call decodeConnAttrs and then applyConnAttrsPolicyAndMetrics with the loaded size, even if it's 0) or explicitly invoke applyConnAttrsPolicyAndMetrics with an empty/decoded attrs payload and size 0 so Performance_schema_session_connect_attrs_lost, ..._longest_seen and the server-side _truncated marker are properly updated; locate parseAttrs, decodeConnAttrs, and applyConnAttrsPolicyAndMetrics to implement this behavior.
252-277: ⚠️ Potential issue | 🟠 Major
Don't let clients forge the reserved _truncated metadata.
When no server-side truncation happens, the merge loop still preserves a client-supplied _truncated entry. That makes the slow log and SLOW_QUERY output ambiguous, because consumers can no longer tell whether _truncated was injected by TiDB or spoofed by the client.
💡 Suggested fix
 for _, item := range decoded.items {
 	kvSize := int64(len(item.key)) + int64(len(item.value))
 	totalSize += kvSize
 	if totalSize > effectiveLimit {
 		if !truncated {
 			truncated = true
 			variable.ConnectAttrsLost.Add(1)
 		}
 		continue
 	}
 	if !truncated {
-		attrs[item.key] = item.value
+		if item.key != reservedConnAttrTruncated {
+			attrs[item.key] = item.value
+		}
 		acceptedSize += kvSize
 	}
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/server/internal/parse/parse.go` around lines 252 - 277, The code currently preserves a client-supplied reservedConnAttrTruncated entry when no server truncation happened; update the merge logic in the loop that iterates decoded.items (and/or the post-merge handling of attrs) to ignore or remove any item whose key equals reservedConnAttrTruncated so clients cannot inject `_truncated`, i.e. skip adding that key during the attrs[item.key] = item.value step (and ensure after the loop that if truncated == false there is no attrs[reservedConnAttrTruncated] entry). Ensure references to decoded.items, attrs, truncated, acceptedSize and reservedConnAttrTruncated are used to locate and modify the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@pkg/server/internal/parse/parse.go`:
- Around line 198-208: The early fast-return in parseAttrs when
variable.ConnectAttrsSize.Load() == 0 skips decodeConnAttrs and
applyConnAttrsPolicyAndMetrics so truncation metrics and markers are never
recorded; change parseAttrs to still run the normal path (call decodeConnAttrs
and then applyConnAttrsPolicyAndMetrics with the loaded size, even if it's 0) or
explicitly invoke applyConnAttrsPolicyAndMetrics with an empty/decoded attrs
payload and size 0 so Performance_schema_session_connect_attrs_lost,
..._longest_seen and the server-side _truncated marker are properly updated;
locate parseAttrs, decodeConnAttrs, and applyConnAttrsPolicyAndMetrics to
implement this behavior.
- Around line 252-277: The code currently preserves a client-supplied
reservedConnAttrTruncated entry when no server truncation happened; update the
merge logic in the loop that iterates decoded.items (and/or the post-merge
handling of attrs) to ignore or remove any item whose key equals
reservedConnAttrTruncated so clients cannot inject `_truncated`, i.e. skip
adding that key during the attrs[item.key] = item.value step (and ensure after
the loop that if truncated == false there is no attrs[reservedConnAttrTruncated]
entry). Ensure references to decoded.items, attrs, truncated, acceptedSize and
reservedConnAttrTruncated are used to locate and modify the logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 40922d9a-4d62-42bb-9a01-79803a22f933
📒 Files selected for processing (22)
- pkg/executor/BUILD.bazel
- pkg/executor/adapter_slow_log.go
- pkg/executor/cluster_table_test.go
- pkg/executor/slow_query.go
- pkg/executor/slow_query_sql_test.go
- pkg/executor/slow_query_test.go
- pkg/infoschema/tables.go
- pkg/infoschema/test/clustertablestest/BUILD.bazel
- pkg/infoschema/test/clustertablestest/cluster_tables_test.go
- pkg/infoschema/test/clustertablestest/tables_test.go
- pkg/server/conn_test.go
- pkg/server/internal/parse/BUILD.bazel
- pkg/server/internal/parse/parse.go
- pkg/server/internal/parse/parse_test.go
- pkg/sessionctx/variable/noop.go
- pkg/sessionctx/variable/session_test.go
- pkg/sessionctx/variable/slow_log.go
- pkg/sessionctx/variable/statusvar.go
- pkg/sessionctx/variable/sysvar.go
- pkg/sessionctx/variable/tidb_vars.go
- pkg/sessionctx/variable/variable_test.go
- pkg/testkit/testutil/require.go
💤 Files with no reviewable changes (1)
- pkg/sessionctx/variable/noop.go
✅ Files skipped from review due to trivial changes (6)
- pkg/executor/BUILD.bazel
- pkg/sessionctx/variable/variable_test.go
- pkg/testkit/testutil/require.go
- pkg/executor/cluster_table_test.go
- pkg/sessionctx/variable/statusvar.go
- pkg/server/conn_test.go
🚧 Files skipped from review as they are similar to previous changes (8)
- pkg/infoschema/test/clustertablestest/BUILD.bazel
- pkg/server/internal/parse/BUILD.bazel
- pkg/infoschema/tables.go
- pkg/executor/slow_query_test.go
- pkg/sessionctx/variable/slow_log.go
- pkg/server/internal/parse/parse_test.go
- pkg/sessionctx/variable/sysvar.go
- pkg/infoschema/test/clustertablestest/cluster_tables_test.go
/retest
1 similar comment
/retest
/test fast_test_tiprow_for_release
1 similar comment
/test fast_test_tiprow_for_release
@jiong-nba: The specified target(s) for
The following commands are available to trigger optional jobs:
Use
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test unit-test
@jiong-nba: The specified target(s) for
Use
Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/retest
2 similar comments
/retest
/retest
/approve
/retest
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: D3Hunter, nolouch, yibin87
The full list of commands accepted by this bot can be found here.
Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
This is an automated cherry-pick of #66617
What problem does this PR solve?
Issue Number: close #66616
Problem Summary:
In a Kubernetes deployment environment, container IPs change frequently. The current slow query log records only IP information, which is insufficient to trace the real source or application of SQL statements. Recording SESSION_CONNECT_ATTRS in the slow log provides critical client metadata (such as app_name, _os, _client_name, etc.) injected during the connection handshake.
What changed and how does it work?
This PR introduces the feature to inject and log SESSION_CONNECT_ATTRS into the slow query log and exposes them through system tables:

Slow Query Log Enhancement:

- Adds a Session_connect_attrs field in JSON format to the slow query file (pkg/sessionctx/variable/slow_log.go).
- Adds a SESSION_CONNECT_ATTRS JSON column to both the SLOW_QUERY and CLUSTER_SLOW_QUERY system tables (pkg/infoschema/tables.go).
- Adds parsing support for the new field when reading slow logs (pkg/executor/slow_query.go).

System Variables:
- Promotes performance_schema_session_connect_attrs_size from a noop variable to a functional GLOBAL variable (pkg/sessionctx/variable/sysvar.go), controlling the maximum total byte size of attributes per connection (default: 4096).

Status Variables & Truncation Logic:
- Enforces the size limit configured by performance_schema_session_connect_attrs_size when parsing handshake attributes (pkg/server/internal/parse/parse.go).
- Adds two GLOBAL status variables: Performance_schema_session_connect_attrs_longest_seen and Performance_schema_session_connect_attrs_lost to monitor attribute size and truncation stats.
- Injects _truncated to record discarded bytes.

Check List
Tests
Environment and connection settings
- Store: unistore
- TiDB address: 127.0.0.1:4000
- Status address: 127.0.0.1:10080
- Slow log file: /tmp/tidb-slow-attrs-demo.log
- Client: mysql-connector-python 9.6.0
- Connection: host 127.0.0.1, port 4000, user root
- conn_attrs (normal case): {"app_name":"slowlog_demo","client_case":"attrs_normal"}
- conn_attrs (truncation case): {"app_name":"slowlog_demo","client_case":"attrs_trunc","extra":"x"*256}
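Given conn_attrs like the normal case above, the slow-log field this PR adds can be sketched as a write/read round trip. The `# Session_connect_attrs:` prefix matches the new field name; the helper functions below are mine, not TiDB's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// formatSessionConnectAttrs sketches the writer side: one slow-log comment
// line holding the attributes as JSON.
func formatSessionConnectAttrs(attrs map[string]string) (string, error) {
	b, err := json.Marshal(attrs) // encoding/json emits string map keys in sorted order
	if err != nil {
		return "", err
	}
	return "# Session_connect_attrs: " + string(b), nil
}

// parseSessionConnectAttrs sketches the reader side used when slow logs are
// surfaced through information_schema.slow_query.
func parseSessionConnectAttrs(line string) (map[string]string, error) {
	payload, ok := strings.CutPrefix(line, "# Session_connect_attrs: ")
	if !ok {
		return nil, fmt.Errorf("not a Session_connect_attrs line: %q", line)
	}
	attrs := map[string]string{}
	err := json.Unmarshal([]byte(payload), &attrs)
	return attrs, err
}

func main() {
	line, _ := formatSessionConnectAttrs(map[string]string{
		"app_name":    "slowlog_demo",
		"client_case": "attrs_normal",
	})
	fmt.Println(line) // # Session_connect_attrs: {"app_name":"slowlog_demo","client_case":"attrs_normal"}
	back, _ := parseSessionConnectAttrs(line)
	fmt.Println(back["client_case"]) // attrs_normal
}
```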
Test SQL
Virtual table query SQL
Observed output: information_schema.slow_query
####Observed output: information_schema.cluster_slow_query
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.
Summary by CodeRabbit