
logpuller: add region runtime registry scaffold#4727

Open
lidezhu wants to merge 6 commits into pingcap:master from lidezhu:ldz/improve-log-puller001

Conversation

@lidezhu
Collaborator

@lidezhu lidezhu commented Apr 5, 2026

What problem does this PR solve?

Issue Number: close #xxx

What is changed and how it works?

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Questions

Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?

Release note

Please refer to [Release Notes Language Style Guide](https://pingcap.github.io/tidb-dev-guide/contribute-to-tidb/release-notes-style-guide.html) to write a quality release note.

If you don't think this PR needs a release note then fill it with `None`.

Summary by CodeRabbit

  • Infrastructure Improvements
    • Added per-region runtime lifecycle tracking for improved observability of region phases, errors, retries, and snapshots.
  • New Features
    • Runtime records capture phase transitions, timestamps (range-lock/enqueue/send/initialized/replicating), worker assignment, resolved progress, and retry counts.
    • New metric exposing region counts by runtime phase.
  • Tests
    • Added unit and integration tests validating key allocation, state updates, deep-copy safety, snapshots, removals, and error/retry behaviors.

@ti-chi-bot ti-chi-bot Bot added do-not-merge/needs-linked-issue release-note Denotes a PR that will be considered when it comes time to generate release notes. labels Apr 5, 2026
@ti-chi-bot

ti-chi-bot Bot commented Apr 5, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign wlwilliamx for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai
Contributor

coderabbitai Bot commented Apr 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds a concurrency-safe region runtime registry and integrates it into subscriptionClient, workers, and event handling to track per-(subscription, region, generation) lifecycle, phases, timestamps, errors, and retries, and to provide allocation, snapshot, and removal APIs; includes unit and integration tests.
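The registry shape this walkthrough describes can be sketched from the type and method names that appear in the review diffs later in the thread. Everything beyond those names (the phase values, the state fields, the exact signatures) is an assumption, not the PR's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

type SubscriptionID uint64

type regionPhase int

const (
	regionPhasePending regionPhase = iota
	regionPhaseWaitInitialized
	regionPhaseReplicating
	regionPhaseRetryPending
)

// identity is stable across retries; the key adds a per-retry generation.
type regionRuntimeIdentity struct {
	subID    SubscriptionID
	regionID uint64
}

type regionRuntimeKey struct {
	subID      SubscriptionID
	regionID   uint64
	generation uint64
}

func (k regionRuntimeKey) isValid() bool { return k.generation > 0 }

type regionRuntimeState struct {
	phase      regionPhase
	retryCount int
}

type regionRuntimeRegistry struct {
	mu          sync.RWMutex
	states      map[regionRuntimeKey]*regionRuntimeState
	generations map[regionRuntimeIdentity]uint64
}

func newRegionRuntimeRegistry() *regionRuntimeRegistry {
	return &regionRuntimeRegistry{
		states:      make(map[regionRuntimeKey]*regionRuntimeState),
		generations: make(map[regionRuntimeIdentity]uint64),
	}
}

// allocKey mints a key with a monotonically increasing generation per
// (subscription, region) pair and registers an initial state for it.
func (r *regionRuntimeRegistry) allocKey(subID SubscriptionID, regionID uint64) regionRuntimeKey {
	r.mu.Lock()
	defer r.mu.Unlock()
	id := regionRuntimeIdentity{subID: subID, regionID: regionID}
	r.generations[id]++
	key := regionRuntimeKey{subID: subID, regionID: regionID, generation: r.generations[id]}
	r.states[key] = &regionRuntimeState{phase: regionPhasePending}
	return key
}

// transition updates the phase of an existing entry; unknown keys are ignored.
func (r *regionRuntimeRegistry) transition(key regionRuntimeKey, phase regionPhase) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if st, ok := r.states[key]; ok {
		st.phase = phase
	}
}

func main() {
	reg := newRegionRuntimeRegistry()
	k1 := reg.allocKey(1, 42)
	k2 := reg.allocKey(1, 42) // retry: same identity, new generation
	reg.transition(k2, regionPhaseReplicating)
	fmt.Println(k1.generation, k2.generation) // 1 2
}
```

The per-identity generation counter is what lets stale events from a previous incarnation of a region be distinguished from the current one.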

Changes

Cohort / File(s) / Summary

  • Region Runtime Registry (logservice/logpuller/region_runtime.go):
    New registry implementation: regionPhase states, regionRuntimeIdentity, regionRuntimeKey, regionRuntimeState (with deep-copy clone), and regionRuntimeRegistry with alloc/upsert/transition/get/snapshot/remove APIs.
  • Registry Unit Tests (logservice/logpuller/region_runtime_test.go):
    Unit tests for allocKey generation, lifecycle updates, snapshot deep-copy semantics, and removal-by-subscription.
  • Subscription Client Integration (logservice/logpuller/subscription_client.go):
    subscriptionClient now owns a regionRuntimeRegistry; helpers added to allocate/update/transition/remove runtime entries, integrated across scheduling, locking, failure handling, and table drain.
  • Region Event Handling (logservice/logpuller/region_event_handler.go):
    Event and resolved-ts paths now update runtime last-event, initialized/replicating, and resolved-ts timestamps when a registry key is present.
  • Region Request Worker (logservice/logpuller/region_request_worker.go):
    Worker records enqueue/send times, assigns a worker ID, and transitions phases in the registry when requests are queued/sent.
  • Integration Tests (logservice/logpuller/region_runtime_integration_test.go):
    Integration tests for scheduling, on-region-fail, resolved-ts handling, retry/remove behaviors, and registry interactions.
  • Region State Struct (logservice/logpuller/region_state.go):
    Added runtimeKey regionRuntimeKey to regionInfo and helper methods on regionFeedState to forward runtime updates to the registry.
  • Metrics (pkg/metrics/log_puller.go):
    New Prometheus gauge ticdc_subscription_client_region_runtime_phase_count (SubscriptionClientRegionRuntimePhaseCount) exporting region counts by phase.

Sequence Diagram(s)

sequenceDiagram
    participant Client as SubscriptionClient
    participant Registry as RegionRuntimeRegistry
    participant Queue as TaskQueue
    participant Worker as RegionRequestWorker
    participant Store as Store/RPC

    Client->>Registry: allocKey(sub, region)
    Client->>Registry: updateRegionInfo(key, regionInfo)
    Client->>Queue: scheduleRegionRequest(region)
    Queue->>Worker: popTask()
    Worker->>Registry: setRequestEnqueueTime(key)
    Worker->>Registry: setRangeLockAcquiredTime(key)
    Worker->>Store: send request (RPC)
    Store-->>Worker: response / error
    alt success
        Worker->>Registry: setRequestSendTime(key)
        Worker->>Registry: transition(key, regionPhaseWaitInitialized)
    else failure
        Worker->>Registry: recordError(key, err)
        Worker->>Registry: incRetry(key)
        Worker->>Registry: transition(key, regionPhaseRetryPending)
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

lgtm, approved

Suggested reviewers

  • hongyunyan
  • asddongmen
  • wk989898

Poem

🐰 I hop from alloc to queued and then to send,

generations counted, each runtime a friend.
Timestamps tucked in my burrow, neat and deep,
Deep-copied spans let me safely keep.
A carrot for every phase I tend! 🥕

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning: The PR description is entirely a template with no actual implementation details, design rationale, issue reference, or answers to required questions. All key sections are missing or contain only placeholders like 'Issue Number: close #xxx'. Resolution: fill in the PR description with a valid issue reference, an explanation of the problem being solved, details of the changes made, answers to all checklist and question items, and a release note; remove unfilled template sections.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 5.88%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (1 passed)

  • Title check ✅ Passed: The title clearly and concisely describes the main change, adding a region runtime registry scaffold to the logpuller component, which matches the primary purpose of the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@ti-chi-bot ti-chi-bot Bot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Apr 5, 2026

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a regionRuntimeRegistry to track and manage the state of regions within the logpuller service. It includes a comprehensive set of methods for updating region metadata, tracking lifecycle phases, and recording performance metrics. A review comment suggests optimizing the removeBySubscription method by implementing a secondary index to avoid O(N) iterations over all states while holding a lock, which could be a bottleneck when managing a large number of regions.

Comment on lines +321 to +334
func (r *regionRuntimeRegistry) removeBySubscription(subID SubscriptionID) int {
	r.mu.Lock()
	defer r.mu.Unlock()

	removed := 0
	for key := range r.states {
		if key.subID != subID {
			continue
		}
		delete(r.states, key)
		removed++
	}
	return removed
}


medium

The current implementation of removeBySubscription iterates over all states in the registry while holding a lock. If the number of regions is large (e.g., millions), this can block all other registry operations for a significant amount of time, potentially causing performance issues.

Consider using a secondary index to improve the performance of this operation. For example, you could maintain a map from SubscriptionID to a set of regionRuntimeKeys.

Example:

type regionRuntimeRegistry struct {
	mu sync.RWMutex

	states        map[regionRuntimeKey]*regionRuntimeState
	statesBySubID map[SubscriptionID]map[regionRuntimeKey]struct{}
	generations   map[regionRuntimeIdentity]uint64
}

With this structure, removeBySubscription can be implemented more efficiently without iterating through all states. You would need to update upsert and remove to maintain this secondary index.
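A sketch of how the suggested secondary index would keep removeBySubscription proportional to the subscription's own regions rather than the whole registry. Types are abbreviated, and upsert is shown only to illustrate index maintenance; this is not the PR's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

type SubscriptionID uint64

type regionRuntimeKey struct {
	subID      SubscriptionID
	regionID   uint64
	generation uint64
}

type regionRuntimeState struct{ retryCount int }

type regionRuntimeRegistry struct {
	mu            sync.RWMutex
	states        map[regionRuntimeKey]*regionRuntimeState
	statesBySubID map[SubscriptionID]map[regionRuntimeKey]struct{}
}

func newRegistry() *regionRuntimeRegistry {
	return &regionRuntimeRegistry{
		states:        make(map[regionRuntimeKey]*regionRuntimeState),
		statesBySubID: make(map[SubscriptionID]map[regionRuntimeKey]struct{}),
	}
}

// upsert maintains the secondary index alongside the primary map so the
// two can never drift apart.
func (r *regionRuntimeRegistry) upsert(key regionRuntimeKey) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.states[key] = &regionRuntimeState{}
	set, ok := r.statesBySubID[key.subID]
	if !ok {
		set = make(map[regionRuntimeKey]struct{})
		r.statesBySubID[key.subID] = set
	}
	set[key] = struct{}{}
}

// removeBySubscription now touches only the keys of the target subscription
// instead of scanning every state in the registry while holding the lock.
func (r *regionRuntimeRegistry) removeBySubscription(subID SubscriptionID) int {
	r.mu.Lock()
	defer r.mu.Unlock()
	set := r.statesBySubID[subID]
	for key := range set {
		delete(r.states, key)
	}
	delete(r.statesBySubID, subID)
	return len(set)
}

func main() {
	r := newRegistry()
	r.upsert(regionRuntimeKey{subID: 1, regionID: 10, generation: 1})
	r.upsert(regionRuntimeKey{subID: 1, regionID: 11, generation: 1})
	r.upsert(regionRuntimeKey{subID: 2, regionID: 20, generation: 1})
	fmt.Println(r.removeBySubscription(1), len(r.states)) // 2 1
}
```

The trade-off is a little extra bookkeeping on every upsert/remove in exchange for O(k) unsubscription, where k is that subscription's region count.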

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
logservice/logpuller/subscription_client.go (1)

192-193: Registry field is initialized but unused in current codebase.

The regionRuntimeRegistry is declared and initialized at line 245 but is never referenced or used in this file or anywhere else in production code. None of its methods (allocKey, transition, removeBySubscription) are called outside of tests.

When this registry is wired up in follow-up PRs, make sure to add cleanup logic (a call to removeBySubscription) in the Unsubscribe and Close methods to prevent unbounded memory growth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@logservice/logpuller/subscription_client.go` around lines 192 - 193, The
regionRuntimeRegistry field is initialized but unused; either remove the
declaration/initialization of regionRuntimeRegistry from subscription_client.go
to avoid dead state, or wire it into lifecycle methods: call
regionRuntimeRegistry.allocKey/transition where runtime allocation/updates occur
and ensure regionRuntimeRegistry.removeBySubscription is invoked from
Unsubscribe and Close to prevent memory leaks; reference the
regionRuntimeRegistry symbol and its methods allocKey, transition,
removeBySubscription, and the Unsubscribe and Close methods when making the
change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 93650f12-26dd-490f-ad2a-8076dd56828b

📥 Commits

Reviewing files that changed from the base of the PR and between 165df37 and 7241f6a.

📒 Files selected for processing (3)
  • logservice/logpuller/region_runtime.go
  • logservice/logpuller/region_runtime_test.go
  • logservice/logpuller/subscription_client.go

Comment on lines +310 to +334
func (r *regionRuntimeRegistry) remove(key regionRuntimeKey) bool {
	r.mu.Lock()
	defer r.mu.Unlock()

	if _, ok := r.states[key]; !ok {
		return false
	}
	delete(r.states, key)
	return true
}

func (r *regionRuntimeRegistry) removeBySubscription(subID SubscriptionID) int {
	r.mu.Lock()
	defer r.mu.Unlock()

	removed := 0
	for key := range r.states {
		if key.subID != subID {
			continue
		}
		delete(r.states, key)
		removed++
	}
	return removed
}
Contributor


⚠️ Potential issue | 🟠 Major

Memory leak: generations map entries are never cleaned up.

Both remove and removeBySubscription only delete from the states map but leave stale entries in the generations map. Over time, as regions are allocated and removed, the generations map will grow unbounded.

Consider cleaning up generations entries when removing states:

🐛 Proposed fix to clean up generations map
 func (r *regionRuntimeRegistry) remove(key regionRuntimeKey) bool {
 	r.mu.Lock()
 	defer r.mu.Unlock()

 	if _, ok := r.states[key]; !ok {
 		return false
 	}
 	delete(r.states, key)
+	identity := regionRuntimeIdentity{subID: key.subID, regionID: key.regionID}
+	delete(r.generations, identity)
 	return true
 }

 func (r *regionRuntimeRegistry) removeBySubscription(subID SubscriptionID) int {
 	r.mu.Lock()
 	defer r.mu.Unlock()

 	removed := 0
 	for key := range r.states {
 		if key.subID != subID {
 			continue
 		}
 		delete(r.states, key)
+		identity := regionRuntimeIdentity{subID: key.subID, regionID: key.regionID}
+		delete(r.generations, identity)
 		removed++
 	}
 	return removed
 }

Note: If the intent is to preserve generation counters to ensure monotonically increasing generations even after removal, consider documenting this design decision and adding periodic GC for stale entries to bound memory growth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@logservice/logpuller/region_runtime.go` around lines 310 - 334, The remove
and removeBySubscription methods on regionRuntimeRegistry currently delete
entries from r.states but leave corresponding entries in r.generations, causing
an unbounded memory leak; update both regionRuntimeRegistry.remove and
removeBySubscription to also delete the matching generation entry from
r.generations (using the same region identifier/key used in generations) under
the same mutex lock after removing from r.states; ensure for
removeBySubscription you delete the generations entry for each removed state
(incrementing removed as before) and keep the lock/defer semantics unchanged; if
preserving generation counters is intentional, instead add a documented GC path
or comment explaining why generations are retained.

@ti-chi-bot ti-chi-bot Bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Apr 6, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (3)
logservice/logpuller/region_request_worker.go (1)

519-523: Minor: The ok && err == nil guard is redundant.

Based on requestCache.add implementation, when ok is true, err is always nil. The double-check is harmless (defensive coding), but could be simplified to just ok.

♻️ Optional simplification
 func (s *regionRequestWorker) add(ctx context.Context, region regionInfo, force bool) (bool, error) {
 	ok, err := s.requestCache.add(ctx, region, force)
-	if ok && err == nil && s.client != nil && s.client.regionRuntimeRegistry != nil && region.runtimeKey.isValid() {
+	if ok && s.client != nil && s.client.regionRuntimeRegistry != nil && region.runtimeKey.isValid() {
 		s.client.regionRuntimeRegistry.setRequestEnqueueTime(region.runtimeKey, time.Now())
 	}
 	return ok, err
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@logservice/logpuller/region_request_worker.go` around lines 519 - 523, The
guard "ok && err == nil" is redundant because requestCache.add guarantees err ==
nil when ok is true; simplify the conditional to just check ok before calling
s.client.regionRuntimeRegistry.setRequestEnqueueTime. Update the block around
the call that uses the variables ok, err, s.requestCache.add, s.client and
region.runtimeKey.isValid so it reads logically as: call add, and if ok (and
s.client and s.client.regionRuntimeRegistry and region.runtimeKey.isValid())
then call setRequestEnqueueTime; leave the return of ok, err unchanged.
logservice/logpuller/region_event_handler.go (2)

255-257: Consider extracting the repeated guard pattern into a helper.

The same nil-check chain (state.region.runtimeKey.isValid() && state.worker != nil && state.worker.client != nil && state.worker.client.regionRuntimeRegistry != nil) appears three times. A helper method on regionFeedState could reduce duplication and improve readability.

♻️ Suggested helper extraction
// Add to regionFeedState or as a package-level helper
func (s *regionFeedState) getRuntimeRegistry() *regionRuntimeRegistry {
    if !s.region.runtimeKey.isValid() || s.worker == nil || s.worker.client == nil {
        return nil
    }
    return s.worker.client.regionRuntimeRegistry
}

Then usage becomes:

-if state.region.runtimeKey.isValid() && state.worker != nil && state.worker.client != nil && state.worker.client.regionRuntimeRegistry != nil {
-    state.worker.client.regionRuntimeRegistry.updateLastEvent(state.region.runtimeKey, time.Now())
-}
+if registry := state.getRuntimeRegistry(); registry != nil {
+    registry.updateLastEvent(state.region.runtimeKey, time.Now())
+}

Also applies to: 283-289, 377-379

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@logservice/logpuller/region_event_handler.go` around lines 255 - 257, The
repeated nil-check chain on state.region.runtimeKey.isValid(), state.worker,
state.worker.client and state.worker.client.regionRuntimeRegistry should be
extracted into a helper on regionFeedState (e.g., func (s *regionFeedState)
getRuntimeRegistry() *regionRuntimeRegistry) that returns the registry or nil;
replace the three occurrences (the block that calls
state.worker.client.regionRuntimeRegistry.updateLastEvent(...), and the ones
around lines referenced) with a call to this helper and a single nil-check
before invoking updateLastEvent or other registry methods to reduce duplication
and improve readability.

283-289: Phase ordering may be non-deterministic.

The transition to regionPhaseReplicating happens when INITIALIZED is received, but regionPhaseWaitInitialized is set in region_request_worker.go after the request is sent. Due to concurrency, if the INITIALIZED event arrives and is processed before the worker records the WaitInitialized phase, that intermediate phase will be skipped in the registry.

If strict phase ordering is needed for diagnostics, consider validating/logging phase transitions. For a scaffold where phases are informational only, this may be acceptable.
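One way to make the recorded phase history deterministic, in the spirit of this comment, is to backfill any skipped intermediate phases at transition time. A minimal stdlib-only sketch; the phase names come from the diagram earlier in the thread, and the backfill helper itself is hypothetical:

```go
package main

import "fmt"

type regionPhase int

const (
	regionPhasePending regionPhase = iota
	regionPhaseWaitInitialized
	regionPhaseReplicating
)

// transitionOrdered returns the sequence of phases to record when moving
// from current to target, backfilling any skipped intermediate phases so
// the history stays strictly ordered even when events race (for example,
// INITIALIZED arriving before the worker records WaitInitialized).
func transitionOrdered(current, target regionPhase) []regionPhase {
	var visited []regionPhase
	for p := current + 1; p <= target; p++ {
		visited = append(visited, p)
	}
	return visited
}

func main() {
	// INITIALIZED observed while the registry still thinks we are Pending:
	// the intermediate WaitInitialized phase is backfilled, not skipped.
	fmt.Println(transitionOrdered(regionPhasePending, regionPhaseReplicating)) // [1 2]
}
```

For a scaffold where phases are purely informational, simply logging out-of-order transitions instead of backfilling would also be a reasonable choice.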

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@logservice/logpuller/region_event_handler.go` around lines 283 - 289, The
event handler in region_event_handler.go may skip the intermediate
regionPhaseWaitInitialized because INITIALIZED can arrive before the worker code
in region_request_worker.go records the wait phase; modify the logic so that
before calling registry.setInitializedTime, setReplicatingTime and
registry.transition(..., regionPhaseReplicating) you first ensure the registry
has recorded regionPhaseWaitInitialized for state.region.runtimeKey (e.g., call
registry.transition(state.region.runtimeKey, regionPhaseWaitInitialized, now) if
current phase is unset or earlier), or alternatively validate/log unexpected
transitions and backfill the missing regionPhaseWaitInitialized via
registry.transition, using the existing registry methods setInitializedTime,
setReplicatingTime and transition to preserve strict phase ordering for
diagnostics.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0cd38ca1-aac1-48b5-b7f6-2df9489baecc

📥 Commits

Reviewing files that changed from the base of the PR and between 595bb36 and 30f6c75.

📒 Files selected for processing (6)
  • logservice/logpuller/region_event_handler.go
  • logservice/logpuller/region_request_worker.go
  • logservice/logpuller/region_runtime.go
  • logservice/logpuller/region_runtime_integration_test.go
  • logservice/logpuller/region_state.go
  • logservice/logpuller/subscription_client.go
✅ Files skipped from review due to trivial changes (1)
  • logservice/logpuller/region_runtime.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • logservice/logpuller/subscription_client.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@logservice/logpuller/region_state.go`:
- Around line 216-221: markRuntimeReplicating currently performs three separate
registry calls (setInitializedTime, setReplicatingTime, transition) which can
interleave; change it to perform a single atomic update on the runtime registry
by adding a method on regionRuntimeRegistry (e.g., markReplicating(key
regionRuntimeKey, now time.Time) regionRuntimeState) that uses the existing
upsert pattern to set initializedTime, replicatingTime, phase, and
phaseEnterTime in one locked callback, then have markRuntimeReplicating call
registry.markReplicating(s.region.runtimeKey, now) instead of the three separate
methods.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e49484de-ad3c-4bd7-ae14-01f6e7d1ccb0

📥 Commits

Reviewing files that changed from the base of the PR and between 962e988 and 3392683.

📒 Files selected for processing (3)
  • logservice/logpuller/region_event_handler.go
  • logservice/logpuller/region_request_worker.go
  • logservice/logpuller/region_state.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • logservice/logpuller/region_event_handler.go
  • logservice/logpuller/region_request_worker.go

Comment thread logservice/logpuller/region_state.go
@ti-chi-bot

ti-chi-bot Bot commented Apr 10, 2026

[FORMAT CHECKER NOTIFICATION]

Notice: To remove the do-not-merge/needs-linked-issue label, please provide the linked issue number on one line in the PR body, for example: Issue Number: close #123 or Issue Number: ref #456.

📖 For more info, you can check the "Contribute Code" section in the development guide.


Labels

do-not-merge/needs-linked-issue release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.
