feat: add HMAC-authenticated analytics API endpoints #63
Add HMAC-authenticated analytics API for transaction queries and user history. Filter empty/ghost telebot errors from being sent to the TG error log group, and improve LNbits error handling to always return meaningful error details.
📝 Walkthrough

This PR introduces transaction analytics endpoints with HMAC-authenticated access, enabling filtering and pagination of external and internal transaction data. New configuration types support analytics enablement and HMAC key management. The LNbits client is extended with configurable pagination. Two analytics endpoints are conditionally registered in the HTTP server when analytics is enabled.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Middleware
    participant Analytics
    participant DB
    participant LNbits
    Client->>Middleware: GET /api/v1/analytics/transactions<br/>(X-Timestamp, X-HMAC-Signature)
    Middleware->>Middleware: Validate timestamp &<br/>calculate HMAC-SHA256
    alt Invalid/Expired
        Middleware-->>Client: 401 Unauthorized
    else Valid Signature
        Middleware->>Analytics: Request with auth context
        Analytics->>DB: Resolve target user(s)
        Analytics->>LNbits: Fetch payments with limit/offset
        LNbits-->>Analytics: Payment data
        Analytics->>DB: Query internal transactions<br/>(if include_internal)
        DB-->>Analytics: Transaction records
        Analytics->>Analytics: Filter by date/type,<br/>paginate, compute summary
        alt CSV Format Requested
            Analytics->>Analytics: Sanitize fields,<br/>format CSV
        else JSON Format
            Analytics->>Analytics: Serialize response
        end
        Analytics-->>Client: Response (JSON/CSV)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~35 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 7
ℹ️ Review info
⚙️ Run configuration
- Configuration used: defaults
- Review profile: CHILL
- Plan: Pro
- Run ID: bfa884ba-4860-4c4a-890f-7f18956593ba
📒 Files selected for processing (7)
- .gitignore
- config.yaml.example
- internal/api/analytics.go
- internal/api/middleware.go
- internal/config.go
- internal/lnbits/lnbits.go
- main.go
```
analytics_requirements.txt
ANALYTICS_API.md
ANALYTICS_QUICKSTART.md
```
🧩 Analysis chain

🏁 Script executed:
```bash
#!/bin/bash
# Description: Check if analytics documentation files exist and preview their content
echo "=== Checking for analytics documentation files ==="
for file in ANALYTICS_API.md ANALYTICS_QUICKSTART.md; do
  if [ -f "$file" ]; then
    echo -e "\n📄 Found: $file"
    echo "Size: $(wc -l < "$file") lines"
    echo "First 10 lines:"
    head -n 10 "$file"
  else
    echo -e "\n❌ Not found: $file"
  fi
done
```
Repository: CeyLabs/BitcoinDeepaBot
Length of output: 186

🏁 Script executed:
```bash
# Search for any references to analytics documentation
rg -i "ANALYTICS_API|ANALYTICS_QUICKSTART" --type md --type txt --type py
```
Length of output: 49

🏁 Script executed:
```bash
# Check if there are other analytics-related files
fd "analytics" -i
```
Length of output: 92

🏁 Script executed:
```bash
# Check the git diff for the .gitignore change to understand context
git diff HEAD -- .gitignore
```
Length of output: 49

🏁 Script executed:
```bash
# Check the analytics.go file to understand what it does
head -n 50 internal/api/analytics.go
```
Length of output: 1828

🏁 Script executed:
```bash
# Search for any existing API documentation patterns
fd "\.md$" | head -20
```
Length of output: 99

🏁 Script executed:
```bash
# Check if there's a docs directory or README with API documentation
find . -type f \( -name "README*" -o -name "CONTRIBUTING*" -o -name "API*" \) | head -20
```
Length of output: 103

🏁 Script executed:
```bash
# Check the main README for API documentation
cat README.md
```
Length of output: 7939

🏁 Script executed:
```bash
# Check the .gitignore context - look at the full file to understand the pattern
cat .gitignore
```
Length of output: 312

🏁 Script executed:
```bash
# Check if API_SEND_DOCUMENTATION.md exists (referenced in README)
ls -la API_SEND_DOCUMENTATION.md 2>/dev/null || echo "File not found"
```
Length of output: 81

🏁 Script executed:
```bash
# Check git history to understand when these entries were added
git log --oneline -S "ANALYTICS_API.md" -- .gitignore
```
Length of output: 146
Remove unnecessary .gitignore entries for non-existent analytics documentation files.
ANALYTICS_API.md and ANALYTICS_QUICKSTART.md do not exist in the repository and are not referenced anywhere in the codebase. The .gitignore entries for these files appear to be preventative but serve no current purpose.
Either:
- Remove these `.gitignore` entries if documentation is not planned, or
- Create and commit these documentation files if they should document the new analytics API endpoints
If these files are intended as documentation for the analytics functionality in internal/api/analytics.go, they should be tracked in version control.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.gitignore around lines 13 - 15, The .gitignore currently lists
ANALYTICS_API.md and ANALYTICS_QUICKSTART.md even though those files do not
exist; either remove those two entries from .gitignore or add and commit the
corresponding documentation files (e.g., ANALYTICS_API.md and
ANALYTICS_QUICKSTART.md) documenting the analytics endpoints referenced in
internal/api/analytics.go so they are tracked; update the repo accordingly and
ensure references in internal/api/analytics.go (if any) point to the committed
docs.
```yaml
enabled: true
timestamp_tolerance: 300 # seconds (5 minutes)
api_keys:
  data-team:
    name: "Data Team"
    hmac_secret: "your-analytics-hmac-secret-here"
```
The sample analytics config disables itself as written.
internal/config.go rejects analytics secrets shorter than 32 chars, so this example plus enabled: true will log an error and skip route registration. At minimum, default the example to disabled until a real secret is generated.
Suggested fix
```diff
- enabled: true
+ enabled: false
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
enabled: false
timestamp_tolerance: 300 # seconds (5 minutes)
api_keys:
  data-team:
    name: "Data Team"
    hmac_secret: "your-analytics-hmac-secret-here"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config.yaml.example` around lines 44 - 49, The sample analytics config uses
enabled: true but provides hmac_secret too short for the validation in
internal/config.go (which rejects secrets < 32 chars), causing an error and
skipped route registration; change the example to enabled: false (or provide a
placeholder secret >= 32 characters) and add a comment explaining that analytics
must be explicitly enabled only after generating a 32+ char HMAC secret so
internal/config.go will accept the api_keys -> data-team -> hmac_secret value.
```go
	if endDateStr != "" {
		endDate, err = parseDate(endDateStr)
		if err != nil {
			RespondError(w, "Invalid end_date format. Use YYYY-MM-DD or Unix timestamp")
			return
		}
	}
```
Date-only end_date drops the rest of that day.
parseDate("2026-03-26") returns midnight, so the current comparisons exclude anything after 00:00:00 on the requested end date. Treat date-only end values as end-of-day or compare against the next day.
Suggested fix
```diff
 if endDateStr != "" {
 	endDate, err = parseDate(endDateStr)
 	if err != nil {
 		RespondError(w, "Invalid end_date format. Use YYYY-MM-DD or Unix timestamp")
 		return
 	}
+	if len(endDateStr) == len("2006-01-02") && !strings.Contains(endDateStr, "T") {
+		endDate = endDate.Add(24*time.Hour - time.Nanosecond)
+	}
 }
```
Also applies to: 235-236, 303-305
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/api/analytics.go` around lines 145 - 151, The end_date parsing uses
parseDate which returns midnight, causing exclusion of the rest of that day;
update the end_date handling in the blocks that call parseDate (the endDate
variable assignments) to treat a date-only value as end-of-day (e.g., set
endDate to the parsed date plus 23:59:59.999 or to the next day and compare
using < nextDay) instead of using the midnight value; update the branches that
call parseDate (the current endDate handling and the other occurrences around
the parseDate calls referenced in the review) to adjust the parsed value before
downstream comparisons and keep the same error handling (RespondError) on parse
failures.
```go
		// Get all users if no specific user requested
		var allUsers []*lnbits.User
		tx := s.Bot.DB.Users.Find(&allUsers)
		if tx.Error != nil {
			RespondError(w, "Error fetching users")
			return
		}
		targetUsers = allUsers
	}

	uniqueUserMap := make(map[int64]bool)

	// Fetch external payments from LNbits with configurable limit+offset
	if includeExternal {
		for _, user := range targetUsers {
			if user.Wallet == nil {
				continue
			}

			uniqueUserMap[user.Telegram.ID] = true

			payments, err := s.Bot.Client.PaymentsWithOptions(*user.Wallet, limit+offset, 0)
```
/analytics/transactions can overload LNbits and still return the wrong page.
Without a user filter, this loads every user and then fans out one LNbits call per wallet. The returned page depends on user iteration order rather than global transaction time, and the page-full check only breaks the inner loop, so the handler keeps hitting LNbits after the response is already full. At the current caps, limit+offset can reach 110k rows per wallet. Before release, either require a specific user for external exports or page from a locally indexed copy of LNbits payments.
Also applies to: 250-253
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/api/analytics.go` around lines 202 - 223, The handler is making one
PaymentsWithOptions call per user (iterating targetUsers) when includeExternal
is true, which can hammer LNbits and produce incorrect paging because pagination
is per-wallet; modify the logic in the analytics transactions handler (use of
includeExternal, targetUsers, uniqueUserMap and PaymentsWithOptions with
limit+offset) so that if includeExternal is requested you either require a
specific user filter (return a 4xx error when no user is specified) or implement
global paging from a locally indexed payments table instead of per-wallet calls;
ensure the page-full check breaks the outer loop correctly and stop issuing
further PaymentsWithOptions calls once the response is filled (respect
limit/offset globally rather than per-wallet).
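The "break the outer loop" part of this finding can be illustrated with a labeled break that enforces a global limit/offset across wallets. This is a sketch only: `fetchPage` and the string payments are hypothetical stand-ins for the real LNbits client and payment structs, and without a locally indexed, time-sorted payments table the page contents still depend on wallet iteration order, which is the deeper problem the comment raises.

```go
package main

import "fmt"

// fetchPage applies limit/offset globally across per-wallet result sets:
// skip `offset` rows, collect `limit` rows, then stop entirely so no
// further upstream calls would be issued once the page is full.
func fetchPage(walletPayments [][]string, limit, offset int) []string {
	page := make([]string, 0, limit)
	skipped := 0
outer:
	for _, payments := range walletPayments {
		for _, p := range payments {
			if skipped < offset {
				skipped++
				continue
			}
			page = append(page, p)
			if len(page) == limit {
				break outer // page full: exit both loops, not just the inner one
			}
		}
	}
	return page
}

func main() {
	wallets := [][]string{{"a1", "a2"}, {"b1", "b2", "b3"}}
	fmt.Println(fetchPage(wallets, 2, 1)) // [a2 b1]
}
```

In the real handler the outer loop would wrap the `PaymentsWithOptions` calls, so a full page short-circuits the remaining wallets instead of only the inner range.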
```go
	// Fetch external payments from LNbits
	if user.Wallet != nil {
		payments, err := s.Bot.Client.PaymentsWithOptions(*user.Wallet, limit+offset, 0)
```
Single-user external history ignores offset.
This handler already has one target wallet, so PaymentsWithOptions(*user.Wallet, limit+offset, 0) just re-fetches earlier rows and inflates the upstream payload instead of returning the requested page.
Suggested fix
```diff
- payments, err := s.Bot.Client.PaymentsWithOptions(*user.Wallet, limit+offset, 0)
+ payments, err := s.Bot.Client.PaymentsWithOptions(*user.Wallet, limit, offset)
```
📝 Committable suggestion
```go
	// Fetch external payments from LNbits
	if user.Wallet != nil {
		payments, err := s.Bot.Client.PaymentsWithOptions(*user.Wallet, limit, offset)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/api/analytics.go` around lines 409 - 411, The single-user external
history call is passing limit+offset to PaymentsWithOptions which causes earlier
rows to be re-fetched; change the call to pass the page size and offset
separately (i.e., replace PaymentsWithOptions(*user.Wallet, limit+offset, 0)
with PaymentsWithOptions(*user.Wallet, limit, offset)) so the handler returns
the correct paginated page; update any local variables used (limit, offset) and
keep the PaymentsWithOptions invocation in the same conditional that checks
user.Wallet.
```go
	// Fetch internal transactions
	var internalTxs []telegram.Transaction
	s.Bot.DB.Transactions.Where("from_id = ? OR to_id = ?", userID, userID).
		Order("time desc").
		Limit(limit).Offset(offset).
		Find(&internalTxs)
```
Don't return status=ok on a failed internal query.
The Find(&internalTxs) result is discarded here, so a DB error still yields HTTP 200 with a partial export.
Suggested fix
```diff
 // Fetch internal transactions
 var internalTxs []telegram.Transaction
-s.Bot.DB.Transactions.Where("from_id = ? OR to_id = ?", userID, userID).
+txResult := s.Bot.DB.Transactions.Where("from_id = ? OR to_id = ?", userID, userID).
 	Order("time desc").
 	Limit(limit).Offset(offset).
 	Find(&internalTxs)
+if txResult.Error != nil {
+	log.Errorf("[Analytics] Error fetching internal transactions for user %d: %v", userID, txResult.Error)
+	RespondError(w, "Error fetching internal transactions")
+	return
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/api/analytics.go` around lines 456 - 461, The DB query populating
internalTxs discards the result so errors are ignored; change the call that does
s.Bot.DB.Transactions.Where(...).Order(...).Limit(limit).Offset(offset).Find(&internalTxs)
to capture the returned *gorm.DB (e.g., res := ...Find(&internalTxs)) and check
res.Error; on error return a non-200 response (and log the error) instead of
proceeding to return status=ok with a partial export. Ensure you use the same
variables internalTxs and the existing request/response flow so callers receive
an error status when the DB query fails.
```go
	// Check if request is not too old (prevent replay attacks)
	now := time.Now().Unix()
	tolerance := internal.Configuration.API.Analytics.TimestampTolerance
	if tolerance == 0 {
		tolerance = 300
	}

	if now-timestamp > tolerance {
		log.Warnf("[Analytics] Request timestamp too old (age: %d seconds)", now-timestamp)
		http.Error(w, "Request expired", http.StatusUnauthorized)
		return
```
Reject future timestamps too.
This check only expires old requests. A request signed with a far-future timestamp is still accepted until that timestamp falls back inside the window, which weakens the replay bound.
Suggested fix
```diff
-	if now-timestamp > tolerance {
-		log.Warnf("[Analytics] Request timestamp too old (age: %d seconds)", now-timestamp)
+	if timestamp < now-tolerance || timestamp > now+tolerance {
+		log.Warnf("[Analytics] Request timestamp outside tolerance (now: %d, ts: %d, tolerance: %d)", now, timestamp, tolerance)
 		http.Error(w, "Request expired", http.StatusUnauthorized)
 		return
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/api/middleware.go` around lines 248 - 258, The timestamp validation
only rejects requests that are too old; modify the check around now, timestamp
and tolerance so it also rejects timestamps too far in the future (e.g. if
timestamp-now > tolerance) or use an absolute age check like abs(now-timestamp)
> tolerance; update the log to indicate whether the timestamp is too old or too
far in the future and return the same unauthorized response when the
future-bound is exceeded (referencing the variables now, timestamp, tolerance in
the middleware).
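The symmetric window described above is easy to isolate and test. A minimal sketch: `withinTolerance` is a hypothetical helper, not the middleware's actual code, and 300 mirrors the middleware's 5-minute fallback. All values are Unix seconds.

```go
package main

import "fmt"

// withinTolerance applies an absolute-age replay window: requests are
// rejected both when their timestamp is too old and when it is too far
// in the future, closing the far-future loophole the review describes.
func withinTolerance(now, timestamp, tolerance int64) bool {
	delta := now - timestamp
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	now := int64(1_700_000_000)
	fmt.Println(withinTolerance(now, now-200, 300)) // true: 200s old, inside window
	fmt.Println(withinTolerance(now, now+400, 300)) // false: future-dated beyond tolerance
}
```

A pure function like this also makes the too-old versus too-far-future distinction trivial to log, since the sign of `now - timestamp` tells you which bound was violated.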
Summary
- `/health` endpoint for monitoring
- `PaymentsWithOptions` added to LNbits client for configurable limit/offset

New Endpoints
- `GET /api/v1/analytics/transactions` — query all transactions with filters (user, date range, type, pagination)
- `GET /api/v1/analytics/user/{user_id}/transactions` — query transactions for a specific user

Files Changed
- `internal/api/analytics.go` — analytics endpoint handlers
- `internal/api/middleware.go` — HMAC authentication middleware for analytics
- `internal/config.go` — analytics configuration support
- `config.yaml.example` — analytics config example
- `main.go` — register analytics routes
- `internal/lnbits/lnbits.go` — `PaymentsWithOptions` method

Test plan
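For consumers of the new endpoints, client-side signing might look like the sketch below. The canonical message (`timestamp + method + path`) is an assumption for illustration — the real layout is defined by the middleware in internal/api/middleware.go and must be matched exactly — and the secret is a placeholder chosen to satisfy the 32-character minimum enforced by internal/config.go.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"time"
)

// sign computes a hex-encoded HMAC-SHA256 over a canonical string built
// from the request timestamp, method, and path. Hypothetical layout —
// verify against the server-side middleware before relying on it.
func sign(secret, method, path string, ts int64) string {
	msg := strconv.FormatInt(ts, 10) + method + path
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(msg))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	ts := time.Now().Unix()
	sig := sign("a-32-plus-character-hmac-secret-example!", "GET", "/api/v1/analytics/transactions", ts)
	// These would be sent as the X-Timestamp and X-HMAC-Signature headers.
	fmt.Printf("X-Timestamp: %d\nX-HMAC-Signature: %s\n", ts, sig)
}
```

On the server side, the signature comparison should use `hmac.Equal` rather than `==` to stay constant-time.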
🤖 Generated with Claude Code