in_forward: Handle limit of incoming payloads #11530
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.

No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

Adds size-safe buffer types and read bookkeeping, bounds and overflow guards in Forward protocol parsing and decompression, consolidates decompressed output buffering, and adds runtime tests that assert rejection of oversized PackedForward payloads (compressed and uncompressed).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Conn as Connection
    participant Prot as ProtocolParser
    participant Decomp as Decompressor
    participant App as append_log
    Client->>Conn: send TCP data (PackedForward)
    Conn->>Prot: deliver buffer bytes
    Prot->>Prot: validate rest/offset vs buf_len
    alt frame is compressed
        Prot->>Decomp: feed compressed chunks
        Decomp->>Decomp: decompress chunk (check size/overflow)
        Decomp->>Prot: return decompressed chunk
        Prot->>Prot: memcpy into decoded_payload (guarded by buffer_max_size)
    else frame uncompressed
        Prot->>Prot: use buffer slice (check payload size)
    end
    Prot->>App: append_log(decoded_payload or buffer)
    App-->>Prot: success or failure
    Prot-->>Conn: accept or drop (reject oversized)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
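The accept-or-drop decision at the end of the diagram can be sketched as a plain length guard. The names here (`accept_payload`, `buffer_max_size`) are illustrative, not the actual fw_prot.c symbols:

```c
#include <stddef.h>

/* Hypothetical sketch of the oversized-payload rejection described in
 * the diagram: a PackedForward payload longer than the configured
 * buffer_max_size is dropped before it reaches append_log(). */
static int accept_payload(size_t len, size_t buffer_max_size)
{
    if (len > buffer_max_size) {
        return -1;  /* reject: payload exceeds the configured limit */
    }
    return 0;       /* accept: safe to hand off to append_log() */
}
```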
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/in_forward/fw_conn.c`:
- Around line 101-106: struct fw_conn stores buf_len and buf_size as int but
flb_in_fw_config exposes buffer_chunk_size and buffer_max_size as size_t; to
fix, either widen fw_conn->buf_len and fw_conn->buf_size to size_t (preferred)
or add explicit bounds checks before assigning ctx->buffer_chunk_size /
ctx->buffer_max_size to conn->buf_size/conn->buf_len, rejecting configs >
INT_MAX; update all uses (e.g., available = conn->buf_size - conn->buf_len, the
post-read INT_MAX check around fw_conn_del, and any arithmetic) to use the new
size_t types or guarded values, and ensure logging returns error when config
values exceed INT_MAX.
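A minimal sketch of the second option (an explicit bounds check before assigning a `size_t` config value into the `int` fields), with hypothetical names — the real `fw_conn` layout and logging calls differ:

```c
#include <limits.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical guard: reject configuration values that cannot be
 * represented in the int-typed fields of struct fw_conn. Returns 0 on
 * success, -1 (after logging) when the value exceeds INT_MAX. */
static int assign_buf_size(size_t configured, int *buf_size)
{
    if (configured > (size_t) INT_MAX) {
        fprintf(stderr, "[in_forward] buffer size %zu exceeds INT_MAX\n",
                configured);
        return -1;
    }
    *buf_size = (int) configured;
    return 0;
}
```

Widening the fields to `size_t` (the reviewer's preferred fix) avoids the cast entirely, at the cost of touching every arithmetic expression such as `available = conn->buf_size - conn->buf_len`.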
In `@plugins/in_forward/fw_prot.c`:
- Around line 1600-1614: The code only bounds compressed input via required_size
but does not limit cumulative decompressed output, allowing decompression bombs;
modify the decompression loop in flb_decompress()/the caller that uses
conn->d_ctx so it tracks a per-frame total_decompressed counter (reset when
starting a new frame) and on each decompression iteration adds the newly
produced bytes, then compare against ctx->buffer_max_size and bail to
cleanup_decompress (logging an error) if the total exceeds ctx->buffer_max_size
before calling append_log(); ensure the same check is applied in the code paths
referenced around required_size and in the block currently around lines
1633-1654 so no decompressed output can grow past the configured limit.
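The per-frame accounting this comment asks for can be sketched as follows; `track_decompressed` and its parameters are hypothetical names, not the actual fw_prot.c code:

```c
#include <stddef.h>

/* Hypothetical per-frame counter: add newly produced decompressed bytes
 * and fail once the running total would pass the configured limit. The
 * subtraction form avoids size_t overflow, because *total never exceeds
 * max_size after a successful call. */
static int track_decompressed(size_t *total, size_t produced, size_t max_size)
{
    if (*total > max_size || produced > max_size - *total) {
        return -1;  /* decompression bomb: bail out to cleanup */
    }
    *total += produced;
    return 0;
}
```

The caller would reset the counter to zero when a new frame starts and run this check on every decompression iteration, before `append_log()` is ever called.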
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7a535394-71eb-4724-b241-52d931292b4d
📒 Files selected for processing (2)
plugins/in_forward/fw_conn.c, plugins/in_forward/fw_prot.c
🧹 Nitpick comments (1)
tests/runtime/in_forward.c (1)
843-909: Test correctly validates oversized PackedForward rejection.

The test configuration is well-designed:
- The 1K buffer limit vs 16×128 bytes (~2048+ bytes with msgpack overhead) ensures the payload exceeds `buffer_max_size`
- This triggers the check in `fw_prot.c` at line 1539: `if (len > ctx->buffer_max_size)`
- Asserting `num == 0` confirms the payload was rejected

One minor observation: if `create_large_packedforward` fails (line 886-887), the test continues with an empty buffer. Consider adding an early exit for robustness:

🔧 Optional: Early exit on helper failure

```diff
     msgpack_sbuffer_init(&sbuf);
     ret = create_large_packedforward(&sbuf, 16, 128);
-    TEST_CHECK(ret == 0);
+    if (!TEST_CHECK(ret == 0)) {
+        flb_socket_close(fd);
+        msgpack_sbuffer_destroy(&sbuf);
+        test_ctx_destroy(ctx);
+        exit(EXIT_FAILURE);
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/runtime/in_forward.c` around lines 843 - 909, If create_large_packedforward(&sbuf, 16, 128) fails (ret != 0) the test proceeds with an empty buffer; after the call to create_large_packedforward add an early-failure branch that logs a TEST_MSG, destroys the msgpack_sbuffer via msgpack_sbuffer_destroy(&sbuf) (if initialized), closes the socket fd with flb_socket_close(fd), tears down the test context with test_ctx_destroy(ctx), and exits with exit(EXIT_FAILURE); reference the create_large_packedforward call and cleanup helpers (msgpack_sbuffer_destroy, flb_socket_close, test_ctx_destroy) to locate where to add this branch.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3ed94213-a4c9-4c7d-ad9f-1ad086b72df9
📒 Files selected for processing (1)
tests/runtime/in_forward.c
Force-pushed from c6d18fb to c91d379
🧹 Nitpick comments (1)
tests/runtime/in_forward.c (1)
839-843: Minor: `size` field value may be misleading.

Line 843 packs `entries.size` (the size of the entire `[tag, entries, {}]` frame) as the uncompressed size, but the actual compressed data is only `payload.via.bin.size` (the entries blob). This discrepancy doesn't affect test correctness since the `size` field is informational, but for accuracy it could use `payload.via.bin.size` instead.

🔧 Suggested fix

```diff
+    size_t original_size = payload.via.bin.size;
     ret = flb_gzip_compress(payload.via.bin.ptr, payload.via.bin.size,
                             (void **) &compressed_buf, &compressed_size);
     ...
     msgpack_pack_str_with_body(&pck, "size", 4);
-    msgpack_pack_uint64(&pck, entries.size);
+    msgpack_pack_uint64(&pck, original_size);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/runtime/in_forward.c` around lines 839 - 843, The "size" field currently packs entries.size (the whole [tag, entries, {}] frame) which is misleading; update the pack so msgpack_pack_uint64(&pck, ...) uses the actual compressed entries blob length (payload.via.bin.size) instead of entries.size so the "compressed"/"size" pair correctly reflects the payload length; locate the block that packs "compressed" and "size" (msgpack_pack_map, msgpack_pack_str_with_body for "compressed" and "size", and msgpack_pack_uint64) and replace the value passed to msgpack_pack_uint64 with payload.via.bin.size.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 884fa606-0be6-4928-8503-2494b89d9d8f
📒 Files selected for processing (4)
plugins/in_forward/fw_conn.c, plugins/in_forward/fw_conn.h, plugins/in_forward/fw_prot.c, tests/runtime/in_forward.c
🚧 Files skipped from review as they are similar to previous changes (1)
- plugins/in_forward/fw_conn.c
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Force-pushed from c91d379 to c3a122f
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Added limit checks for incoming payloads to in_forward.
These are needed to handle incoming payloads safely and precisely.
Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.