# 88-add llvm git pr preprocessor #93

## Conversation
## 📝 Walkthrough

Introduces `GitPrPreprocessor`, a new Python module that loads GitHub PR JSON files from the filesystem, converts them to markdown content via existing utilities, validates content length, constructs metadata with timestamps, and returns a list of `Document` objects.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 3
### 🧹 Nitpick comments (2)

`pinecone_rag/preprocessor/git_pr_preprocessor.py` (2)
**49-55: Redundant `or []` before `isinstance` guard.**

`data.get("comments") or []` already returns `[]` for falsy values (`None`, `""`, `0`, `[]`). The subsequent `isinstance(comments, list)` check then re-assigns to `[]` for any truthy non-list value. The `or []` is unnecessary; the `isinstance` guard alone is sufficient.

♻️ Proposed simplification
```diff
-        comments = data.get("comments") or []
-        if not isinstance(comments, list):
-            comments = []
-
-        reviews = data.get("reviews") or []
-        if not isinstance(reviews, list):
-            reviews = []
+        raw_comments = data.get("comments")
+        comments = raw_comments if isinstance(raw_comments, list) else []
+
+        raw_reviews = data.get("reviews")
+        reviews = raw_reviews if isinstance(raw_reviews, list) else []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pinecone_rag/preprocessor/git_pr_preprocessor.py` around lines 49 - 55, The code redundantly uses `or []` before an `isinstance` guard for `comments` and `reviews`; change each block to fetch the raw value (e.g., comments = data.get("comments")) and then only use the type check to normalize non-list values (if not isinstance(comments, list): comments = []), and do the same for `reviews`, removing the unnecessary `or []` parts.
**100-108: `limit` caps files scanned, not documents returned.**

The `limit` is sliced on `json_paths` (line 102) before loading, so when files are skipped due to short content or missing `pr_info`, the caller receives fewer than `limit` documents with no indication. If the intent is to guarantee at most `limit` documents out, the limit should be applied to the accumulated `documents` list instead (or both behaviors documented clearly).

♻️ Proposed fix to apply limit to output documents
```diff
         json_paths = sorted(self.data_dir.rglob("*.json"))
-        if limit is not None:
-            json_paths = json_paths[:limit]

         documents: List[Document] = []
         for json_path in json_paths:
             doc = _load_pr_document(json_path, self.min_content_length)
             if doc is not None:
                 documents.append(doc)
+                if limit is not None and len(documents) >= limit:
+                    break
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pinecone_rag/preprocessor/git_pr_preprocessor.py` around lines 100 - 108, The current code slices json_paths before loading, so files skipped by _load_pr_document reduce the returned documents count; change the loop to collect documents until we reach limit (if provided) instead of slicing json_paths: iterate over the full sorted json_paths, call _load_pr_document(json_path, self.min_content_length), append non-None results to documents, and break once len(documents) == limit; keep limit None behavior as "no cap". Ensure you reference json_paths, _load_pr_document, documents, and limit when making the change.
### 🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pinecone_rag/preprocessor/git_pr_preprocessor.py`:
- Around line 38-47: The code in _load_pr_document assumes json.loads returns a
dict and calls data.get("pr_info"), which raises AttributeError for JSON roots
that are lists/strings; modify _load_pr_document to validate that `data` is a
dict before accessing keys (e.g., check isinstance(data, dict) after
json.loads), log and return None for non-dict JSON roots (use logger.debug("Skip
%s: non-object JSON root", json_path.name) or include exc/context), and keep the
existing exception handling around json.loads/json_path.read_text to avoid
breaking load_documents when non-object JSON files are encountered.
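The guard described above can be sketched as a standalone helper. The name `load_json_object` is hypothetical (the real code does this inside `_load_pr_document`), but the structure matches the fix: parse failures and non-dict roots both log at debug level and return `None` instead of raising.

```python
import json
import logging
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)


def load_json_object(json_path: Path) -> Optional[dict]:
    """Parse a JSON file, returning the data only when the root is an object."""
    try:
        data = json.loads(json_path.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        # Unreadable file or malformed JSON: skip rather than crash the batch.
        logger.debug("Skip %s: unreadable or invalid JSON", json_path.name)
        return None
    if not isinstance(data, dict):
        # A list/string/number root has no .get(), so it can't carry pr_info.
        logger.debug("Skip %s: non-object JSON root", json_path.name)
        return None
    return data
```

With this shape, a file whose root is `[...]` or `"..."` is silently skipped, and subsequent `data.get("pr_info")` calls can no longer raise `AttributeError`.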
- Around line 1-10: The module docstring's glob pattern is inconsistent with the
implementation: update the top-level docstring string (module header) to use the
same "pr/*.json" pattern used in GitPrPreprocessor.__init__ and the
load_documents docstring; specifically change `data/github/**/prs/*.json` to
`data/github/**/pr/*.json` so the documentation matches the actual glob used by
GitPrPreprocessor.__init__ and load_documents.
- Around line 78-80: The metadata currently sets "closed_at" by always calling
get_timestamp_from_date(pr_info.get("closed_at")), which returns 0.0 for None
and makes open PRs indistinguishable from the 1970 epoch; change the logic
around the "closed_at" key so you first check pr_info.get("closed_at") and only
call get_timestamp_from_date when it is not None (or set "closed_at" to
None/omit the key for open PRs) so open PRs don't get a 0.0 timestamp; update
the code that builds the PR metadata (reference: get_timestamp_from_date and the
"closed_at" entry using pr_info.get("closed_at")) accordingly.
---
Nitpick comments:
In `@pinecone_rag/preprocessor/git_pr_preprocessor.py`:
- Around line 49-55: The code redundantly uses `or []` before an `isinstance`
guard for `comments` and `reviews`; change each block to fetch the raw value
(e.g., comments = data.get("comments")) and then only use the type check to
normalize non-list values (if not isinstance(comments, list): comments = []),
and do the same for `reviews`, removing the unnecessary `or []` parts.
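The normalization pattern above reduces to a one-line helper; `as_list` is a hypothetical name for illustration, not part of the reviewed module:

```python
def as_list(value):
    """Normalize a JSON value to a list: lists pass through, anything else becomes []."""
    return value if isinstance(value, list) else []


# Usage on a parsed PR payload:
data = {"comments": [{"body": "LGTM"}], "reviews": "oops-not-a-list"}
comments = as_list(data.get("comments"))  # kept as-is
reviews = as_list(data.get("reviews"))    # coerced to []
```

The single `isinstance` check covers every case the `or []` was guarding against (`None`, missing key, wrong type), which is why the review calls the `or []` redundant.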
- Around line 100-108: The current code slices json_paths before loading, so
files skipped by _load_pr_document reduce the returned documents count; change
the loop to collect documents until we reach limit (if provided) instead of
slicing json_paths: iterate over the full sorted json_paths, call
_load_pr_document(json_path, self.min_content_length), append non-None results
to documents, and break once len(documents) == limit; keep limit None behavior
as "no cap". Ensure you reference json_paths, _load_pr_document, documents, and
limit when making the change.
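The loop change described above can be sketched independently of the class. Here `collect_documents` and its `load` callback are hypothetical stand-ins for `load_documents` and `_load_pr_document`; the point is that the cap applies to successful loads, not to files scanned:

```python
from typing import Callable, List, Optional


def collect_documents(
    json_paths: List[str],
    load: Callable[[str], Optional[str]],
    limit: Optional[int] = None,
) -> List[str]:
    """Accumulate loaded documents, capping the output (not the input) at `limit`."""
    documents: List[str] = []
    for json_path in json_paths:
        doc = load(json_path)
        if doc is None:
            continue  # skipped files no longer eat into the limit
        documents.append(doc)
        if limit is not None and len(documents) >= limit:
            break
    return documents
```

With `limit=None` the full list is returned, preserving the "no cap" behavior the prompt asks to keep.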