Conversation
- Fix rule_resume.py: replace the invalid eval_details/eval_status fields with proper EvalDetail fields (status, label, reason)
- Fix mcp_server.py: add environment variables to force ThreadPool usage and prevent ProcessPool fd inheritance
  * Set LOCAL_DEPLOYMENT_MODE=true to use ThreadPoolExecutor instead of ProcessPoolExecutor
  * Set TQDM_DISABLE=1 to prevent progress-bar output pollution
  * Add stdout/stderr redirection with StringIO for better error handling
  * Fix the rule-name extraction logic to handle both class and instance types
  * Add defensive checks for Model.prompt_name_map access
  * Change the default rule-group fallback from 'default' to 'sft' to avoid the buggy Resume rules
- Add noqa: E402 comments to imports that must stay after the os.environ setup
- Fix E261: add proper spacing before inline comments
- Fix F841: remove the unused variable 'inner_e' in an exception handler
- Fix E115: correct the indentation of the FIX END comment
- Keep noqa: E402 comments for imports after the os.environ setup
- Maintain proper import order and formatting
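A minimal sketch of the startup pattern the notes above describe: the environment variables are set before the rest of the imports (hence the noqa: E402 suppressions), and stray output is captured with StringIO so it cannot pollute the server's stdout. The `run_quietly` helper name is hypothetical; only LOCAL_DEPLOYMENT_MODE and TQDM_DISABLE come from the notes above.

```python
import os

# Must run before any library that reads these at import time,
# so the imports below intentionally violate E402.
os.environ["LOCAL_DEPLOYMENT_MODE"] = "true"  # prefer ThreadPoolExecutor over ProcessPoolExecutor
os.environ["TQDM_DISABLE"] = "1"              # silence tqdm progress bars on stdout

import io  # noqa: E402
import contextlib  # noqa: E402


def run_quietly(fn, *args, **kwargs):
    """Run fn with stdout/stderr redirected into a StringIO buffer,
    returning (result, captured_output)."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf), contextlib.redirect_stderr(buf):
        result = fn(*args, **kwargs)
    return result, buf.getvalue()
```

Keeping the os.environ assignments above the imports is the whole point of the noqa comments: moving the imports up to satisfy E402 would defeat the fix.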
Summary of Changes
Hello @Kylie-dot-s, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces the LLMScout module, enhancing the Dingo ATS resume-optimization tools with strategic job-hunting analysis. It also refactors the existing LLM modules to use EvalDetail consistently and provides comprehensive documentation and examples.
Highlights
Code Review
This pull request introduces a significant new feature, LLMScout, for strategic job hunting analysis. The implementation is comprehensive, including detailed prompting, scoring logic, and robust processing of the LLM response. The feature is well-supported with thorough unit tests, a clear example script, and updated documentation.
The pull request also refactors LLMKeywordMatcher and LLMResumeOptimizer to remove legacy code supporting ModelRes, which simplifies the codebase and improves consistency by using EvalDetail exclusively.
My main feedback is on a minor code duplication issue in the eval methods of both LLMScout and LLMKeywordMatcher. Extracting the error handling logic into a helper method would improve maintainability.
Overall, this is a high-quality contribution that adds valuable functionality and improves the existing code.
```diff
 if not input_data.content:
-    if USE_EVAL_DETAIL:
-        result = EvalDetail(metric=cls.__name__)
-        result.status = True
-        result.label = [f"QUALITY_BAD.{cls.__name__}"]
-        result.reason = ["Resume text (content) is required but was not provided"]
-        return result
-    else:
-        return ModelRes(
-            error_status=True,
-            type="KEYWORD_MATCH_ERROR",
-            name="MISSING_RESUME",
-            reason=["Resume text (content) is required but was not provided"]
-        )
+    result = EvalDetail(metric=cls.__name__)
+    result.status = True
+    result.label = [f"QUALITY_BAD.{cls.__name__}"]
+    result.reason = ["Resume text (content) is required but was not provided"]
+    return result
```
```diff
 # Validate that prompt (JD) is provided
 if not input_data.prompt:
-    if USE_EVAL_DETAIL:
-        result = EvalDetail(metric=cls.__name__)
-        result.status = True
-        result.label = [f"QUALITY_BAD.{cls.__name__}"]
-        result.reason = ["Job description (prompt) is required but was not provided"]
-        return result
-    else:
-        return ModelRes(
-            error_status=True,
-            type="KEYWORD_MATCH_ERROR",
-            name="MISSING_JD",
-            reason=["Job description (prompt) is required but was not provided"]
-        )
+    result = EvalDetail(metric=cls.__name__)
+    result.status = True
+    result.label = [f"QUALITY_BAD.{cls.__name__}"]
+    result.reason = ["Job description (prompt) is required but was not provided"]
+    return result
```
There's some code duplication in the eval method for handling input validation errors. The logic for creating and returning an EvalDetail object is nearly identical for missing content and missing prompt. To improve maintainability and reduce redundancy, consider extracting this logic into a private helper method.
For example, you could create a helper like _create_error_detail(cls, reason: str) -> EvalDetail.
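A minimal sketch of that refactor, assuming _create_error_detail as the helper name and using a reduced stand-in for dingo's EvalDetail with only the fields the diff touches:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EvalDetail:
    """Stand-in for dingo's EvalDetail, reduced to the fields used in the diff."""
    metric: str
    status: bool = False
    label: Optional[List[str]] = None
    reason: Optional[List[str]] = None


class LLMKeywordMatcher:
    @classmethod
    def _create_error_detail(cls, reason: str) -> EvalDetail:
        """Build the QUALITY_BAD EvalDetail returned for any missing input."""
        result = EvalDetail(metric=cls.__name__)
        result.status = True
        result.label = [f"QUALITY_BAD.{cls.__name__}"]
        result.reason = [reason]
        return result

    @classmethod
    def eval(cls, input_data):
        # Both validation branches now collapse into one helper call each.
        if not input_data.content:
            return cls._create_error_detail(
                "Resume text (content) is required but was not provided")
        if not input_data.prompt:
            return cls._create_error_detail(
                "Job description (prompt) is required but was not provided")
        ...  # normal evaluation path continues here
```

The same helper would serve LLMScout as well, since its error paths differ only in the reason strings.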
```python
if not input_data.content:
    result = EvalDetail(metric=cls.__name__)
    result.status = True
    result.label = [f"QUALITY_BAD.{cls.__name__}"]
    result.reason = ["行业报告 (content) 是必需的但未提供"]  # "Industry report (content) is required but was not provided"
    return result

# Validate that prompt (user profile) is provided
if not input_data.prompt:
    result = EvalDetail(metric=cls.__name__)
    result.status = True
    result.label = [f"QUALITY_BAD.{cls.__name__}"]
    result.reason = ["用户画像 (prompt) 是必需的但未提供"]  # "User profile (prompt) is required but was not provided"
    return result
```
Similar to LLMKeywordMatcher, there's code duplication in the eval method for input validation. The error handling logic for missing content and prompt is repeated. This could be refactored into a private helper method to improve code clarity and maintainability.
A helper method like _create_error_detail(cls, reason: str) -> EvalDetail could encapsulate the creation of the EvalDetail object for error cases.
Features: