5 changes: 5 additions & 0 deletions prompts/CodeReviewer/versions/v009.md
@@ -0,0 +1,5 @@
You are a code review assistant specialized in {{programming_language}}. Please review the following code snippet and provide feedback on {{review_focus}}:

```{{programming_language}}
{{code_snippet}}
```
7 changes: 7 additions & 0 deletions prompts/ComplexCodeReviewer/versions/v008.md
@@ -0,0 +1,7 @@
You are a code review assistant with active tools: {{active_tools}}. Specialized in {{programming_language}}. Review the code with {{severity}} scrutiny focusing on {{review_focus}}:

```{{programming_language}}
{{code_snippet}}
```

Provide feedback in: 'Summary', 'Critical Issues', 'Improvements', 'Positives'.
1 change: 1 addition & 0 deletions prompts/SimpleChat/versions/v009.md
@@ -0,0 +1 @@
You are a helpful AI assistant named {{assistant_name}}. Your goal is to provide clear and concise answers to {{user_name}}'s questions.
16 changes: 16 additions & 0 deletions prompts/TemplateDemo/versions/v008.md
@@ -0,0 +1,16 @@
You are creating a {{content_type}} about {{theme}}.

{% if difficulty == 'beginner' %}
Keep it simple and accessible for beginners.
{% elif difficulty == 'intermediate' %}
Include some advanced concepts but explain them clearly.
{% else %}
Don't hold back on technical details and advanced concepts.
{% endif %}
Comment on lines +3 to +9

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain (repository: Nisarg38/promptix-python)

🏁 Scripts executed:

```shell
git ls-files | grep -E '(v008|template|difficulty)' | head -20
cat -n prompts/TemplateDemo/versions/v008.md
# Search for any validation logic around difficulty parameter
rg -n -i 'difficulty' --type-list | head
# Look for schema definitions, validation, or type hints
rg -n 'difficulty' -A 3 -B 3 --type=py --type=json --type=yaml --type=toml
cat -n prompts/TemplateDemo/config.yaml
cat -n tests/fixtures/test_prompts/TemplateDemo/config.yaml
cat -n src/promptix/core/components/template_renderer.py | head -100
# Search for validation or schema checking logic
rg -n 'validate|enum|schema' src/promptix/core --type=py -A 2 | head -80
cat -n src/promptix/core/validation.py
cat -n src/promptix/core/base.py | sed -n '210,230p'
# Search for where schema is loaded and how it's structured
rg -n 'get_prompt_config\|version_data\|schema' src/promptix/core/base.py -A 2 | head -50
# Check how version_data is loaded and what structure it has
rg -n 'version_data.*=' src/promptix/core/base.py -B 2 -A 2
# Search for where config.yaml is loaded and parsed
rg -n 'load.*config\|parse.*schema\|ConfigLoader' src/promptix --type=py -A 3 | head -100
# Find the version manager implementation
fd -t f 'version.*manager\|version_manager' src/promptix
# Search for ConfigLoader or similar
fd -t f 'loader\|config' src/promptix --type f
# Look for where config.yaml is loaded and parsed
rg -n 'config.yaml\|\.yaml\|load.*config' src/promptix --type=py -B 1 -A 3 | head -80
# Search for schema loading more broadly
rg -n 'schema' src/promptix/core --type=py | grep -i 'load\|get\|parse' | head -20
# Look at the storage/loaders module which was mentioned in validation.py comments
fd -t f '.*loader' src/promptix
# Check the entire src structure
find src/promptix -type f -name '*.py' | head -30
cat -n src/promptix/core/components/prompt_loader.py | sed -n '270,310p'
# Look at the test to see what structure is expected
cat -n tests/fixtures/test_prompts/TemplateDemo/config.yaml | sed -n '8,36p'
# Check if there's validation that uses schema.get("properties") with enum
rg -n 'properties.*enum\|enum.*properties' src/promptix/core --type=py -B 2 -A 2
# Look at variable_validator.py to see if there's different logic
cat -n src/promptix/core/components/variable_validator.py
# Check what structure VariableValidationStrategy actually expects vs what config.yaml provides
cat -n src/promptix/core/validation.py | sed -n '94,155p'
# Search for where "types" dict is created in schema
rg -n '"types"' src/promptix --type=py -B 2 -A 2
# Check if there's schema transformation between config loading and validation
rg -n 'schema.*types\|types.*schema' src/promptix/core --type=py -B 1 -A 1
# Check what the actual schema structure looks like when loaded
cat -n src/promptix/core/builder.py | sed -n '85,92p'
# Look at test to see what happens with "expert" difficulty
cat -n tests/quality/test_edge_cases.py | sed -n '306,325p'
# Check if BuilderValidationStrategy is being used instead for validation
rg -n 'BuilderValidationStrategy\|validate_builder' src/promptix/core --type=py -B 1 -A 1
# Look at the actual test results - does "expert" pass or fail?
cat -n tests/quality/test_edge_cases.py | sed -n '298,310p'
# Check if there's any schema validation that actually uses the "properties" field with enum
rg -n "schema.get.*properties\|properties.*enum" src/promptix/core --type=py -B 2 -A 2
# Run the test to see what actually happens
cd tests && python -m pytest quality/test_edge_cases.py::TestEdgeCases::test_template_with_extreme_conditionals -v 2>&1 | head -100
# Check what happens when we call Promptix.get_prompt with invalid difficulty
python3 << 'EOF'
import sys
sys.path.insert(0, '/repo/src')
from promptix import Promptix

try:
    prompt = Promptix.get_prompt(
        prompt_template="TemplateDemo",
        content_type="tutorial",
        theme="Test",
        difficulty="invalid_level"
    )
    print("SUCCESS: Got prompt with invalid_level difficulty")
    print("Prompt length:", len(prompt))
    print("First 200 chars:", prompt[:200])
except Exception as e:
    print(f"ERROR: {type(e).__name__}: {e}")
EOF
# Check what schema is actually passed to validation, via what version_data contains
cat -n src/promptix/core/components/version_manager.py
# Trace through: version_data is loaded by version_manager; see what get_version_data returns
rg -n 'get_version_data\|_load_versions' src/promptix/core/components/version_manager.py -A 10 | head -60
# Check if BuilderValidationStrategy is what validates the difficulty field
rg -n 'validate_builder_field\|_validate_field' src/promptix/core --type=py -B 3 -A 5 | head -100
# Check if there's any transformation of schema before validation
rg -n 'types_dict\|transform.*schema\|schema.*transform' src/promptix/core --type=py -B 1 -A 1
# Does VariableValidationStrategy ever validate enum?
rg -n 'enum' src/promptix/core/validation.py
```


Verify that the difficulty parameter is validated upstream.

The else clause (lines 7-8) catches all values other than 'beginner' and 'intermediate', including any invalid or misspelled values (e.g., 'expert', 'invalid_level', null). While config.yaml defines an enum for difficulty with only three allowed values, the validation framework does not enforce this constraint—the schema loader provides the enum definition via the properties field, but the validator searches for constraints in a types dictionary that is never populated. This allows any string value to silently fall through to the catch-all else clause without raising an error.
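As a hedged sketch of the explicit-guard approach, the allowed values from config.yaml could be checked before the template is rendered, so invalid strings cannot fall through to the catch-all else clause. The names below are illustrative, not the actual promptix internals:

```python
# Hypothetical guard: reject any difficulty value outside the enum that
# config.yaml declares, before the renderer ever sees it.
ALLOWED_DIFFICULTIES = {"beginner", "intermediate", "advanced"}

def validate_difficulty(value: str) -> str:
    """Return `value` unchanged, or raise ValueError if it is not an allowed level."""
    if value not in ALLOWED_DIFFICULTIES:
        raise ValueError(
            f"Invalid difficulty {value!r}; expected one of {sorted(ALLOWED_DIFFICULTIES)}"
        )
    return value
```

Called just before rendering, this turns the silent fall-through into a loud, early failure.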

🤖 Prompt for AI Agents
In @prompts/TemplateDemo/versions/v008.md around lines 3-9: the template's
else branch currently accepts any value for the difficulty parameter. Ensure
upstream validation enforces the enum from config.yaml, or add an explicit guard
before rendering: either (1) fix the validator to read allowed values from the
schema's properties field into the types/constraints lookup it uses (populate
the types dictionary, or change the lookup code to consult properties), or (2) add an
explicit validation function that checks the difficulty value against the
allowed set ('beginner', 'intermediate', 'advanced') and raises an error
for invalid values before calling the template renderer, so the else branch
cannot silently catch invalid inputs.
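The first option above, teaching the validator to read enum constraints from the schema's properties field rather than the never-populated types dictionary, might look roughly like this. This is a sketch under the assumption that config.yaml exposes a JSON-Schema-style `properties` mapping; the function names are hypothetical:

```python
# Hypothetical validator fix: derive per-variable enum constraints from a
# JSON-Schema-style `properties` block instead of a separate `types` dict.
def extract_enum_constraints(schema: dict) -> dict:
    """Map each property name to its set of allowed `enum` values, if any."""
    constraints = {}
    for name, spec in schema.get("properties", {}).items():
        if "enum" in spec:
            constraints[name] = set(spec["enum"])
    return constraints

def validate_variables(schema: dict, variables: dict) -> None:
    """Raise ValueError for any supplied variable that violates its enum constraint."""
    for name, allowed in extract_enum_constraints(schema).items():
        if name in variables and variables[name] not in allowed:
            raise ValueError(f"{name}={variables[name]!r} not in {sorted(allowed)}")
```

With this in the validation path, `difficulty="expert"` would fail loudly instead of rendering the else branch.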


{% if elements|length > 0 %}
Be sure to include the following elements:
{% for element in elements %}
- {{element}}
{% endfor %}
{% endif %}
11 changes: 11 additions & 0 deletions prompts/simple_chat/versions/v008.md
@@ -0,0 +1,11 @@
You are a {{personality}} assistant specialized in {{domain}}.

Your role is to provide helpful, accurate, and engaging responses to user questions and requests. Always maintain a professional and friendly tone while adapting to the user's needs.

Key guidelines:
- Be concise but thorough in your explanations
- Ask clarifying questions when needed
- Provide examples when helpful
- Stay focused on the {{domain}} domain when specified
Comment on lines +1 to +9

⚠️ Potential issue | 🟡 Minor

Minor inconsistency: domain appears required but is described as optional.

Line 1 unconditionally includes {{domain}} in the persona, but line 9 says "Stay focused on the {{domain}} domain when specified," implying it's optional. This could confuse users about whether the domain parameter is required.

📝 Proposed fix to clarify domain usage

Option 1: If domain is always required, remove "when specified":

```diff
-- Stay focused on the {{domain}} domain when specified
+- Stay focused on the {{domain}} domain
```

Option 2: If domain should be optional, make it conditional:

```diff
-You are a {{personality}} assistant specialized in {{domain}}.
+You are a {{personality}} assistant{% if domain %} specialized in {{domain}}{% endif %}.
```

and

```diff
-- Stay focused on the {{domain}} domain when specified
+{% if domain %}- Stay focused on the {{domain}} domain{% endif %}
```
🤖 Prompt for AI Agents
In @prompts/simple_chat/versions/v008.md around lines 1-9: the template is
inconsistent about the {{domain}} parameter. Decide whether {{domain}} is
required or optional and update the prompt accordingly: either (A) make
{{domain}} required by removing "when specified" from the last guideline so the
persona line and guidelines match, or (B) make {{domain}} optional by rendering
the persona line conditionally (include {{domain}} only if provided) and updating
the guidelines to refer to focusing on the domain only when present; adjust the
prompt text around the tokens {{personality}} and {{domain}} to reflect the
chosen option.
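The optional-domain behavior of Option B can be illustrated with a small pure-Python stand-in for the template renderer (this is not the promptix renderer; the function name is hypothetical):

```python
# Illustrative sketch of Option B: the specialization clause appears only
# when `domain` is provided, mirroring
# "You are a {{personality}} assistant{% if domain %} specialized in {{domain}}{% endif %}."
from typing import Optional

def render_persona(personality: str, domain: Optional[str] = None) -> str:
    """Render the persona line, including the domain clause only if one is given."""
    suffix = f" specialized in {domain}" if domain else ""
    return f"You are a {personality} assistant{suffix}."
```

With a domain this yields "You are a friendly assistant specialized in finance."; without one, the clause is simply dropped.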


How can I help you today?
126 changes: 119 additions & 7 deletions src/promptix/core/base.py
@@ -5,14 +5,15 @@
focused components and dependency injection for better testability and modularity.
"""

from typing import Any, Dict, Optional, List
from typing import Any, Dict, List, Optional, Tuple, Union
from .container import get_container
from .components import (
PromptLoader,
VariableValidator,
TemplateRenderer,
VersionManager,
ModelConfigBuilder
ModelConfigBuilder,
LayerComposer,
)
from .exceptions import PromptNotFoundError, ConfigurationError, StorageError

@@ -38,16 +39,16 @@ def __init__(self, container=None):
@classmethod
def get_prompt(cls, prompt_template: str, version: Optional[str] = None, **variables) -> str:
"""Get a prompt by name and fill in the variables.

Args:
prompt_template (str): The name of the prompt template to use
version (Optional[str]): Specific version to use (e.g. "v1").
If None, uses the latest live version.
**variables: Variable key-value pairs to fill in the prompt template

Returns:
str: The rendered prompt

Raises:
PromptNotFoundError: If the prompt template is not found
RequiredVariableError: If required variables are missing
@@ -56,7 +57,118 @@ def get_prompt(cls, prompt_template: str, version: Optional[str] = None, **varia
"""
instance = cls()
return instance.render_prompt(prompt_template, version, **variables)


@classmethod
def get_composed_prompt(
cls,
prompt_template: str,
version: Optional[str] = None,
layer_versions: Optional[Dict[str, str]] = None,
skip_layers: Optional[List[str]] = None,
_debug: bool = False,
**variables
) -> Union[str, Tuple[str, Any]]:
"""Get a prompt with layer composition.

Layers are automatically selected based on variables that match
layer configuration in config.yaml. For example, if config defines
a layer with variable='oem', passing oem='honda' will apply the
honda layer.

Args:
prompt_template: Name of the prompt template.
version: Base template version (None = current).
layer_versions: Override versions for specific layers.
skip_layers: Layer names to skip.
_debug: If True, returns tuple of (prompt, debug_info).
**variables: Template variables (also used for layer selection).

Returns:
Fully composed and rendered prompt string.
If _debug is True, returns tuple of (prompt, CompositionDebugInfo).

Raises:
PromptNotFoundError: If the prompt template is not found.
LayerRequiredError: If a required layer variable is not provided.
TemplateRenderError: If template rendering fails.
ConfigurationError: If configuration is invalid.

Example:
prompt = Promptix.get_composed_prompt(
prompt_template="ServiceAgent",
store_name="Honda World",
store_type="automotive", # Selects automotive layer
oem="honda", # Selects honda layer
locale="es-MX" # Selects Spanish locale layer
)
"""
instance = cls()
return instance.compose_prompt(
prompt_template=prompt_template,
version=version,
layer_versions=layer_versions,
skip_layers=skip_layers,
_debug=_debug,
**variables
)

def compose_prompt(
self,
prompt_template: str,
version: Optional[str] = None,
layer_versions: Optional[Dict[str, str]] = None,
skip_layers: Optional[List[str]] = None,
_debug: bool = False,
**variables
) -> Union[str, Tuple[str, Any]]:
"""Compose a prompt with layer overrides.

Args:
prompt_template: Name of the prompt template.
version: Base template version (None = current).
layer_versions: Override versions for specific layers.
skip_layers: Layer names to skip.
_debug: If True, returns tuple of (prompt, debug_info).
**variables: Template variables (also used for layer selection).

Returns:
Fully composed and rendered prompt string.
If _debug is True, returns tuple of (prompt, CompositionDebugInfo).
"""
layer_composer = self._container.get_typed("layer_composer", LayerComposer)

return layer_composer.compose(
prompt_name=prompt_template,
variables=variables,
base_version=version,
layer_versions=layer_versions,
skip_layers=skip_layers,
_debug=_debug
)

@classmethod
def list_layers(cls, prompt_template: str) -> Dict[str, List[str]]:
"""List available layers and their values for a prompt.

Args:
prompt_template: Name of the prompt template.

Returns:
Dict mapping layer names to lists of available values.

Example:
layers = Promptix.list_layers("ServiceAgent")
# Returns:
# {
# "store_type": ["automotive", "powersports", "marine"],
# "oem": ["honda", "toyota", "harley"],
# "locale": ["en-US", "es-MX", "fr-CA"]
# }
"""
instance = cls()
layer_composer = instance._container.get_typed("layer_composer", LayerComposer)
return layer_composer.list_layers(prompt_template)

def render_prompt(self, prompt_template: str, version: Optional[str] = None, **variables) -> str:
"""Render a prompt with the provided variables.

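The variable-driven layer selection that get_composed_prompt's docstring describes (a layer applies when a call-time variable matches a layer configured in config.yaml) can be sketched in isolation. Everything below is a hypothetical illustration; the config shape and function name are assumptions, not the LayerComposer internals:

```python
# Hypothetical illustration of variable-driven layer selection, loosely
# modeled on the list_layers() example output in the diff above.
LAYER_CONFIG = {
    "store_type": ["automotive", "powersports", "marine"],
    "oem": ["honda", "toyota", "harley"],
    "locale": ["en-US", "es-MX", "fr-CA"],
}

def select_layers(variables, layer_config=LAYER_CONFIG, skip_layers=None):
    """Return (layer_name, value) pairs for variables that match a configured layer.

    Variables that name no layer (e.g. store_name) are ignored; a variable that
    names a layer but carries an unknown value raises ValueError.
    """
    skip = set(skip_layers or [])
    selected = []
    for name, allowed in layer_config.items():
        if name in skip or name not in variables:
            continue
        value = variables[name]
        if value not in allowed:
            raise ValueError(f"Unknown {name} layer: {value!r}")
        selected.append((name, value))
    return selected
```

For example, passing store_type="automotive" and oem="honda" selects those two layers, while store_name passes through as an ordinary template variable.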
6 changes: 4 additions & 2 deletions src/promptix/core/components/__init__.py
Original file line number Diff line number Diff line change
Expand Up @@ -10,11 +10,13 @@
from .template_renderer import TemplateRenderer
from .version_manager import VersionManager
from .model_config_builder import ModelConfigBuilder
from .layer_composer import LayerComposer

__all__ = [
"PromptLoader",
"VariableValidator",
"TemplateRenderer",
"VersionManager",
"ModelConfigBuilder"
"ModelConfigBuilder",
"LayerComposer",
]