Add per-engine expected result overrides for known engine-specific behaviors #95

@eerxuan

Description

Problem

Some test cases have known engine-specific behaviors where the expected result differs between engines (e.g., MongoDB vs DocumentDB). Currently we mark these tests with pytest.mark.xfail, which effectively skips result verification, but this approach:

  1. Doesn't verify the test fails in the expected way
  2. Won't alert us when the engine fixes the issue (xfail will silently start passing with strict=False, or break with strict=True)
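To make the two failure modes concrete, here is a standalone pytest sketch (the `engine_result` stub is illustrative, not our actual test code):

```python
import pytest

def engine_result():
    # Stand-in for calling the engine; returns the buggy value today.
    return "buggy"

@pytest.mark.xfail(strict=False, reason="known engine bug")
def test_lenient():
    # With strict=False: while the bug exists this is XFAIL; once the
    # engine is fixed it becomes XPASS and the suite still passes,
    # so the fix can go unnoticed.
    assert engine_result() == "correct"

@pytest.mark.xfail(strict=True, reason="known engine bug")
def test_strict():
    # With strict=True: an unexpected pass fails the suite, so a fix is
    # noticed - but neither mode verifies *how* the test fails while
    # the bug is still present.
    assert engine_result() == "correct"
```

In neither mode does pytest check that the failure matches the known buggy output, which is the gap the proposal below addresses.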

Example

$replaceAll with empty find on multibyte characters — MongoDB returns invalid UTF-8 byte sequences instead of correct code-point-boundary insertion. The 3 affected tests (empty_find_multibyte_2byte/3byte/4byte) are currently marked xfail.
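As a plain-Python illustration of the intended semantics (assuming, for illustration only, that an empty find means inserting the replacement at every code-point boundary, including both ends):

```python
# Sketch of code-point-boundary insertion for $replaceAll with an empty
# `find` string. The key property: the replacement is never inserted
# inside a multibyte character's byte sequence, so the result remains
# valid UTF-8.
def replace_all_empty_find(text: str, replacement: str) -> str:
    # Python strings are sequences of code points, so joining on the
    # characters inserts at each interior boundary; add both ends.
    return replacement + replacement.join(text) + replacement

# 2-byte multibyte example: "éé" is two code points (three boundaries
# interior-and-ends in total for insertion).
result = replace_all_empty_find("\u00e9\u00e9", "-")
print(result)  # -é-é-
# Every "-" sits on a code point boundary, so encoding round-trips:
assert result.encode("utf-8").decode("utf-8") == result
```

The buggy engine instead splits inside the multibyte sequences, producing byte strings that are not valid UTF-8.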

Proposed Solution

Add a per-engine expected result override mechanism to BaseTestCase, so tests can specify alternate expected values for specific engines. This would:

  • Still run the test against all engines
  • Verify the actual (buggy) behavior matches the known override
  • Detect when an engine fixes the issue (the override no longer matches, so the test fails and flags the stale override for review)

This mechanism should align with the DocumentDB engine test infrastructure being built, so the same override pattern works for both MongoDB and DocumentDB differences.
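A minimal sketch of what the override hook might look like (names such as `engine_overrides`, `expected_for`, and `verify` are hypothetical, not existing APIs in BaseTestCase, and the override values below are placeholders):

```python
class BaseTestCase:
    # Hypothetical mapping: engine name -> {test case id -> engine-specific
    # expected value that documents the known bug}.
    engine_overrides: dict[str, dict[str, object]] = {}

    def expected_for(self, engine: str, case_id: str, canonical):
        """Return the per-engine expected value if one is registered,
        otherwise the canonical expected value."""
        return self.engine_overrides.get(engine, {}).get(case_id, canonical)

    def verify(self, engine: str, case_id: str, actual, canonical):
        expected = self.expected_for(engine, case_id, canonical)
        if expected != canonical and actual == canonical:
            # The engine now produces the canonical result: the known bug
            # looks fixed, so surface the stale override for review.
            raise AssertionError(
                f"{engine}/{case_id}: engine matches the canonical result; "
                "remove the obsolete override"
            )
        assert actual == expected, (
            f"{engine}/{case_id}: got {actual!r}, expected {expected!r}"
        )

class ReplaceAllTests(BaseTestCase):
    engine_overrides = {
        # Placeholder for the known MongoDB output from the example above.
        "mongodb": {"empty_find_multibyte_2byte": "buggy-output"},
    }
```

With this shape, the buggy output still verifies against the override on MongoDB, other engines verify against the canonical result, and a MongoDB fix trips the stale-override check instead of passing silently.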

Labels: enhancement (New feature or request)
