Speed up JSON schema inference by ~2.8x #9494
Open
Rafferty97 wants to merge 6 commits into apache:main from
Which issue does this PR close?
This PR fixes #9484 and lays the groundwork for implementing #9482. It also delivers an approximately 2.8x speedup to JSON schema inference.
I have refactored the code that infers the schema of JSON sources, specifically:
- Reimplemented schema inference directly on top of the `TapeDecoder`, eliminating the need to materialise rows into `serde_json::Value`s first
- Moved `ValueIter` into its own module

Rationale for this change
While working on #9482, I saw a need and opportunity to refactor the schema inference code for JSON schemas. I also discovered the bug detailed in #9484.
These changes not only make the code more readable and predictable by eliminating a lot of special-case handling, but also make it trivial to create a new inference function for "single field" JSON reading.
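The core of schema inference is unifying the types observed for the same field across rows. The following std-only toy sketch illustrates that unification step; the enum, the `coerce` function, and its widening rules are all illustrative assumptions, not the actual arrow-json code, which works on arrow's `DataType` directly from the `TapeDecoder` output.

```rust
use std::collections::BTreeMap;

// Toy stand-in for the inferred type of a JSON field (illustrative only).
#[derive(Debug, Clone, PartialEq)]
enum Inferred {
    Null,
    Int64,
    Float64,
    Utf8,
    Object(BTreeMap<String, Inferred>),
}

// Merge the types seen for one field across rows, widening where possible.
fn coerce(a: Inferred, b: Inferred) -> Inferred {
    use Inferred::*;
    match (a, b) {
        // Null unifies with anything.
        (Null, t) | (t, Null) => t,
        // Integers widen to floats.
        (Int64, Float64) | (Float64, Int64) => Float64,
        // Objects merge field-by-field, recursively.
        (Object(mut x), Object(y)) => {
            for (k, v) in y {
                let merged = match x.remove(&k) {
                    Some(prev) => coerce(prev, v),
                    None => v,
                };
                x.insert(k, merged);
            }
            Object(x)
        }
        (x, y) if x == y => x,
        // Incompatible types fall back to Utf8 in this toy sketch.
        _ => Utf8,
    }
}

fn main() {
    assert_eq!(coerce(Inferred::Int64, Inferred::Float64), Inferred::Float64);
    assert_eq!(coerce(Inferred::Null, Inferred::Utf8), Inferred::Utf8);
    println!("ok");
}
```

Because unification is a pure fold over observed types, it can run row-by-row over the tape without ever building intermediate `serde_json::Value` trees.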
They have also provided a significant performance boost to the schema inference functions. I added a simple benchmark for `infer_json_schema`, which yielded the following results on my machine, reflecting an approximately 2.8x speedup.

Before changes:
infer_json_schema/1000 time: [1.4443 ms 1.4616 ms 1.4793 ms]
thrpt: [85.336 MiB/s 86.366 MiB/s 87.401 MiB/s]
After changes:
infer_json_schema/1000 time: [517.79 µs 519.10 µs 520.54 µs]
thrpt: [242.51 MiB/s 243.18 MiB/s 243.80 MiB/s]
change:
time: [−64.919% −64.485% −64.043%] (p = 0.00 < 0.05)
thrpt: [+178.11% +181.57% +185.06%]
What changes are included in this PR?
At a glance:
Because this is a somewhat sizeable PR, I've done my best to break it into a logical sequence of commits to hopefully assist with the review.
Are these changes tested?
Yes, the changes pass all existing unit tests, except for one that was intentionally removed due to the change in behaviour related to #9484 (removing scalar-to-array promotion).
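The removed scalar-to-array promotion can be sketched with toy types (illustrative names only, not the arrow-json API): previously, seeing a scalar in one row and an array of that scalar in another would unify to the array type; now such a mismatch is reported as an inference error.

```rust
// Toy stand-in for an inferred type (illustrative only).
#[derive(Debug, Clone, PartialEq)]
enum Toy {
    Int64,
    List(Box<Toy>),
}

// Old behaviour (removed): promote a scalar to the matching list type.
fn coerce_with_promotion(a: Toy, b: Toy) -> Toy {
    match (a, b) {
        (Toy::List(x), y) | (y, Toy::List(x)) if *x == y => Toy::List(x),
        (x, _) => x,
    }
}

// New behaviour: mismatched scalar/list types are an inference error.
fn coerce_strict(a: Toy, b: Toy) -> Result<Toy, String> {
    if a == b {
        Ok(a)
    } else {
        Err(format!("Expected {a:?}, found {b:?}"))
    }
}

fn main() {
    let list = Toy::List(Box::new(Toy::Int64));
    // Old: Int64 vs List(Int64) silently became List(Int64).
    assert_eq!(coerce_with_promotion(Toy::Int64, list.clone()), list);
    // New: the same input is rejected up front.
    assert!(coerce_strict(Toy::Int64, list).is_err());
    println!("ok");
}
```

Rejecting the mismatch during inference matches what the actual JSON reader does, since it never supported the promotion in the first place (see #9484).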
I have also added an additional benchmark for the schema inference performance.
Are there any user-facing changes?
There are no API changes, except for the addition of the `record_count` method on `ValueIter`. However, the error messages returned by `infer_json_schema` and its cousins will change significantly, with most of them condensed to a single "Expected {expected}, found {got}" template.
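A minimal sketch of the consolidated error template (the helper name is hypothetical; only the message format comes from this PR):

```rust
// Hypothetical helper illustrating the single "Expected {expected}, found {got}"
// message template that most inference errors are condensed into.
fn type_error(expected: &str, got: &str) -> String {
    format!("Expected {expected}, found {got}")
}

fn main() {
    assert_eq!(type_error("object", "string"), "Expected object, found string");
    println!("ok");
}
```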
Finally, some files that used to yield a valid schema will now return errors. This is desirable, because those files would have failed in the actual JSON reader anyway, due to its lack of support for scalar-to-array promotion. (See #9484)