Improving the orientation classification model #2008
Unanswered
mislav-zane asked this question in Q&A
Replies: 1 comment · 2 replies
In a project of mine I want to read some randomly rotated numbers from grayscale images.
I am working with `db_resnet34` as the detection model and `crnn_vgg16_bn` as the recognition model.

Test Runs
Test Run 1 -- Large Set
I am initializing the `ocr_predictor` and then running it on the preprocessed images. Running this on a dataset of 700 images gives me a success rate of about 52%.
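For context, a typical docTR setup with these two models might look like the sketch below. The keyword arguments shown are based on docTR's public `ocr_predictor` API and are my assumption, not the initialization code actually used in this test run:

```python
# Hypothetical docTR setup for the models named above.
# The import is guarded so the sketch stays loadable where docTR is absent.
try:
    from doctr.models import ocr_predictor
except ImportError:  # docTR not installed
    ocr_predictor = None


def build_predictor():
    """Build a detection + recognition pipeline (db_resnet34 + crnn_vgg16_bn)."""
    if ocr_predictor is None:
        raise RuntimeError("docTR is not installed")
    return ocr_predictor(
        det_arch="db_resnet34",     # text detection backbone
        reco_arch="crnn_vgg16_bn",  # text recognition backbone
        pretrained=True,            # assumption: pretrained weights are used
    )
```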
Example from the used data set:

Test Run 2 -- Straightened Set
For comparison, I manually rotated a randomly selected subset of 40 images into a horizontally straight orientation and ran the predictor on these straightened images. The success rate was 95%.
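The manual straightening step can be sketched as follows. This is illustrative rather than the actual script used here: it assumes a Pillow-style image object with a `rotate` method, and the per-image angle is whatever undoes that image's random rotation:

```python
def straighten(image, angle_degrees):
    """Rotate an image back to horizontal.

    `image` is assumed to expose a Pillow-style rotate() method.
    expand=True grows the canvas so content is not clipped at the corners;
    fillcolor=255 paints the new corner regions white, matching a light
    grayscale background.
    """
    return image.rotate(angle_degrees, expand=True, fillcolor=255)
```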
Example from the used data set (same image from above, but straightened):

Test Run 3 -- Non-Straightened Set
Running the same subset of 40 images, but non-straightened, through the same predictor as in Test Run 1 yields a success rate of 55%.
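The success rates quoted across the three runs follow from a simple exact-match accuracy. A pure-Python helper (hypothetical names, not part of the setup above) makes the arithmetic explicit:

```python
def success_rate(predictions, ground_truth):
    """Fraction of predicted strings that exactly match the expected ones."""
    if not ground_truth:
        raise ValueError("ground_truth must not be empty")
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)


# e.g. 22 correct reads out of 40 non-straightened images -> 0.55,
# matching the rate reported for Test Run 3.
```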
My Questions
I have two questions: