This is a simple MNIST digit predictor built with PyTorch. It uses randomised hyper-parameter tuning to find the best (simple) model.
I have used a lot of machine learning, both in my course and in personal projects, but I have never used randomised hyper-parameter tuning; I have always happened to take a more systematic approach. I wanted to give it a try and see its merits, and that motivated this small project. This is not a rigorous test: to make it rigorous, you would need to run a more systematic hyper-parameter tuning method alongside this randomised approach and then compare the two. It would also be beneficial to have a setting where more hyper-parameters exist to search over.
Really I was just looking for a nice little nostalgic project to do.
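For anyone unfamiliar with the idea, randomised tuning just samples each hyper-parameter independently from a search space for a fixed number of trials, rather than sweeping a grid. The sketch below shows the general pattern; the search space, parameter names, and scoring function here are illustrative assumptions, not the ones this project actually uses.

```python
import math
import random

# Hypothetical search space -- the project's real ranges may differ.
SEARCH_SPACE = {
    "lr": (1e-4, 1e-1),        # sampled log-uniformly
    "hidden": [64, 128, 256],  # sampled uniformly from the list
    "dropout": (0.0, 0.5),     # sampled uniformly
}

def sample_config(rng):
    """Draw one random configuration from the search space."""
    lo, hi = SEARCH_SPACE["lr"]
    return {
        # Log-uniform sampling spreads trials evenly across magnitudes.
        "lr": math.exp(rng.uniform(math.log(lo), math.log(hi))),
        "hidden": rng.choice(SEARCH_SPACE["hidden"]),
        "dropout": rng.uniform(*SEARCH_SPACE["dropout"]),
    }

def random_search(evaluate, trials, seed=0):
    """Try `trials` random configs; return the best one and its score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample_config(rng)
        score = evaluate(cfg)  # e.g. validation accuracy after a few epochs
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

In the real project, `evaluate` would train a small network for a few epochs and return validation accuracy; here it can be any function of the config.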
To set up, change into the project directory and install the requirements:
cd mnist
pip install -r requirements.txt
The following command runs the hyper-parameter search and trains the final model:
python -m src.main --trials 20 --tune-epochs 3 --final-epochs 8
To predict with the best model, replace the 'YYYYMMDD_HHMMSS' portion of the path with the timestamp of the run directory that was created.
python -m src.predict --model runs/mnist_YYYYMMDD_HHMMSS/best_model.pt --from-test --n 16