Conversation
Thanks for opening your first pull request in this repository! Someone will review it when they have a chance. In the meantime, please be sure that you've handled the following things, to make the review process quicker and easier:
Thank you again for your contributions! 👍
@shrit I will add some tests, and then we can also check the implementation of the separable convolution on my local machine.
This issue has been automatically marked as stale because it has not had any recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions! 👍
rcurtin left a comment
Hey there @SIDHARTH-7, really nice to have this support added. Sorry that it took almost four months to get to this. I just have some high-level comments for now; I have not checked the implementation in detail. I know my review is late, but if you are still interested in working on it after all this time, let me know what you think. 👍
@@ -0,0 +1,82 @@
{
No need to add this file---we could add it to the .gitignore perhaps?
template<
    typename MatType = arma::mat
>
class MobileNetType : public MultiLayer<MatType>
I have a general question about the design. I think it might be cleaner to simply have MobileNetType internally store an FFN that you add the layers to in the constructor. I think several of the other models are built in this way. Is there any particular advantage of this approach that I overlooked? It seems like if you did just hold an FFN internally, you could use default copy/move constructors and operators and reduce the amount of code that you need.
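To make that concrete, here is a minimal sketch of the composition-based design, assuming mlpack 4's single mlpack.hpp header and the FFN Add<LayerType>(...) API; the class name MobileNetSketch, the numClasses parameter, and the layers shown are placeholders rather than the real MobileNet architecture:

// Sketch only: the wrapper owns an FFN and builds it in the constructor,
// so the defaulted copy/move constructors and assignment operators suffice.
#include <mlpack.hpp>

class MobileNetSketch
{
 public:
  MobileNetSketch(const size_t numClasses = 1000)
  {
    // All layers are added to the internal FFN exactly once, here.
    network.Add<mlpack::Linear>(1024);
    // ... the depthwise-separable convolution blocks would go here ...
    network.Add<mlpack::Linear>(numClasses);
    network.Add<mlpack::LogSoftMax>();
  }

  // Expose the wrapped network for training and prediction.
  mlpack::FFN<>& Model() { return network; }

 private:
  mlpack::FFN<> network;
};

With this layout there is no state outside the FFN member, which is what makes the defaulted special member functions enough.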
}; // MobileNetType class

// convenience typedef.
typedef MobileNetType<arma::mat> Mobilenet;
Suggested change:
- typedef MobileNetType<arma::mat> Mobilenet;
+ typedef MobileNetType<arma::mat> MobileNet;
  model.InputDimensions() = std::vector<size_t>({224, 224, 3});
  model.Add<models::Mobilenet>(mobilenetLayer);
  ModelDimTest(model, input, 9216, 10);
}
It would be really great to add a test for models::MobileNetType<arma::fmat>, just to make sure that the model works with low precision. I think so long as you don't test serialization, that should work just fine.
(The reason serialization would have a problem is because you'd need to register all the arma::fmat layers for serialization. That can add a lot of compile time... but I think it's safe to say that if MobileNet serializes correctly with arma::mat, then it will serialize correctly with arma::fmat, assuming that each layer correctly serializes with arma::fmat. From the perspective of this code, that should be a safe assumption---testing individual layer serialization should be done inside of mlpack's core code.)
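A sketch of what that low-precision test could look like, assuming the test file's usual includes, that the ModelDimTest helper works with arma::fmat (templated or overloaded), and that mlpack 4's MatType-templated loss (NegativeLogLikelihoodType<arma::fmat>) is available; the test name, tag, and expected dimensions below simply mirror the arma::mat test above:

// Sketch only; mirrors the existing arma::mat dimension test.
TEST_CASE("MobileNetFMatDimTest", "[FFNModelsTests]")
{
  // A single random flattened 224x224x3 input point.
  arma::fmat input(224 * 224 * 3, 1, arma::fill::randu);

  mlpack::FFN<mlpack::NegativeLogLikelihoodType<arma::fmat>,
              mlpack::RandomInitialization,
              arma::fmat> model;
  model.InputDimensions() = std::vector<size_t>({224, 224, 3});

  models::MobileNetType<arma::fmat> mobilenetLayer;
  model.Add<models::MobileNetType<arma::fmat>>(mobilenetLayer);

  // Only the forward pass and output dimensions are checked; serialization
  // is deliberately not tested for arma::fmat, as discussed above.
  ModelDimTest(model, input, 9216, 10);
}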
This PR updates the implementation of MobileNetV1, which was originally implemented in #72 and worked on by @Aakash-Kaushik, @zoq, and @kartikdutt18. The files are named just mobilenet so that further versions can be included in the same files for better readability.