This pull request is a partial update to #375
As a first step, I cleaned up the Autoencoder_Torch docstring and initialization to make it clear that the learning rate is a valid parameter for the model. In the process, I also tidied the initialization code and replaced some incorrect links (pointing to TensorFlow) with the corresponding PyTorch ones.
I did not alter any functionality, as the learning rate was already supported (just very well hidden).
With PyTorch it is difficult to pass a pre-constructed optimizer instance (with custom settings), since the optimizer must be tied to the model parameters at construction time. Unfortunately, the model can only be initialized once the number of features is known, which only happens when the package user calls the fit function. This is therefore also where the optimizer gets initialized.
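To illustrate the constraint, here is a minimal sketch (with hypothetical class names, not the actual pyod implementation) of why the optimizer is built inside fit rather than accepted as an instance: torch.optim optimizers require the model's parameters up front, and the model can only be constructed once the feature count is known.

```python
import torch
import torch.nn as nn


class SimpleAutoEncoder(nn.Module):
    """Toy autoencoder; the real model is more elaborate."""

    def __init__(self, n_features, hidden_dim=16):
        super().__init__()
        self.encoder = nn.Linear(n_features, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, n_features)

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))


class AutoEncoderDetector:
    def __init__(self, learning_rate=1e-3, epochs=10):
        # Only scalar hyperparameters are stored here; neither the model nor
        # the optimizer exists yet because n_features is still unknown.
        self.learning_rate = learning_rate
        self.epochs = epochs

    def fit(self, X):
        X = torch.as_tensor(X, dtype=torch.float32)
        n_features = X.shape[1]

        # Model and optimizer are created together, now that the number of
        # features is available. The learning rate is passed through here.
        self.model_ = SimpleAutoEncoder(n_features)
        optimizer = torch.optim.Adam(self.model_.parameters(),
                                     lr=self.learning_rate)
        criterion = nn.MSELoss()

        for _ in range(self.epochs):
            optimizer.zero_grad()
            loss = criterion(self.model_(X), X)
            loss.backward()
            optimizer.step()
        return self
```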
I ran the tests for the Autoencoder_PyTorch module locally on my own fork, and all of them passed.
Expect more PRs once I have time; for now, at least the PyTorch version of the Autoencoder has an improved docstring that mentions the learning rate.
All Submissions Basics:

- Have you followed the guidelines in our Contributing document?
- Have you checked to ensure there aren't other open Pull Requests for the same update/change?
- Have you checked all Issues to tie the PR to a specific one?

All Submissions Cores:

- Have you added an explanation of what your changes do and why you'd like us to include them?
- Have you written new tests for your core changes, as applicable? Not applicable.
- Have you successfully run tests with your changes locally?
- Does your submission pass tests, including CircleCI, Travis CI, and AppVeyor?
- Does your submission have appropriate code coverage? The cutoff threshold is 95% by Coveralls.