All Submissions Basics:
- Have you followed the guidelines in our Contributing document?
- Have you checked to ensure there aren't other open Pull Requests for the same update/change?
- Have you checked all Issues to tie the PR to a specific one?
All Submissions Cores:
- Have you added an explanation of what your changes do and why you'd like us to include them?
- Have you written new tests for your core changes, as applicable?
- Have you successfully run tests with your changes locally?
- Does your submission pass tests, including CircleCI, Travis CI, and AppVeyor?
- Does your submission have appropriate code coverage? The cutoff threshold is 95% by Coveralls.
New Model Submissions:
- Have you created a .py in ~/pyod/models/?
- Have you created a _example.py in ~/examples/?
- Have you created a test_.py in ~/pyod/test/?
- Have you linted your code locally prior to submission?
Description
Quasi-Monte Carlo Discrepancy outlier detection. Ref: https://www.sciencedirect.com/science/article/pii/S0885064X01905898
The wrap-around Quasi-Monte Carlo discrepancy is a uniformity criterion used to assess the space filling of a set of samples in a hypercube. It quantifies the distance between the continuous uniform distribution on the hypercube and the discrete uniform distribution on the distinct sample points. A lower discrepancy value for a sample point therefore indicates that it provides better coverage of the parameter space relative to the rest of the samples. The method is kernel based: the higher a sample's discrepancy score relative to the rest of the samples, the higher the likelihood that it is an outlier.
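To illustrate the idea, here is a minimal, hypothetical sketch (not the PR's actual implementation) of scoring points with the product kernel that appears in the wrap-around L2-discrepancy. It assumes samples already lie in the unit hypercube; the function names `wrap_around_kernel` and `outlier_scores` are made up for this example. A point that is far from the bulk of the data in the wrap-around metric has low mean kernel similarity, so negating that similarity yields a score where larger means more outlying:

```python
import numpy as np


def wrap_around_kernel(U):
    """Pairwise wrap-around discrepancy kernel for samples in the unit hypercube.

    k(x, y) = prod_d (3/2 - |x_d - y_d| * (1 - |x_d - y_d|)),
    the product kernel appearing in the wrap-around L2-discrepancy:
        WD^2 = -(4/3)^d + (1/n^2) * sum_{i,j} k(x_i, x_j).
    """
    d = np.abs(U[:, None, :] - U[None, :, :])     # |x_id - x_jd|, shape (n, n, dim)
    return np.prod(1.5 - d * (1.0 - d), axis=-1)  # product over dimensions


def outlier_scores(U):
    """Higher score = point contributes less to uniform coverage (sketch only)."""
    K = wrap_around_kernel(U)
    # Mean kernel similarity of each point to the whole sample; points far
    # from the bulk (in the wrap-around metric) have low mean similarity.
    coverage = K.mean(axis=1)
    return -coverage  # negate so larger values flag likely outliers
```

For instance, with a tight cluster of points near (0.1, 0.1) plus a single point at (0.6, 0.6), the isolated point receives the largest score. Note that `scipy.stats.qmc.discrepancy(U, method="WD")` computes the aggregate wrap-around discrepancy of a whole sample, while the sketch above breaks the same kernel sum down per point.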