Proposed change to PCA model decision_scores_ calculation
Created by: ghost
Hi there,

The current `decision_scores_` calculation in the PCA model measures the Euclidean distance from each point in `X` to the selected principal components, weighted by the components' explained variance ratio.
For the following scaled `X` values, the final point `X[10]` (green arrow) should receive the largest anomaly score. In fact, it is scored as the least anomalous.

This happens because the current `decision_scores_` calculation computes each point's summed Euclidean distance to the tip of each eigenvector (in red). Points `X[0]` and `X[9]` therefore receive the largest anomaly scores simply because they lie furthest from the tips of the two eigenvectors.
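To illustrate the problem, here is a minimal sketch of the behaviour described above (an assumed reconstruction for illustration, not the library's actual source): each point is scored by its variance-ratio-weighted sum of Euclidean distances to the tip of each unit eigenvector. The synthetic data (ten inliers on the line y = x plus one off-line point at index 10) is also assumed, standing in for the figure.

```python
# Hedged sketch of the tip-distance scoring described above (illustration only).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

t = np.linspace(-1.0, 1.0, 10)
X = np.vstack([np.column_stack([t, t]), [0.0, 1.0]])  # X[10] is the true anomaly

pca = PCA(n_components=None).fit(X)
tips = pca.components_  # one unit-length eigenvector per row: the "tips" in red

# Summed distance from each sample to each eigenvector tip,
# weighted by the explained variance ratio.
naive_scores = (cdist(X, tips) * pca.explained_variance_ratio_).sum(axis=1)

print(np.argmax(naive_scores))  # an extreme inlier, not the anomaly at index 10
```

Under this scheme the highest score goes to an endpoint of the inlier line, while the genuinely off-distribution point `X[10]` ranks low.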
In principle, large deviations of `X` from the mean in principal-component space should be treated as anomalous, with deviations along eigenvectors of smaller explained variance given more weight. A revised `decision_scores_` formula that satisfies this premise (using scikit-learn) is:
```python
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=None)
transformed = pca.fit_transform(X)  # fit once and reuse the projection
decision_scores_ = np.sum(
    (transformed - transformed.mean(axis=0)) ** 2 / pca.explained_variance_ratio_,
    axis=1,
).ravel()
```
where:
- X is the scaled features
- pca.fit_transform(X) is the transformed data
- pca.fit_transform(X).mean(axis=0) is the mean of the data in principal-component space; since `fit_transform` centres `X` before projecting, this mean is effectively zero and the term is redundant
- pca.explained_variance_ratio_ is the proportion of variance explained by each principal component, used to weight the anomaly scores
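Putting the pieces above together, here is a runnable sketch of the revised formula on assumed synthetic data (ten inliers on the line y = x plus one off-line point at index 10, standing in for the original figure):

```python
# Demo of the revised decision_scores_ formula on synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 10)
inliers = np.column_stack([t, t + rng.normal(scale=0.05, size=10)])
X = np.vstack([inliers, [0.0, 1.0]])  # X[10] is the anomaly

pca = PCA(n_components=None)
transformed = pca.fit_transform(X)  # already mean-centred, so the mean term is ~0
decision_scores_ = np.sum(
    (transformed - transformed.mean(axis=0)) ** 2 / pca.explained_variance_ratio_,
    axis=1,
).ravel()

print(np.argmax(decision_scores_))  # expect 10: the off-line point scores highest
```

Because the squared deviation along each component is divided by that component's explained variance ratio, the outlier's large offset along the low-variance second component dominates its score, and `X[10]` is correctly ranked as the most anomalous point.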