🚀 Feature
Given two probability distributions $P$ and $Q$, the Jensen-Shannon (JS) divergence is defined as:
$JS(P \mid \mid Q) = \frac{1}{2} KL(P \mid \mid M) + \frac{1}{2} KL(Q \mid \mid M)$
where:
$M$ is the mixture distribution (the average of $P$ and $Q$), defined as
$M = \frac{1}{2} (P + Q)$
Thus, the JS divergence computes the KL divergence between each distribution and the average of both, making it symmetric and bounded in the range:
$0 \leq JS(P \mid \mid Q) \leq \log 2$
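For intuition, here is a minimal sketch in plain PyTorch (nothing torchmetrics-specific; the helper name js_divergence is just illustrative) that evaluates the definition numerically and checks the symmetry and the log 2 bound stated above:

```python
import torch

def js_divergence(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """JS divergence (in nats) of two discrete probability vectors, straight from the definition."""
    m = 0.5 * (p + q)                    # mixture M = (P + Q) / 2
    kl_pm = torch.xlogy(p, p / m).sum()  # KL(P || M), with 0 * log 0 treated as 0
    kl_qm = torch.xlogy(q, q / m).sum()  # KL(Q || M)
    return 0.5 * kl_pm + 0.5 * kl_qm

p = torch.tensor([0.36, 0.48, 0.16])
q = torch.tensor([0.30, 0.50, 0.20])
print(js_divergence(p, q), js_divergence(q, p))  # equal values: JS is symmetric

# Distributions with disjoint support hit the upper bound log 2 ≈ 0.6931
print(js_divergence(torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])))
```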
Motivation
The PyTorch Metrics library currently includes Kullback-Leibler (KL) Divergence, which is widely used for measuring differences between probability distributions. However, KL divergence is asymmetric and unbounded, which makes it less suitable for some applications.
Jensen-Shannon (JS) Divergence provides a symmetric and smoothed alternative by averaging the KL divergences of the two distributions against their mixture. This makes it useful for a range of tasks, including generative modeling, NLP, and probabilistic machine learning.
Adding JS divergence to PyTorch Metrics would align with the existing divergence metrics.
Pitch
I propose adding JS Divergence as a new metric in PyTorch Metrics, similar to how KL Divergence is implemented. My implementation is already available in my forked repository, and I’d be happy to refine it based on community feedback.
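For concreteness, here is a rough sketch of what a functional version could look like, built on the existing torchmetrics.functional.kl_divergence. The name jensen_shannon_divergence, the [N, d] input convention, and the reliance on kl_divergence's default probability inputs and "mean" reduction are assumptions for illustration, not the actual proposed implementation:

```python
import torch
from torchmetrics.functional import kl_divergence

def jensen_shannon_divergence(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Hypothetical functional JS divergence for batches of probability vectors.

    ``p`` and ``q`` are tensors of shape [N, d] whose rows are probability
    distributions, matching the input convention of ``kl_divergence``.
    """
    m = 0.5 * (p + q)  # per-row mixture distribution M
    # JS(P || Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = torch.tensor([[0.36, 0.48, 0.16]])
q = torch.tensor([[0.30, 0.50, 0.20]])
print(jensen_shannon_divergence(p, q))  # equals jensen_shannon_divergence(q, p)
```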
Alternatives
Manual computation: Users can compute JS divergence manually from KL divergence, but a built-in metric makes this more convenient and consistent with the existing API.
Third-party libraries: Some users rely on scipy.spatial.distance.jensenshannon, which returns the Jensen-Shannon distance, i.e. the square root of the JS divergence (see the cross-check below).
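As a quick cross-check of that claim (a standalone sketch with made-up probability vectors), squaring SciPy's Jensen-Shannon distance recovers the divergence computed directly from the definition:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.special import rel_entr  # elementwise p * log(p / q)

p = np.array([0.36, 0.48, 0.16])
q = np.array([0.30, 0.50, 0.20])

# JS divergence from the definition, in nats
m = 0.5 * (p + q)
js_div = 0.5 * rel_entr(p, m).sum() + 0.5 * rel_entr(q, m).sum()

# scipy returns the JS *distance*; squaring it gives the divergence
# (default natural-log base, so both values are in nats)
print(js_div, jensenshannon(p, q) ** 2)  # the two numbers should match
```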
Additional context
N/A