- 0-1 Loss
- Absolute Error
- Adversarial Loss
- Akaike Information Criterion
- Attention Alignment
- AUC-Borji
- AUC-Judd
- Bayesian Information Criterion
- BCE with Logits
- Bhattacharyya Distance
- Binary Cross Entropy
- BLEU
- BYOL
- BYOL Loss
- Chebyshev Distance
- Chi-Squared Distance
- Confusion Matrix
- Contrastive Loss
- Cosine Distance
- Cosine Learning Rate Decay
- Cosine Similarity
- Cross Entropy
- Cross Validation
- CTC
- Cycle Consistency Loss
- Dice Score
- Distance Measures
- Distillation Loss
- Earth Mover's Distance (EMD)
- ELBO Loss
- Empirical Risk
- Euclidean Distance
- Focal Loss
- GE2E
- Hamming Distance
- Hausdorff Distance
- Haversine Distance
- Hinge Loss
- Huber
- Identity Loss
- Inter-Sentence Coherence Loss
- Intra-Cluster Variance
- ITM Loss
- Jaccard Distance
- Jensen-Shannon Divergence Consistency Loss
- KL Divergence
- Least Squares Loss
- Log Likelihood Criterion
- Log Likelihood Loss
- LogCosh
- MAE
- Mallows Cp Statistic
- Manhattan Distance
- MAPE
- Margin Ranking
- Max Margin Loss
- Minkowski Distance
- MSE
- MSLE
- Negative Log Likelihood
- PatchGAN
- Perplexity
- Poisson Loss
- Precision
- Precision-Recall Curve
- Quadratic Loss
- Quantile Loss
- RAHP
- Recall
- Recipe for Constructing Loss Functions
- Reconstruction Loss
- ROC Curve
- SDR
- Sensitivity
- Shuffled-AUC
- SΓΈrensen-Dice Index
- Sparse Dictionary Learning Loss
- Specificity
- Squared Error
- Squared Hinge
- SSR
- Triplet Loss