Towards Data Science · 1d ago · Research & Papers

Explainable AI in Production: A Neuro-Symbolic Model for Real-Time Fraud Detection

This article presents a neuro-symbolic model that provides deterministic, human-readable explanations for fraud predictions in 0.9 ms, a 33x speedup over SHAP. The model achieves the same fraud recall as SHAP on the Kaggle Credit Card Fraud dataset.

💡

Why it matters

This research presents a significant performance improvement for explainable AI in production environments, enabling real-time fraud detection with human-readable explanations.

Key Points

  • Neuro-symbolic model produces deterministic, human-readable explanations for fraud predictions
  • Explanation is generated as a by-product of the forward pass, taking only 0.9 ms
  • Achieves the same fraud recall as SHAP on the Kaggle Credit Card Fraud dataset
  • 33x speedup compared to the 30 ms required by SHAP to generate explanations
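The "explanation as a by-product of the forward pass" idea can be illustrated with a minimal sketch: symbolic rules fire on a transaction, a learned linear layer weights them, and the explanation is simply the list of rules that fired. The rule names, thresholds, and weights below are illustrative assumptions, not details from the article's model.

```python
# Hypothetical neuro-symbolic scorer: interpretable rule features weighted by
# learned coefficients. Collecting the fired rules during scoring yields a
# deterministic explanation at essentially zero extra cost.
RULES = [
    ("amount_over_1000", lambda tx: tx["amount"] > 1000.0, 1.4),
    ("foreign_country", lambda tx: tx["country"] != tx["home_country"], 0.9),
    ("night_time", lambda tx: tx["hour"] < 6, 0.5),
]
BIAS = -1.2  # illustrative learned bias

def score_with_explanation(tx):
    score = BIAS
    fired = []
    for name, predicate, weight in RULES:
        if predicate(tx):
            score += weight
            fired.append((name, weight))
    is_fraud = score > 0.0
    # The explanation is deterministic and needs no background dataset:
    # it is exactly the set of rules that contributed to the score.
    explanation = [f"{name} (+{w})" for name, w in fired]
    return is_fraud, score, explanation

tx = {"amount": 2500.0, "country": "FR", "home_country": "US", "hour": 3}
flag, score, why = score_with_explanation(tx)
```

Because the fired-rule list is assembled inside the same loop that computes the score, explanation latency is bounded by the forward pass itself, which is the property the article attributes to the proposed model.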

Details

The article discusses the challenges of using SHAP, a popular explainable-AI technique, in production environments. SHAP takes about 30 ms to explain a fraud prediction, its explanations are stochastic, it runs only after the decision has been made, and it requires a background dataset to be maintained at inference time. In contrast, the proposed neuro-symbolic model generates a deterministic, human-readable explanation as a by-product of the forward pass in 0.9 ms, a 33x speedup over SHAP. The model achieves the same fraud recall as SHAP on the Kaggle Credit Card Fraud dataset, demonstrating its effectiveness for real-time fraud detection.
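The quoted speedup follows directly from the two latency figures in the summary; a back-of-envelope check (using only the 30 ms and 0.9 ms numbers from the article) also shows what each budget means for per-core throughput:

```python
# Latency figures from the article; throughput numbers are derived, not quoted.
shap_ms = 30.0    # SHAP explanation latency per prediction
model_ms = 0.9    # neuro-symbolic explanation latency per prediction

speedup = shap_ms / model_ms                # ~33x, matching the article
shap_per_sec = 1000.0 / shap_ms             # ~33 explained predictions/s
model_per_sec = 1000.0 / model_ms           # ~1,111 explained predictions/s
```

At roughly 33 explanations per second per core, a post-hoc SHAP step becomes the bottleneck for any high-volume transaction stream, which is the production constraint motivating the article.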

