Integrating Machine Learning into a Digital Twin App
This article describes a workflow for integrating machine learning models into a Phoenix app, including exporting data, training models offline, and importing predictions back into the app.
Why it matters
Integrating machine learning into applications in a scalable, maintainable way is a key challenge. This article demonstrates a practical approach for doing so within a Phoenix-based digital twin system.
Key Points
1. Export telemetry, OEE, anomalies, and other data from the app to CSV files
2. Train machine learning models offline using Elixir scripts or Python/R/Julia
3. Import the trained model's predictions back into the app as JSON Lines data
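The three steps above can be sketched at the command line. The mix task names come from the article itself; the training script name and its language are illustrative stand-ins:

```shell
# 1. Export historical data (telemetry, OEE, anomalies, ...) to CSV files
mix export.ml

# 2. Train offline in the language of your choice; 'train.py' is a
#    hypothetical script -- anything that emits JSON Lines works here
python train.py

# 3. Bulk-insert the resulting predictions into the app
mix import.ml.predictions
```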
Details
The article outlines a three-step process for incorporating machine learning into a digital twin application built with Phoenix.

First, the 'mix export.ml' task exports relevant data (telemetry, OEE, anomalies, etc.) from the app to a directory of CSV files. This provides the historical data needed to train machine learning models.

Second, the models are trained offline, either in Elixir scripts or in other languages and frameworks such as Python, R, or Julia. The trained models produce JSON Lines output that can be imported back into the app.

Finally, the 'mix import.ml.predictions' task bulk-inserts the model predictions into the 'ml_predictions' table, where they can be displayed in the app's 'MlPredictionsLive' LiveView.

This closed-loop approach keeps model training separate from the core app, allowing for production-ready deployment without GPU drivers or a full MLOps pipeline.
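As a minimal sketch of the offline training stage, the following Python script stands in for step two: it reads a telemetry CSV shaped like an export (the column names and threshold here are illustrative assumptions, not the app's actual schema), fits a trivial mean/standard-deviation anomaly model, and writes predictions as JSON Lines ready for bulk import:

```python
import csv
import io
import json
import statistics

# Hypothetical exported telemetry CSV; the columns are illustrative,
# not the app's actual export schema.
telemetry_csv = """sensor_id,timestamp,value
s1,2024-01-01T00:00:00Z,10.0
s1,2024-01-01T00:01:00Z,10.5
s1,2024-01-01T00:02:00Z,9.8
s1,2024-01-01T00:03:00Z,25.0
"""

rows = list(csv.DictReader(io.StringIO(telemetry_csv)))
values = [float(r["value"]) for r in rows]

# "Train" a trivial model: score each reading by its distance from the
# mean in standard deviations, and flag large deviations as anomalies.
mean = statistics.mean(values)
std = statistics.pstdev(values)
THRESHOLD = 1.5  # illustrative cutoff, not a tuned value

predictions = []
for r in rows:
    score = abs(float(r["value"]) - mean) / std if std else 0.0
    predictions.append({
        "sensor_id": r["sensor_id"],
        "timestamp": r["timestamp"],
        "score": round(score, 3),
        "anomaly": score > THRESHOLD,
    })

# Emit JSON Lines, one prediction per line, ready for import.
with open("predictions.jsonl", "w") as f:
    for p in predictions:
        f.write(json.dumps(p) + "\n")
```

The JSON Lines format fits the import step well: each line is an independent record, so a bulk insert can stream the file without loading it all into memory.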