Scaling Seismic Foundation Models on AWS
TGS achieved near-linear scaling for distributed training and expanded context windows for their Vision Transformer-based Seismic Foundation Model (SFM) using Amazon SageMaker HyperPod, cutting training time from 6 months to 5 days.
Why it matters
This news highlights the potential for cloud-based distributed training to accelerate the development of advanced AI models for seismic analysis, with significant time and cost savings.
Key Points
- Distributed training of seismic foundation models on AWS using Amazon SageMaker HyperPod
- Achieved near-linear scaling, reducing training time from 6 months to 5 days
- Enabled analysis of larger seismic volumes than previously possible
Details
This article describes how TGS, a geoscience data company, scaled the training of its Vision Transformer-based Seismic Foundation Model (SFM) using Amazon SageMaker HyperPod on AWS. By distributing training across the cluster, TGS achieved near-linear scaling, cutting training time from 6 months to just 5 days. The added capacity also let the team expand the SFM's context windows, enabling analysis of larger seismic volumes than was previously possible. The joint solution from TGS and AWS demonstrates the value of cloud-based distributed training for advancing AI-driven seismic analysis in the oil and gas industry.
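As a back-of-the-envelope check on the "near-linear" claim, the reported reduction from roughly 6 months to 5 days implies a speedup of about 36x. The sketch below computes the scaling efficiency (achieved speedup divided by worker count) for a few hypothetical cluster sizes; the article does not disclose how many accelerators TGS used, so those counts are illustrative assumptions only.

```python
# Back-of-the-envelope check of the reported speedup: ~6 months -> 5 days.
# Worker counts below are hypothetical; the article does not publish them.
baseline_days = 6 * 30          # ~6 months, approximated as 180 days
scaled_days = 5
speedup = baseline_days / scaled_days  # ~36x

def scaling_efficiency(speedup: float, num_workers: int) -> float:
    """Fraction of ideal linear scaling achieved (1.0 = perfectly linear)."""
    return speedup / num_workers

for workers in (36, 48, 64):    # hypothetical cluster sizes
    print(f"{workers} workers: efficiency = {scaling_efficiency(speedup, workers):.2f}")
```

Under these assumptions, a 36x speedup on 36 workers would be perfectly linear scaling, while the same speedup on 48 or 64 workers would indicate sublinear efficiency of 0.75 or 0.56, respectively.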