Meta AI Researchers Release MapAnything: An End-to-End Transformer Architecture that Directly Regresses Factored, Metric 3D Scene Geometry

A team of researchers from Meta Reality Labs and Carnegie Mellon University has released MapAnything, an end-to-end transformer architecture that directly regresses factored, metric 3D scene geometry from images and optional sensor inputs. Released under Apache 2.0 with full training and benchmarking code, MapAnything moves beyond specialist pipelines by supporting more than 12 distinct 3D vision tasks in a single feed-forward pass.

Why a Universal Model for 3D Reconstruction?
Image-based 3D reconstruction has traditionally relied on fragmented pipelines: feature detection, two-view pose estimation, bundle adjustment, multi-view stereo, or monocular depth inference. While effective, these modular solutions require task-specific tuning, optimization, and heavy post-processing.
Recent transformer-based feed-forward models such as DUSt3R, MASt3R, and VGGT simplified parts of this pipeline but remained limited: fixed numbers of views, rigid camera assumptions, or reliance on coupled representations that required expensive optimization.
MapAnything overcomes these constraints by:
- Accepting up to 2,000 input images in a single inference run.
- Flexibly using auxiliary data such as camera intrinsics, poses, and depth maps.
- Producing direct metric 3D reconstructions without bundle adjustment.
The model's factored scene representation, composed of ray maps, depth, poses, and a global scale factor, provides modularity and generality unmatched by prior approaches.
Architecture and Representation
At its core, MapAnything employs a multi-view alternating-attention transformer. Each input image is encoded with DINOv2 ViT-L features, while optional inputs (rays, depth, poses) are encoded into the same latent space via shallow CNNs or MLPs. A learnable scale token enables metric normalization across views.
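To make the alternating-attention idea concrete, here is a minimal PyTorch sketch of one such block. It is a simplification under assumptions (token shapes, norm placement, and head counts are illustrative); the actual MapAnything block layout may differ.

```python
import torch
import torch.nn as nn

class AlternatingAttentionBlock(nn.Module):
    """Alternates intra-view (per-image) self-attention with cross-view
    attention over the tokens of all views. Illustrative simplification."""

    def __init__(self, dim: int = 1024, heads: int = 16):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, V, N, D) = batch, views, tokens per view, feature dim
        B, V, N, D = tokens.shape
        # 1) Intra-view attention: each view attends only to its own tokens.
        x = tokens.reshape(B * V, N, D)
        h = self.norm1(x)
        x = x + self.intra(h, h, h, need_weights=False)[0]
        # 2) Cross-view attention: tokens from all views attend to each other.
        x = x.reshape(B, V * N, D)
        h = self.norm2(x)
        x = x + self.cross(h, h, h, need_weights=False)[0]
        # 3) Position-wise feed-forward.
        x = x + self.mlp(self.norm3(x))
        return x.reshape(B, V, N, D)

# Toy usage: 2 views of 196 patch tokens each, ViT-L-sized 1024-dim features.
out = AlternatingAttentionBlock()(torch.randn(1, 2, 196, 1024))  # (1, 2, 196, 1024)
```

Stacking blocks of this form lets information flow both within each image and across all views in a single feed-forward pass, which is what allows the model to scale to many input views without iterative optimization.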
The network outputs a factored representation:
- Per-view ray directions (camera calibration).
- Depth along rays, predicted up-to-scale.
- Camera poses relative to a reference view.
- A single metric scale factor converting local reconstructions into a globally consistent frame.
This explicit factorization avoids redundancy, allowing the same model to handle monocular depth estimation, multi-view stereo, structure-from-motion (SfM), and depth completion without specialized heads.
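As a worked example of how the factored outputs compose, the NumPy sketch below converts per-view rays, up-to-scale depth, a relative pose, and the global metric scale into world-space points. The shapes and conventions (camera-to-world poses, unit-norm ray directions, a shared up-to-scale normalization for depth and translation) are assumptions for illustration; the released code may use different conventions.

```python
import numpy as np

def compose_metric_points(ray_dirs, depth, R, t, metric_scale):
    """Combine factored outputs into metric 3D points in the reference frame.

    ray_dirs:      (H, W, 3) unit ray directions in the camera frame.
    depth:         (H, W)    up-to-scale depth along each ray.
    R, t:          camera-to-world rotation (3, 3) and translation (3,),
                   relative to the reference view, up-to-scale.
    metric_scale:  scalar converting the up-to-scale reconstruction to meters.
    """
    pts_cam = depth[..., None] * ray_dirs   # (H, W, 3) points in the camera frame
    pts_ref = pts_cam @ R.T + t             # rotate/translate into the reference frame
    return metric_scale * pts_ref           # apply the single global metric scale

# Toy usage with random values, just to check shapes.
H, W = 4, 5
rays = np.random.randn(H, W, 3)
rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
depth = np.random.rand(H, W) + 0.1
pts = compose_metric_points(rays, depth, np.eye(3), np.zeros(3), 2.0)  # (4, 5, 3)
```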

Training Strategy
MapAnything was trained on 13 diverse datasets spanning indoor, outdoor, and synthetic domains, including BlendedMVS, Mapillary Planet-Scale Depth, ScanNet++, and TartanAirV2. Two variants are released:
- An Apache 2.0 licensed model trained on six datasets.
- A CC BY-NC model trained on all 13 datasets for stronger performance.
Key training techniques include:
- Probabilistic input dropout: During training, geometric inputs (rays, depth, pose) are provided with varying probabilities, enabling robustness across heterogeneous configurations (a minimal sketch follows this list).
- Covisibility-based sampling: Ensures input views have meaningful overlap, supporting reconstruction of up to 100+ views.
- Factored losses in log-space: Depth, scale, and pose are optimized using scale-invariant and robust regression losses to improve stability (see the loss sketch after the training-setup note below).
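Here is a minimal sketch of the probabilistic input-dropout idea; the keep-probabilities and modality names are illustrative assumptions, not the paper's exact values.

```python
import random

# Illustrative per-modality keep-probabilities (not the paper's exact values).
KEEP_PROB = {"rays": 0.5, "depth": 0.5, "pose": 0.5}

def sample_training_inputs(view: dict) -> dict:
    """Randomly drop auxiliary geometric inputs for one view during training,
    so the model learns to work with any subset, from images-only to fully posed."""
    inputs = {"image": view["image"]}  # the image itself is always kept
    for key, prob in KEEP_PROB.items():
        if key in view and random.random() < prob:
            inputs[key] = view[key]
    return inputs
```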
Training ran on 64 H200 GPUs with mixed precision, gradient checkpointing, and a curriculum schedule that scaled from 4 to 24 input views.
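To illustrate what a scale-invariant, robust regression loss in log-space can look like, here is a minimal NumPy sketch for the depth term only, using median alignment and an L1 penalty. This is a generic formulation under stated assumptions, not necessarily the exact loss used by MapAnything.

```python
import numpy as np

def scale_invariant_log_depth_loss(pred_depth, gt_depth, valid_mask):
    """Robust, scale-invariant depth loss in log-space (illustrative).

    Aligning by the median log-ratio cancels any global scale difference,
    and the L1 penalty keeps the loss robust to outliers."""
    log_ratio = np.log(pred_depth[valid_mask]) - np.log(gt_depth[valid_mask])
    aligned = log_ratio - np.median(log_ratio)   # remove a global (log-)scale offset
    return np.mean(np.abs(aligned))              # robust L1 penalty

# Example: a prediction off by a constant 2x scale incurs (near-)zero loss.
gt = np.random.rand(64, 64) + 0.5
pred = 2.0 * gt
print(scale_invariant_log_depth_loss(pred, gt, np.ones_like(gt, dtype=bool)))  # ~0.0
```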
Benchmarking Results
Multi-View Dense Reconstruction
On ETH3D, ScanNet++ v2, and TartanAirV2-WB, MapAnything achieves state-of-the-art (SoTA) performance across pointmap, depth, pose, and ray estimation. It surpasses baselines such as VGGT and Pow3R even when restricted to images only, and improves further with calibration or pose priors.
For example:
- Pointmap relative error (rel) improves to 0.16 with images only, compared to 0.20 for VGGT.
- With images + intrinsics + poses + depth, the error drops to 0.01 while reaching >90% inlier ratios.
Two-View Reconstruction
Against DUSt3R, MASt3R, and Pow3R, MapAnything consistently outperforms across scale, depth, and pose accuracy. Notably, with additional priors, it achieves >92% inlier ratios on two-view tasks, significantly beyond prior feed-forward models.
Single-View Calibration
Despite not being trained specifically for single-image calibration, MapAnything achieves an average angular error of 1.18°, outperforming AnyCalib (2.01°) and MoGe-2 (1.95°).
Depth Estimation
On the Robust-MVD benchmark:
- MapAnything sets a new SoTA for multi-view metric depth estimation.
- With auxiliary inputs, its error rates rival or surpass specialized depth models such as MVSA and Metric3D v2.
Overall, the benchmarks confirm a 2× improvement over prior SoTA methods on many tasks, validating the benefits of unified training.
Key Contributions
The research team highlights four main contributions:
- Unified Feed-Forward Model capable of handling more than 12 problem settings, from monocular depth to SfM and stereo.
- Factored Scene Representation enabling explicit separation of rays, depth, pose, and metric scale.
- State-of-the-Art Performance across diverse benchmarks with fewer redundancies and greater scalability.
- Open-Source Release including data processing, training scripts, benchmarks, and pretrained weights under Apache 2.0.
Conclusion
MapAnything sets a new bar in 3D vision by unifying multiple reconstruction tasks (SfM, stereo, depth estimation, and calibration) under a single transformer model with a factored scene representation. It not only outperforms specialist methods across benchmarks but also adapts seamlessly to heterogeneous inputs, including intrinsics, poses, and depth. With open-source code, pretrained models, and support for over 12 tasks, MapAnything lays the groundwork for a truly general-purpose 3D reconstruction backbone.