Snap Research unveils advancements in AR, AI, and creative tools at major conferences

Evan Spiegel

The Snap Research team presented new work in augmented reality (AR), generative AI, recommendation systems, and creative tools at several industry conferences in 2025.

At SIGGRAPH 2025 in Vancouver, Canada, the team showcased research including “Nested Attention,” a method to improve identity preservation in image generation models. This approach helps generate consistent and accurate images of specific subjects across different styles and scenes. Another project, “InstantRestore,” offers a single-step solution for restoring degraded face images while retaining identity-specific features. The “Set-and-Sequence” framework was also introduced to address video generation with dynamic concepts by learning unique motion patterns over time.

Other highlights from SIGGRAPH include “DuetGen,” which generates synchronized two-person dance motions from music, and “Be Decisive,” which uses a neural network to guide multi-subject image generation for clearer boundaries between subjects.

At KDD 2025 in Toronto, Ontario, Canada, Snap Research presented GiGL, an open-source library for training graph neural networks (GNNs) at scale, handling graphs with hundreds of millions of nodes and billions of edges. GiGL supports key machine learning applications at Snap such as user growth and content ranking. The team also introduced PRISM, a strategy designed to make recommendation model training more efficient by replacing embedding weight decay with a simpler computation at the start of training.
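To make the PRISM idea concrete, the sketch below contrasts conventional per-step embedding weight decay with a single up-front computation. This is a loose illustration of the trade-off described above, not PRISM itself: the article does not specify what the "simpler computation" is, so the one-time rescaling used here (and all names and parameters) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table: 1,000 items, 16-dimensional vectors.
emb = rng.normal(size=(1000, 16))

def sgd_step_with_decay(table, grad, lr=0.01, weight_decay=1e-4):
    """Conventional approach: L2 weight decay is folded into the
    gradient and paid on every optimizer step, for the whole run."""
    return table - lr * (grad + weight_decay * table)

def one_time_rescale(table, target_norm=1.0):
    """Hypothetical stand-in for a one-time computation at the start
    of training: normalize each embedding row once, then train with
    plain SGD and no per-step decay term."""
    norms = np.linalg.norm(table, axis=1, keepdims=True)
    return table * (target_norm / np.maximum(norms, 1e-8))

# One cheap pass before training begins...
emb_once = one_time_rescale(emb)

# ...versus the recurring cost: even with zero gradient, the decay
# term shrinks the whole table on every step.
emb_decayed = sgd_step_with_decay(emb, np.zeros_like(emb))
```

For large recommendation models, the embedding table dominates the parameter count, so removing a per-step operation over it can be a meaningful saving; again, the actual mechanism PRISM uses is described in the paper, not here.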

Additional work included AutoCDSR for improving cross-domain sequential recommendation accuracy by sharing relevant knowledge while filtering out noise. The SnapGen model was highlighted as a high-performance text-to-image system that runs on mobile devices and generates images in under two seconds. Its extension, SnapGen-V, produces five-second videos on mobile devices within five seconds.

Other research covered new models like 4Real-Video for realistic 4D video diffusion; Stable Flow for easy image editing without complex hardware or training; Omni-ID for holistic facial representation in generative tasks; PrEditor3D for quick 3D shape editing; MM-Graph as a benchmark combining visual and textual data for multimodal graph learning; Video Alchemist for generating videos from text prompts and reference images; Mind the Time for temporally controlled video generation; Video Motion Transfer using diffusion transformers; Wonderland for creating 3D scenes from single photos; and AC3D for improved camera control in video generation models.

According to Snap Research, all models and work described are intended solely for research purposes.



