After big concerts and festivals, your Snapchat feed is likely full of footage of the stage interspersed with your friends rocking out. With its new Crowd Surf feature, Snapchat wants to take your second-hand viewing experience to the next level.
Built in-house by Snap’s research team, the proprietary machine-learning technology stitches together Snaps submitted to Our Story, using geolocation data and timestamps to align the clips’ audio into a semi-seamless video.
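Snap hasn’t published how the stitching actually works, but the timestamp-alignment idea can be sketched roughly like this (all names and logic below are illustrative, not Snap’s code): clips are ordered by start time, overlapping ones are spliced end-to-end, and a gap in coverage starts a new snippet.

```python
from dataclasses import dataclass

# Hypothetical sketch: Crowd Surf's real pipeline is proprietary. This
# models one piece of the idea -- using timestamps to line up overlapping
# clips from different users into a single continuous timeline.

@dataclass
class Snap:
    user: str
    start: int  # seconds since the event began
    end: int

def stitch_timeline(snaps):
    """Order clips by start time and greedily splice overlapping ones,
    cutting to the next clip where the current one runs out."""
    ordered = sorted(snaps, key=lambda s: (s.start, -s.end))
    timeline = []
    cursor = None  # end time of the audio stitched so far
    for snap in ordered:
        if cursor is not None and snap.end <= cursor:
            continue  # clip adds nothing new
        if cursor is None or snap.start > cursor:
            # gap in coverage: start a fresh segment (a "snippet")
            timeline.append((snap.user, snap.start, snap.end))
        else:
            # overlap: cut to this clip where the previous one ends
            timeline.append((snap.user, cursor, snap.end))
        cursor = snap.end
    return timeline

clips = [Snap("alice", 0, 10), Snap("bob", 8, 20), Snap("carol", 25, 30)]
print(stitch_timeline(clips))
# [('alice', 0, 10), ('bob', 10, 20), ('carol', 25, 30)]
```

The gap between 20 and 25 seconds in the example is why viewers only get snippets of full songs: with no clip covering that stretch, the stitched video has to jump ahead.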
The technology was put to the test this weekend during Lorde’s performance at San Francisco’s Outside Lands.
A button at the bottom of the screen lets viewers watch from multiple perspectives. During Lorde’s performance, you get views from the VIP section and from people scattered at various angles around the stage.
The technology relies on many people taking Snaps at the same time from different spots. In the case of the Lorde concert, you only get snippets of full songs, and while the audio is seamless, the video jumping from person to person is disorienting. You also get the classic front-facing shots of a random person’s face.
For the video to be worth watching, a lot of people would need to be taking steady Snaps, but in theory the technology could be used for most audio-centric events, like concerts and speeches.
A Snapchat spokesperson told Business Insider that Crowd Surf will begin slowly rolling out at select events.
Crowd Surf is just the latest update to Snap’s Stories feature. Since launching it in 2014, Snap has tried to pull viewers in with curated event coverage and its Discover content. The company has struggled since its March IPO, and continues to search for new ways to serve ads to the millions of viewers who open the app every day.
Snap has faced tough competition from Facebook, which launched its own Stories features on both Facebook and Instagram. Facebook hasn’t made any moves into editorial curation, however, and for the moment Snap rules that space.