AR prototyping in Lens Studio

Last week, I dabbled in Lens Studio to get an understanding of what the tool can do.

Very quickly, I realized Lens Studio goes beyond simple face painting or 3D animal visualization.

Right away, I noticed the ML function, which allows users to import custom machine learning models into Lens Studio. This opens the door to a variety of advanced experiences.

Within a few hours, I was able to build a hi-fi color decoder prototype that recognizes an object and automatically displays its corresponding color codes.

Besides the ML function, customizable UI buttons and API integration also stand out, since creators can extend a lens's functionality beyond Snapchat. AR navigation, for example, integrates with the Google Maps API and uses native UI buttons to trigger events.

Digging further, the persistent storage function caught my attention because it can selectively read and write data between sessions. Returning to the navigation example: if you accidentally close the Snapchat app, your navigation state will still be there when you re-enter the app.
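The idea above can be sketched in a few lines of Lens Studio-style JavaScript. In a real lens, `global.persistentStorageSystem.store` is provided by the Lens Studio runtime; here a small stand-in object is included so the sketch runs standalone, and the key name `lastDestination` is just an illustrative choice.

```javascript
// Stand-in for the Lens Studio runtime so this sketch runs outside Snapchat.
// Inside a real lens script, global.persistentStorageSystem is provided for you.
var global = {
  persistentStorageSystem: {
    store: (function () {
      var data = {};
      return {
        putString: function (key, value) { data[key] = value; },
        getString: function (key) { return data[key] || ""; }
      };
    })()
  }
};

var store = global.persistentStorageSystem.store;

// Save the user's navigation destination so it survives closing the app.
function saveDestination(name) {
  store.putString("lastDestination", name);
}

// Restore it on the next session (empty string if nothing was saved).
function loadDestination() {
  return store.getString("lastDestination");
}

saveDestination("Gate B12");
console.log(loadDestination()); // "Gate B12"
```

In an actual lens, the write would happen when the user picks a destination and the read would run on lens start, so the navigation overlay can rebuild itself after a restart.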

Exploring Lens Studio over the last few days has given me a glimpse of what the tool is capable of. It’s time to move on.

Next week, I will be exploring SparkAR, a tool similar to Lens Studio.
