In depth

Bang & Olufsen - See yourself in Sound 

HELLO MONDAY/DEPT® was approached by Bang & Olufsen with a brief to create a social-first brand awareness campaign. The primary objective was to build a captivating, distinctly sensory musical experience that extended beyond the product itself. The challenge was to create a web app that worked for both Spotify and non-Spotify users, and to generate a shareable asset that matched the avatar output of the in-app experience.


The ultimate goal was to generate an infinite number of avatars, each accompanied by a custom motion-capture dance and sound.


Behind the technology

The Bang & Olufsen - Sound and Vision for Life web app was built with both WebGL and Svelte. For the 3D graphics, we adopted OGL, a lightweight WebGL framework that allowed us to deliver rich visual experiences without compromising on load times or performance.
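To show how lightweight OGL is in practice, here is a minimal scene in the style of the library's own examples. It is an illustrative sketch rather than the project's code: the shader, geometry and sizing logic are placeholders, and the canvas lives on the main thread for clarity (the production setup uses an offscreen canvas, covered below).

```ts
// Minimal OGL scene sketch: one mesh, one render loop. Not the production code.
import { Renderer, Camera, Transform, Program, Mesh, Box } from 'ogl';

const renderer = new Renderer({ dpr: Math.min(window.devicePixelRatio, 2) });
const gl = renderer.gl;
document.body.appendChild(gl.canvas as HTMLCanvasElement);

const camera = new Camera(gl, { fov: 35 });
camera.position.z = 5;

function resize() {
  renderer.setSize(window.innerWidth, window.innerHeight);
  camera.perspective({ aspect: gl.canvas.width / gl.canvas.height });
}
window.addEventListener('resize', resize);
resize();

const scene = new Transform();

// Placeholder shaders; OGL supplies the matrices as uniforms when rendering a Mesh.
const program = new Program(gl, {
  vertex: /* glsl */ `
    attribute vec3 position;
    uniform mat4 modelViewMatrix, projectionMatrix;
    void main() { gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0); }`,
  fragment: /* glsl */ `
    precision highp float;
    void main() { gl_FragColor = vec4(0.9, 0.6, 0.3, 1.0); }`,
});

const mesh = new Mesh(gl, { geometry: new Box(gl), program });
mesh.setParent(scene);

requestAnimationFrame(function update(t) {
  requestAnimationFrame(update);
  mesh.rotation.y = t * 0.0005;
  renderer.render({ scene, camera });
});
```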


The experience is hosted on Google App Engine, with Firestore as our database solution. This combination gave us a highly scalable and reliable infrastructure, ensuring the service remains accessible and robust.


Tech Breakdown:



  • Website built using SvelteKit with TypeScript and SCSS for styling

  • WebGL framework is OGL, which runs in an offscreen canvas to prevent UI lag (see the sketch after this list)

  • Physics engine is Rapier, run via WASM

  • Experience hosted on Google App Engine, with Firestore as the database

  • GitHub for code hosting, with CI via GitHub Actions

  • GSAP for animations, combined with CSS animations where possible for improved performance

  • Lottie for some UI elements, such as the logo hover and emoji animations
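The offscreen-canvas bullet is worth unpacking. A common way to achieve it is to transfer control of the on-page canvas to a Web Worker and create the WebGL context there, so rendering work stays off the main thread. The sketch below shows that wiring under our own assumptions: the file names are hypothetical, and passing the OffscreenCanvas directly into OGL's Renderer is our reading of how this can be done, not a detail taken from the project.

```ts
// main.ts (hypothetical): hand the on-page canvas over to a worker.
const canvas = document.querySelector('canvas')!;
const offscreen = canvas.transferControlToOffscreen();

const worker = new Worker(new URL('./gl-worker.ts', import.meta.url), { type: 'module' });
worker.postMessage({ canvas: offscreen, dpr: Math.min(devicePixelRatio, 2) }, [offscreen]);
```

```ts
// gl-worker.ts (hypothetical): build the renderer against the transferred canvas.
import { Renderer } from 'ogl';

self.onmessage = (e: MessageEvent<{ canvas: OffscreenCanvas; dpr: number }>) => {
  const { canvas, dpr } = e.data;
  // OGL only calls getContext() on whatever canvas it is given,
  // so an OffscreenCanvas works here (the cast quiets the DOM typings).
  const renderer = new Renderer({ canvas: canvas as unknown as HTMLCanvasElement, dpr });
  // ...scene setup and render loop as in the earlier snippet, now off the main thread.
};
```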


Mood Concept

We created a diverse component library of assets to represent the different moods and placed them along the mood matrix. The base variables that determine the mood matrix are fixed and cannot be changed. Boxing each mood in this way gave us a base from which to define a unique aesthetic and a specific visual style and set of 3D assets for each of the 12 moods. It also allowed us to brief the dancers clearly when creating 12 unique motion-capture dance sets, and guided the sound designer when creating 12 unique audio libraries, all synced to each avatar mood. Users can customize certain variables within this configuration, but they cannot jump to, or be allocated to, a different mood.
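To make the idea of a fixed 12-mood grid concrete, here is a small sketch of how a mood matrix lookup could work. The layout (4 valence columns by 3 energy rows) and the mood names are our own placeholders for illustration; the real matrix is not published.

```ts
// Hypothetical 4x3 mood matrix: valence on the x-axis, energy on the y-axis.
// Grid layout and mood names are assumptions for illustration only.
const MOODS = [
  ['melancholic', 'wistful', 'serene',  'content'],   // low energy
  ['brooding',    'steady',  'groovy',  'cheerful'],  // mid energy
  ['intense',     'driving', 'upbeat',  'euphoric'],  // high energy
] as const;

export type Mood = (typeof MOODS)[number][number];

const clamp01 = (v: number) => Math.min(1, Math.max(0, v));

// Map a (valence, energy) pair in [0, 1] to one of the 12 mood cells.
export function moodFor(valence: number, energy: number): Mood {
  const col = Math.min(3, Math.floor(clamp01(valence) * 4));
  const row = Math.min(2, Math.floor(clamp01(energy) * 3));
  return MOODS[row][col];
}

// e.g. moodFor(0.82, 0.14) -> 'content'
```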


Rokoko Motion Capture

As we wanted the experience to feel customized and to accurately represent the 12 moods in the matrix, we produced a series of custom dances with two dancers from Copenhagen Contemporary Dance School, using the Rokoko motion capture Smartsuit and gloves. We explored several solutions but found this the most versatile: the suit is fitted with strategically placed sensors that capture the wearer's movements and communicate wirelessly with a computer or mobile device, transmitting the captured motion data in real time.


We also chose this motion capture suit because, unlike traditional motion capture systems that require a dedicated studio, multiple cameras and a more complex setup, it is portable. Its software also gave us the tools to refine and edit the motion capture data afterwards, including cleaning up noise and exporting the motion data in a compatible format.


Avatar generation approach

All textures, dances, sounds and 3D assets are sorted and placed “on” the mood matrix. When generating an avatar, we use this two-dimensional board to pick assets based on where the user sits on the mood matrix. Each mood has its own pool of assets for the avatar generator to pick from. Assets are furthermore sorted within each mood, so that we pick the ones that best match the user's musical profile.
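A sketch of what that per-mood asset picking could look like is below. The schema, field names and the "closest point on the matrix" ranking are assumptions for illustration, and it reuses the moodFor helper from the earlier mood matrix sketch.

```ts
import { type Mood, moodFor } from './mood-matrix'; // hypothetical module holding the earlier sketch

// Illustrative asset schema, not the production data model.
interface MoodAsset {
  url: string;
  valence: number; // where this asset sits on the matrix
  energy: number;
}

type AssetPools = Record<Mood, { textures: MoodAsset[]; dances: MoodAsset[]; sounds: MoodAsset[] }>;

// Rank a pool by distance to the user's exact position on the matrix,
// so two users in the same mood cell can still receive different assets.
function closest(pool: MoodAsset[], valence: number, energy: number): MoodAsset {
  return [...pool].sort(
    (a, b) =>
      Math.hypot(a.valence - valence, a.energy - energy) -
      Math.hypot(b.valence - valence, b.energy - energy),
  )[0];
}

export function buildAvatar(pools: AssetPools, valence: number, energy: number) {
  const mood = moodFor(valence, energy);
  const { textures, dances, sounds } = pools[mood];
  return {
    mood,
    texture: closest(textures, valence, energy),
    dance: closest(dances, valence, energy),
    sound: closest(sounds, valence, energy),
  };
}
```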


Shareable Assets

The shareable asset is generated client-side and shared using the Web Share API.
This happens entirely in the user's browser, without the need for a backend. That not only provides a better user experience but is also extremely cost-effective: we didn't have to move video data to and from the client, or run long video-editing tasks on the server.
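As a rough illustration, sharing a client-side-generated video with the Web Share API can be as small as the sketch below; the file name, MIME type and download fallback are our assumptions, not details from the project.

```ts
// Sketch: share the in-browser-generated video via the Web Share API,
// falling back to a plain download when file sharing isn't supported.
export async function shareAvatarVideo(videoBlob: Blob) {
  const file = new File([videoBlob], 'my-avatar.mp4', { type: 'video/mp4' });

  if (navigator.canShare?.({ files: [file] })) {
    await navigator.share({
      files: [file],
      title: 'See Yourself in Sound',
    });
    return;
  }

  // Fallback: trigger a regular download of the blob.
  const url = URL.createObjectURL(videoBlob);
  const a = Object.assign(document.createElement('a'), { href: url, download: file.name });
  a.click();
  URL.revokeObjectURL(url);
}
```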


We run the video generation in a Web Worker to keep it as performant as possible without making the interface feel laggy.


Generating the videos required cutting-edge browser APIs, both to capture the WebGL output and to render UI composited on top of the video. This had to work across all major browsers and devices, which led us to build three different implementations.
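The article doesn't name the three implementations, so the branches below are only a plausible guess at how such a capability check might be structured: prefer WebCodecs where available, fall back to MediaRecorder on a captured canvas stream, and otherwise assemble the video frame by frame.

```ts
// Hypothetical capability check for picking a video pipeline; the project's
// actual three implementations aren't specified, so these branches are assumptions.
type VideoPipeline = 'webcodecs' | 'mediarecorder' | 'frame-by-frame';

export function pickVideoPipeline(canvas: HTMLCanvasElement): VideoPipeline {
  // WebCodecs: encode frames directly with VideoEncoder (fastest path).
  if ('VideoEncoder' in window) return 'webcodecs';

  // MediaRecorder: record a MediaStream captured from the canvas in real time.
  if ('MediaRecorder' in window && typeof canvas.captureStream === 'function') {
    return 'mediarecorder';
  }

  // Last resort: grab frames one by one and assemble the video another way.
  return 'frame-by-frame';
}
```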


Type:

Experiences

Client:

Bang & Olufsen

Deliverables:

3D Modeling, Animation, Concept, Design, Development, UI, UX, Copywriting

Bang & Olufsen - See yourself in Sound 

Bang & Olufsen partnered with HELLO MONDAY/DEPT® to create a web app that gives every user a uniquely sensory experience. ‘See Yourself in Sound’ is designed to generate a vibrant, one-of-a-kind avatar for every visitor. Each character is crafted in real time by analyzing your Spotify sound profile and aligning your overall mood and energy levels to all the aspects of a 3D character: texture, shapes, body movement and more. Visitors without Spotify can also generate their avatar via a fun, emoji-driven process. After your 3D character is generated, it can be shared with the world via link or a video that’s created just for you.


Mood concept

The mood profile for each user is defined by the average valence and energy of their Spotify top tracks, or through a short questionnaire in the alternate flow. We used a mood matrix to craft 12 base avatars that each have a specific aesthetic and a unique set of 3D components mapped to them.
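As a hedged sketch, those averages could be derived from the Spotify Web API's top-tracks and audio-features endpoints roughly as below; auth, paging, null entries and error handling are omitted, and the exact endpoints and weighting the project used aren't documented here.

```ts
// Sketch: average valence/energy of a user's Spotify top tracks.
async function spotifyGet<T>(path: string, token: string): Promise<T> {
  const res = await fetch(`https://api.spotify.com/v1${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json() as Promise<T>;
}

export async function moodProfile(token: string) {
  const top = await spotifyGet<{ items: { id: string }[] }>('/me/top/tracks?limit=50', token);
  const ids = top.items.map((t) => t.id).join(',');

  const { audio_features } = await spotifyGet<{
    audio_features: { valence: number; energy: number }[];
  }>(`/audio-features?ids=${ids}`, token);

  const avg = (key: 'valence' | 'energy') =>
    audio_features.reduce((sum, f) => sum + f[key], 0) / audio_features.length;

  return { valence: avg('valence'), energy: avg('energy') };
}
```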

Shareable assets

The last step of the experience allows the user to share a recording of their avatar on social media. For this we built a custom rendering engine that made use of cutting-edge web APIs to capture, composite and encode the video directly in the browser.

Non-Spotify flow

Non-Spotify users can also answer a short questionnaire to define their musical profile and receive a unique avatar. For this experience, we created a custom set of B&O emojis.

Rokoko Motion Capture

Using the Rokoko Motion Capture smart suit and gloves and in collaboration with Copenhagen Contemporary Dance School, we produced a series of custom dances to represent the 12 moods in the matrix.

Infinite Avatar Possibilities

Each avatar is based on the user's mood matrix, which is generated from their Spotify top tracks and is completely unique to them. Furthermore, we used the creation timestamp as the seed value for a pseudo-random number generator. This allowed us to add even more uniqueness to the experience while still being able to reproduce that exact character in the future when shared on social media.
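A minimal sketch of that idea, using the well-known mulberry32 generator as a stand-in for whatever PRNG the project actually used:

```ts
// Tiny seeded PRNG (mulberry32) keyed on the avatar's creation timestamp,
// so the same timestamp always reproduces the same "random" choices.
function mulberry32(seed: number) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const createdAt = Date.now();        // stored alongside the avatar
const rand = mulberry32(createdAt);  // re-created from the same timestamp when shared

// Any "random" variation (accessory picks, colour jitter, etc.) goes through `rand`
// so the exact same character can be replayed for the shared version.
const accessoryIndex = Math.floor(rand() * 10);
```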

Avatar Result Page

The combination of the mood matrix, custom soundtracks, motion capture dance and 3D base avatar resulted in a truly unique avatar with a custom title.

Under the hood

For the 3D graphics, we've adopted OGL. This lightweight WebGL framework allowed us to deliver rich visual experiences without compromising on load times or performance.
