VR Mixer

A virtual reality music sequencer

This music creation tool allows the user to manipulate multiple pieces of sound from a "sound palette" held in the left hand. Each piece of sound is represented by a figure, and the sound attached to it depends on its shape.
Once created, the user can place it into a VR sequencer to perform a music composition. Users can also edit the shape of a sound figure in order to alter the pitch and volume of the embedded sound.

This tool serves as a base for developing a VR platform for music creation. It was designed, developed and tested in 4 days during the Sónar Innovation Challenge at the Sónar+D Festival in Barcelona.


More info

HACKATHON
Inflight VR | Sónar+D
12 June, 2017 - 16 June, 2017

Challenge

Define an immersive VR experience that can be used during a flight journey

Since cabin space is limited, movement is also restricted, so we want to explore interactive applications that can be performed while seated with other people around us. It is also interesting to think about how the experience could invite other users to participate remotely.

Overview

Tangible sound manipulation on a virtual reality music composer

Challenged by Inflight VR in the context of the Sónar+D music and technology festival, we decided to create an immersive tool for music creators and composers. We considered Sónar the perfect scenario for our experience and for gathering our first insights on our creation.


Sónar+D visitor wearing an Oculus headset with controllers

Our VR music sequencer allows users to directly grab a piece of audio from a sound palette and place it into a sequencer. Users can start a loop and listen to their composition on the go using our interface. Sounds can also be edited on our sound altar and then placed in the sequencer.

VR Mixer was tested and presented at Sónar+D with great acceptance from both the general public and composers.

Design process

First of all, we brainstormed possible VR experiences to be performed on a plane. Once done, we decided to classify them depending on the user's level of interaction with the virtual space.

  • Role of the user: users are placed inside an environment, but what will they do there? Can they modify it, or are they simple spectators? How do they do it?
  • Interactivity: how do users interact with the environment? Are minor interactions done with the head? Do they use their hands? Can they speak to the interface?
  • Collaboration: can multiple users interact with the same environment? How do they perceive and interact with each other?

In this phase, we analysed the benefits and constraints of each variable in order to estimate the viability of each idea. We decided to create a multimodal experience where audio and vision are combined. We also decided on the inputs from the user: hand and head movements. We believed that, by ensuring a correct placement of elements, users would have a controlled space in which to engage with our proposal. Moreover, we discarded voice interaction to avoid disturbing other people while inside the app. Finally, we contemplated the possibility of allowing collaboration between multiple users by placing them in the same environment; however, as you will see, this part is not integrated into the final solution.

After this analysis we decided to create an experience that allows music composers to create pieces of music using their hands. To do that, we researched current composition methodologies and decided which one could benefit most from our approach. At a glance, we summarized it in three steps:

  • Creation: audio is recorded for later processing
  • Editing & composition: recorded audio is edited and mixed in order to create a composition
  • Playback: the final piece of music is played to an audience
Since we don't expect users to record sounds (right now), nor merely to listen to final compositions in our proposal, we decided to assume access to a library of classified sounds and let users edit them to compose music.

Example of a step sequencer

We decided to simplify the interaction model by imitating the step sequencers currently on the market. As you can see, you can assign a sound to any of the available slots in the matrix. When played, a sequence starts at a certain tempo and loops through the matrix; whenever it reaches a step with an assigned sound, that sound is played.
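
A minimal sketch of such a loop, written as a Unity C# behaviour, is shown below. The class name, field names (slots, bpm) and the 4x8 matrix size are illustrative assumptions, not the prototype's actual code.

```csharp
using UnityEngine;

// Minimal step-sequencer sketch: advance one column per beat and play every
// clip assigned to that column. Names and sizes are assumptions.
public class StepSequencer : MonoBehaviour
{
    public AudioClip[,] slots = new AudioClip[4, 8]; // rows x steps; empty slot = null
    public float bpm = 120f;                         // tempo of the loop
    public AudioSource audioSource;                  // plays the assigned clips

    private int currentStep;
    private float nextStepTime;

    void Update()
    {
        if (Time.time < nextStepTime) return;
        nextStepTime = Time.time + 60f / bpm;        // advance one step per beat

        // Play every sound assigned to the current column of the matrix.
        for (int row = 0; row < slots.GetLength(0); row++)
        {
            if (slots[row, currentStep] != null)
                audioSource.PlayOneShot(slots[row, currentStep]);
        }

        currentStep = (currentStep + 1) % slots.GetLength(1); // loop back to the start
    }
}
```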

Moreover, to make this assignment, it is common to navigate through a library of sounds categorized by family or type; a possible classification could be drums, guitars, bass, vocals... However, once a sound is assigned, this information is lost. We decided it would also be useful to show which category of sound is placed in each slot, to support the user's memory recall. To do that, we assigned a 3D figure to each category, so every time users see, for example, a cube, they know it corresponds to drums.

At left, animation of a step sequencer. At right, an example of a possible mapping from instrument family to shape.

In our proposal, we also define a way to access this sound library using our hands. Here we were inspired by painters and created the concept of the sound palette. Just as painters hold many colors on a palette, we place a number of sound categories on the left hand. Then, with the right hand, we can touch any of these category slots to get a sound in our hand and place it in the sequencer.


Sound palette concept. At bottom, a real demonstration of our prototype

Allowing users to grab a sound and place it anywhere they want creates a more engaging experience than tapping on a position in the sequencer matrix.
We also simplify the experience of selecting a sound from a family category compared with a typical menu, which makes the interaction more playful and explorative. As we'll see in the evaluation, this creates an opportunity for non-experts to compose in an easy way.

Finally, we put everything together, prioritizing which capabilities would be available in our first prototype. We decided to create a minimal interface for some basic actions that are common when composing a piece of music. The possible actions with our palette and sound representations are the following (a sketch of the create interaction follows the list):

  • Create & place: grab a new sound to place by tapping for 3 seconds on a quick-access slot
  • Discard: just throw the object in any direction
  • Sound edit: you can modify components of the original sound using your hands
  • General controls: you can play/pause and modify the tempo of the sequencer
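
As a rough illustration of the create-and-place gesture, the following Unity sketch spawns a sound figure after a fingertip rests on a palette slot for three seconds. The class name, the prefab field and the "Fingertip" tag are assumptions made for this example, not the prototype's actual code.

```csharp
using UnityEngine;

// Sketch of a palette quick-access slot: holding a fingertip on it for
// three seconds spawns a new sound figure at the fingertip.
public class PaletteSlot : MonoBehaviour
{
    public GameObject soundFigurePrefab;   // shape representing this slot's sound category
    public float holdTime = 3f;            // seconds the finger must stay on the slot

    private float contactTime;

    void OnTriggerStay(Collider other)
    {
        if (!other.CompareTag("Fingertip")) return;

        contactTime += Time.deltaTime;
        if (contactTime >= holdTime)
        {
            // Spawn the sound figure at the fingertip so the user can grab it.
            Instantiate(soundFigurePrefab, other.transform.position, Quaternion.identity);
            contactTime = 0f;
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Fingertip"))
            contactTime = 0f;              // reset if the finger leaves the slot early
    }
}
```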

Step placing example

Sound placing from palette

Sound editor

Sound editor

Implementation

Our definition of interactivity directly influenced the development of this project. We explored various commercial devices such as the Samsung Gear VR, Oculus and Google Daydream. Some of these also incorporate controllers for interacting with the world.

However, we decided to track the user's hands directly using a Leap Motion mounted on an Oculus headset. This decision allows for a natural interaction, since the user's hands are directly mapped into the environment.
It was also interesting to see a participant with some missing fingers try our prototype. The system interpolated and estimated three fingers based on the rest of the hand, with pretty good results. Based on this user's comments we could explore this further, but that would be another research project... :)

Rock it gesture with hand tracking

Finger mapping using Leap Motion

Coming back to the implementation, we coded all the behaviours present in our prototype using Unity, with supporting assets for both Leap Motion and Kinect. We downloaded a series of sounds from three different categories (drums, guitar, synthetic) and assigned a shape to each category. The system places a random sound from the corresponding category when the user taps on that area of the sound palette.
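
A sketch of how such a random assignment could look in Unity is shown below; the SoundCategory class, its fields and the way the clip is attached to the spawned figure are assumptions for illustration, not the actual project code.

```csharp
using UnityEngine;

// Sketch of a sound category: each category carries a shape prefab and a set
// of clips, and spawns its shape with a randomly chosen clip attached.
public class SoundCategory : MonoBehaviour
{
    public string categoryName;        // e.g. "drums", "guitar", "synthetic"
    public GameObject shapePrefab;     // 3D figure associated with this category
    public AudioClip[] clips;          // library of sounds for this category

    // Spawn the category's shape with a randomly chosen clip attached to it.
    public GameObject SpawnRandomSound(Vector3 position)
    {
        GameObject figure = Instantiate(shapePrefab, position, Quaternion.identity);
        AudioSource source = figure.AddComponent<AudioSource>();
        source.clip = clips[Random.Range(0, clips.Length)];
        source.playOnAwake = false;    // the sequencer decides when to play it
        return figure;
    }
}
```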

The user's position is always fixed, and the sequencer is placed vertically a short distance in front of them. When a sound is activated, we directly play the corresponding audio from our library.

Finally, regarding the sound editing mode, it is activated when the user places a shape on the altar from below. We then directly map the user's horizontal and vertical hand movements to changes in the parameters of the sound.
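
A possible mapping, assuming the edited parameters are pitch and volume as described earlier, could look like the following Unity sketch; the handTransform reference, the value ranges and the class name are illustrative assumptions.

```csharp
using UnityEngine;

// Sketch of the sound altar: while a figure sits on it, the hand's horizontal
// position adjusts pitch and its vertical position adjusts volume.
public class SoundAltar : MonoBehaviour
{
    public Transform handTransform;    // tracked hand position (e.g. from hand tracking)
    public AudioSource editedSound;    // sound figure currently placed on the altar

    void Update()
    {
        if (editedSound == null) return;

        // Hand position relative to the altar, in the altar's local space.
        Vector3 local = transform.InverseTransformPoint(handTransform.position);

        // Horizontal movement maps to pitch, vertical movement maps to volume.
        editedSound.pitch = Mathf.Clamp(1f + local.x, 0.5f, 2f);
        editedSound.volume = Mathf.Clamp01(0.5f + local.y);
    }
}
```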


User interacting with our app


Results

Multiple users tried our prototype during Sónar+D, and we extracted a lot of valuable data from that. More specifically, we ran a more formal test to evaluate our design and interaction model.
Ten users were asked to perform multiple actions with our interface. Many of them were not composers, but we encouraged them to try it as non-expert users.

We also asked them to rate the overall experience after completing the test, together with some questions about the possible usage of our proposal. As you can see, they considered it easy to use and intuitive.


  • Would use it during their flight experience: 39%
  • Would recommend it to a friend: 47%
  • Would like to share their creation: 59%

Finally, we also asked them to rank the best and worst features and to suggest features they would like to see in possible future versions.


Likes and dislikes of our proposal

The results of the overall process indicate potential interest in expanding this prototype into a commercial product. Some features that could be interesting for target users are the following:

  • Cloning sounds
  • Multiuser collaboration
  • Whisper sounds for other passengers
  • Sharing capabilities

As described before, it was a great experience to share our proposal directly with music creators. I would like to quote one of the comments that encouraged us the most:

" This is the first step to a powerful tool for tangible music creation”

Contribution

I was part of a five-member team for this challenge. My responsibility was to define the interaction model and the design of the application together with the team. I also conducted the on-site test to evaluate our proposal with multiple users. Finally, I also contributed to programming the experience in Unity.

Press & Awards

Sónar+D Innovation Challenge - Our final presentation on stage to the audience