Part 5 – Audio Development – Sound and Physical Movement

Exploring sound in relation to meditation and yoga practice: allowing organic physical movement to trigger audio playback through recognition of poses and key points of the body using PoseNet, as well as a Wii Balance Board whose weight-distribution data controls the sound output.

Tuesday 23rd February

Calendar: Show and Tell at 2pm

I have been thinking about exploring sound for a while now; I think I was just apprehensive about getting started. It was mentioned during the show and tell, though, and I agree that sound will work very well alongside the experiments I’m already working on.


The science of singing bowls


I began by researching a little into how sound and frequencies relate to yoga. Yoga and sound/music is a huge topic to explore, so I decided to start by focusing on frequencies so that I wouldn’t get overwhelmed. I found sources connecting the 7 chakras to certain frequencies, so I created the tones in Adobe Audition with the help of the simple tutorial linked below.


I wanted to create a Processing sketch where moving the mouse over the colours representing each of the 7 chakras would trigger the corresponding frequency. Unfortunately it didn’t work: calling play inside the draw loop meant the audio was re-triggered every frame and played on top of itself, creating a horrendous sound. In the end I didn’t figure out how to make this sketch work, and I felt I was spending too much time on it, especially because it was only meant to be a small experiment to start working with sound, so I decided to move on. I also didn’t want to work with the pure tones of the frequencies; they didn’t sound very calming and I wasn’t enjoying them.
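Looking back, the overlap came from triggering playback on every frame of the draw loop. A minimal sketch of the usual fix in plain JavaScript, using a hypothetical sound object standing in for a Processing/p5.sound player (the mock and its `isPlaying()` method are placeholders for illustration):

```javascript
// Minimal mock of a sound-library player object, just for illustration.
function makeTone() {
  let playing = false;
  let playCount = 0;
  return {
    play() { playing = true; playCount++; },
    stop() { playing = false; },
    isPlaying() { return playing; },
    get playCount() { return playCount; },
  };
}

// One frame of a draw loop: only start the tone if it isn't already
// playing, so it can't stack on top of itself.
function drawFrame(tone, mouseOverChakra) {
  if (mouseOverChakra && !tone.isPlaying()) {
    tone.play(); // trigger once, not on every frame
  } else if (!mouseOverChakra && tone.isPlaying()) {
    tone.stop();
  }
}

const tone = makeTone();
for (let frame = 0; frame < 60; frame++) {
  drawFrame(tone, true); // mouse stays over the same chakra band
}
console.log(tone.playCount); // 1 — without the guard it would be 60
```

The same guard works in Processing or p5.js: check the player’s state before calling play, rather than calling play unconditionally in draw.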

Wednesday 24th February

I decided to choose a copyright-free meditative song and use this as the sound for now, before deciding whether I want to make my own music or use something specific. I also wanted to move from Processing to p5.js so that I could combine audio with the PoseNet sketches I have been working on.

Which chakras go with which yoga pose:

In this sketch I looked at Dan Shiffman’s code for identifying specific keypoints in the PoseNet array. To begin with I used the nose keypoint, just like the example code, to start implementing audio. I wanted movement to change the amplitude of the audio, so I mapped the position of the nose on the y-axis to amplitude values, i.e. changing the volume of the audio by moving the nose up and down in space. This worked well, and the huge red ellipse was a visual representation of the volume: loud when at the top of the canvas and quiet when at the bottom.
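The mapping itself can be sketched in a few lines. This assumes a 480px-tall canvas (the height and the clamping are my assumptions, not taken from the original sketch), with a p5.js-style `map()` written out in plain JavaScript:

```javascript
// p5.js-style map(): rescale a value from one range onto another.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Nose at the top of the canvas (y = 0) gives full volume (1);
// nose at the bottom (y = canvasHeight) gives silence (0).
function noseToVolume(noseY, canvasHeight = 480) {
  const vol = map(noseY, 0, canvasHeight, 1, 0);
  return Math.min(1, Math.max(0, vol)); // clamp in case the keypoint jitters off-canvas
}

console.log(noseToVolume(0));   // 1
console.log(noseToVolume(240)); // 0.5
console.log(noseToVolume(480)); // 0
```

In the actual p5.js sketch the result would be passed to something like the sound object’s volume setter each frame, so the audio follows the nose continuously.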

Music Source: Music by TimMoor from Pixabay

This video shows how the volume is changed by the movement of the nose on the y-axis (I made the red dot smaller so that movement was more visible). This sketch is only an experiment/test; I hope to develop it further.

Sunday 28th February

I wanted to work on the code and change the visuals to represent something more subtle and delicate than the red nose keypoint I used for my test. I decided to work on code which detects 3 poses; when a pose is detected, the skeleton and keypoints are drawn in grey with a value of 100. The right wrist on the y-axis controls the volume of the audio. When a pose is detected, the background changes to an assigned colour value: for the 3 poses there are 3 shades of blue (a subtle, delicate change – I hope to develop this code to perhaps lerp the values).

For this sketch the PoseNet image of the person isn’t drawn over itself as in previous sketches. Instead I hope to document it in the form of a video to represent the fluidity and movement, even though the skeleton is quite stiff and the lines make it seem quite rigid – it’s an interesting contrast. The keypoint for the right wrist, which controls the volume of the audio, is slightly bigger than the other keypoints and is black (darker than everything else drawn on the screen).
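The lerp idea could look something like this. The pose names and RGB values below are placeholders, not the actual values from my sketch; the blend function mirrors what p5.js’s `lerpColor()` does:

```javascript
// Three shades of blue, one per detected pose (names and RGB triples
// are placeholder values, not the ones used in the real sketch).
const poseColours = {
  mountain: [180, 205, 235],
  tree:     [140, 175, 220],
  warrior:  [100, 145, 205],
};

// Blend the current background towards the target colour so a pose
// change fades in over several frames instead of jumping.
function lerpColour(from, to, amt) {
  return from.map((c, i) => c + (to[i] - c) * amt);
}

let background = poseColours.mountain;
const target = poseColours.warrior;
for (let frame = 0; frame < 4; frame++) {
  background = lerpColour(background, target, 0.5); // ease halfway each frame
}
console.log(background.map(Math.round)); // [ 105, 149, 207 ] — most of the way to warrior blue
```

Calling this once per frame in draw gives the subtle, delicate transition rather than an abrupt background swap.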

The right wrist is controlling the volume of the audio and the background colour changes with detected poses.

Monday 15th March

Ailsa gave me some of the code she has been working with which triggers sounds, so that I could revisit the frequency tones I was trying to make work in Processing when I first attempted to work with sound. From what I understand of her sketch, the code tracks movement and triggers audio files mapped to certain points on the canvas. She felt this could relate to my work by having the video be of me practicing yoga, with that movement triggering the different sounds, frequencies or tones. This sketch seems really complex and I don’t have a full understanding of it, but I’m grateful Ailsa lent it to me so I could explore it with my concept.
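My rough understanding of the “audio files mapped to points on the canvas” idea can be sketched like this. I don’t have Ailsa’s actual layout, so the equal-column grid and the filenames are assumptions purely for illustration:

```javascript
// Hypothetical layout: the canvas is split into equal columns,
// each column mapped to one audio sample (filenames are placeholders).
const samples = ["root.mp3", "sacral.mp3", "solar.mp3", "heart.mp3"];

// Given the x position where movement was detected, pick the sample
// whose column contains that point.
function sampleForX(x, canvasWidth = 640) {
  const col = Math.floor(x / (canvasWidth / samples.length));
  return samples[Math.min(col, samples.length - 1)]; // clamp the right edge
}

console.log(sampleForX(50));  // "root.mp3"  — far left column
console.log(sampleForX(630)); // "heart.mp3" — far right column
```

Swapping the sample list for my own recordings would then change what plays back without touching the movement-tracking side of the sketch.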

This video is a short clip of me testing the code with a highlighter. The next step would be to test how it reacts to a video of a person practicing yoga. I also want to maybe record new audio samples for this sketch changing the audio that will ultimately play back.

April 2021

I am revisiting sound now as I am working on the ‘immersive projections’ work. After trying recordings of the ocean and waves, then pure tones and frequencies, I have settled on the kalimba.

My boyfriend helped me to record some chords, and that is what I will use in my Processing code, adjusting Paul’s code so that around 9 sounds are loaded into an array and movement on the Wii Balance Board changes which chord plays and at what volume.
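A JavaScript sketch of how I picture the board-to-chord mapping (the final code is in Processing; the corner-sensor names, the 3×3 grid and the chord filenames here are my assumptions, not Paul’s actual code):

```javascript
// Nine chord recordings loaded into an array (placeholder filenames).
const chords = Array.from({ length: 9 }, (_, i) => `kalimba_chord_${i + 1}.wav`);

// The balance board reports weight on four corner sensors; the
// left/right and front/back ratios give a rough 2D position in [0, 1].
function boardPosition({ topLeft, topRight, bottomLeft, bottomRight }) {
  const total = topLeft + topRight + bottomLeft + bottomRight;
  const x = (topRight + bottomRight) / total;   // 0 = all weight left, 1 = all right
  const y = (bottomLeft + bottomRight) / total; // 0 = all weight front, 1 = all back
  return { x, y };
}

// Map the 2D position onto a 3x3 grid of chords.
function chordForPosition({ x, y }) {
  const col = Math.min(2, Math.floor(x * 3));
  const row = Math.min(2, Math.floor(y * 3));
  return chords[row * 3 + col];
}

// Weight shifted to the right side of the board:
const pos = boardPosition({ topLeft: 10, topRight: 30, bottomLeft: 10, bottomRight: 30 });
console.log(chordForPosition(pos)); // "kalimba_chord_6.wav"
```

The total weight could also drive the playback volume, so leaning harder as well as leaning in a direction shapes the sound.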

The final use for these sounds was as part of my ‘Immersive Projections’ work in Part 10 of my learning journal.

