Machine Learning Model Trained on Images of Me Practicing Yoga using Runway ML. I was curious to find out how a machine might interpret images of me practicing yoga and mindfulness, and I wanted to explore the generated visual outcomes.
EXPERIMENT: Runway ML, Yoga Personal
Wednesday 10 February
Since exploring and experimenting with PoseNet in P5.js, I felt it would be interesting to look at a different kind of machine learning training process and work in Runway ML again. When I used Runway in the past to train a model on female album covers, the dataset was very diverse, which worked for that particular project; this time I want to really narrow down the dataset and make it personal to me. My aim is to photograph myself doing yoga with the camera in the same place and at the same angle, wearing the same clothes, so that the poses are the only changing factor. I look forward to exploring the latent space and GAN outputs to see if the model starts to mesh poses together and create new ones, and since it isn't trained on what a body can and cannot do, I expect some wild results.
The first hurdle was finding a space with as blank a background as possible in my small flat. At first I thought I would be able to stage a white floor with a blanket or a bedsheet, but these were too small and also slippery, which would be a health hazard for the more complicated poses. In the end I decided to use my yoga mat and keep the wooden floor in the frame. I had to move all my furniture around to get to a blank wall, but after a lot of struggle I found a good enough space and set up my tripod and camera. Before any of this I also went through my camera settings and picked the highest quality photograph setting I could. Luckily the camera had an option to shoot at a 1:1 ratio, which is what Runway accepts as it currently trains models in a square format; this way I will have a minimal amount of cut-off feet or hands, since I can frame the photographs myself and Runway won't need to crop them.
Thursday 11 February
I will put a little sample of the dataset below. I ended up with 778 images in total after deleting some I didn't think would work. Overall I am pretty happy with what I managed to achieve and how diverse the poses are; I'm just really curious how the model will turn out. I decided to add some images of transitions and stumbles to represent that yoga isn't only about holding the perfect pose but about the process and practice.


Held poses:
I really wanted to feed in a mix of poses at different heights and with different amounts of movement; hopefully this will lead to more diverse results from the Runway outputs.
Some falls:
I decided to include some of the transitions and falls because to me these are just as important as holding complicated poses. Yoga is all about practice, so it makes sense that sometimes we will stumble, and the transitions between poses matter because they are how we prevent injury.


Results
I enjoy how the mat has a texture almost like water ripples or subtle waves.
Saturday 13th February
I had a look through the latent vector space for the model I trained and exported some more images, with more varied results.
Francis Bacon
PAINTING 1946, oil and pastel on canvas
FRAGMENT OF A CRUCIFIXION 1950, oil and cotton wool on canvas

The PoseNet model is also on Runway, so I thought it would be interesting to run the output images from my yoga poses model through it and see whether it detects anything in them. It didn't see anything in some images, but in others it did, and I felt this produced really interesting results.
There is also an option to export a CSV or JSON file from the PoseNet model, which sounds like it would come in handy and be quite interesting, but I will ask Jen how I could use this data. I know I can easily load CSV or JSON files into P5.js, so I'm sure there is some creative data visualisation space to explore there.
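As a rough starting point before I talk to Jen, a sketch along these lines could read the exported keypoints and draw them. It is shown in Processing to match the other sketches in this post (p5.js has an equivalent loadJSON()), and the file name and JSON field names are assumptions that would need checking against the actual export:

JSONArray keypoints;

void setup() {
  size(800, 800);
  // "pose_export.json" is a hypothetical file name; the real export
  // from Runway would go in the sketch's data folder
  JSONObject pose = loadJSONObject("pose_export.json");
  // assuming a top-level "keypoints" array with "part", "x" and "y" fields
  keypoints = pose.getJSONArray("keypoints");
  noLoop();
}

void draw() {
  background(255);
  fill(0);
  for (int i = 0; i < keypoints.size(); i++) {
    JSONObject kp = keypoints.getJSONObject(i);
    float x = kp.getFloat("x");
    float y = kp.getFloat("y");
    ellipse(x, y, 10, 10);                // mark each detected joint
    text(kp.getString("part"), x + 8, y); // label it (nose, leftWrist, ...)
  }
}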
Monday 15th February
I realised there is another model that could be a very interesting next step for the image outputs of my yoga poses model: Detectron2. This is an object detection model, which would further explore the question of how the machine views and sees yoga. What will it detect in the images generated by training a model in Runway, and how will that compare to real pictures of me doing yoga?
From these 4 images the results were: dog, bird and bed. These results are already interesting to me, even purely as a visual; I enjoy this model's vivid colours. Seeing the word dog was instantly funny to me, because downward dog and upward dog are well-known poses in yoga. The bird seemed like an interesting outcome too, as it makes me think of freedom, flying and a certain kind of strength, which all works alongside yoga. And finally the word bed, which to me instantly had heavy connotations of perhaps tackling a depressive state through yoga, this idea of physically dragging yourself out of the bed and onto the yoga mat, almost as if that 'bed' is still part of your practice. There is also Shavasana (corpse pose), where at the end of practice you enter a state of stillness and meditation lying flat on your back, which is reminiscent of lying in bed.
Now I'm interested to see how the model analyses traditional yoga poses by feeding in pictures from the dataset I created and used to train my yoga poses model. I am curious whether there were enough yoga mats and similar poses in the data Detectron2 was trained on for it to recognise that I am a human and that the mat is a yoga mat.
So evidently it can tell I'm human. The result for the real images of me doing yoga is 'person', with varying percentages of certainty. The image where I am doing a headstand has the lowest percentage at 68%, and I think this is what sometimes happens in the PoseNet model too: some yoga poses aren't very common, and the images the model was trained on clearly didn't include many people doing headstands. It's fair to say I enjoy the previous results, with answers like dog and bird, more.
Monday 1st March
I wanted to create a Processing sketch which would display some of these images. I hoped to have 3 sliders and possibly buttons to look through an array of images. Unfortunately I couldn't figure it out, and then I saw a forum post which mentioned that sliders don't work with arrays, so this is something I will ask the tutors about in the future. I still wanted to create an array, so I went ahead and did a test with a folder of 4 images and managed to display image[0] when the mouse was on the left side of the screen and image[1] when it was on the right, roughly as in the sketch below. This is obviously quite basic, but it took me a while to get all the code sorted out, especially since I didn't have an example which stepped through an array of images, so I had to piece a few example sketches together to get to this point.
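Roughly, the test sketch works like this (the file names are placeholders for the 4 images in the data folder):

PImage[] images = new PImage[4];

void setup() {
  size(800, 800);
  for (int i = 0; i < images.length; i++) {
    // placeholder names: yoga0.jpg ... yoga3.jpg
    images[i] = loadImage("yoga" + i + ".jpg");
  }
}

void draw() {
  // left half of the screen shows image[0], right half shows image[1]
  if (mouseX < width / 2) {
    image(images[0], 0, 0, width, height);
  } else {
    image(images[1], 0, 0, width, height);
  }
}

For the slider question, it might be worth trying to map the slider's value onto an array index, something like int(map(sliderValue, 0, 1, 0, images.length - 1)), but that is one of the things to check with the tutors.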


I then created a sketch with a dataset array of 50 images exported from Runway ML. The sketch steps through these images, choosing random ones at 2 frames per second (see the sketch below). I think this has a lot of potential for starting to explore moving image, which could work very well with the subject matter of human movement. I like the idea of this being displayed as a projection, maybe even experimenting in the future with what it is projected onto.
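The sketch is along these lines (again the file names are placeholders for however the Runway exports are named):

int numImages = 50;
PImage[] outputs = new PImage[numImages];

void setup() {
  size(800, 800);
  frameRate(2); // two frames per second
  for (int i = 0; i < numImages; i++) {
    // placeholder names for the 50 exported Runway images
    outputs[i] = loadImage("output" + i + ".png");
  }
}

void draw() {
  int pick = int(random(numImages)); // pick a random image each frame
  image(outputs[pick], 0, 0, width, height);
}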
I want to do some more Runway object detection and then turn the outcomes into vector graphics and possibly a chart. The first time I saw the object detection outcomes from the images of me doing yoga I kind of discarded them, as I was more interested in the Francis Bacon-like outcomes and in the fact that Runway detected 'dog', 'bed' and 'bird' in them. When I fed in the original images of me, Runway saw a person with varying percentages, and considering this now I think it is a really interesting thing to explore.
I began by choosing around 20 images to run through image detection, as this is a reasonable amount to turn into a chart. However, they were too large to run through Runway, so I found a tutorial on how to batch resize images in Photoshop. I think this is a useful thing to know for the future, and I got the idea from the last show and tell, as Rebecca and Deniz mentioned it to me.
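The Photoshop route is what I actually used, but for reference something similar could be scripted in Processing: load every image in a folder, resize it to a square, and save a copy. The folder names and the 512 x 512 size below are assumptions, not what Photoshop did:

import java.io.File;

void setup() {
  File inDir = new File(sketchPath("originals"));  // placeholder input folder
  File outDir = new File(sketchPath("resized"));   // placeholder output folder
  if (!outDir.exists()) outDir.mkdirs();

  for (String name : inDir.list()) {
    String lower = name.toLowerCase();
    if (lower.endsWith(".jpg") || lower.endsWith(".png")) {
      PImage img = loadImage(inDir.getAbsolutePath() + File.separator + name);
      img.resize(512, 512); // assumed square size, small enough for Runway
      img.save(outDir.getAbsolutePath() + File.separator + name);
    }
  }
  println("done");
  exit();
}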
Once I had the images successfully resized I ran them through Detectron2 like before. Here are some results:
Most of the results are pretty accurate; they all detected that I was a person with varying percentages like before, but they sometimes missed body parts or generalised wider areas, and only one result was very bizarre.

Tuesday 2nd March
Working in Photoshop to isolate the detected shapes, using the colours of the borders from the Runway object detection outputs.
I enjoy these outcomes: very simple, minimalistic shapes of my body as seen by the machine. I continue this experiment in Part 6, where I use Processing to create collages with these images/shapes and delve further into object detection and shapes datasets.