This year I explored mindfulness and machines, considering how computers might view, try to understand, and envision human mindfulness practice in the form of meditation, yoga and guided breathing.
What do machines ‘see’ when we practice mindfulness and meditation in front of them? How could we visualise this?
I created four works, titled Captured Transition, Moving Paintings, Morphing Figures and Immersive Projections, using software such as Processing, P5.js and Runway ML alongside tools such as the Wii Balance Board, projectors and Arduino, producing a range of diverse outcomes.
My end-of-year documentation video is split into explorations of the four final works: it traces the development of each concept, shows the creation of the works in programs such as P5.js and Runway ML, and provides stagings, mock-ups and documentation of the finalised pieces.
The PoseNet model is very good at recognising people; however, with more complex yoga poses it would sometimes miss limbs, in turn creating abstract outcomes and drawings.
The white lines represent the pose the machine was trained to recognise, while the black lines show the transition through to the second pose, which was not taught to the machine. The abstract drawings vary from delicate and subtle to bolder, more obvious outcomes where a human body is more easily identified.
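To give a sense of how missed limbs translate into abstract drawings, here is a minimal sketch of how pose lines can be derived from PoseNet-style keypoints. The keypoint names and 0–1 confidence scores follow PoseNet's output format; the `minConfidence` threshold and the `segmentsFromPose` helper are my own illustrative choices, not the exact code used in the work.

```javascript
// Pairs of keypoints that form the figure's limbs.
const LIMBS = [
  ['leftShoulder', 'leftElbow'], ['leftElbow', 'leftWrist'],
  ['rightShoulder', 'rightElbow'], ['rightElbow', 'rightWrist'],
  ['leftHip', 'leftKnee'], ['leftKnee', 'leftAnkle'],
  ['rightHip', 'rightKnee'], ['rightKnee', 'rightAnkle'],
];

// Return only the line segments whose endpoints the model is confident
// about; limbs the model "missed" simply disappear from the drawing,
// which is what produces the abstract, incomplete figures.
function segmentsFromPose(keypoints, minConfidence = 0.5) {
  const byName = {};
  for (const kp of keypoints) byName[kp.part] = kp;
  const segments = [];
  for (const [a, b] of LIMBS) {
    const ka = byName[a], kb = byName[b];
    if (ka && kb && ka.score >= minConfidence && kb.score >= minConfidence) {
      segments.push([ka.position, kb.position]);
    }
  }
  return segments;
}
```

In a P5.js sketch, each segment would then be drawn with line(), stroked white for the trained pose and black for the transition frames.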
The work is presented as large-scale prints and contact sheets. I created the contact sheets to consider the sketches in more detail, displayed in a timeline format, while the large prints present the drawings at human scale.
Intrigued by the idea of merging mindfulness and computers, I wanted to delve into machine learning and how a machine might envision yoga practice. When training new machine learning models, it is common to use an enormous dataset of varied images to produce a diverse model. This approach often introduces bias that is difficult to avoid. I decided to make my work personal to me, sidestepping this problem, as all I am interested in are the visual outputs based on the machine’s interpretation of my physical body.
I trained a model on a dataset of images taken while I was practicing yoga. The results the machine learning model generated were images of my deformed body, with soft and rounded features and often missing limbs.
From these results I painted a triptych in acrylic, inspired by Francis Bacon. The final piece is a moving painting: I project a video of a generated latent space walk onto the paintings. These videos were created in the same way as the Morphing Figures outcome, but when choosing the images that seeded the latent space walks I was careful to select ones very similar to the figures in the corresponding paintings. The result is that the moving projection becomes a deeper, more extensive exploration of the still figure in the painting.
Morphing Figures consists of latent space walk videos generated by a machine learning model trained on images of me practicing yoga. These videos show the morphing of one generated ‘pose’ into another, resembling the transitions within yoga as well as the subtle trembling of the body when holding a pose or breathing.
The software I used, Runway ML, had no knowledge of yoga or the anatomy of the body, which is why these results are fascinating. Trained on only 777 images of me practicing yoga, the model produces latent space walk videos that resemble movement and transitions very similar to yoga, almost morphing from one surreal pose to another. To me this is an illusion of a machine trying to understand human mindfulness and movement.
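For readers unfamiliar with the term, the interpolation step behind a latent space walk can be sketched in a few lines. This assumes a generator g(z) that maps a latent vector z to an image (Runway ML handles that part internally); the functions below only illustrate how the in-between ‘poses’ arise, and the names are mine, not Runway ML's API.

```javascript
// Linear interpolation between two latent vectors z0 and z1 at
// position t in [0, 1].
function lerpLatent(z0, z1, t) {
  return z0.map((v, i) => v + (z1[i] - v) * t);
}

// A walk is a sequence of interpolated vectors; feeding each one to
// the trained generator yields one frame of the morphing video, so
// the body appears to flow smoothly from one surreal pose to the next.
function latentWalk(z0, z1, steps) {
  const frames = [];
  for (let i = 0; i < steps; i++) {
    frames.push(lerpLatent(z0, z1, i / (steps - 1)));
  }
  return frames;
}
```

Many implementations use spherical rather than linear interpolation for smoother transitions, but the principle is the same: small steps through the latent space produce small, trembling changes in the generated body.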
In this work I create an immersive experience, allowing physical movement to directly change a person’s immediate surroundings. By practicing meditation or yoga on a Wii Balance Board, the weight-distribution values influence the position, scale and rotation of projected shapes. I created the shapes dataset by running my machine-learning-generated imagery through object detection, where the machine looks for things it recognises in an image and highlights each shape with a label and a confidence percentage. The audio consists of recorded chords played on a kalimba; the steady chords played back allow for guided breathing practice. The projection of the shapes was influenced by Henri Matisse and his cut-out collage works. Created using Processing.org, Runway ML and OSCulator.
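A minimal sketch of how the Balance Board's four corner sensor values (received over OSC via OSCulator) could drive a projected shape might look like the following. The sensor ordering, mapping ranges and scale constants here are assumptions for illustration, not the exact values used in the work.

```javascript
// Compute the centre of pressure from the four corner weights:
// x runs from -1 (leaning left) to 1 (leaning right),
// y runs from -1 (leaning back) to 1 (leaning forward).
function centreOfPressure(topLeft, topRight, bottomLeft, bottomRight) {
  const total = topLeft + topRight + bottomLeft + bottomRight;
  if (total === 0) return { x: 0, y: 0, weight: 0 };
  const x = ((topRight + bottomRight) - (topLeft + bottomLeft)) / total;
  const y = ((topLeft + topRight) - (bottomLeft + bottomRight)) / total;
  return { x, y, weight: total };
}

// Map the centre of pressure to the transform of a projected shape:
// leaning shifts its position, total weight sets its scale, and a
// sideways lean adds a subtle rotation.
function shapeTransform(cop, canvasW, canvasH) {
  return {
    posX: canvasW / 2 + cop.x * canvasW / 2,
    posY: canvasH / 2 - cop.y * canvasH / 2,
    scale: 0.5 + Math.min(cop.weight / 100, 1), // ~100 kg reaches full scale
    rotation: cop.x * Math.PI / 8,
  };
}
```

Because the board reports values continuously, even the small weight shifts of slow breathing nudge the shapes, which is what ties the projection back to the breathing practice.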
This small virtual gallery was built in Unity; I wanted to create a space where all my final works could exist together…even if only in a virtual 3D environment. Stepping toward the imagined ‘balance board’ triggers the shapes drawing on the wall from the work Immersive Projections, alongside a floating cube showing the videos from Morphing Figures. The Moving Paintings are situated on a black wall, while Captured Transition is displayed as a large print.
Throughout this year I kept a personal sketchbook where I would make plans, write down notes and critiques from tutors and peers, document ideas, create mind maps, develop research, make to-do lists and sketch.