How S.E.E.D. is using AI, art, and academia for art therapy


What do the technology, academic, and fine art worlds have in common? For Devin Fleenor and his Stellar Emissions Encapsulations Device (S.E.E.D.), all three come together in a device that is part art installation, part psychological research, and part machine learning with the potential to enhance mood as a form of art therapy.

We caught up with Fleenor while he was visiting Montreal for Mutek to find out what exactly S.E.E.D. is and what it has the potential to be.

“I am trying to create a platform for therapy and for exploration,” Fleenor told MTLinTECH. “It’s an art piece, and I think it can stand alone as simply an aesthetically rich interactive piece that you could experience in a museum. And some might be able to have that kind of experience with it. But I think my intention was to allow people to go much deeper. In some ways I think it’s a portal to faraway places, or even a portal to places deep inside of you.”

Physically, S.E.E.D. is an eight-foot glass cube positioned within a larger space. Motion sensors let it detect when people first walk into the space and how many are present.

“The secondary level is a touch interface on the surface, six pressure sensitive touch sensors on the front of the glass. At that point, when someone gets right in or on top of the device, when they’ve entered touch mode as we call it, they’re unlocking a whole other level of intensity and creativity and communication in that device.”

One of the people on the S.E.E.D. team had attended grad school at Ryerson University and reached out to Dr. Krishnan in the electrical and computer engineering department. The team had been looking for an academic partner to begin peer-reviewed research, with the initial goal of exploring the relationship between these immersive experiences and enhanced wellness in the people experiencing them.

“We’re just getting started. We’ve just submitted our first abstract and ethics paperwork so that we can begin initial foundational research. The very first step is just to be able to show a clear physiological and psychological enhancement through multiple sessions with the device. That’s not really all that groundbreaking, because it’s a very immersive and intense and foreign experience, so you have a pretty good shot of giving somebody an experience that involves awe and delight and a general boost in their mood. But we have to start somewhere with a foundation of measuring what kinds of physiological effect and how much. We’re measuring heart rate variability in our participants, which tells us quite a bit. We’re also measuring their positive emotion via facial recognition. So we’re tracking a lot of biodata from the people who are using it. This first semester is a foundational beginning.”
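Heart rate variability can be quantified several ways; one standard time-domain metric is RMSSD, the root mean square of successive differences between inter-beat (RR) intervals. The interview does not say which metric the team uses, so the following is only an illustrative sketch of what such a measurement could look like:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats.

    rr_intervals_ms: inter-beat (RR) intervals in milliseconds.
    Higher RMSSD generally reflects greater parasympathetic
    (rest-and-relax) activity, which is why HRV is a common
    proxy for relaxation in wellness research.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Toy intervals for a resting participant (invented values):
print(round(rmssd([800, 810, 790, 820, 805]), 2))
```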

The facial-recognition data has less to do with controlling or interfacing with the device and more to do with monitoring participants' positive emotion. The S.E.E.D. team not only records that data for research purposes, it also tracks factors like color, hue, and tempo against certain baselines and analyzes them in real time. That way, the team can learn which changes most enhance the positive emotions of a specific subject. For example, if someone responds well to red-hued imagery and light, S.E.E.D. can give them more red.
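One minimal way to implement that kind of adaptation is an exponentially weighted update that nudges a rendering parameter toward the values observed while the participant is smiling. This is an assumed design sketch, not the team's actual code; the class and parameter names are invented:

```python
class PreferenceTracker:
    """Toy adaptive loop: bias one rendering parameter (here, a
    red-hue level in [0, 1]) toward values seen during high smile
    scores. Purely illustrative."""

    def __init__(self, initial=0.5, lr=0.1):
        self.preferred = initial  # current best-guess preference
        self.lr = lr              # base learning rate

    def observe(self, hue, smile_score):
        # smile_score in [0, 1] scales the step, so hues shown while
        # the participant smiles pull the preference more strongly.
        step = self.lr * smile_score
        self.preferred += step * (hue - self.preferred)

    def next_hue(self):
        return self.preferred

tracker = PreferenceTracker()
for _ in range(50):               # participant keeps smiling at red
    tracker.observe(0.9, smile_score=1.0)
print(round(tracker.next_hue(), 2))
```

With repeated positive reactions to a red-heavy palette, the tracked preference converges toward that hue, which is the behavior described in the paragraph above.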

“Right now it’s a very low level of artificial intelligence or machine learning. We’re doing very simple baseline data for facial recognition which was calculated by more elemental machine learning. So you get a body of information that allows the system to be able to understand the human face. This is all machine learning that’s been stored and pre-prepared for people, it’s a part of the open-source library. So we’re taking the machine learning that’s already happened, and then what we’re doing is we’re looking at all of the other factors. So if somebody is smiling, we look at which music progressions are playing right now, what’s the overall intensity of the piece, the tempo, the color hue relationships, and we can cross-analyze all those things and try to find a significant correlation between single variables and combinations of variables. And then we can store all those relationships into a database. So we’re not sure how much we’re going to allow that to modify this experience in this initial run, but we are going to analyze all of this data with grad school students in the next few months, and it will allow us to understand a lot more about how these things are happening and what’s working and what’s not.”
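The cross-analysis Fleenor describes, looking for significant correlations between single variables and the smile signal, can be illustrated with a plain Pearson correlation. This is a sketch under invented data, not S.E.E.D.'s actual pipeline; the variable names and values are toy examples:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-second session log: smile score from an open-source
# facial-recognition model, alongside the tempo and red-hue level
# playing at that moment.
smile   = [0.2, 0.4, 0.5, 0.7, 0.8]
tempo   = [90, 100, 110, 120, 130]
red_hue = [0.9, 0.3, 0.6, 0.2, 0.5]

print(round(pearson(smile, tempo), 3))    # strongly positive in this toy data
print(round(pearson(smile, red_hue), 3))  # weak/negative here
```

In a real analysis each such relationship would be stored and tested for statistical significance, as the quote suggests, rather than read off a single coefficient.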

The team is still laying the groundwork, but the possibilities for future research are wide open.

“Over the course of the next year or two we have plans to really open up the floodgate of what we’re going to try to do with machine learning and take it to a whole other level. To be able to really do some incredible things using the best technology available to try to learn more about each individual participant. Their reactions, their biofeedback, really trying to correlate between all their physiological measurements and their emotional states and their physiological states. We’ve got a lot of potential and a lot of things in store for further generations.”
