One of the most interesting presentations I saw during the Collision Conference in Toronto was that of Poppy Crum, PhD, Neuroscientist and Technologist at Dolby Labs. The session's overview: "In her presentation, Poppy talks about optimizing the human experience by using intelligent audio and video technologies, sensors, and computational advancements. She will discuss how ML/AI has enabled and democratized a transformation in the quality and capability of experiences of the elite to the global consumer."
Poppy began with some classical music.
Why? Because JS Bach was not just a musician and composer; he was a storyteller.
“Bach was a master of understanding human experience and human emotion: unlike vision where our brain is getting a representation of space, with sound it’s different. You have all these different sound waves that your brain has to unpack and then reassign to create the experiences we have. How he did it became the rules of counterpoint, and for centuries composers have used them as a recipe book, because there are so many different ways our brains can unpack those sounds. If I’m a composer, and I follow these rules, you as a human will experience sound in this way.”
This isn't limited to JS Bach: these kinds of storytelling techniques through sound can be heard in Angolan storytelling music as well.
And when you're hearing music, what are you truly hearing?
“What you’re hearing is your brain reorganizing that information…artists and storytellers have this amazing understanding of the brain that gets passed on.”
“Leo Tolstoy wrote an essay called ‘What Is Art?’, in which he defined art as existing when the intent of the creator is experienced by the individual on the other end. And today, we can actually know what you’re feeling. With the ubiquity of sensors in your environment, we can understand the human experience, and take Tolstoy’s musings to a completely different level.”
How does technology factor into this?