In the last class we learned how our teacher, Morton Subotnick, experienced the transition from physical instruments to computer-based ones, in particular from the flute to the analog synthesizer. He taught us that until computers were introduced, the form of instruments, and consequently their musical scale, was determined by the physical form factor and the ability of human fingers to reach the keys that affect each note. Once the synthesizer came along, this constraint went away, and music was no longer limited by the musical scale or by the speed at which human body parts can move.
This makes me think: with the recent rapid advances in artificial intelligence and machine learning, how can we leverage these technologies to create music not limited by what the human brain is capable of? This semester I would like to explore how AI and machine learning can be used to make computers augment the brain's functions in both musical composition and live performance, and go beyond these limits.