In audio and video, we have been using machine learning for decades to synthesize, model, emulate and create the sights and sounds that shape human culture and express human emotion on a universal scale.
Today, we can use artificial intelligence to synthesize a human voice, iterate on a melody or emulate nearly any sound we choose. We can create a name, image and likeness based on a few simple descriptors.
So the question arises: having meticulously recorded, preserved and cataloged over a century of humanity’s greatest audio and motion art, do we have the resources necessary to create a project that evokes a certain time, place and feeling without actually sampling the art being referenced?
In this experiment, we trained our AI on folk, soul, rock and country songwriters of the 1960s and 1970s. We fed it specific vintage instrument tones, taught it preferred progressions, structures and styles, and trained it to emulate vintage live recording methods. We instructed it to write modern lyrics in the style of classic lyricists. We fed it imperfect parts and imperfect voices, and we made it sing.
You are now free to explore the results.
When a machine makes art, just like when a person makes art, it is standing on the shoulders of giants. It is the sum of its influences and the stories it knows. To quote James Murphy, slightly out of context:
“It’s the memory of our betters that are keeping us on our feet.”
Ladies and gentlemen, Johnny CashApp.
The debut album from Johnny CashApp. Modeling and emulating classic styles from the dead center of the American Century.
Want to contribute to development or learn more about the project? Follow Johnny CashApp on SoundCloud, Instagram and YouTube.
Tips are appreciated, but never expected.