This is part of a series on Opportunities for AI in Music Production.
Problem / Working with software synthesizers (as opposed to sample clips or a fully sample-based synth) gives you more control over your sound. For example, an analog synth emulation will be more flexible, dynamic, and tweakable, and tweaking a synth lets you create a more distinctive sound.
However, finding the right synth preset (by clicking through thousands of them) is difficult and will definitely break your artistic flow, especially if you have to open multiple synthesizer apps and audition presets one at a time.
There’s no cloud storage, management, ranking, or community sharing for presets, let alone one that spans synthesizers from multiple companies. Native Instruments’ NKS does make it easier to host, browse, and control synths, and Arturia’s Analog Lab has a “show me other presets that sound like this one” button.
There is some standardization around preset naming (“BA” for bass, “LD” for lead), but none for genres, and no cross-synth comparison. E.g., which synth and preset did Phil Collins use on “In the Air Tonight”? Which preset that I own is the closest?
Finally, synth presets are still stored as proprietary binary files. So, even though you could describe Phil’s synth as “Oscillator 1: sine wave 50%, Oscillator 2: square wave 30%, A: 0.01 D: 1 S: 2 R: 0.5, LFO: 4% -> Oscillator 1” and programmatically translate that to whatever synth you actually own, the proprietary formats make that impossible in practice.
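To make the idea concrete, here is a minimal sketch of what a synth-agnostic preset schema and a per-synth translation table could look like. All field names and parameter names below are hypothetical, not any vendor’s actual format:

```python
import json

# Hypothetical synth-agnostic preset schema, mirroring the description above.
preset = {
    "oscillators": [
        {"waveform": "sine", "level": 0.50},
        {"waveform": "square", "level": 0.30},
    ],
    "envelope": {"attack": 0.01, "decay": 1.0, "sustain": 2.0, "release": 0.5},
    "lfo": {"depth": 0.04, "target": "oscillator 1"},
}

# A per-synth mapping table translates generic fields into the automatable
# parameter names one particular plugin exposes (names invented here).
MAPPING = {
    "envelope.attack": "Env1 Attack",
    "envelope.decay": "Env1 Decay",
    "envelope.sustain": "Env1 Sustain",
    "envelope.release": "Env1 Release",
}

def translate(preset, mapping):
    """Flatten the generic schema and rename fields for the target synth."""
    flat = {f"envelope.{k}": v for k, v in preset["envelope"].items()}
    return {mapping[key]: value for key, value in flat.items() if key in mapping}

print(json.dumps(translate(preset, MAPPING), indent=2))
```

A real translation layer would need mapping tables (and value rescaling) for every supported synth, but the shape of the problem is this simple: a neutral schema in the middle, adapters on each side.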
Solution / A cloud-based synth preset browser.
- A back-end data pipeline renders presets to audio files for previewing.
- An algorithm categorizes and arranges similar sounds into a surfable space. I love XLN Audio’s XO drum machine visualization for drum samples. I want the same thing for recordings of synth presets, where clicking a point opens the preset.
- Users pick presets based on similar sounds, genre, and type (BA, LD, ARP, PL, etc.).
- The browser lets you filter the preset list by synths you have access to (give me the closest thing to Phil’s sound based on what I own).
- Break the proprietary format by translating parameters between synths. The server would run each plugin in a host, load the preset, and discover the values of its automatable parameters. On top of that, build machine learning models that map between parameters and output sound, letting you reverse-engineer parameters from a sound and translate them between synths.
- Of course, if you ran this service, you could refer customers to the right synth for the right preset (even if they don’t own it).
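The first two bullets — rendering presets to preview audio and arranging them in a surfable 2D space — can be prototyped end to end. The sketch below uses a toy one-oscillator synth, crude band-energy features, and PCA; a real pipeline would host the actual plugins and probably use MFCCs or a learned embedding, but the flow (preset → audio → features → map coordinates) is the same:

```python
import numpy as np

SR = 44100

def render_preview(preset, freq=220.0, dur=1.0, sr=SR):
    """Toy renderer: one sine oscillator shaped by a linear ADSR envelope.
    A production pipeline would host the real plugin and capture its
    output, but the interface is the same: preset in, audio out."""
    e = preset["envelope"]
    a = np.linspace(0, 1, max(1, int(e["attack"] * sr)))
    d = np.linspace(1, e["sustain"], max(1, int(e["decay"] * sr)))
    s = np.full(max(0, int(dur * sr) - a.size - d.size), e["sustain"])
    r = np.linspace(e["sustain"], 0, max(1, int(e["release"] * sr)))
    env = np.concatenate([a, d, s, r])
    t = np.arange(env.size) / sr
    return (env * np.sin(2 * np.pi * freq * t)).astype(np.float32)

def spectral_features(audio, sr=SR, n_bands=16):
    """Average spectral energy in log-spaced bands — a crude stand-in
    for MFCCs or a learned audio embedding."""
    spec = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(audio.size, 1 / sr)
    edges = np.geomspace(20, sr / 2, n_bands + 1)
    bands = [(freqs >= lo) & (freqs < hi) for lo, hi in zip(edges[:-1], edges[1:])]
    return np.array([spec[m].mean() if m.any() else 0.0 for m in bands])

def layout_2d(features):
    """PCA via SVD: project each preset's features to (x, y) map coordinates."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

# Render a few toy presets and lay them out on the map.
presets = [{"envelope": {"attack": a, "decay": 0.3, "sustain": 0.6,
                         "release": 0.2}} for a in (0.01, 0.1, 0.5)]
feats = np.array([spectral_features(render_preview(p, freq=f))
                  for p, f in zip(presets, (110, 220, 440))])
coords = layout_2d(feats)  # one (x, y) point per preset
```

In the browser, each `coords` point would be a clickable dot that plays the preview and, if you own the synth, loads the preset.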
In researching this, I found some interesting research on “Deep Estimation of Synthesizer Parameter Configurations from Audio Signals.”
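A minimal baseline for that estimation task, before reaching for deep learning, is nearest-neighbour search over rendered samples of the synth’s parameter space. The sketch below uses a hypothetical one-parameter toy synth (frequency only) and spectral-centroid features; a real system would sweep the actual plugin’s automatable parameters:

```python
import numpy as np

SR, DUR = 8000, 0.5

def render(params, sr=SR, dur=DUR):
    """Toy one-parameter synth: params = (frequency,). A real system
    would host the actual plugin and sweep its parameters."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * params[0] * t)

def features(audio, sr=SR):
    """Spectral centroid: a single smooth descriptor of brightness."""
    spec = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(audio.size, 1 / sr)
    return np.array([(spec * freqs).sum() / spec.sum()])

# Build a (parameters -> sound features) dataset by sampling the synth.
grid = [(f,) for f in range(100, 1000, 50)]
bank = np.array([features(render(p)) for p in grid])

def estimate_params(target_audio):
    """Nearest-neighbour inversion: return the sampled parameter setting
    whose rendered sound is closest to the target. The paper replaces
    this lookup with a learned model."""
    dists = np.linalg.norm(bank - features(target_audio), axis=1)
    return grid[int(np.argmin(dists))]

print(estimate_params(render((440,))))  # -> (450,), the nearest grid point
```

The same recipe scales up in principle: richer features, denser sampling, and a regression model instead of a lookup, which is essentially what the deep-estimation work explores.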
Next: AI for Composing Music