Recent developments in computer vision and audition techniques motivate a push towards intelligently converting exclusively mono recordings to multichannel formats, although producing results of a quality equivalent to the work of an experienced mixing engineer poses inherent challenges. This thesis proposes several deep learning-based models for this 'up-channeling' process, spanning both direct waveform generation and spectral representation generation.
This research uses a U-Net architecture operating in both the magnitude spectral domain and the time domain, as well as a Generative Adversarial Network (GAN) built on these U-Nets. The networks are trained on a dataset of string quartet music, as classical music has standardized stereo recording techniques that produce distinct-sounding individual channels. While the spectral-domain models produce more distinct channel separation, the time-domain models produce audio with less noise.
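To make the time-domain approach concrete, the following is a minimal PyTorch sketch of a 1-to-2-channel U-Net of the kind described above; the layer counts, channel widths, and kernel sizes are illustrative assumptions, not the thesis's exact configuration.

```python
import torch
import torch.nn as nn

class MonoToStereoUNet(nn.Module):
    """Minimal time-domain U-Net: mono waveform in, stereo waveform out.

    Layer counts and channel widths are illustrative assumptions, not the
    configuration used in the thesis.
    """

    def __init__(self):
        super().__init__()
        # Encoder: strided 1-D convolutions halve the time resolution at each level.
        self.enc1 = nn.Conv1d(1, 16, kernel_size=15, stride=2, padding=7)
        self.enc2 = nn.Conv1d(16, 32, kernel_size=15, stride=2, padding=7)
        self.enc3 = nn.Conv1d(32, 64, kernel_size=15, stride=2, padding=7)
        # Decoder: transposed convolutions restore resolution; skip connections
        # concatenate the matching encoder features (the defining U-Net trait).
        self.dec3 = nn.ConvTranspose1d(64, 32, kernel_size=16, stride=2, padding=7)
        self.dec2 = nn.ConvTranspose1d(64, 16, kernel_size=16, stride=2, padding=7)
        self.dec1 = nn.ConvTranspose1d(32, 2, kernel_size=16, stride=2, padding=7)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):                 # x: (batch, 1, samples)
        e1 = self.act(self.enc1(x))       # (batch, 16, samples/2)
        e2 = self.act(self.enc2(e1))      # (batch, 32, samples/4)
        e3 = self.act(self.enc3(e2))      # (batch, 64, samples/8)
        d3 = self.act(self.dec3(e3))                          # (batch, 32, samples/4)
        d2 = self.act(self.dec2(torch.cat([d3, e2], dim=1)))  # (batch, 16, samples/2)
        return torch.tanh(self.dec1(torch.cat([d2, e1], dim=1)))  # (batch, 2, samples)

# Usage: up-channel roughly one second of 44.1 kHz mono audio.
model = MonoToStereoUNet()
mono = torch.randn(1, 1, 44096)   # length divisible by 8 for clean down/up-sampling
stereo = model(mono)              # -> (1, 2, 44096)
```

In the adversarial variant, a U-Net like this would serve as the GAN's generator, trained against a discriminator that scores real versus generated stereo pairs; a spectral-domain counterpart would operate analogously on magnitude spectrograms.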
Although this study focuses on converting mono to stereo, these models could feasibly be extended to higher channel counts, such as up-channeling stereo to 5.1 surround sound. However, training such models requires a large dataset of music in the desired output channel count, which does not exist for many musical genres.