About a year ago I started on my analog polysynth project. I made progress with the initial analog sound engine design but then put the project on the back burner. I’m back in the groove now after gaining some more experience with microcontrollers this past year.
I’m tempted to use PIC32 for the project since it’s the platform I’m most comfortable with, but I’ll use the opportunity to learn a new 32-bit platform: STM32. It’s probably for the best not to use PIC32 for this project anyway; the PIC32 communication peripherals are notorious for silicon errata that keep them from working reliably at higher speeds.
I have chosen an STM32F446 as the voice card MCU, mostly because it has two SAI (Serial Audio Interface) modules capable of outputting four I2S TDM (Inter-IC Sound, Time Division Multiplexing) streams, enough for up to 32 channels of 24-bit DAC output. The reason for 24 bits of precision is that it makes it possible to eliminate the analog exponential conversion circuitry for the oscillators. Instead, the exponential conversion can be done in software, driving linear oscillators that have better tuning stability. Most of the legendary polysynths use exponential VCOs with inherent tuning drift, which is a large part of the vintage tone many people are after. But from an engineering standpoint, linear oscillators are a far better design.
An 8-channel 24-bit I2S DAC was chosen for its high channel count, high precision, and competitive price of $3.02 per IC. Each voice will need around 16 outputs, meaning each MCU can control 2 voices. This will save money on the bill of materials, since a polysynth needs several voices.
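The channel budget works out neatly; a tiny sketch with the numbers from the text above (four TDM streams, 8-channel DACs, roughly 16 outputs per voice):

```c
/* Channel budget, using the counts described in the text. */
enum {
    TDM_STREAMS       = 4,  /* I2S TDM streams from the two SAI modules */
    DAC_CH_PER_IC     = 8,  /* channels on each 8-channel I2S DAC       */
    OUTPUTS_PER_VOICE = 16, /* control outputs needed by one voice      */
};

static int total_dac_channels(void) { return TDM_STREAMS * DAC_CH_PER_IC; }     /* 32 */
static int voices_per_mcu(void)     { return total_dac_channels() / OUTPUTS_PER_VOICE; } /* 2 */
```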
STM32 is mostly uncharted territory for me, so I’m taking the project in small steps. First I designed a DAC control board with the STM32 and 4 I2S DACs to assess the components and verify the digital framework before spending money on larger prototypes that might never work. The next step will be to design the analog board that interfaces with the control board. Once the digital and analog sections are working, they will be combined into one voice card.
What will the voice card do?
The voice card is essentially an individual programmable monophonic synthesizer. Stack a few voice cards together under the control of a master CPU, and you have polyphony. The voice card will be programmed over an SPI interface that transmits all the sound engine parameters: oscillator pitch, note trigger events, filter frequency, and audio mixing levels, as well as parameters for the envelope generators, low frequency oscillators, and modulation routing, all generated in software. For example, the master CPU could transmit the parameters of a triangle wave at 0.1 Hz with a 10% amount applied to filter frequency. The voice card MCU will run firmware that outputs the resulting modulation to the correct DAC channel. The voice card has a digital section, comprising the MCU and DACs, and an analog sound section, comprising the oscillators, filters, and voltage controlled amplifiers that apply the software-generated modulation.
There’s plenty of voice card development ahead for me. Then there is also the master CPU, user interface, and industrial case design to be done. The only way I’ll be able to finish is by tackling it one small step at a time.