AI Creates SID Music: Inside SIDmancer’s Machine Learning Demo


AI creates SID music in a way that feels truly fresh. SLAJEREK’s new experiment introduces SIDmancer, a tool built to generate SID chip music with machine learning. The project starts with a simple question: what if a transformer AI could predict each new SID frame using the ones before it?

SIDmancer converts every SID frame into a small group of tokens. These tokens capture notes, event types such as note on or note off, waveforms, the sync, ring, and gate flags, filter settings, volume, and normalized values for frequency and pulse. The token sequences from all three SID voices then feed into a transformer neural network.
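The exact token layout is not published, but a minimal sketch of what per-frame, per-voice tokenization could look like is shown below. The `VoiceState` structure and field encodings are assumptions for illustration; filter and volume fields are omitted for brevity.

```python
from dataclasses import dataclass


@dataclass
class VoiceState:
    """Per-voice SID state captured for one playback frame (hypothetical layout)."""
    note: int       # note number derived from the frequency register
    event: int      # 0 = hold, 1 = note on, 2 = note off (assumed encoding)
    waveform: int   # triangle / saw / pulse / noise bits
    sync: bool
    ring: bool
    gate: bool
    freq: int       # raw 16-bit frequency register value
    pulse: int      # raw 12-bit pulse-width register value


def tokenize_voice(v: VoiceState) -> dict:
    """Turn one voice's frame state into categorical tokens plus normalized values."""
    return {
        # categorical fields, later handled by softmax heads
        "note": v.note,
        "event": v.event,
        "waveform": v.waveform,
        "flags": (int(v.sync) << 2) | (int(v.ring) << 1) | int(v.gate),
        # continuous fields, normalized to [0, 1] and handled by regression
        "freq_norm": v.freq / 0xFFFF,
        "pulse_norm": v.pulse / 0x0FFF,
    }
```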

The model does not jump between registers at random. Instead, it predicts each new frame from the frames that came before it. Softmax outputs handle the categorical fields such as note and waveform type, while regression outputs predict frequency, pulse, and filter cutoff. This approach preserves musical flow and helps the model create new variations that sound authentic.
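A rough sketch of how mixed softmax and regression heads might sit on top of a transformer, written in PyTorch. The hidden size, vocabulary sizes, and loss weighting here are assumptions, not SIDmancer's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameHead(nn.Module):
    """Per-frame output heads: softmax for categorical fields, regression for continuous ones."""

    def __init__(self, d_model: int = 256, n_notes: int = 97, n_waveforms: int = 16):
        super().__init__()
        self.note_head = nn.Linear(d_model, n_notes)      # categorical: which note
        self.wave_head = nn.Linear(d_model, n_waveforms)  # categorical: waveform bits
        self.cont_head = nn.Linear(d_model, 3)            # regression: freq, pulse, cutoff

    def forward(self, h: torch.Tensor):
        return self.note_head(h), self.wave_head(h), self.cont_head(h)


def frame_loss(note_logits, wave_logits, cont_pred, note_tgt, wave_tgt, cont_tgt):
    """Cross-entropy for the categorical heads, MSE for the normalized continuous values."""
    return (F.cross_entropy(note_logits, note_tgt)
            + F.cross_entropy(wave_logits, wave_tgt)
            + F.mse_loss(cont_pred, cont_tgt))
```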

For training, SLAJEREK used 1874 classic SID tunes, pulling the first minute from each. The AI takes a prompt of 512 SID frames and generates up to 2048 new ones. This yields roughly 30 seconds of music before the model starts looping and repeating a pattern.
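The frame-by-frame generation could be driven by a loop like the one below. The `model.sample_next` call is a placeholder for the trained transformer's sampling step; only the 512-frame prompt and 2048-frame limit come from the article.

```python
import torch

PROMPT_FRAMES = 512     # seed frames taken from an existing tune
MAX_NEW_FRAMES = 2048   # the demo reports ~30 s of music before the output starts looping


@torch.no_grad()
def generate(model, prompt: torch.Tensor) -> torch.Tensor:
    """Autoregressively extend a (1, PROMPT_FRAMES, token_dim) prompt, one frame at a time."""
    frames = prompt
    for _ in range(MAX_NEW_FRAMES):
        # predict the next frame from the most recent context window
        next_frame = model.sample_next(frames[:, -PROMPT_FRAMES:])  # assumed API
        frames = torch.cat([frames, next_frame.unsqueeze(1)], dim=1)
    return frames
```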

Since SIDmancer’s model is small and training time was short, the output is still part random, part musical. Yet it already produces tunes that resemble real SID tracks. This working demo shows the idea can go further with more data and training. SIDmancer could soon become a unique creative partner for chip musicians.

Watch the video to see SIDmancer in action. Get a closer look at how AI generates SID chip music and explore the technology behind this innovative tool.
