My off hours hobby is music production.
Here are similarities I see between coding and music production so far:
Code compiles into binary; MIDI is rendered into WAV files.
When producing music you need to take many breaks to get your ears “out of the mix” and clear your head. The same goes for coding when you face tough creative problems or complicated code.
Software culture is moving in the direction of automation and factories, with architectures that support changing things on the fly. Music production has two opposing factions: (a) newer pop songs are made in an almost industrial fashion (try many tracks in a day, find one that works, then audition lyrics and vocals for it from many contenders); and (b) the singer-songwriter approach, which is all about manually crafting a song and can take a lot of time (compare that to a hand-crafted, optimized piece of code that needs to run at a very low level, like a DSP unit or real-time calculations).
Pair programming -> most new music these days is not made alone: there are lyric-writing teams, melody-writing teams, and production teams.
In music, new digital tooling is taking the place of older-generation analog tooling. Instead of filling rooms with huge hardware, people use digital audio workstations with software plugins that emulate almost every piece of hardware out there. Coding is fully digital today, but it used to be very hardware-oriented. Software testing is still largely manual, but automation is starting to take its place.
Long-term support: software can keep changing after you’ve released it (new features added, etc.). Music is usually delivered in multiple versions: radio mix, PG-safe mix, club mix, and so on. Music also gets remixed, other bands record covers, the original band remakes it in shows, albums get digitally remastered… Not exactly the same thing, but in music all the “stems” (the individual parts that make up the song) are always saved and can be used to recreate different variations of the song.
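The stems idea can be sketched in a few lines of Python. Everything here is made up for illustration (the stem names, the tiny sample values, the notion that a “version” is just a different subset of stems summed together):

```python
# Hypothetical stems: name -> audio samples (all the same length).
# Real stems would be full-length audio tracks; these are toy values.
stems = {
    "vocals": [0.2, 0.1, -0.1],
    "drums":  [0.5, -0.5, 0.5],
    "bass":   [0.3, 0.3, -0.3],
    "synths": [0.1, -0.1, 0.1],
}

def mixdown(stem_names):
    """Sum the chosen stems sample by sample to 'render' one version of the song."""
    chosen = [stems[name] for name in stem_names]
    return [sum(samples) for samples in zip(*chosen)]

# Different "releases" are just different combinations of the same saved parts.
radio_mix = mixdown(["vocals", "drums", "bass"])
club_mix = mixdown(["drums", "bass", "synths"])
```

Because the stems are kept, any of these variations can be regenerated later without touching the others.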
Code has classes and functions; digital music production also has building blocks: sound designers take waveforms and combine, munge, and process them to create new types of sounds that are then reused by other musicians. If a waveform is the input, an LFO is a kind of function that operates on that input. Those are just two types of the “atoms” of synthesized music.
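A toy sketch of that analogy in Python, with made-up parameter values: the waveform is the data, and the LFO is a function applied to it (here a simple amplitude wobble, i.e. tremolo):

```python
import math

SAMPLE_RATE = 44100  # samples per second, the usual CD rate

def sine_wave(freq_hz, seconds):
    """The 'input': a plain sine waveform as a list of samples."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def lfo(signal, rate_hz, depth):
    """The 'function': a low-frequency oscillator that modulates amplitude."""
    out = []
    for i, sample in enumerate(signal):
        t = i / SAMPLE_RATE
        # mod sweeps between (1 - depth) and 1.0 at the LFO rate
        mod = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t))
        out.append(sample * mod)
    return out

tone = sine_wave(440.0, 1.0)                 # one second of A440
wobble = lfo(tone, rate_hz=5.0, depth=0.8)   # a 5 Hz tremolo on top of it
```

Stack a few more of these "functions" (filters, envelopes, delays) and you get the reusable sound-design building blocks the paragraph above describes.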
When mixing, it is encouraged to take many breaks (20 minutes mixing, 15 minutes break) so that your ears don’t get too used to the mix. Programming is also a demanding activity that benefits from many small breaks.
TDD: In music you might have a “reference” mix - a song you like or a sound you want to match - that tells you how close you are to getting the same results (loudness, frequencies, etc.). TDD is somewhat the same in that you set yourself a target and see whether you measure up to it or not. Music is not automated for this, though, and it is not any kind of regression test.
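As a rough sketch of what an automated “reference check” could look like, here is a tiny assumed helper where RMS level stands in (crudely) for loudness; the function names and the 1 dB tolerance are hypothetical:

```python
import math

def rms(samples):
    """Root-mean-square level: a rough stand-in for perceived loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def matches_reference(mix, reference, tolerance_db=1.0):
    """A 'test' in the TDD sense: is the mix's level within tolerance_db of the reference?"""
    diff_db = abs(20 * math.log10(rms(mix) / rms(reference)))
    return diff_db <= tolerance_db
```

A real reference check would also compare frequency balance and dynamics, but the shape is the same: a measurable target and a pass/fail answer.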
Music pros do as much integration testing as possible: they make their friends, DJs, their mom, and their cat hear the mix all the time, because they know they are not objective about how it translates in the real world.
There’s more, but my head is drawing a blank…