Music demonstrates that we solve problems - and process information generally - via a process of entropy reduction. The phenomenon key to all of this is the resolution of the paradox of octave equivalence.

Spatial and temporal frequencies (i.e. in the harmonic and rhythmic domains) in a factor-of-two relationship are the simplest possible frequency relationships, and these entropic minima correlate with entropic minima in connectivity and impulse rate.
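One common way to quantify "simplest possible frequency relationship" is the complexity of the reduced ratio: two periodic signals at frequencies in ratio p:q (in lowest terms) realign after q cycles of one and p of the other, so smaller p + q means faster realignment. The sketch below is illustrative only - the `ratio_complexity` measure is one conventional proxy, not a claim from the text - and shows that the octave (2:1) is the minimum among the standard consonant intervals.

```python
from fractions import Fraction

def ratio_complexity(ratio: Fraction) -> int:
    """Numerator + denominator of the reduced ratio: a crude proxy for how
    quickly two periodic signals at that frequency ratio realign."""
    r = Fraction(ratio)
    return r.numerator + r.denominator

# Common consonant intervals as just frequency ratios.
intervals = {
    "octave (2:1)":      Fraction(2, 1),
    "fifth (3:2)":       Fraction(3, 2),
    "fourth (4:3)":      Fraction(4, 3),
    "major third (5:4)": Fraction(5, 4),
}

# Rank intervals from simplest to most complex ratio.
for name, r in sorted(intervals.items(), key=lambda kv: ratio_complexity(kv[1])):
    print(name, ratio_complexity(r))
```

The octave scores 3 (2 + 1), the lowest value any non-unison ratio can take, which is one way of making precise the sense in which factor-of-two relationships sit at an entropic minimum.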

Thus all the information we model and process internally is constituted as spatiotemporal modulation of factor-of-two symmetries, born of the inherent thermodynamics of multicellular information processing.

Thus all complex animals must be subject to the equivalence paradox - aliens included. They will therefore derive tonal systems by dividing the octave, likely aligned to the harmonic series, and will share a similar affinity for rhythm.
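A concrete human instance of "dividing the octave, aligned to the harmonic series" is 12-tone equal temperament, where each step is a factor of 2^(1/12) and several steps land close to small harmonic-series ratios. The sketch below (my own illustration, not from the text) measures that alignment in cents, where 1200 cents span one octave:

```python
import math

def equal_tempered_cents(steps: int, divisions: int = 12) -> float:
    """Cents above the root for `steps` of an equal division of the octave."""
    return 1200.0 * steps / divisions

def just_cents(ratio: float) -> float:
    """Cents corresponding to a just (harmonic-series) frequency ratio."""
    return 1200.0 * math.log2(ratio)

# 12-TET perfect fifth (7 steps) vs the harmonic 3:2 ratio.
fifth_error = equal_tempered_cents(7) - just_cents(3 / 2)   # ~ -1.96 cents
# 12-TET major third (4 steps) vs the harmonic 5:4 ratio.
third_error = equal_tempered_cents(4) - just_cents(5 / 4)   # ~ +13.69 cents
```

The equal-tempered fifth deviates from the harmonic 3:2 by under two cents, well below typical pitch-discrimination thresholds, which is why an octave-dividing system can still track the harmonic series closely.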

Obviously, this principle applies to all higher processing, including language.

There always seem to be unwarranted assumptions about what music actually is when researchers study it, so the results are an analysis of their definition of music and not of music itself.

For instance, the assumption that music must have a beat, a melody, a rhythm, or even that it must be 'pleasant to the ear' would require us to rename half of what was previously called music as something else.

Firstly, film, documentary and dramatic background music often consists of a single sustained chord (no melody, no beat, no rhythm) and can be painful to the ear, used deliberately to conjure discord, pain and other negative emotions and feelings.

People who hate a particular form of music - say, a classical fan listening to punk or grunge, or vice versa - can still identify the offensive-to-their-ear sound as music.

Yet researchers tend to treat pleasant, melodic tunes enjoyed by the listener as the default form of music - wrong!