NHacker Next
Why does an A note sound different across instruments? (omarshehata.me)
munificent 1151 days ago [-]
This is a really interesting article because the author clearly figured out some stuff but also hasn't filled in all the missing pieces or learned the terms yet.

They are absolutely correct that the thing that makes different instruments playing the same pitch sound different is additional higher-frequency components.

In particular, frequencies that are integer multiples of the lowest fundamental frequency are called harmonics. The set of harmonics and their relative amplitudes determines an instrument's timbre (usually pronounced "tamber" in English), or its characteristic sound.

There are also inharmonics—frequencies that aren't multiples of the fundamental. Those tend to die out quickly because they don't form standing waves in the resonating body. These transient sounds form an important part of the very beginning of the sound. Some instruments, like bells, have more or longer-lasting inharmonics.
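For anyone who wants to hear this, a minimal additive-synthesis sketch in Python — the amplitude lists below are made-up stand-ins for real instrument spectra, not measurements:

```python
import math

def additive(t, fundamental, amplitudes):
    """Sum of sine partials at integer multiples of `fundamental`,
    weighted by `amplitudes` (index 0 = fundamental)."""
    return sum(a * math.sin(2 * math.pi * fundamental * (k + 1) * t)
               for k, a in enumerate(amplitudes))

# Two invented harmonic recipes for the same 440 Hz pitch:
# same fundamental, different relative amplitudes -> different timbre.
mellow = [1.0, 0.3, 0.1]             # fundamental-heavy
bright = [1.0, 0.9, 0.7, 0.5, 0.4]   # strong upper harmonics

sr = 8000  # samples per second
wave_mellow = [additive(n / sr, 440.0, mellow) for n in range(256)]
wave_bright = [additive(n / sr, 440.0, bright) for n in range(256)]
# Same pitch, audibly different waveforms:
print(wave_mellow != wave_bright)  # → True
```

Written out to a sound device, both waveforms read as an A, but with different character.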

enriquto 1150 days ago [-]
> There are also inharmonics—frequencies that aren't multiples of the fundamental. Those tend to die out quickly because they don't form standing waves in the resonating body.

This is only true for one-dimensional vibrations on strings and the air inside a long tube. Two- and three-dimensional bodies have overtones with arbitrary ratios to the fundamental, depending on the shape of the object. Bells need to be carefully tuned to have a harmonic spectrum, but this is an artificial construction based on Western musical tastes (which want octaves to not be dissonant). Other percussion instruments (e.g., the Indonesian gamelan) are deliberately tuned to a non-harmonic overtone sequence adapted to the local musical tastes. You can certainly have inharmonic overtones that don't die out quickly! And they can form standing waves in the resonating body, just like the fundamental.

hashkb 1150 days ago [-]
This helps answer the question I just asked under another comment. Thank you!
munificent 1150 days ago [-]
Ah, great comment, thanks! I'm still learning too.
olau 1150 days ago [-]
I would like to add to this that real instruments are much more complicated in their sound. If you take a look at the frequency histogram of one of those, you'll realize that what you might call a theory of pure frequencies, while perhaps adequate for explaining melody (several notes put together), is nowhere near enough for explaining sound. Just like salty/fatty/sweet/sour is nowhere near enough for explaining taste.

This is also why it is so incredibly difficult to make a realistic sounding synthesizer.

If you want to play with this, search for a sample pack and try running a couple of the sounds from that through sox to get a histogram.
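If sox isn't handy, the same experiment works with a dependency-free sketch — a naive DFT over a toy three-partial signal (the bin numbers here are arbitrary):

```python
import cmath
import math

def spectrum(samples):
    """Naive DFT: magnitude of each frequency bin (first half only)."""
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(samples))) / n
            for k in range(n // 2)]

# A toy "instrument": partials at bins 4, 8 and 12 with falling amplitude.
n = 64
signal = [math.sin(2 * math.pi * 4 * t / n)
          + 0.5 * math.sin(2 * math.pi * 8 * t / n)
          + 0.25 * math.sin(2 * math.pi * 12 * t / n)
          for t in range(n)]

mags = spectrum(signal)
peaks = sorted(range(len(mags)), key=mags.__getitem__, reverse=True)[:3]
print(sorted(peaks))  # → [4, 8, 12]
```

Real instrument samples show the same structure, just with far messier peaks.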

alanbernstein 1150 days ago [-]
I'm not saying this is wrong, but the Nyquist-Shannon sampling theorem states that a "theory of pure frequencies" is sufficient to reconstruct a band-limited signal from samples. What's the missing element?
yoz-y 1150 days ago [-]
I'd hazard a guess that the problem is not that you can't perfectly reproduce a sound of an instrument being played. However, when you simulate it using samples you reduce the sound to a sum of independent recordings, whereas a real instrument will behave differently when you play multiple notes because of the material.
andybak 1150 days ago [-]
Yes. Plus a huge amount of other stuff.

A naive sampler misses:

1. (as you say) How playing multiple notes at the same time changes the way the instrument responds

2. How playing at different intensities changes how the instrument responds.

3. The many subtle ways your physical interaction with the instrument change the sound

4. The way playing notes in succession at different rates can alter the sound

5. The physical space or choice of amplification can affect the instrument - even an acoustic guitar can "feed back" on itself to some degree

6. A bunch of other things I haven't thought of.

Sophisticated samplers (and sophisticated sample libraries) can simulate some of the above. But physical modelling synths are probably a better way forward.

alanbernstein 1150 days ago [-]
What is the difference between "a sum of independent recordings" and "reconstructing from samples"?
TheOtherHobbes 1150 days ago [-]
More than that, the sound is defined by how the overtones change over time, and for acoustic instruments that's defined by various resonant modes and how they're excited.

This article makes it sound as if timbre is static. It isn't. It's the changes that make instruments recognisable.

A static slice of a violin timbre doesn't sound much like a violin.

It's also wrong about having to use a 9ms slice. FFTs apply a windowing function which fades a slice to zero at the edges. Otherwise you get discontinuities which introduce spurious high frequency overtones which don't really exist.
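The windowing point is easy to demonstrate; a rough sketch comparing leakage into a distant bin with and without a Hann window (frame size and bin choice are arbitrary):

```python
import cmath
import math

def dft_bin(samples, k):
    """Magnitude of DFT bin k, normalized by the frame length."""
    n = len(samples)
    return abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                   for t, s in enumerate(samples))) / n

n = 64
# 4.5 cycles per frame: the tone falls BETWEEN bins, so truncating the
# slice abruptly (a rectangular window) leaks energy into distant bins.
tone = [math.sin(2 * math.pi * 4.5 * t / n) for t in range(n)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / (n - 1)) for t in range(n)]
windowed = [s * w for s, w in zip(tone, hann)]

leak_rect = dft_bin(tone, 20)      # a bin far from the true frequency
leak_hann = dft_bin(windowed, 20)
print(leak_hann < leak_rect)  # → True
```

The fade-to-zero at the slice edges is exactly what removes the spurious high-frequency content.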

And so on. These are all things that some slightly deeper background reading would have revealed.

titzer 1151 days ago [-]
I spent hours playing with Friture, which is a continuously-running audio spectrum analyzer. It's really cool to see the characteristics of a musical instrument live and connect the visual cortex to the auditory.
temporallobe 1150 days ago [-]
I play several instruments and this is a correct analysis, but I would also add that resonance plays a huge part in the emphasis of certain frequency ranges. An acoustic guitar resonates through the body to emphasize and amplify the vibrations of the guitar strings, and in fact the shape and composition of the body will emphasize different ranges in completely different ways. The strings also cause sympathetic vibrations in each other, further adding to the resonance and contributing to the tonal quality (timbre) of even a single note.
hashkb 1151 days ago [-]
Can you clarify harmonic vs overtone?
SeanLuke 1151 days ago [-]
All sound waves can be defined as the sum of a (possibly infinite) set of different sine waves. Each such sine wave naturally has a frequency (a pitch), an amplitude, and a phase. If you break a wave into its constituent sine waves like this, each sine wave is known as a PARTIAL.

Sound waves that we perceive as tonal -- as musical -- commonly consist of partials which follow a certain pattern. Namely, there is one lowest partial, and most or all of the higher partials have frequencies which are INTEGER MULTIPLES of this lowest partial's frequency. For example, if the fundamental is at 200Hz, perhaps the next lowest partial is at 400Hz, and the next one might be at 600 or 800Hz, and so on.

When partials are organized like this, they are known as HARMONICS. The lowest such partial -- the lowest harmonic -- is called the FUNDAMENTAL, and the remaining (higher) harmonics are known as the OVERTONES. It is common, but not always the case, that the fundamental (1) defines the pitch of the sound and (2) is the loudest harmonic. Overtones instead tend to add color to a sound.
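In code form, assuming that numbering convention (the frequencies are purely illustrative):

```python
# Under this convention, harmonic 1 IS the fundamental, and overtone n
# is harmonic n+1.
fundamental = 200.0
harmonics = {n: n * fundamental for n in range(1, 6)}          # harmonics 1..5
overtones = {n - 1: f for n, f in harmonics.items() if n > 1}  # overtones 1..4
print(harmonics[1], overtones[1], overtones[2])  # → 200.0 400.0 600.0
```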

hashkb 1150 days ago [-]
So all overtones are harmonics. All harmonics except the fundamental are overtones?

Edit: I never realized the fundamental was a harmonic. I thought all harmonics had to be above the fundamental. I thought the overtones were the multiples (e.g. higher octaves) and harmonics were other intervals produced by the instrument (organs being the most extreme example and maybe oboe being the least)

schoen 1150 days ago [-]
Might be a terminological ambiguity -- like in mathematics you can distinguish "subsets", "proper subsets", and "nontrivial" (or "nonempty") subsets. Or "divisors" and "nontrivial divisors" and "proper divisors".

Even though the terminology can be defined in order to make these distinctions clear, there's an arbitrariness in terms of which definition you prefer (in the past, some mathematicians treated 1 as prime, which makes the definition of a prime simpler, but makes many theorems about primes more complicated to express). And there's a likelihood that even experts will sometimes use the simple term informally when they technically mean a more specific thing (like occasionally saying "divisors" instead of "proper divisors", or something).

I imagine that acoustics experts have a definition available that definitively states whether the fundamental "is" a harmonic, but in certain contexts it intuitively makes sense either to include or exclude it, regardless of that.

A similar case might be "animals"; taxonomically humans are animals (and apes), which is very important sometimes and confusing other times.

hashkb 1150 days ago [-]
The fundamental being in or out of the set is less interesting to me than the (useful, to me in my day to day) distinction between overtones and harmonics. Technical misunderstandings really mess up rehearsals so I try to be as accurate as possible with the objective stuff so I can use it to communicate the subjective stuff.
SeanLuke 1150 days ago [-]
The fundamental is harmonic #1. This is the case for essentially all additive synthesizers for example.

Overtones are any partials other than the fundamental. While overtones CAN be non-harmonic (notably in bells), I think it's fair to say that most of them, or at least the most important ones, are usually harmonics. This is because the terms "fundamental" and "overtone" are historically music terms, and are generally applied to sounds we perceive as tonal or musical: and such sounds are largely composed of harmonics.

enriquto 1150 days ago [-]
> So all overtones are harmonics.

Not at all! It's the other way round. Harmonics are those overtones that are integer multiples of the fundamental. You can have other, non-harmonic, overtones.

hashkb 1150 days ago [-]
Just to make sure I've got it: harmonics will always sound like (be?) octaves relative to the fundamental. But all the less-dominant frequencies are overtones, even if I somehow get a tritone or something gross sounding.

Now what happens when I play a chord and get overtones out of the interaction between two strings or a choir? What's happening there?

munificent 1150 days ago [-]
> harmonics will always sound like (be?) octaves relative to the fundamental.

Since you mention "octave" here I want to point out that this is a common misconception. Harmonics are not just octaves of the fundamental. The octave scale is logarithmic but harmonics are linear and include all integer multiples.

If your fundamental is 100, the octaves are 200, 400, 800, 1600, .... But the harmonics are 200, 300, 400, 500, 600, ... There are many extra harmonics that aren't octaves.

This is important for many reasons, but a fun one is that you can use this in sound design by relying on a clever thing our brains can do. We are so good at doing frequency analysis in our heads that we can figure out what fundamental must be present even when it isn't. If you play sine waves at 200, 300, 400, 500, 600, etc. your brain can figure out that those would all be multiples of a 100-Hz fundamental, even though that fundamental isn't present [1].

This lets you do a neat trick where you hi-pass a sound to remove some of the lowest frequencies in order to make room in the mix for other bass sounds. Even though the sound loses its fundamental, listeners will still hear it as "functionally" having a bass register. (This is also why when you listen to music on a crappy tiny speaker, you still hear the bass as bass even though it's actually quite tinny and high-frequency.)

[1]: https://en.wikipedia.org/wiki/Missing_fundamental
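A quick sketch of the missing-fundamental effect, using made-up frequencies:

```python
import math

sr = 8000  # samples per second
# Harmonics 2-6 of a 100 Hz fundamental; the 100 Hz partial itself is
# absent, yet the summed waveform still repeats every 1/100 of a second,
# which is the periodicity the ear latches onto.
def sample(t):
    return sum(math.sin(2 * math.pi * f * t)
               for f in (200, 300, 400, 500, 600))

samples = [sample(n / sr) for n in range(800)]  # 100 ms of audio
period = sr // 100  # 80 samples = the "missing" 100 Hz period
repeats = all(abs(samples[n] - samples[n + period]) < 1e-9
              for n in range(len(samples) - period))
print(repeats)  # → True
```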

beardyw 1150 days ago [-]
Not as I understand it. A harmonic is an integer multiple of the original frequency. An octave is a power of two. So the second and fourth harmonics are octaves. The third is not, though it is still on a normal Western musical scale. The fifth is not even on the musical scale. The 12-note "well tempered" scale was and remains a fix to try to put some kind of order into all of this.
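That gap is easy to quantify; a sketch comparing the 3rd harmonic to the nearest 12-tone equal-tempered pitch (19 semitones up), with an arbitrary fundamental:

```python
import math

# The 3rd harmonic sits an octave plus a fifth above the fundamental.
fundamental = 100.0
third_harmonic = 3 * fundamental
tempered = fundamental * 2 ** (19 / 12)  # 19 equal-tempered semitones up
cents_off = 1200 * math.log2(third_harmonic / tempered)
print(round(cents_off, 2))  # → 1.96
```

About 2 cents sharp — the well-known compromise between the harmonic series and equal temperament.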
enriquto 1150 days ago [-]
> Now what happens when I play a chord and get overtones out of the interaction between two strings or a choir? What's happening there?

Nothing. Sound superposition is linear. There's no interference between different frequencies. By playing several strings together you obtain the sum of the sounds played by each of them separately. No new frequencies can appear.

ghusbands 1150 days ago [-]
Overly simplistic. You can get beats when you combine two nearby frequencies. The beats have a frequency and are audible. Similarly, sounds can be reinforced or hidden in the interaction between instruments; it might be linear, but that does not say anything about what you actually hear.
tripa 1150 days ago [-]
Overly confrontational. You can and do get beats when combining nearby frequencies, and it's not at odds with sound superposition being linear.

The beats have a frequency and are audible, but their frequency is not a pitch.
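A sketch of that distinction — the 2 Hz beat below is an amplitude envelope, not a new partial (frequencies chosen arbitrarily):

```python
import math

sr = 8000  # samples per second
# 440 Hz + 442 Hz: algebraically a 441 Hz tone whose amplitude envelope
# swells and fades at the 2 Hz difference frequency -- the audible beat.
samples = [math.sin(2 * math.pi * 440 * n / sr)
           + math.sin(2 * math.pi * 442 * n / sr)
           for n in range(sr)]  # one second of audio

chunk = sr // 100  # peak level over 10 ms windows = crude envelope
envelope = [max(abs(s) for s in samples[i:i + chunk])
            for i in range(0, sr, chunk)]
has_swells = any(e > 1.5 for e in envelope)
has_fades = any(e < 0.5 for e in envelope)
print(has_swells and has_fades)  # → True
```

The superposition stays linear — no spectral peak appears at 2 Hz — yet the loudness fluctuation is plainly audible.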

djeiasbsbo 1150 days ago [-]
Yes, you got it. What's important is that a wave has a frequency and also a shape (e.g. sine, square, etc.). Even if the shape is different and the frequency is the same, the "pitch" is the same. The shape is the "timbre", or what makes the instruments sound different.

You can deconstruct any given sound wave into its partial sine waves. This is typically done using a Fourier Transform algorithm (FT).

Of course, we can also do it the other way around, create a signal out of many sine waves. In practice, this is called additive synthesis.

Instruments not only sound different because of the timbre. Another thing to look at is the loudness over time, e.g. when plucking a string. We usually do this with an ADSR representation (Attack, Decay, Sustain, Release).

Knowing these characteristics is enough to recreate an instrument with a synthesizer. Of course, it gets more complex when there are inharmonics and when they are irregular (different depending on each tone). That's why synthesizing instruments realistically is a pretty time-consuming science.
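A minimal piecewise-linear ADSR sketch; every number here is invented for illustration:

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.6, release=0.2, gate=0.5):
    """Piecewise-linear ADSR envelope level at time t (seconds).
    `gate` is the moment the note is released."""
    if t < attack:                      # Attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:              # Decay: ramp 1 -> sustain level
        return 1.0 + (sustain - 1.0) * (t - attack) / decay
    if t < gate:                        # Sustain: hold while key is down
        return sustain
    if t < gate + release:              # Release: ramp sustain -> 0
        return sustain * (1.0 - (t - gate) / release)
    return 0.0

# Mid-attack, mid-sustain, and after release:
print(adsr(0.005), adsr(0.3), adsr(0.8))  # → 0.5 0.6 0.0
```

Multiplying a waveform by this envelope is the usual way a synth shapes loudness over time.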

dwd 1150 days ago [-]
...and why guys like Martin Galway were absolute genius in the music they could get out of the C64 SID.
sova 1151 days ago [-]
Spectrograms of each note will also make it strikingly clear that there is a dominant frequency invoked and overtones (harmonics) also being invoked that give the sound its full sound-profile. If sound is atmospheric texture, the overtones are irreplaceable grooves in the ether.

The simple sine wave is exactly one dominant frequency in a spectrogram, a line. Instruments such as a trumpet will have upwards of 12 overtones, parallel lines, lessening in strength.

One interesting idea that came to me last night was trying to reproduce the physical 3D model of an instrument based on its spectrographic fingerprint. With enough samples, this ought to be possible, and with a 3D printer one might even be able to create interesting physical instantiations of instruments based on spectrographic fingerprints. One could even create never-before-seen instruments based on a generated spectrogram, in an interesting radar-to-ocean operation (as opposed to ocean-to-radar, how radar normally works). Maybe topography-from-radar is a clearer way to state the same.

Generating audio from spectrograms is an open problem and I would love to see more open-source work in this domain.

CogitoCogito 1151 days ago [-]
> One interesting idea that came to me last night was trying to reproduce the physical 3D model of an instrument based on its spectrographic fingerprint.

In case you haven't already read it, this might be of interest to you:

Mark Kac: "Can One Hear the Shape of a Drum?"

https://en.wikipedia.org/wiki/Hearing_the_shape_of_a_drum

https://www.maa.org/sites/default/files/pdf/upload_library/2...

junon 1151 days ago [-]
Would be interested to see this applied to drums that have intra-beat tuning, such as the talking drum or the idakka.
ttt0 1151 days ago [-]
> Spectograms of each note will also make it strikingly clear that there is a dominant frequency invoked and overtones (harmonics)

It's not even about which frequency is dominant. I'm not an expert on psychoacoustics from the "sciency" side, but basically our brains can figure out the fundamental note based on the harmonics alone, even if we don't hear the fundamental frequency itself. De-emphasizing the fundamental frequency is sometimes used in synthesis just to create an interesting sound, but most notably this phenomenon is heavily relied on in modern heavy bass electronic music and down-tuned metal. A really common trick is to distort the bass track, so it can be still heard even if speakers can't reproduce that frequency, as distortion generates more harmonics. I don't mean distortion in literal sense, but in audio effect sense, so effects like saturation, overdrive, fuzz, clipping, etc.

xavriley 1150 days ago [-]
Being able to “hear” a fundamental when it’s not there in a spectrogram but the upper harmonics are is known as the missing fundamental and has been known and studied for over 100 years. It seems likely that the auditory system is not only doing a spectral breakdown of sounds (like an FFT) but also something like an autocorrelation of the signal. That autocorrelation will recover the fundamental as long as enough harmonics are present
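A toy version of that autocorrelation idea (made-up signal, naive O(n²) search):

```python
import math

sr = 8000  # samples per second
# Harmonics of 100 Hz with the 100 Hz partial itself removed.
samples = [sum(math.sin(2 * math.pi * f * n / sr) for f in (200, 300, 400))
           for n in range(800)]

def autocorr(lag):
    """How well the signal matches a copy of itself shifted by `lag`."""
    return sum(a * b for a, b in zip(samples, samples[lag:]))

# Search lags corresponding to 150 Hz down to about 60 Hz; the lag that
# best matches the signal against itself recovers the absent 100 Hz.
best = max(range(sr // 150, sr // 60), key=autocorr)
print(round(sr / best))  # → 100
```

The spectral view sees only 200/300/400 Hz peaks, but the time-domain self-similarity still lives at the 100 Hz period.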
Shorel 1150 days ago [-]
> but basically our brains can figure out the fundamental note based on the harmonics alone, even if we don't hear the fundamental frequency itself

I guess this is valid for trained musicians. For me, I can't possibly identify the same note in different instruments, as all these sound very different to me.

The only thing I seem to be able to do is to identify a song based on the very first second or two. Because they are different sounds, not just notes.

hippira 1151 days ago [-]
Very interesting, since the artificial “ghost” note is at a lower frequency, would they essentially become an “undertone” ?
ttt0 1151 days ago [-]
Undertone is something different, but I never explored that concept or what it could be useful for, so I can't really explain it.

Not an undertone; it's the fundamental frequency. Harmonics/overtones follow a very specific pattern:

https://en.wikipedia.org/wiki/Harmonic_series_(music)

...so in most cases there isn't really any ambiguity on what the actual fundamental frequency is. The best way to demonstrate it is probably with a high-pass filter (aka. low-cut filter). High-pass filter allows only higher frequencies to pass through, or in other words it cuts out lower frequencies. On the example below you can hear that as the lower frequencies are getting removed from the signal, it's still the same note:

https://www.youtube.com/watch?v=50lRE2Bgag0

I wish the filter sweeps were slower, but that's the best example I could find

TylerE 1151 days ago [-]
No, not really.

The fundamental is always the lowest frequency. All overtones are some (almost always integer, or _nearly_ integer) multiple of that.

yobert 1151 days ago [-]
I read somewhere about people doing this with samples of thunder, to recreate the shape of the lightning bolt. So cool!
leetcrew 1151 days ago [-]
ignorant/lazy question:

I'm a guitar player who took a few months of trumpet lessons when I was a kid. I recall that you can produce several different notes with the same fingering on the trumpet depending on the shape of your mouth. is this similar to the natural harmonics you can produce with a guitar by covering (but not fretting) the strings at certain nodes?

yellowapple 1151 days ago [-]
Brass player here with a few months' worth of guitar ability. You're correct about the harmonics being at play, but the mechanism's a bit different.

To elaborate/review, your lips are the guitar strings in this equation, and naturally will behave much like a string on a chamber instrument or fretless guitar (as is obvious from "free buzzing" and mouthpiece buzzing). The length of the tubing then dictates which frequencies will resonate with your lips, meaning your lips will want to "settle" into something in harmonic resonance with that length of tubing.

The key difference here is that your lips themselves are basically "fretted" to the harmonics (though with practice you can bend that quite a bit), since the corners of your lips are moving (slightly; it's a short string!) in and out to go higher or lower (respectively), and since they'll want to vibrate at a harmonic (the vibrating metal and air impart a force on your lips for the same reason your vibrating lips impart a force on the metal and air, so it takes much more effort to buzz against that harmonic than it does to just ride it and keep that feedback loop going). Further, the mouthpiece itself is basically a capo in this context, so your lips are always "fretted" to the mouthpiece's constraints (this is a big part of the reason - if not the entirety of it - why trumpets have tiny mouthpieces and tubas have giant mouthpieces).

I suspect that if you were to replace the body of an acoustic guitar with a really long pipe, you'd see/hear similar dynamics at play: a string tuned or fretted to one of that pipe's harmonics will keep vibrating for a good while, and a string tuned/fretted to something else would stop vibrating sooner as the destructive interference sucks energy out of it. And the former would likely be much more audible than the latter.

marai2 1151 days ago [-]
I know HN frowns upon such meta comments, but thank you for this comment! Wonderfully clear exposition - I followed along completely with your comment and now have some level of understanding of harmonics and brass instruments where I had none before. This was a good day, I learned something today that I have long been curious about!
chrisweekly 1151 days ago [-]
Your disclaimer "I know HN frowns upon such meta comments, but " was the only thing wrong w your comment. (Which I upvoted). As a guitar player and former trumpet player, I dug this tangent too.
Cogito 1151 days ago [-]
Just to reiterate the other response to your comment, meta comments are often welcomed, especially when they are expressing thanks or gratitude.

Content-less comments that derail the conversation, comments that are needlessly inflammatory - these are the kinds of meta comments that are not wanted.

jalgos_eminator 1151 days ago [-]
I don't play trumpet, but I think that is basically what is going on. You can do pinch harmonics on guitar, which silence certain harmonics or even the fundamental while retaining others. It sounds like changing mouth shape does a similar thing on trumpet.

edit: here's a fantastic video on pinch harmonics: https://www.youtube.com/watch?v=eTWxCdoyol0

sova 1151 days ago [-]
Yes, that's exactly right. "Covering but not fretting" a string on the guitar will dampen the lowest harmonic because the full length of the string is unavailable for vibration [1]; frequency is inversely proportional to wavelength (e.g. twice the frequency is half the wavelength). You can pluck either side of the string when covering it to produce the same harmonic (neck side or body side).

On a trumpet, the embouchure will affect the frequency of the vibration of the air compressed in the tube, and simply drop out lower harmonics, as can be confirmed via spectrogram.

The same thing happens on a guitar, leetcrew I encourage you to try the spectrogram linked at that site with your guitar to note the effect [0].

[0] https://musiclab.chromeexperiments.com/Spectrogram

[1] I am under the impression that the fundamental tone of the string requires the whole string-length to resonate, and if it is clamped, pinched, or otherwise muted all you will hear are resonant harmonics that can exist on smaller string segment lengths.

whiddershins 1151 days ago [-]
We need to compare this description with yellowapple above.

One of these is true but I’m not clear on whether this means actually both of them are true?

conformist 1151 days ago [-]
Yes, to first order, a trumpet is a long tube with a standing wave, which works conceptually like a string (but with different boundary conditions). It's probably the player's imposed frequency hitting multiples of the lowest resonance frequency that leads to different tones.
bqmjjx0kac 1151 days ago [-]
Great video. In case it's not obvious, the different notes are purely a function of pick/thumb placement. The guitarist is not changing frets, but he seems unable to resist throwing in some vibrato :)
tshaddox 1151 days ago [-]
Yeah, on a trumpet you can play multiple notes with the same fingering. The set of notes you can play with a fixed fingering comes from the harmonic series. A bugle is essentially a trumpet with a fixed length of tubing, and famous bugle calls like Reveille and Taps all come from that harmonic series (and can also be played on a trumpet without varying the valves).

My understanding of how this works is that the length of the trumpet's tubing (at any given fingering) permits the air to resonate in standing waves only at one of the frequencies in a harmonic series. The player's lips can vibrate at any frequency on their own, but the big column of air inside the trumpet will essentially lock the column of air into vibrating at the closest frequency in that set.

11thEarlOfMar 1151 days ago [-]
And this is why we can synthesize different instruments electronically. Reproduce the overtone patterns and you hear the same instrument.

Not to mention, two different pianos or two different violins can sound very different.

loganhood 1151 days ago [-]
The sustained overtones are only half the battle. Getting the attack correct (the sound profile of the first ~10 milliseconds) is really important for differentiating instruments. Plucking a guitar string and hammering a piano string have very different attack characteristics. A flute has a distinctively "breathy" attack.

Many synthesizers use a sampled recording of the actual instrument for the attack, then synthesize the sustained portion of the instrument.

whiddershins 1151 days ago [-]
> The sustained overtones are only half the battle.

Actually probably like 10% at most of the battle. My understanding is attack is overwhelmingly dominant in our perception of timbre.

dwd 1151 days ago [-]
This takes me back to when I played around with sound using the C64 MIDI. There were four attributes that you could adjust to try and emulate the timbre of a particular instrument: attack, decay, sustain & release.

Was a lot of fun when I was young.

m463 1151 days ago [-]
Are you saying you could say, do a spectral analysis of someone singing or talking, then make a physical instrument that sounds similar?
neltnerb 1151 days ago [-]
Sure, you can do that kind of thing in principle. I think the vocal cord model needs to be constrained which will not be very fair, but I was reading papers over a decade ago about simulating speech with physical models of the voice. I assume that's how they guess what dinosaurs might have sounded like.

Of course, with such a complicated and underconstrained system you might need to basically tell it what a human vocal system roughly looks like and let it calculate parameters based on the model. Maybe not though, neural networks are surprising sometimes.

odyssey7 1151 days ago [-]
This fact was confusing for me back in my school’s chorus. I don’t know if it was confusing to anyone else, but it was to me.

How does a person match the pitch of the piano? I could hear a few different pitches when one note was played (in a confused way, I would zero in on different parts of the sound), any of which might have been the target pitch to be matched.

And was I supposed to make my voice sound more like the piano? Was that part of “matching the note?”

Complicating things was the fact that my own voice had different pitches in it. Which part of my voice was supposed to match the note?

What a time. Now I know I was noticing the fundamental of the piano note at times and overtones at some others. Also, changing the timbre of your voice can mirror the overtones of the piano better, but that isn’t normally the goal of a singer.

whiddershins 1151 days ago [-]
There’s a theory that harmony arose from this.

A percentage of monks heard a different fundamental pitch than their brethren, so they sang one of the harmonics. Leading to polyphonic hymns and then to formal western harmony.

(Shaped by equal temperament along the way)

ani-ani 1151 days ago [-]
This is a fun exploration, though there's a lot of fuzzy usage of terms and missing stuff. E.g., the first 5 paragraphs are apparently trying to define timbre, yet the word appears nowhere. Same with harmonics, etc.

I think one of the most interesting things about pitch is that it's not well defined, it's a psychological phenomenon. If you could extract it from people's brains, you would likely get different values from different people. This is compounded by the fact that harmonics produced by real instruments are not exact ratios of each other, yet they affect the perceived pitch.

halayli 1151 days ago [-]
The title and content don't match.

The way we distinguish between musical instruments and notes is timbre/tone color, which has nothing to do with Fourier transforms per se -- you could use wavelets for that matter. DFT/DTFT are the most common approaches to quantize and convert back to analog, and they can be completely left out of the discussion for such a title.

barnabees 1151 days ago [-]
Surprised to see no mention of fundamental frequencies or harmonics
neltnerb 1151 days ago [-]
My electronic music professor literally defined exactly what this article is trying to describe as "timbre" meaning the overtone sequence (oddly "overtone" and "timbre" are not present in the article?) plus off-harmonic frequencies that are present for any real instrument.

This is pretty well studied, but kudos to the author for trying to explain it again, it's an odd topic. But I suggest looking up "timbre" at least and perhaps updating the article with the terms used by actual musicians to mean exactly this.

Timbre - "the quality of tone distinctive of a particular singing voice or musical instrument"

PeterWhittaker 1151 days ago [-]
Yes, this! Failing to mention timbre in an article on why the same note sounds different on two instruments is missing the point, metaphorically being so focused on a detailed simulation of treestuff that one fails forestry.

(Yeah, OK, that was terribly said, sorry 'bout that.)

For timbre, see https://en.wikipedia.org/wiki/Timbre e.g.

FFTs, etc., give us visualization tools, but they miss the point. Different materials resonate differently across the auditory spectrum, emphasizing or diminishing various harmonics, resulting in complex sound profiles that make each instrument distinctive.

Or, to put it most simply: Different materials react to sound differently, and each instrument's materials and construction are what give it its distinctive timbre.

On an unrelated note, Daniel Levitin, former music producer and director of the Grammys, and current neuroscientist at McGill, was once asked what makes each musical era distinctive: timbre was, in his opinion, the single most important factor (source: one of his books, probably still in a moving box in my basement, otherwise I'd look it up).

computator 1150 days ago [-]
> Daniel Levitin: what makes each musical era distinctive: timbre was, in his opinion, the single most important factor

That's a fascinating idea; I googled and found what might be the story you're looking for:

[quote]

In the best seller “This is your brain on music” by Daniel Levitin, he talks about John R. Pierce (inventor of the travelling wave vacuum tube and the first telecommunications satellite) who, interested to discover rock music, asked him to summarize the genre in a concise list of six songs. Levitin ended up with a list of songs from Little Richards, the Beatles, Jimi Hendrix, Eric Clapton, Prince and the Sex Pistols.

Interestingly, while listening, Pierce was not really interested by the songs themselves, their melodies, their harmonic structures or their rhythm characteristics, but he said he found the “timbres” to be remarkable and described them as being new, unfamiliar, and exciting.

Levitin concludes his story by saying: “The way in which instruments were combined to create a unified whole - bass, drums, electric and acoustic guitars, and voice – that was something he (Pierce) had never heard before. Timbre was what defined rock for Pierce. And it was a revelation for both of us.”

Quoted from "An Overview of the Concept of Timbre and its Use in Contemporary Music and Record Production" by Mathieu Bedwani

PeterWhittaker 1150 days ago [-]
That's the one!
cannam 1150 days ago [-]
> asked what makes each musical era distinctive: timbre

Anyone interested in this might enjoy the paper about the "Scanning the Dial" experiment. (It's about contemporary music rather than historical eras, but the idea is related)

https://www.researchgate.net/publication/248906443_Scanning_...

The punchline is that listeners can typically assign a genre to a recording after hearing a single quarter-of-a-second clip, but the paper is worth reading for its notes on genre and timbre in general.

nathanyo 1151 days ago [-]
https://www.youtube.com/watch?v=Wx_kugSemfY

Andrew Huang's video on the harmonic series does a really cool dive into this

redsparrow 1150 days ago [-]
This is a really excellent video. There is a part where he has recordings of a clarinet and a guitar both playing the same note and slowly applies a filter to cut out the harmonics, and the two distinct sounds converge. There is so much more to see in the video, though, and I highly recommend it.
ngcc_hk 1150 days ago [-]
To be honest, this discussion needs a demo like this and further analysis.
TaupeRanger 1151 days ago [-]
Or literally anything about the physics of sound, which is very well understood and has names for all of the concepts the author is talking about without mentioning any of them.
OmarShehata 1151 days ago [-]
I think just saying "it's timbre" or stating these terms isn't helpful. I've read dozens of articles that try to explain these concepts but still left me with lingering questions.

This was my attempt to show you how you can derive the answer from first principles, just by analyzing the sound, and forming your own hypothesis/getting to these conclusions yourself.

The links at the end do have resources for the physics and theory behind it.

y2bd 1151 days ago [-]
I actually do appreciate this teaching style, as long as you do put notes to the established theory and terminology at the end.

It's like that popular Monad tutorial that has you implementing Functors and Monads in order to solve a problem without ever telling you until the end that what you just created were Functors and Monads.

Starting off with the well-known terminology kind of colors the discussion from the beginning--even folks who don't really know what harmonics or timbre or monads or applicatives are probably have some general impression, and that impression could be wrong in a way that prevents them from learning.

neltnerb 1151 days ago [-]
If you titled the article "Timbre -- why notes sound different across instruments" then the reader would know which wikipedia article to look up or which search terms to use to learn more.

No reason that learning a fairly interesting word's definition can't be part of the lesson...

calebm 1151 days ago [-]
pier25 1151 days ago [-]
Best video I've seen about the harmonic series is an old talk by Leonard Berstein:

https://www.youtube.com/watch?v=9HjEAtJXssc

solids 1151 days ago [-]
Also a very important fact: the strongest overtones are typically the octave and the 5th.
souprock 1151 days ago [-]
There is a fun program that can display this sort of data as you play notes with your ordinary QWERTY keyboard:

https://github.com/kevinacahalan/piano_waterfall

It's portable to Linux and Windows at least. It won't run well in a virtual machine (including a ChromeBook) because it needs a GPU that can scroll the window fast enough.

There are 3 windows. One just shows the selected waveform. The others show an 8192-bucket FFT in red, a 1024-bucket FFT in green, and the active MIDI notes in blue. It's live, scrolling up at 93.75 pixels per second.

The QWERTY row becomes the white keys, and the number row becomes the black keys. F1 through F4 choose the type of sound. Left and right arrows change the octave in use; your speakers probably don't handle the full range very well. The program turns out to be a great speaker test, especially if you change the sound to a sine wave. It's also a great keyboard test; see how many keys you can hold down before your keyboard won't register any more. Individual colors in either window can be toggled with the 2x3 keypad that has Insert, Delete, Home, End, PgUp, PgDn. (the screenshot has green toggled off)

To make a trombone sound, first switch to a type of sound with lots of harmonics, like a sawtooth wave. Pick a low note, then find notes to line up well with the first two harmonics. Switch to the sine wave, and play all three of your chosen notes. For more of a clarinet sound, release the middle of the three that you have selected.
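The trombone/clarinet trick above can be sketched in a few lines of numpy; the frequency and the pure-sine "keys" here are illustrative, not taken from the program:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

def sine(freq):
    return np.sin(2 * np.pi * freq * t)

f0 = 110.0  # a low A, chosen arbitrarily for the example
# Three sine "keys" lined up with the fundamental and its first two
# harmonics, as in the trombone trick described above
brassy = sine(f0) + sine(2 * f0) + sine(3 * f0)
# Releasing the middle note leaves only odd partials,
# closer to a clarinet's hollow sound
hollow = sine(f0) + sine(3 * f0)
```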

linux_is_nice 1151 days ago [-]
You're probably the first person in the wild I've seen that knows that Chromebooks support Linux containers now.
zokier 1151 days ago [-]
I think it's a bit of a shame that the time dependence is left as a footnote. The ADSR envelope and other expressive dynamics also have a huge influence on the perceived sound of different instruments (for me, at least). Electronic musicians have since explored this by mixing and matching dynamics and overtone patterns to create all sorts of interesting novel sounds.
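A minimal piecewise-linear ADSR sketch (envelope parameter values are illustrative): the same 440 Hz sine reads as "plucked" or "bowed" depending only on the envelope applied to it.

```python
import numpy as np

def adsr(n, sr, attack, decay, sustain, release):
    """Piecewise-linear ADSR amplitude envelope over n samples."""
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    s = max(n - a - d - r, 0)  # whatever remains is the sustain phase
    return np.concatenate([
        np.linspace(0, 1, a, endpoint=False),        # attack:  0 -> 1
        np.linspace(1, sustain, d, endpoint=False),  # decay:   1 -> sustain
        np.full(s, sustain),                         # sustain: held level
        np.linspace(sustain, 0, r),                  # release: sustain -> 0
    ])[:n]

sr = 44100
t = np.arange(sr) / sr
carrier = np.sin(2 * np.pi * 440 * t)
# Same waveform, two envelopes: one percussive and decaying,
# one slow-onset and sustained
plucked = carrier * adsr(sr, sr, attack=0.002, decay=0.5, sustain=0.0, release=0.05)
bowed   = carrier * adsr(sr, sr, attack=0.15,  decay=0.1, sustain=0.8, release=0.3)
```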
sharpercoder 1151 days ago [-]
I see grey text on white backgrounds more and more. It is infuriating. Please just use #000; low-contrast text heavily distracts from the content and unnecessarily makes the point literally unclear.
young_unixer 1151 days ago [-]
#000 is very readable, but looks ugly.

The sweet spot is between #222 and #333 in my opinion.

Wolfenstein98k 1150 days ago [-]
All the way into Fourier but no mention of harmonics? This is a little over-academic relative to one of the main factors that influence the texture ("timbre") of sound - namely its harmonic content.

Distortion on a guitar makes an identical phrase sound much richer, and it's not just the wavelength limiting - it's the much louder harmonics relative to the fundamental.

eyelidlessness 1151 days ago [-]
I’m so glad to see harmonics well discussed in the comments, and was so disappointed to see it not mentioned once in the article. Putting aside percussive differences between instruments or playing style (which also have harmonic effects but aren’t always perceived that way), the difference between a “frequency” that sounds different but measures the same is generally because there are many less prominent frequencies behind the dominant one. That’s why some sounds twang and other sounds thunder. That’s why some coated strings sound dull. That’s why loose drum heads sound loose. That’s why a sawtooth or sine wave sounds synthetic: it is, nothing in real life sounds like that.
souprock 1150 days ago [-]
Real life sine wave: ocarina

Real life sawtooth: violin

eyelidlessness 1150 days ago [-]
I can’t speak to the ocarina but a violin??? It’s an instrument infamous for screechy off sounds. Those are caused by unexpected friction on the strings amplifying or even making harmonics dominant. The most prominent sound wave may mimic a sawtooth because of the mechanics of a bow (I don’t know, just trusting you on this), but every bit of loose fiber on the bow and every inconsistency in rosin and every physical variation on the surface of the string can produce harmonics before any sound hits the curves of the body, which produces yet more.
HPsquared 1151 days ago [-]
The inner ear literally applies a Fourier transform to the incoming waveform, each location containing tiny hairs which each respond to a narrow frequency range. The spectrogram directly reflects what is sensed by the inner ear.
ciconia 1150 days ago [-]
Timbre is not only about harmonic content. There's also the envelope of the sound - how it changes over time. Violin pizzicato (plucked) sounds completely different than violin arco (bowed), yet it's the same string being excited into vibration. Pizzicato is percussive and decaying, arco is more smooth and sustaining. Same for piano - try to imagine a piano sound that doesn't decay and doesn't start with a bit of percussive thump, that would sound quite different!
diimdeep 1150 days ago [-]
Here is a video [1] with an example of the physical-modeling method Karplus–Strong string synthesis [2]:

[1] https://www.youtube.com/watch?v=FOpZYlI-F1g

[2] https://en.wikipedia.org/wiki/Karplus–Strong_string_synthesi...
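For reference, the core of Karplus–Strong fits in a few lines; this is a minimal sketch of the algorithm from [2], not the code used in the video:

```python
import numpy as np

def karplus_strong(freq, sr=44100, duration=1.0, seed=0):
    """Minimal Karplus-Strong plucked string: a noise burst circulating
    through a delay line with a two-point averaging (lowpass) filter."""
    n = int(sr * duration)
    delay = int(sr / freq)              # delay-line length sets the pitch
    rng = np.random.default_rng(seed)
    buf = rng.uniform(-1, 1, delay)     # initial noise burst: the "pluck"
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # Averaging adjacent samples damps high frequencies on every pass,
        # which is why the bright attack decays into a mellow tone
        buf[i % delay] = 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

pluck = karplus_strong(220.0)           # roughly an A3 string pluck
```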

jedimastert 1151 days ago [-]
I thought this was going to be a discussion on why different orchestral instruments have different "concert" pitches (i.e. a C on a trumpet is a Bb on a piano and so on), which is an interesting look into the history of European instrument inventors in the 20th century.
tomstoms 1150 days ago [-]
Interestingly, the exact same phenomenon occurs in speech. What is the difference between the sounds /a/ and /o/? It turns out it's timbre: our vocal cavities changing shape changes the timbre, while the vocal cords produce the same fundamental frequency.
tomstoms 1150 days ago [-]
Further, by producing spectrograms you can see the dominant harmonics of vowels (they are called formants), and you can learn to differentiate vowels by comparing spectrograms without hearing the sounds. It's pretty cool.
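A rough sketch of the formant idea: shape a single harmonic series with amplitude bumps at approximate /a/ and /o/ formant frequencies. The formant values are ballpark figures from published vowel charts, and the Gaussian shaping is a crude stand-in for the vocal tract's real filtering.

```python
import numpy as np

sr = 16000
t = np.arange(sr) / sr
f0 = 120.0                              # glottal (vocal-cord) pitch

def vowel(formants, bandwidth=80.0):
    """Crude vowel sketch: harmonics of f0 whose amplitudes are shaped
    by Gaussian bumps at the given formant frequencies."""
    out = np.zeros_like(t)
    for k in range(1, 40):
        f = k * f0
        gain = sum(np.exp(-((f - fm) / bandwidth) ** 2) for fm in formants)
        out += gain * np.sin(2 * np.pi * f * t)
    return out

# Same fundamental, different formants -> different vowels
ah = vowel([730, 1090])                 # roughly /a/
oh = vowel([570, 840])                  # roughly /o/
```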
jancsika 1151 days ago [-]
There's a neat thing about a kind of "relative timbre." E.g., if you listen to a solo piano piece, you make some kind of adjustment to the homogenized timbres and are able to focus in on smaller timbral differences.

There's a music cognition paper about it somewhere.

squabble 1150 days ago [-]
The real beauty happens when more than one note is played at the same time. The overlapping harmonics give rise to harmony. Combine various instruments to get all kinds of interesting sounds. This is part of the art of orchestration.
spoonjim 1151 days ago [-]
I’d love to hear a synthetic instrument that’s halfway between a piano and a violin.
smlss_sftwr 1151 days ago [-]
If I remember correctly, Google open-sourced an ML toolkit to do just that a few years back. I forget the name, but someone spun up an online sandbox with it that lets you experiment with combining different sample sources.

edit: found it here: https://experiments.withgoogle.com/sound-maker

yesenadam 1151 days ago [-]
"The hurdy-gurdy is a stringed instrument that produces sound by a hand-crank-turned, rosined wheel rubbing against the strings. The wheel functions much like a violin bow, and single notes played on the instrument sound similar to those of a violin. Melodies are played on a keyboard"

https://en.wikipedia.org/wiki/Hurdy-gurdy

Andrey Vinogradov playing his https://www.youtube.com/watch?v=wwyznoWJDHI

ksherlock 1151 days ago [-]
Like a viola organista, perhaps?

https://en.wikipedia.org/wiki/Viola_organista

lostgame 1151 days ago [-]
Logic’s Sculpture synth lets you build sounds from modeled materials, such as nylon, wood, and metal, and morph between them to create sounds like this. :)
pier25 1151 days ago [-]
That's called physical modeling.

Another synth that does this is Chromaphone by AAS:

https://www.applied-acoustics.com/chromaphone-3/

In fact it was used by Richard Devine to produce UI sounds for Google.

TheActualWalko 1151 days ago [-]
Here's some code on WavTool for trying out overtone combinations: https://wavtool.com/?code=3
analog31 1151 days ago [-]
In addition to harmonic content, time plays a role too. For instance, harmonics are not the only reason why a snare drum sounds different than an oboe.
marcodiego 1151 days ago [-]
Simple answer: different timbres.
deathanatos 1150 days ago [-]
But not a useful answer to someone asking the question… they're simply going to respond with "What's timbre?" and if your answer is something like "It's what makes an A sound different on different instruments", they're not going to be satisfied.

The question is clearly not, "what's the musical term for why instruments sound different when playing the same pitch?" (timbre) it's, "what is the fundamental reason for that?" — and for most people, they have a mental model that an instrument plays a single frequency, so you have to show them that that model is broken, and then it becomes very clear, very fast what is going on.

quadrangle 1150 days ago [-]
And timbre is itself technically defined as all the aspects of a sound that are not pitch, loudness, or location.

So, it doesn't really explain anything. It's kinda circular. Timbre is itself the word for "sounding different" (without being different in pitch, loudness, or location).

My simple answer: "different frequency spectrum + consistent patterns of change over time in spectrum, loudness, and/or pitch"

E.g. it's not the loudness of trumpet vs. piano, but it is partly the fact that the trumpet doesn't consistently have the piano's loud-to-quiet fade, its timbral pattern over time.

cjbenedikt 1151 days ago [-]
Closi 1151 days ago [-]
Did you read the article? This point is debunked right at the start (i.e. two instruments playing the same frequency still sound different).
dillondoyle 1150 days ago [-]
i think what op might be referring to is what tuning 'A' sounds like. e.g. some orchestras will tune at 415 to play older music

https://www.youtube.com/watch?v=cvb7VlL_d6I

Closi 1150 days ago [-]
Yeah, the point of the article was just that 440 hz (or 415) on one instrument isn’t the same as 440 hz from another instrument in terms of the sound.