
How Should I Interpret This Frequency Analysis?



I used Audacity to sample A3 played on my Concertina Connection Hayden Peacock and I got this frequency analysis graph:

 

[Attached image: Capture.JPG (frequency analysis plot)]

 

The first peak (the fat one on the LHS) is A3, the second peak is A4 and the third peak is E5, the fourth is A5, the fifth is C#6 and so on.

 

I understand that these represent the overtone series and that they are what I should expect to see. What surprises me is the amplitude of the second overtone (E5): it appears to be louder (in the graph) than the played note, which seems wrong.

 

Don.

 

(I started doing this because I suspect that MIDI players are also confused by the frequencies generated by concertina samples. I am pretty sure that MIDI players sometimes pick the second overtone rather than the sampled tone because it has the higher amplitude.)
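To illustrate the kind of naive peak-picking I have in mind (a rough Python/numpy sketch of my own, not the code of any actual MIDI player):

```python
# Naive "loudest FFT bin" pitch guess: if the 2nd overtone is taller than
# the fundamental, this lands on E5 rather than the A3 actually played.
import numpy as np

fs = 44100                      # sample rate, Hz
t = np.arange(fs) / fs          # one second of signal

# Toy A3 tone whose 2nd overtone (E5) is the strongest partial
signal = (0.5 * np.sin(2 * np.pi * 220 * t)     # A3 fundamental
          + 0.4 * np.sin(2 * np.pi * 440 * t)   # A4, 1st overtone
          + 0.7 * np.sin(2 * np.pi * 660 * t))  # E5, 2nd overtone

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

print(freqs[np.argmax(spectrum)])  # ~660 Hz (E5), not 220 Hz (A3)
```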


Don, wouldn't we have to take the combined amplitudes of the root tone and the first harmonic into account, which will - to our human ears - clearly define the tone against the fifth (with the next octave contributing as well, where further harmonics would again distract)?

 

Best wishes - Wolf


Well, part of the issue here is that you selected a relatively low pitch to look at. The preferred length for a given note with a given reed size and thickness gets progressively more problematic as the notes get lower: what's wanted is a significantly longer reed, which will in turn have issues with speed of response. So all of the reed lengths are compromises of various sorts. The first tendency is an increased first harmonic, and the next is an increased second harmonic, as you see here. I've insufficient recollection of the details of the vibration of concertina reeds to provide chapter and verse, but if you want to prove your thesis I'd suggest trying A4 and perhaps some other notes. You may be correct for the given note, but incorrect for the instrument in general.


(I started doing this because I suspect that MIDI players are also confused by the frequencies generated by concertina samples. I am pretty sure that MIDI players sometimes pick the second overtone rather than the sampled tone because it has the higher amplitude.)

This doesn't fit with my understanding of how sample-based synthesisers work. Doesn't the soundfont file contain a table that tells the synthesiser what pitch each sample represents? Then when you ask it to play a note with a particular pitch, it searches the table and picks the nearest sample to that pitch, then plays it back at the appropriate speed to shift it to the desired pitch?
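Something like this is how I picture it working (a minimal Python sketch; the table and function name are made up for illustration, not taken from any real synthesiser):

```python
# MIDI note number -> recorded sample whose root pitch is that note
sample_table = {57: "A3.wav", 69: "A4.wav", 81: "A5.wav"}

def pick_sample(target_note):
    """Choose the nearest recorded sample and the speed to play it at."""
    root = min(sample_table, key=lambda n: abs(n - target_note))
    ratio = 2 ** ((target_note - root) / 12)  # playback speed ratio
    return sample_table[root], ratio

print(pick_sample(64))  # E4 -> ('A4.wav', ~0.75): play the A4 sample slower
```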


Alex, I fully share your understanding.

 

However, Don appears to have started here:

 

FWIW: The sound font file fails with these high notes because the sample (which is C6 - the highest note available on Phil Taylor's baritone EC from which the samples were taken) has two very strong overtones of C7 and, especially, G7. The G7 overtone actually has a higher amplitude than the played C6 note in the recording!

 

This confuses the playback because it is not sure which note in the sample to choose as the root note, especially when it has to interpolate the sound for notes far north of C6. The single C6 sample is used for all of the notes higher than C6, and it looks like it picks G7; it might also pick C7.

 

Having said all this, I will post any future findings and sound font fixes in a separate topic.

You will also find that his observations are not restricted to low reeds.

 

Best wishes - Wolf

Edited by Wolf Molkentin

(Quoting Wolf's reply above.)

 

So when you ask the synthesiser to generate a C7, it's actually playing back the C6 sample at double speed, which effectively halves the sample rate. I could certainly imagine that causing problems with higher partials, particularly if the sample rate was low to start with.
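As a back-of-envelope check (assuming the samples were recorded at 44.1 kHz, which I don't actually know):

```python
fs = 44100                    # assumed recording/output rate, Hz
speed = 2 ** (12 / 12)        # C6 -> C7 is 12 semitones up, i.e. 2x speed

# A partial recorded at f Hz comes out at f * speed Hz, so only content
# recorded below fs / 2 / speed stays under the output Nyquist frequency.
print(fs / 2 / speed)         # 11025.0 Hz of the original recording survives
```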


So when you ask the synthesiser to generate a C7, it's actually playing back the C6 sample at double speed, which effectively halves the sample rate. I could certainly imagine that causing problems with higher partials, particularly if the sample rate was low to start with.

 

I'm interpreting this (and other comments in the thread) as meaning that, in an ideal world, if you are building a sound font which uses samples from an actual instrument, you should sample every note. That way, there is no interpolation/extrapolation needed to create non-sampled notes, and hence no source of the errors/problems Don is talking about.

I've no experience creating these sound fonts, and appreciate that it may not be practical to sample every note, but is that correct in principle? (I'm trying to remember this stuff from 40 years ago - and not doing very well...)

 

Ta.

Edited by lachenal74693

"How should I interpret this frequency analysis?" With caution, I would suggest. Surely a decent frequency analyser would show clear lines at 220, 440, 660 Hz etc. not the blurred mess shown here?

 

"... wouldn't we have to take the combined amplitudes of the root tone and the first harmonic into account which will - to our human ears - clearly define the tone against the fifth ...?" The human ear is more subtle than this. I think it's almost the reverse - our brains interpret the fundamental from the overtones even when the fundamental is missing. This is why we have no problem identifying a man's voice on the telephone, even with the British telephone system which cuts off below 300 Hz.

 

Here's an experiment: play two notes together, as high as possible, a fourth apart. What do you hear? Two notes or three? Your ear is "hearing" the "fundamental" two octaves below the top note; interpreting the two played notes as the second and third overtones. (This works best on a "bright" or "harsh" instrument.)
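If you'd rather try it without an instrument, here's a rough Python/scipy sketch of my own that writes the two tones to a WAV file, using A3 (220 Hz) as the never-played "fundamental":

```python
import numpy as np
from scipy.io import wavfile

fs = 44100
t = np.arange(2 * fs) / fs                      # two seconds of audio
f = 220                                         # pitch your ear may supply (A3)
tone = (0.4 * np.sin(2 * np.pi * 3 * f * t)     # E5, 3rd harmonic of 220 Hz
        + 0.4 * np.sin(2 * np.pi * 4 * f * t))  # A5, 4th harmonic, a fourth above
wavfile.write("missing_fundamental.wav", fs, tone.astype(np.float32))
```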


 

(Quoting lachenal74693's post above.)

 

Yes. The two issues are that making soundfonts is already a lot of work and sampling every note would multiply the effort required; and the synthesiser would use a lot more memory because it has to have all the samples available for instant access. The latter used to be an important consideration back in the nineties when synthesisers were a dedicated piece of equipment that only contained a few megabytes of memory, but it's less of an issue nowadays when it's usually a program running on a modern PC with multiple gigabytes of RAM available. Even smartphones should be able to cope with quite large soundfonts.
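As a rough illustration of the memory involved (my numbers, purely for scale):

```python
# One 3-second, 16-bit mono sample per note for a 48-key instrument
notes, seconds, fs, bytes_per_sample = 48, 3, 44100, 2
print(notes * seconds * fs * bytes_per_sample / 1e6)  # ~12.7 MB:
# trivial on a modern PC or phone, but a lot for a 1990s hardware synth.
```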

 

Edit: Also, the problem Don's having sounds like it might be because he's trying to play notes that are significantly higher than the highest note on the instrument the soundfont was sampled from, so sampling every note on that instrument wouldn't solve the problem. You'd have to sample from a different instrument with a higher range.

Edited by alex_holden

My numbering.

 

 

1) Yes. The two issues are that making soundfonts is already a lot of work and sampling every note would multiply the effort required...

2) ...sounds like it might be because he's trying to play notes that are significantly higher than the highest note on the instrument the soundfont was sampled from, so sampling every note on that instrument wouldn't solve the problem. You'd have to sample from a different instrument with a higher range.

 

1) Yes, I suspected it might be a relatively large task to create the font using a full set of samples.

 

2) I hadn't picked up on that; I assumed he was interpolating between two widely spaced samples (in frequency terms). Going outside the limits of the range of samples will be dodgy to say the least (memories of 40+ year old Numerical Methods lectures are starting to kick in).

