An Audio Synthesis Textbook For Musicians, Digital Artists and Programmers by Mike Krzyzaniak

  1. //HelmholtzPiano.c
  2. //gcc MKAiff.c HelmholtzPiano.c -o HelmholtzPiano
  3. #include "MKAiff.h"
  4. #include <math.h>
  5. #define SAMPLE_RATE 44100
  6. #define NUM_CHANNELS 1
  7. #define BITS_PER_SAMPLE 16
  8. #define BYTES_PER_SAMPLE 2
  9. #define NUM_SECONDS 3
  10. const int numSamples = NUM_SECONDS * NUM_CHANNELS * SAMPLE_RATE;
  11. #define PI 3.141592653589793
  12. const double TWO_PI_OVER_SAMPLE_RATE = 2*PI/SAMPLE_RATE;
  13. #define numFrequenciesToAdd 6
  14. int main()
  15. {
  16.   int i, j;
  17.   float audioBuffer[numSamples];
  18.   for(i=0; i<numSamples; audioBuffer[i++]=0);
  19.   double fundamentalFrequency = 220, phase, frequency;
  20.   double amplitude[numFrequenciesToAdd] = {.10, .249, .2429, .1189, .0261, .0013};
  21.   //double amplitude[numFrequenciesToAdd] = {.10, .3247, .5049, .5049, .3247, .100};
  22.   //double amplitude[numFrequenciesToAdd] = {.1, .1, .1, .1, .1, .1};
  23.   for(j=1; j<=numFrequenciesToAdd; j++)
  24.     {
  25.       frequency = j*fundamentalFrequency;
  26.       phase = 0;
  27.       for(i=0; i<numSamples; i+=NUM_CHANNELS)
  28.         {
  29.           audioBuffer[i] += sin(phase) * amplitude[j-1] * 0.5;
  30.           phase += frequency * TWO_PI_OVER_SAMPLE_RATE;
  31.         }
  32.     }
  33.   //ADSR ENVELOPE
  34.   float attack=0.01, decay=0.2, sustain=0.3, release=2.5;
  35.   attack *= SAMPLE_RATE;
  36.   decay *= SAMPLE_RATE;
  37.   release *= SAMPLE_RATE;
  38.   for(i=0; i<numSamples; i+=NUM_CHANNELS)
  39.     {
  40.       if(i/NUM_CHANNELS<=attack)
  41.         audioBuffer[i] *= (float)i/(float)attack;
  42.       else if(i/NUM_CHANNELS<=(attack+decay))
  43.         audioBuffer[i] *= 1-(1-sustain)*((double)(i-attack)/(double)decay);
  44.       else if(i/NUM_CHANNELS<=((numSamples/NUM_CHANNELS)-release))
  45.         audioBuffer[i] *= sustain;
  46.       else
  47.         audioBuffer[i] *= sustain*((double)((numSamples/NUM_CHANNELS-i)/(double)release));
  48.     }
  49.   MKAiff* aiff = aiffWithDurationInSeconds(NUM_CHANNELS, SAMPLE_RATE, BITS_PER_SAMPLE, NUM_SECONDS);
  50.   if(aiff == NULL) return 1;
  51.   aiffAppendFloatingPointSamples(aiff, audioBuffer, numSamples, aiffFloatSampleType);
  52.   aiffSaveWithFilename(aiff, "HelmholtzPiano.aif");
  53.   aiffDestroy(aiff);
  54.   return 0;
  55. }

Output: [audio: HelmholtzPiano.aif, the file rendered by the program above]

Explanation of the Concepts

This example sums sine waves in an attempt to emulate a piano sound, following Hermann von Helmholtz's calculations. The result is saved as an AIFF file.

In his 1863 treatise, "Die Lehre von den Tonempfindungen", Hermann von Helmholtz made the first scientific foray into the realm of additive synthesis. He used a series of tuned resonators to analyze the frequency content of various sounds, and then tried to recreate those sounds synthetically using several mechanical sine-wave generators that he invented for the purpose. He gives, amongst other things, a table that details the theoretical frequency content of a piano string that has been struck at a point 1/7 of the way along its length.
[Helmholtz's table "Strings": theoretical strength of the partials of a string struck 1/7 of the way along its length]
This example uses that table to attempt to recreate the sound of a piano. An ADSR envelope is also applied, to mimic the amplitude contour of a piano note. The result, one must admit, does not sound very much like a piano, but the sine waves do fuse together into one coherent sound, and it is instructive to change the timbre of that sound by adjusting the relative amplitudes of the sinusoids.
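In terms of the listing above, each output sample (before the envelope is applied) is just a finite sum of harmonically related sines. As a point of reference, the sketch below computes one such sample with a closed-form phase; for a constant frequency it is equivalent to the listing's running phase accumulator. The helper name additiveSample and the parameter numPartials are introduced here purely for illustration and do not appear in the listing.

    #include <math.h>
    #define PI 3.141592653589793
    #define SAMPLE_RATE 44100

    /* One pre-envelope output sample: a weighted sum of the first numPartials
       harmonics of fundamentalFrequency, evaluated at sample index t.        */
    double additiveSample(int t, double fundamentalFrequency,
                          const double amplitude[], int numPartials)
    {
      double sample = 0;
      int n;
      for(n=1; n<=numPartials; n++)
        sample += amplitude[n-1] * sin(2*PI * n * fundamentalFrequency * t / SAMPLE_RATE) * 0.5;
      return sample;
    }

The listing instead keeps a running phase for each partial and adds frequency * TWO_PI_OVER_SAMPLE_RATE to it on every sample, which produces the same result for a fixed frequency but also generalizes to frequencies that change over time. Incidentally, the point 1/7 of the way along a string is a node of the 7th mode of vibration, so striking there puts essentially no energy into the 7th partial or its multiples; this is presumably why the table needs only the first six partials.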

Explanation of the Code

The structure of this program is similar to that of the previous chapter, on basic waveforms. Here, however, the amplitudes are not calculated in the outer loop; rather, they are stored in an array on line 20 (or in one of the commented-out alternatives on lines 21 and 22) and accessed on line 29.
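If the amplitudes were computed rather than looked up, the array on line 20 would go away and the outer loop would assign a value each time around. A hypothetical variant is sketched below, using a 1/n rolloff (roughly a sawtooth spectrum) purely as an example; it is not Helmholtz's piano data and not the exact code of the previous chapter.

    /* Variant: derive each partial's amplitude in the outer loop instead of
       reading it from a table.  A 1/n rolloff is used here only as an example. */
    double amplitude;
    for(j=1; j<=numFrequenciesToAdd; j++)
      {
        frequency = j*fundamentalFrequency;
        amplitude = 1.0/j;
        phase = 0;
        for(i=0; i<numSamples; i+=NUM_CHANNELS)
          {
            audioBuffer[i] += sin(phase) * amplitude * 0.5;
            phase += frequency * TWO_PI_OVER_SAMPLE_RATE;
          }
      }

Storing the amplitudes in a table, as the listing does, makes it easy to type in measured or published data, like Helmholtz's, that does not follow a simple formula.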

Lines 33-48 apply an ADSR envelope to the audio buffer (follow the link in the "Builds On" section above for more on this). In the chapter on ADSR, the envelope's values were written into a separate buffer, and each audio sample was multiplied by its corresponding envelope value as it was written into the audio buffer. That is efficient if the same envelope will be reused repeatedly. Here, however, the envelope is only used once, so it is more efficient to let the envelope operate directly on the audio buffer after the audio has been put there. This arrangement is also more flexible, since the envelope's parameters can change every time it is used.
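For contrast, here is a sketch of the reusable-buffer approach described above. It only illustrates the idea and is not the exact code from the ADSR chapter; the function name fillADSR and its argument list are assumptions made for this example.

    /* Fill env[] once with a linear attack/decay/sustain/release shape.
       Times are given in seconds.  The same buffer can then be multiplied
       into every note that is rendered, without recomputing the shape.    */
    void fillADSR(float* env, int numSamples, float attack, float decay, float sustain, float release)
    {
      int a = attack  * SAMPLE_RATE;
      int d = decay   * SAMPLE_RATE;
      int r = release * SAMPLE_RATE;
      int i;
      for(i=0; i<numSamples; i++)
        {
          if(i <= a)                   env[i] = (float)i / a;
          else if(i <= a + d)          env[i] = 1 - (1 - sustain) * (float)(i - a) / d;
          else if(i <  numSamples - r) env[i] = sustain;
          else                         env[i] = sustain * (float)(numSamples - i) / r;
        }
    }

With such a buffer in place, line 29 could read audioBuffer[i] += env[i] * sin(phase) * amplitude[j-1] * 0.5; and the in-place envelope on lines 33-48 would no longer be needed. The trade-off is that the envelope's shape is fixed when the buffer is filled, whereas the in-place version can use different parameters every time it runs.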