An Audio Synthesis Textbook For Musicians, Digital Artists and Programmers by Mike Krzyzaniak

  1. //OvertoneEnvelopes.c
  2. //gcc MKAiff.c OvertoneEnvelopes.c -o OvertoneEnvelopes
  3. #include "MKAiff.h"
  4. #include <math.h>
  5. #define SAMPLE_RATE      44100
  6. #define NUM_CHANNELS     1
  7. #define BITS_PER_SAMPLE  16
  8. #define BYTES_PER_SAMPLE 2
  9. #define NUM_SECONDS      3
  10. const int numSamples = NUM_SECONDS * NUM_CHANNELS * SAMPLE_RATE;
  11. #define PI 3.141592653589793
  12. const double TWO_PI_OVER_SAMPLE_RATE = 2*PI/SAMPLE_RATE;
  13. #define numFrequenciesToAdd 8
  14. int main()
  15. {
  16.   int i, j;
  17.   float audioBuffer[numSamples], nextSample;
  18.   for(i=0; i<numSamples; audioBuffer[i++]=0);
  19.   double fundamentalFrequency = 440, phase, frequency,
  20.     amplitude[numFrequenciesToAdd] = {.2 , .28 , .0429, .1189, .0061, .0013, 0.001, 0.002},
  21.     attack   [numFrequenciesToAdd] = {0.4, 0.15, 1    , 0.7  , 0.02 , 0.03 , 0.1  , 0.1  },
  22.     decay    [numFrequenciesToAdd] = {1.0, 0.2 , 0.25 , 0.2  , 0.1  , 0.1  , 0.4  , 0.5  },
  23.     sustain  [numFrequenciesToAdd] = {0.4, 0.2 , 0.3  , 0.3  , 0.3  , 0.3  , 0.3  , 0.3  },
  24.     release  [numFrequenciesToAdd] = {1  , 0.4 , 0.5  , 1    , 2    , 3    , 1.5  , 1.5  };
  25.   for(i=0; i<numFrequenciesToAdd; attack [i]*=SAMPLE_RATE,
  26.                                   decay  [i]*=SAMPLE_RATE,
  27.                                   release[i]*=SAMPLE_RATE, i++);
  28.   for(j=0; j<numFrequenciesToAdd; j++)
  29.     {
  30.       frequency = (j+1)*fundamentalFrequency;
  31.       phase = 0;
  32.       for(i=0; i<numSamples; i+=NUM_CHANNELS)
  33.         {
  34.           nextSample = sin(phase) * amplitude[j];
  35.           //ADSR ENVELOPE
  36.           if(i/NUM_CHANNELS <= attack[j])
  37.             nextSample *= i/attack[j];
  38.           else if(i/NUM_CHANNELS <= (attack[j]+decay[j]))
  39.             nextSample *= 1-(1-sustain[j])*((i-attack[j])/decay[j]);
  40.           else if(i/NUM_CHANNELS <= ((numSamples/NUM_CHANNELS)-release[j]))
  41.             nextSample *= sustain[j];
  42.           else
  43.             nextSample *= sustain[j]*((numSamples/NUM_CHANNELS-i)/release[j]);
  44.           audioBuffer[i] += nextSample;
  45.           phase += frequency * TWO_PI_OVER_SAMPLE_RATE;
  46.         }
  47.     }
  48.   MKAiff* aiff = aiffWithDurationInSeconds(NUM_CHANNELS, SAMPLE_RATE, BITS_PER_SAMPLE, NUM_SECONDS);
  49.   if(aiff == NULL) return 1;
  50.   aiffAppendFloatingPointSamples(aiff, audioBuffer, numSamples, aiffFloatSampleType);
  51.   aiffSaveWithFilename(aiff, "OvertoneEnvelopes.aif");
  52.   aiffDestroy(aiff);
  53.   return 0;
  54. }

Output: OvertoneEnvelopes.aif, a 3-second tone with a 440 Hz fundamental.

Explanation of the Concepts

This example uses additive synthesis to create a sound whose timbre changes over time by applying a separate envelope to each sinusoidal constituent.

If you strike a low string on a piano and listen closely, you can hear that, before it begins to decay, the low frequency components actually swell a little, starting perhaps a second or so after you strike it. The same can be observed visually by plucking the low string on a guitar and watching it at the 12th fret. In each case, you can also hear that, as the sound decays, not only does it become softer, but it also becomes duller in timbre.

At least one of the reasons that Helmholtz's piano does not sound very much like a piano is that its timbre is static. We may make a more dynamic timbre by applying a separate envelope to each sinusoidal constituent, so that different overtones can be heard more clearly as the sound progresses. A good 3D spectrum analysis of a piano would probably make possible a drastic improvement over the piano sound in the previous chapter. However, if you want your computer to sound like a piano, "wavetable synthesis", which was discussed in the previous chapter, will probably sound better and use less electricity. So, for now, we will be content to just create a fairly generic sound (perhaps slightly reminiscent of the reed stops on an organ) whose timbre changes over time.
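To make the "duller as it decays" observation concrete, here is a minimal sketch, assuming invented numbers rather than any real piano measurement: each partial is given an exponential decay whose time constant shrinks as the partial number grows, so the upper partials die away first and the spectrum dulls as the note rings out. The helper name partialAmplitude and all of its constants are our own, purely for illustration, and are not part of the program above.

#include <stdio.h>
#include <math.h>

/* amplitude of partial j (0 = fundamental) at sample n, for the given sample rate:
   each partial decays exponentially, and higher partials decay faster,
   so the spectrum (and hence the timbre) grows duller as the note rings out */
double partialAmplitude(int j, int n, double sampleRate)
{
  double initialAmplitude   = 1.0 / (j + 1);   /* rough 1/n rolloff of the partials         */
  double decayTimeInSeconds = 2.0 / (j + 1);   /* fundamental: 2 sec, 8th partial: 0.25 sec */
  double t = n / sampleRate;
  return initialAmplitude * exp(-t / decayTimeInSeconds);
}

int main()
{
  int j;
  /* print each partial's amplitude one second into the note */
  for(j=0; j<8; j++)
    printf("partial %d: %f\n", j+1, partialAmplitude(j, 44100, 44100));
  return 0;
}

A curve like this could stand in for the ADSR arrays in the program above to give each partial its own decay, but the curves for a convincing piano would have to come from an actual spectrum analysis.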

Explanation of the Code

In the last example, we applied the ADSR envelope directly to the audio samples in the buffer, after all of the sinusoids had been written into it. Here, the attack, decay, sustain and release values are stored in arrays on lines 19 - 24, which makes it easy to use different values each time the envelope is applied; lines 25 - 27 then convert the attack, decay and release times from seconds into samples. The envelope itself, lines 35 - 43, has now been moved inside of the inner loop, so instead of being applied after all of the sinusoids have been written into the buffer, it is applied while they are being written. Because each envelope should affect only the current sample of the current sinusoid, and not the sum of everything already in the buffer at that point, the value of the current sample must be calculated and stored in a separate variable, so that the envelope can scale just that sample before it is added into the buffer. For this reason, the variable nextSample is declared on line 17, and the next sample is written into it on line 34.
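To see the envelope logic on its own, here is one way the piecewise calculation on lines 36 - 43 could be factored into a helper function. This is just a sketch: the name adsrGain and its argument list are our own invention, not part of the book's code or of MKAiff, and the attack, decay and release times are assumed to already be in samples, as they are after the conversion on lines 25 - 27.

/* per-sample envelope gain, as a standalone function; sampleIndex and
   totalSamples are counted per channel, and attack, decay and release
   have already been converted from seconds into samples */
double adsrGain(double sampleIndex, double totalSamples,
                double attack, double decay, double sustain, double release)
{
  if(sampleIndex <= attack)                        /* A: ramp up from 0 to 1              */
    return sampleIndex / attack;
  else if(sampleIndex <= attack + decay)           /* D: fall from 1 to the sustain level */
    return 1 - (1 - sustain) * ((sampleIndex - attack) / decay);
  else if(sampleIndex <= totalSamples - release)   /* S: hold at the sustain level        */
    return sustain;
  else                                             /* R: fall from sustain back to 0      */
    return sustain * ((totalSamples - sampleIndex) / release);
}

With such a helper, the body of the inner loop would reduce to something like audioBuffer[i] += sin(phase) * amplitude[j] * adsrGain(i/NUM_CHANNELS, numSamples/NUM_CHANNELS, attack[j], decay[j], sustain[j], release[j]); which makes it clearer that one complete envelope is computed per partial, sample by sample.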