To set the glottis to a position suitable for phonation, use the ArtwordEditor to set the Interarytenoid activity to 0.5 throughout the utterance: set two targets, 0.5 at a time of 0 seconds and 0.5 at a time of 0.5 seconds.
To prevent air escaping from the nose, close the nasopharyngeal port by setting the LevatorPalatini activity to 1.0 throughout the utterance.
To generate the lung pressure needed for phonation, set the Lungs activity to 0.2 at 0 seconds and to 0 at 0.1 seconds.
To force a jaw movement that closes the lips, set the Masseter activity at 0.25 seconds to 0.7, and the OrbicularisOris activity at 0.25 seconds to 0.2.
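The four steps above amount to a small table of (time, activity) targets per muscle, with the synthesizer interpolating between targets. Here is a minimal Python sketch of that idea — the data structure and interpolation are illustrative, not Praat's actual API:

```python
# The Artword targets from the steps above, as (time, activity) pairs.
# Praat linearly interpolates muscle activity between targets; this
# sketch does the same (names and representation are illustrative).
targets = {
    "Interarytenoid":  [(0.0, 0.5), (0.5, 0.5)],   # glottis set for phonation
    "LevatorPalatini": [(0.0, 1.0), (0.5, 1.0)],   # nasopharyngeal port closed
    "Lungs":           [(0.0, 0.2), (0.1, 0.0)],   # brief pressure pulse
    "Masseter":        [(0.25, 0.7)],              # jaw movement closing the lips
    "OrbicularisOris": [(0.25, 0.2)],              # lip activity
}

def activity(muscle, t):
    """Linearly interpolate a muscle's activity at time t."""
    pts = targets[muscle]
    if t <= pts[0][0]:
        return pts[0][1]
    if t >= pts[-1][0]:
        return pts[-1][1]
    for (t0, a0), (t1, a1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0)

print(activity("Lungs", 0.05))  # halfway down the pressure ramp
```

Evaluating `activity` on a fine time grid gives the full muscle-activity trajectories that drive the articulatory model.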
It makes this sound:
Then the weird thing is that if you mimic the sound, you're following the same muscle program yourself!
The mp3s above are the wrong way around: the first is the bagpipes and the second is the Canntaireachd oral instruction. I think if you listen to the bagpipes, then the oral instruction, then the bagpipes again, they sound a lot more accessible and interesting the second time!
I’ve spent a great deal of time with Pink Trombone, and adapted my own version based on the underlying DSP: https://pbat.ch/proj/voc/
The underlying patterns here are the various vocal tract shapes used to produce different phonemes. There are 44 discrete diameters that can be adjusted individually to produce target phonemes.
I turned a simplified version of Voc into a little Android instrument to demonstrate how one can sculpt the virtual tract to produce different sounds. What's interesting to me is that because it's only 44 diameters, you can see them all on screen, like some kind of low-res art:
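To make the "44 diameters as low-res art" idea concrete, here is a small Python sketch — not Voc's actual code, and the neutral shape and constriction function are my own illustrative assumptions — that narrows a region of the tract near the lips and prints the result as bars:

```python
# Illustrative sketch (not Voc's API): a vocal tract as 44 diameters,
# shaped by a local constriction and printed as ASCII bars -- the
# "low-res art" view of the tract.
N = 44

# An assumed neutral shape: narrow near the glottis, wider toward the lips.
tract = [0.6 if i < 7 else 1.1 if i < 12 else 1.5 for i in range(N)]

def constrict(diameters, index, width, target):
    """Narrow the tract toward `target` around `index`, blending over `width` sections."""
    out = list(diameters)
    for i in range(max(0, index - width), min(len(out), index + width + 1)):
        blend = 1.0 - abs(i - index) / (width + 1)   # 1 at the centre, fading outward
        out[i] = min(out[i], out[i] * (1 - blend) + target * blend)
    return out

shaped = constrict(tract, index=40, width=3, target=0.2)  # a lip constriction

for d in shaped:
    print("#" * round(d * 10))   # one bar per tract section
```

Sculpting the instrument is then just editing this one small array in real time, which is why the whole state fits comfortably on a phone screen.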