I've fallen in love with Halloween and Christmas animation - and I would like to share where current thoughts are with HLS supporting that activity.
As of last night, HLS can accept text vocals and produce the series of mouth positions required to animate speech (the same results many of you obtained by using Papagayo).
HLS will enable the building of a Word dictionary that will hold "words" and their associated series of mouth movements required to animate speech.
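To illustrate the idea of a Word dictionary, here is a minimal sketch in Python. The position labels and structure are my assumptions for illustration only, not HLS's actual data:

```python
# Hypothetical Word dictionary: each word maps to the ordered series of
# mouth positions needed to animate speaking it. The position labels
# ("MBP", "O", ...) are illustrative placeholders, not HLS's real names.
WORD_DICTIONARY = {
    "boo":   ["MBP", "O", "O"],
    "spooky": ["E", "MBP", "O", "E"],
}

def mouth_sequence(word):
    """Return the stored mouth-movement series for a word, or an empty
    list if the word has not been added to the dictionary yet."""
    return WORD_DICTIONARY.get(word.lower(), [])
```

Words not yet in the dictionary would presumably fall back to phonetic analysis.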
HLS will have several new channel types added to its capabilities, which will greatly improve animation sequencing.
New channel types: "Word", "Mouth", and "Eye".
A Word channel will allow easy placement and sizing of a "word effect". The user will be able to select a word from the vocals and position/size it just like any other effect in HLS.
Once the vocal's words are placed, the user will inform HLS as to the number of mouth positions their specific display requires (some of the Halloween faces that can be purchased utilize 3, 5, or 7 different mouth positions). The user will create a translation map, unique to that Word channel, that tells HLS how to go from the 10 mouth positions derived from phonetic analysis of the vocal's words to those required for their display. Assume we have a Halloween face that utilizes 5 unique mouth formations.
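Such a translation map could be sketched as a simple lookup, collapsing the 10 analysis positions down to a 5-formation face. Both sets of names below are assumptions for illustration; HLS's actual labels may differ:

```python
# Hypothetical 10-to-5 translation map. Keys are the assumed labels of
# the 10 analysis positions; values are the 5 formations the face has.
TEN_TO_FIVE = {
    "AI": "wide open", "E": "wide open",
    "O": "round", "U": "round", "WQ": "round",
    "MBP": "closed", "FV": "closed",
    "L": "narrow", "etc": "narrow",
    "rest": "rest",
}

def translate(positions):
    """Map a series of analysis positions to the display's formations."""
    return [TEN_TO_FIVE[p] for p in positions]
```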
HLS will utilize a "Mouth" channel for each mouth position associated with the Word channel. If you have not looked into animation, a single word will require multiple different mouth positions to animate speaking that word.
So ... the Word channel defines the start and duration of the word .... a long spooky "ooooooo" can be as long as needed. HLS then drops "mouth" effects onto the associated Mouth channels. The user can then adjust the placement and duration of each mouth movement within the time frame of the word.
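One plausible starting point, before the user fine-tunes anything, is to divide the word's time span evenly among its mouth positions. A sketch (function name and tuple layout are my assumptions):

```python
def place_mouth_effects(word_start, word_duration, positions):
    """Evenly divide a word's time span among its mouth positions.
    Returns (position, start, duration) tuples as an initial layout
    that the user can then reposition and resize by hand."""
    slot = word_duration / len(positions)
    return [(p, word_start + i * slot, slot)
            for i, p in enumerate(positions)]
```

A three-second "boo" with mouth series MBP, O, O would get three one-second slots back to back.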
Looking at a real scenario ... take a display utilizing 3 singing faces .... a lead and two backup singers. The lead singer has 5 mouth positions and the backups each have 3 mouth positions.
The sequencing would go like this ...
1 Word channel for the lead's solo vocals ... leading to 5 Mouth channels.
1 Word channel for the left backup's solo vocals ... leading to 3 Mouth channels.
1 Word channel for the right backup's solo vocals ... leading to 3 Mouth channels.
1 Word channel for vocals where the right and left backups sing in harmony ... leading to 3 Mouth channels.
HLS will then provide a mechanism where physical illumination channels are assigned to one or more Mouth channels. HLS will then automatically populate the physical illumination channels with Level effects driven from the multitude of Mouth channels stated above.
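The assignment step might conceptually look like the following sketch: a physical channel lights whenever any of its assigned Mouth channels has an active effect. Channel names and the 0–100 level scale are assumptions:

```python
# Hypothetical assignment of physical channels to Mouth channels.
ASSIGNMENTS = {
    "lead_top_lip": ["lead_closed", "lead_narrow"],  # assumed names
}

def level_for(physical, active_mouth_channels):
    """Return the Level effect value for a physical channel: full on
    if any assigned Mouth channel is currently active, otherwise off."""
    assigned = ASSIGNMENTS[physical]
    return 100 if any(m in active_mouth_channels for m in assigned) else 0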
In summary, the process flow will be:
1. Enter the vocal text.
2. HLS converts the text to the 10 animation mouth positions.
3. Position the vocal's words onto a Word channel.
4. Translate the mouth positions from the 10 down to the number required for the display, and create that number of Mouth channels.
5. Position your mouth effects onto the associated Mouth channels.
6. Tell HLS how to map your Mouth channels onto physical channels.
7. HLS automatically populates the physical illumination channels as required.
All comments and suggestions are welcome, as this is still in development.
I would like to thank "timon" for the long discussion on this topic last night ... he helped solidify a number of items.
Joe
Here are the 10 mouth positions that HLS currently extracts by phonetically analyzing spoken text.