Vixen 3 Lip Sync Overview

idleup,

Jon is correct. The matrix zig-zag is set up as part of display patching, as it is specific to how the matrix is wired. Lip sync only renders rows and columns as they are viewed; converting the viewed row and column to the actual pixel address is part of the patching process.
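For readers unfamiliar with serpentine wiring, the general row/column-to-pixel conversion can be sketched as follows. This is only an illustration of the technique, not Vixen's actual patching code; the function name and the wiring convention (even rows running left to right) are assumptions.

```python
def zigzag_address(row, col, width):
    """Map a viewed (row, col) to a linear pixel index on a
    serpentine (zig-zag) wired matrix: even rows run
    left-to-right, odd rows run right-to-left."""
    if row % 2 == 0:
        return row * width + col
    return row * width + (width - 1 - col)

# On an 8-wide matrix, (row 1, col 0) as viewed is actually
# the 16th pixel along the string (index 15).
```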

Ed
 
Hi Ed,
Great improvement. Can you just confirm that if I use dumb strings, I can set the mapping up as strings and not pixels?
Thanks
Graham
 
Thanks.

Yes, you can set it up as dumb strings. The first video covers how to do this. Feel free to ask me questions if you run into any issues.
 
Wow! Love it. I just wish that I had watched the videos before completely manually programming my first song, though it did give me good experience. I'd definitely add a +1 to allowing the color to be changed on a per-item basis, not just in the map. And secondly, it would be great to use this as an overlay on effects; I can see having the mouth fade out as a song draws to the end.


Thanks so much! Next time I'll look for a video tutorial before I try new functionality (though, in reality, it wasn't that bad; lots of copy/paste).
 
I think you may have been the first that I know of to manually program a song since the feature was added, congrats! :) Glad it was a good experience. I have a list of new features that I am planning for next year. I am thinking about ways to do fades, color changes, eye blinks, etc at that time.
 
That is sounding great! Is it on GitHub? If you do eye blinks, you should have settings for randomness, like blinks per minute or something like that, and then generate them automatically.
 
Hi,
The Lipsync editor is excellent! Is there a way to select independent colors for common pixels used in multiple phonemes or are the pixels locked to the same color for each one?
 
Thanks Big_Mac.

The answer is no.... and yes.....

Option A:
Vixen 3.1 Build 46 - The LipSync String Mapper does not allow you to set the color independently for each phoneme. This is a change I plan to make in a future release, but it is unlikely to hit this year. As a possible, but more involved, workaround you can use a Matrix / Megatree map in a (String) x (Pixel per string) configuration and then map the phoneme colors independently for each string. The future string mapper will use a similar concept.

Option B:
I recently implemented a feature where the colors, gradients and dimming curves of individual phonemes can be overridden on the timeline. This is likely to make it into the release after the V3.1 patch release, which should happen within the next couple of weeks. The idea is that you can "explode" the top-level phoneme into its individual components and modify them as needed. This still requires a little time investment on your part for now, but it gives you a bit more capability. Below is a link to the video; the description gives you a timed table of contents.

Remember that it is still subject to change as I have opened it up for feedback in the dev group.

https://vimeo.com/108722950

Ed
 
Thanks!

Yes, it is in the latest dev build (#56). You can grab it or wait for the 3.1 bugfix release which is pretty close.

Ed
 
Umm, wow. This is totally amazing. I've been lurking around here for years without any blinky flashy, but after seeing the potential in this I decided to "throw" together a Halloween display. I have a couple of singing pumpkins doing three songs at the moment, with a week left to go!

A few comments on what I've experienced so far.

- I lost the maps on two occasions. I originally thought it had something to do with the web server or the sequencer, but I haven't been able to reproduce it. It may have just been that I had started multiple songs before I had finished building the maps, and they reverted somehow.

- It seems to remember the phoneme 'spelling' for new words. After I enter a word once, it never asks again. This is great, but as I'm not sure I always get it right the first time, it would be nice to have access somehow to the dictionary so I could correct (or at least delete) errors.

- I've been using the word marks to input. This has been relatively easy (the phoneme-mark method made me break out in hives), but the equal spacing obviously does not always represent the way the word is formed. It seems some phoneme sounds ('M' and 'S') are always proportionally shorter in a word. It's not that big a deal to adjust, but if you had a chorus or an oft-repeated phrase it might be helpful to have a dictionary that tracks a weighted spacing within a word (e.g., the word "see" is 15% "s", 85% "e"). Just an idea; like I said, it's not a big deal.
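The weighted-spacing idea above could be sketched like this. The function, the weight values, and the phoneme labels are all hypothetical; it just distributes a word mark's duration across phonemes proportionally to per-phoneme weights instead of equally.

```python
def weighted_spacing(phonemes, weights, duration):
    """Return (phoneme, start, length) tuples covering `duration`,
    with each phoneme's length proportional to its weight."""
    total = sum(weights)
    out, start = [], 0.0
    for p, w in zip(phonemes, weights):
        length = duration * w / total
        out.append((p, start, length))
        start += length
    return out

# "see" as 15% "S", 85% "E" over a 1.0-second word mark:
# weighted_spacing(["S", "E"], [15, 85], 1.0)
```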

Other minor items: not sure if, in the module, you can add the ability to select all in the text-to-lipsync box on triple-click. It would be nice to be able to just triple-click and then type the next set of lyrics rather than finding the beginning or end and scrolling.

When using the word-mark method to insert lipsync phonemes, it would be great if the offset mark was automatically advanced to either the last used or last used + 1 value. I can't count the number of times I clicked insert and thought something was wrong only to later realize that I had inserted (often several times) my intended text at the start of the last phrase, which was often off the screen.

Anyway, none of this is to imply that what has already been done isn't totally awesome; the items at the top of my list are already things that you have indicated are in the next release.

Thanks again, and thanks from the singing pumpkins.
 
For some reason I hear no audio (it's muffled) from any of the 4 videos in Chrome, using both the Vimeo site and YouTube. I cranked my audio up to 100 and it sounds like you are whispering in a tin can. Any chance of looking into this and/or re-upping the videos?
 
I checked the videos on Vimeo, they sound fine on my end, I use Chrome also.

They were never uploaded to YouTube.

 
I must have missed this post back in October, sorry for the really late reply. :) See below for replies.

- I lost the maps on two occasions. I originally thought it had something to do with the web server or the sequencer, but I haven't been able to reproduce it. It may have just been that I had started multiple songs before I had finished building the maps, and they reverted somehow.

There was an issue I found where the maps would not get saved until Vixen 3 was exited completely. I changed that behavior, and the fix went in either 3.1u1 or 3.1u2. I plan to do some work on this code this year and will keep an eye out in case this bug still exists.

- It seems to remember the phoneme 'spelling' for new words. After I enter a word once, it never asks again. This is great, but as I'm not sure I always get it right the first time, it would be nice to have access somehow to the dictionary so I could correct (or at least delete) errors.

There is a workaround you can use to remove user-defined mappings you would like to get rid of. Look for a file in your profile directory called user_dictionary. You can either delete this file, which will wipe out all of your custom mappings, or open it up and find the line with the custom text. Once found, delete the whole line to remove the entry.
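The manual edit above could also be scripted. This is a sketch only: the exact path to user_dictionary and the assumption that each mapping is one line starting with the word are guesses, not the documented file format, so check the file contents before running anything like this.

```python
def remove_mapping(path, word):
    """Drop any line from the dictionary file whose first token
    matches `word` (case-insensitive); keep everything else."""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    kept = []
    for ln in lines:
        tokens = ln.split()
        if tokens and tokens[0].upper() == word.upper():
            continue  # skip the custom mapping being removed
        kept.append(ln)
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(kept)
```

Back up the file first; a mistake here wipes out custom mappings with no undo.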

When using the word-mark method to insert lipsync phonemes, it would be great if the offset mark was automatically advanced to either the last used or last used + 1 value. I can't count the number of times I clicked insert and thought something was wrong only to later realize that I had inserted (often several times) my intended text at the start of the last phrase, which was often off the screen.

Not a bad idea here, I will see what I can come up with this year.

Love the videos!

Ed
 
I checked the videos on Vimeo, they sound fine on my end, I use Chrome also.

They were never uploaded to YouTube.

They work on my iPad. Not sure why they don't work on my PC; everything else audio-related works fine. Weird.
 
Did a video ever get done on using Audacity to extract the vocal elements of a song? I scanned all 10 pages but I might have missed it; if so, sorry.
 
I may have missed this but, is it possible to individually control the intensity of channels using the lip-sync effect?

For example, I would like to be able to flicker the outline and maybe some of the mouth shapes during different parts of songs.
 