Real Time E1.31 control on FPP?

CheekyCholla

New member
I am looking into pixel mapping live video onto my pixel strips through MadMapper for further dynamic control programming. I believe I can accomplish this with one of the PixLite controllers, but I would rather do it through FPP because I enjoy the interface (and the price) more, and most of my current installations are already running on FPPs.

From the digging I have done, it seems like you can't "stream" E1.31 control data to the FPP; you can only send packets in chunks over E1.31. Is this anywhere near correct?

Any help, guidance, or advice would be greatly appreciated :)

Cheers!
 
What do you see as the difference between streaming and sending packets? Streaming is just sending packets continuously. And are you differentiating E1.31 controls from the E1.31 protocol?

FPP in bridge mode should do what you are asking. Of course, you would then be limited to the output processing of the rPi or the BBB, but one of those should get you where you need to be.
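To make the "streaming is just sending packets continuously" point concrete, here is a minimal sketch of what feeding an FPP in bridge mode over E1.31 boils down to: build a data packet, send it over UDP, repeat every frame. The hostname, universe number, and channel count below are assumptions, so match them to your own E1.31 input configuration on the FPP; the packet is hand-rolled only to show what is on the wire, since MadMapper (or any sACN library) does this for you.

```python
# Rough sketch: hand-rolled E1.31 (sACN) data packets sent continuously to an
# FPP running in Bridge mode. FPP_HOST, UNIVERSE, and CHANNELS are assumptions.
import socket
import struct
import time
import uuid

FPP_HOST = "fpp.local"   # assumed hostname of the FPP in Bridge mode
UNIVERSE = 1             # must match a universe configured as an FPP input
CHANNELS = 510           # 170 RGB pixels * 3 channels
E131_PORT = 5568         # standard sACN/E1.31 port

CID = uuid.uuid4().bytes                       # random 16-byte sender ID
SOURCE = b"madmapper-test".ljust(64, b"\x00")  # 64-byte source name field

def e131_packet(universe, data, seq):
    """Build one E1.31 data packet (root + framing + DMP layers)."""
    # DMP layer: flags/length, vector 0x02, addr/data type 0xA1,
    # first address, address increment, property value count.
    dmp = struct.pack("!HBBHHH", 0x7000 | (11 + len(data)),
                      0x02, 0xA1, 0x0000, 0x0001, 1 + len(data))
    dmp += b"\x00" + bytes(data)               # DMX start code + channel data
    # Framing layer: flags/length, vector, source name, priority,
    # sync address, sequence number, options, universe.
    framing = struct.pack("!HI64sBHBBH", 0x7000 | (77 + len(dmp)),
                          0x00000002, SOURCE, 100, 0, seq, 0, universe)
    # Root layer: preamble, postamble, ACN identifier, flags/length,
    # vector, sender CID.
    root = struct.pack("!HH12sHI16s", 0x0010, 0x0000,
                       b"ASC-E1.17\x00\x00\x00",
                       0x7000 | (22 + len(framing) + len(dmp)),
                       0x00000004, CID)
    return root + framing + dmp

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0
while True:
    # "Streaming" is nothing more than this loop: a fresh packet every frame.
    level = int(time.time() * 100) % 256       # simple moving test value
    data = [level] * CHANNELS
    sock.sendto(e131_packet(UNIVERSE, data, seq), (FPP_HOST, E131_PORT))
    seq = (seq + 1) % 256
    time.sleep(0.025)                          # ~40 fps / 25 ms refresh
```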

I am new to E1.31 so I am not sure on any of the lingo. If you would like to explain the difference between e1.31 controls vs. protocol, I would greatly appreciate that.

I want to send a live video signal that is mapped to LED pixel strips from my computer to the Falcon PiCap in real time.
From my understanding, this is possible with the PixLite controller, but I have gotten confusing feedback from people about the PiCap's limitations.

I just want to know if it is possible to send live control data, such as mapping real-time video to the LED pixels, with the Falcon PiCap, or if I should drop $150 on this new pixel controller.

Pardon any noob misunderstandings; I don't know a whole lot about E1.31, but I read that MadMapper can send E1.31 specifically for pixel mapping through pixel controllers. I have only ever used the Falcon PiCap to run playlists of preprogrammed effects, so if it is possible to do what I am looking for through the Falcon PiCap, then I would love some advice as to how to get that done.

-CodySeeHunt
 
FPP already has the ability to capture live video from a camera and output it to a matrix. You *may* be able to define a 1 row matrix and output the camera feed to it, but I haven’t tried it.
 
I am new to E1.31 so I am not sure on any of the lingo. If you would like to explain the difference between e1.31 controls vs. protocol, I would greatly appreciate that.

I want to send a live video signal that is mapped to LED pixel strips from my computer to the Falcon PiCap in real time.
From my understanding, this is possible with the PixLite controller, but I have gotten confusing feedback from people about the PiCap's limitations.
You are mixing things up here. The FPP is a few things in one nice package:
  • Show player - FPP converts show data (from multiple sources) into data that a "controller" can use to generate the signals appropriate to the lights connected to it. That data is usually sent to the controllers using protocols such as E1.31, serial, GPIO...
  • Distributed show player - Multiple FPPs can be used to play your show and generate data for controllers. When running this way they are in Master/Slave mode, where one FPP sends a control signal to the other FPPs so they are all playing the same thing at the same time.
  • Multimedia input device - FPP can take information from multiple sources (including E1.31, although this is not a great thing to do), convert it to lighting data, and then send that lighting data to a lighting controller. I have been told FPP can take video as one of its inputs, but I have never done it.
  • Lighting controller - When you add a "pixel hat" (the PiCap in your description), you are adding a lighting controller to the FPP, making it a "show in a box".
    • This is typically a two-port controller, and yes, the ports have limited capacity: 800 pixels per port.
    • That 800-pixel-per-port limit already exceeds the maximum number of pixels you can run per port at a 25 ms refresh rate (680 pixels), so it is more than you could put on the string anyway. To me that is not a limitation (see the quick universe math below this list).
    • You WILL need to run the video output at a 25 ms refresh rate to get anything close to real-time motion.
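To put those port counts into E1.31 terms: a DMX universe is 512 channels, which works out to 170 RGB pixels, so a fully loaded PiCap port is a handful of universes. A quick sketch, using the pixel counts quoted above:

```python
# Quick universe math for the pixel counts mentioned above.
DMX_CHANNELS_PER_UNIVERSE = 512
CHANNELS_PER_RGB_PIXEL = 3
PIXELS_PER_UNIVERSE = DMX_CHANNELS_PER_UNIVERSE // CHANNELS_PER_RGB_PIXEL  # 170

for pixels in (680, 800):
    channels = pixels * CHANNELS_PER_RGB_PIXEL
    universes = -(-channels // DMX_CHANNELS_PER_UNIVERSE)  # ceiling division
    print(f"{pixels} pixels -> {channels} channels -> {universes} universes")

# 680 pixels -> 2040 channels -> 4 universes
# 800 pixels -> 2400 channels -> 5 universes
```

In other words, a maxed-out port is roughly 4-5 universes that MadMapper (or anything else) would need to refresh every 25 ms.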


I just want to know if it is possible to send live control data, such as mapping real-time video to the LED pixels, with the Falcon PiCap, or if I should drop $150 on this new pixel controller.
Yes, it is possible to convert (I am specifically not using the term "map") video data into show data that FPP can send to a lighting controller.

Pardon any noob misunderstandings; I don't know a whole lot about E1.31, but I read that MadMapper can send E1.31 specifically for pixel mapping through pixel controllers. I have only ever used the Falcon PiCap to run playlists of preprogrammed effects, so if it is possible to do what I am looking for through the Falcon PiCap, then I would love some advice as to how to get that done.
The PiCap does not run playlists. FPP runs the playlist as one of its possible show data sources and then uses the PiCap as one of its possible lighting data outputs. Video, in this instance, is "just another input". That is not to say that processing video input, converting it to show data (this is the mapping step), and then outputting that data in a protocol compatible with a pixel controller (that is where E1.31 comes in) is trivial. It is very compute intensive and time critical.
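For whatever it is worth, here is a rough sketch of that mapping step, assuming OpenCV for the video capture and a made-up 170-pixel strip; the resulting channel data would then be handed to whatever E1.31 sender you are using (MadMapper, or a loop like the one sketched earlier in this thread).

```python
# Rough sketch of the "mapping" step: sample a live video frame down to a
# pixel strip's resolution and pack it as DMX channel data.
# Assumes OpenCV (pip install opencv-python); the strip length and camera
# index are made up for illustration.
import cv2

STRIP_PIXELS = 170                      # one universe worth of RGB pixels

cap = cv2.VideoCapture(0)               # default camera as the live source
try:
    while True:
        ok, frame = cap.read()          # frame is a BGR image (numpy array)
        if not ok:
            break
        # Squash the whole frame down to a 1-row, STRIP_PIXELS-wide image,
        # i.e. the "1 row matrix" idea mentioned above.
        row = cv2.resize(frame, (STRIP_PIXELS, 1))[0]
        # Reorder BGR -> RGB and flatten to 510 DMX channel values.
        dmx = bytes(int(v) for px in row for v in (px[2], px[1], px[0]))
        # At this point `dmx` would become the payload of an E1.31 packet
        # (or the dmx_data of whatever sACN library you prefer).
        print(len(dmx), "channels ready")   # placeholder for the send step
finally:
    cap.release()
```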

I know I did not solve your problem with this rather long response. I am trying to help you get past the "noob" stage in understanding how these shows are put together.
 
FPP already has the ability to capture live video from a camera and output it to a matrix. You *may* be able to define a 1 row matrix and output the camera feed to it, but I haven’t tried it.

If memory serves me correctly, Sean, the creator of Nutcracker, was live-streaming video a few years back.
This is interesting and I would like to know more about the how-tos of it.
 
Well, this has all been VERY helpful! Thank you all so much for the detailed knowledge. I can for sure run with this to get further along with my experiments. Cheers!
 
Necro. This may be a long shot, but how hard would it be to rip out MadMapper (or whatever real-time software tool it is that maps the live stream to a pixel matrix and sends E1.31) and instead insert my own tool to computationally process the live stream and output my own E1.31? And then mix it into a show. Namely, train a machine-learning AI to generate certain E1.31 streams based on certain captured still shots. Say a humanoid steps on my lawn, and I want the moving heads to all shine white on that spot. So I train the AI with 10,000 images of people on my lawn and say, "This shot corresponds to this pan, tilt, and zoom." I don't need a whole lot of horsepower: maybe 4 frames per second.
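Just to sketch the shape of that idea (not the trained-model part): below, OpenCV's stock HOG people detector stands in for the AI, and the detected position is mapped straight to pan/tilt channel values. The moving head's channel layout and the 4 fps rate are assumptions for illustration only.

```python
# Very rough sketch of the "person on the lawn -> pan/tilt" idea, using
# OpenCV's built-in HOG people detector as a stand-in for a trained model.
# The DMX channel layout (pan = ch 1, tilt = ch 2, dimmer = ch 3) is assumed.
import time
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
dmx = bytearray(512)                       # one universe of channel data

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects):
        x, y, w, h = rects[0]              # first detected person
        cx = (x + w / 2) / frame.shape[1]  # normalized 0..1 position
        cy = (y + h / 2) / frame.shape[0]
        dmx[0] = int(cx * 255)             # pan  (assumed channel 1)
        dmx[1] = int(cy * 255)             # tilt (assumed channel 2)
        dmx[2] = 255                       # dimmer full (assumed channel 3)
    # `dmx` would then be shipped out as an E1.31 packet, as sketched earlier.
    time.sleep(0.25)                       # ~4 frames per second
```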
 