ESP8266 Questions (specifically Wemos Mini R1)

P. Short

Super Moderator
Staff member
I've been playing around with a generic (HiLetGo) Wemos Mini R1, trying to see how fast data can be transferred in/out of the part using the hardware SPI port. The observations have been made using the Arduino IDE with the CPU clock set to 160 MHz.

The first thing I've noticed is that the rise and fall times of the output signals seem to be around 40 ns (from one level to approximately 2/3 of the other level). Is this normal? It seems awfully slow; I would have hoped for something between 10 ns and perhaps 20 ns at the outside.

The other thing I've noticed is that the maximum output data rate of the Wemos is around 400 KB/s (3.2 Mb/s), or 2.5 µs per byte. Of this time, about 1 µs is spent shifting the eight bits into/out of the part (as expected with an 8 MHz bit clock, and as observed on an oscilloscope). The remaining 1.5 µs is dead time between byte transfers. The code that generates this result is a series of inline SPI.transfer(char) statements (changing them to SPI.transfer(int) has the same result). My goal is to obtain 8 Mb/s (1 MB/s) SPI transfers using an interrupt routine, which will likely require low-level hardware register access. I really don't want to have to bit-bang from foreground code.
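For reference, the pattern that produces the 2.5 µs/byte figure boils down to something like this (a minimal sketch, assuming the ESP8266 Arduino core's SPI library; the buffer size is an arbitrary placeholder):

```cpp
#include <SPI.h>

uint8_t buf[256];   // arbitrary test buffer

void setup() {
  SPI.begin();
  // 8 MHz bit clock, matching what I see on the scope
  SPI.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
}

void loop() {
  // One library call per byte: ~1 us of actual clocking
  // plus ~1.5 us of per-call overhead between bytes.
  for (size_t i = 0; i < sizeof(buf); i++) {
    buf[i] = SPI.transfer(buf[i]);
  }
}
```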

Edit: Using the block-transfer variant of the SPI.transfer command seems to eliminate the dead time between byte transfers... but I'd still like to figure out whether it's possible to somehow get an SPI interrupt once per byte without using up all of the CPU's instruction bandwidth.
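For anyone else who tries this, the block variants I'm referring to are the ESP8266 core's buffer-based calls, used something like this (a sketch, assuming the same 8 MHz setup as above):

```cpp
#include <SPI.h>

uint8_t out[64];
uint8_t in[64];

void setup() {
  SPI.begin();
  SPI.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
}

void loop() {
  // Full-duplex block transfer: one call clocks all 64 bytes
  // back to back, with no visible dead time between bytes.
  SPI.transferBytes(out, in, sizeof(out));

  // Write-only variant when the read data isn't needed:
  // SPI.writeBytes(out, sizeof(out));
}
```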
 
Wouldn't an interrupt with each byte have a tendency to bog things down? Is this for a practical design reason or is it more-or-less an empirical effort just to see if it could be done?

I have not spent any time dinging around with coding the ESP myself -- I've just used Shelby's firmware without modifications because it does everything I'd want it to do. Well, almost everything.... and I'm hoping to find some time to see whether servo control can be added to it...
 
The overhead on an ISR is pretty nasty. I suppose you could make things faster by bypassing the general-purpose driver and feeding the hardware directly. The ESPixelstick fills the UART FIFO as full as possible and then waits for an interrupt when it is almost empty. We send about 80 bytes of data per ISR, and that puts a pretty big load on the CPU.
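Roughly, the refill logic looks like this (a simplified sketch using the register macros from the core's esp8266_peri.h, not the actual ESPixelstick source; UART1, the 0x7f fill level, and the UART setup via Serial1.begin() beforehand are all assumptions here):

```cpp
#include <Arduino.h>
#include <esp8266_peri.h>

static const uint8_t* txData;   // remaining bytes to send
static size_t txLeft;

static void ICACHE_RAM_ATTR uartISR(void*) {
  if (USIS(1) & (1 << UIFE)) {                     // TX FIFO below threshold
    // Top the 128-byte hardware FIFO back up in one burst
    while (txLeft && ((USS(1) >> USTXC) & 0xff) < 0x7f) {
      USF(1) = *txData++;
      txLeft--;
    }
    if (!txLeft)
      USIE(1) &= ~(1 << UIFE);                     // done: mask the interrupt
  }
  USIC(1) = USIS(1);                               // ack pending flags
}

void startTx(const uint8_t* data, size_t len) {
  txData = data;
  txLeft = len;
  ETS_UART_INTR_ATTACH(uartISR, NULL);
  USIE(1) |= (1 << UIFE);                          // unmask TX-FIFO-empty
  ETS_UART_INTR_ENABLE();
}
```

The point either way is that you pay the ISR entry/exit cost once per FIFO refill (about 80 bytes for us) instead of once per byte.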
 
I'm hoping that one interrupt per microsecond wouldn't be too much of a burden for a processor running at 160 MHz. That, of course, depends on whether the SPI hardware can even generate an interrupt, and on what the software overhead is for entering and exiting the interrupt service routine.
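One way to put a rough number on that overhead before committing to the design: time a do-nothing ISR with the cycle counter (a sketch for my test setup; the D1-to-D2 jumper is just how I'd trigger it, nothing canonical):

```cpp
#include <Arduino.h>

volatile uint32_t isrEntry;

void ICACHE_RAM_ATTR nopISR() {
  isrEntry = ESP.getCycleCount();   // timestamp at ISR entry
}

void setup() {
  Serial.begin(115200);
  pinMode(D1, OUTPUT);              // jumper D1 to D2
  pinMode(D2, INPUT);
  attachInterrupt(digitalPinToInterrupt(D2), nopISR, RISING);
}

void loop() {
  uint32_t start = ESP.getCycleCount();
  digitalWrite(D1, HIGH);           // fire the interrupt
  delayMicroseconds(10);            // let the ISR run
  digitalWrite(D1, LOW);
  // At 160 MHz there are only 160 cycles in each 1 us byte slot,
  // so entry latency alone could eat a big slice of the budget.
  Serial.printf("entry latency: %u cycles\n", isrEntry - start);
  delay(1000);
}
```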
 