DEMO Seed - Neural Net Multi Effect built with Terrarium / Daisy Seed (Tayda UV print)

1. Turn Level control all the way down
2. Flip the left model select switch (Terrarium switch 1) four times from its current position (up/down/up/down, or down/up/down/up)
3. Do the same thing for the right model select switch (Terrarium switch 2)
Bah! I thought I'd tried that after looking at the code. But I guess not. Today it works like a charm.

I've also already tweaked and modded your code into a different beast altogether and I've converted some other json models using convert_json_to_c_header_gru.py from your NeuralSeed project (some good models in there too). I'll try my hand at modeling some of my own amps next. Fun, fun. It's great to have your stuff as a learning tool. Cheers again for open sourcing!
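In case it helps anyone following along, here's a rough idea of what a JSON-to-C-header conversion like convert_json_to_c_header_gru.py does. This is not the actual NeuralSeed script, just a minimal sketch assuming the JSON holds named, nested weight arrays; the function name and output format are my own:

```python
import json

def json_to_c_header(json_path, header_path, var_prefix="model"):
    """Flatten each named weight array in a JSON model file
    into a static C float array in a generated header."""
    with open(json_path) as f:
        data = json.load(f)

    def flatten(x):
        # Recursively flatten nested lists into a flat list of floats
        if isinstance(x, list):
            out = []
            for item in x:
                out.extend(flatten(item))
            return out
        return [float(x)]

    lines = ["#pragma once", ""]
    for name, weights in data.items():
        values = flatten(weights)
        lines.append(f"static const float {var_prefix}_{name}[{len(values)}] = {{")
        lines.append("    " + ", ".join(f"{v:.8f}f" for v in values))
        lines.append("};")
        lines.append("")
    with open(header_path, "w") as f:
        f.write("\n".join(lines))
```

The generated header can then be #included by the firmware so the weights live in flash instead of being parsed at runtime.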
 
Hi @keyth72 it is a great build and demo!
I've been wondering recently about the requirements for supporting something like NAM in a pedal, and came across your build. It would be great if you could share your thoughts on the topics below:
Regarding the DSP/MCU chip: The Seed's STM32H750IBK6 is a single-core Cortex-M7 at 480 MHz, which doesn't seem powerful enough for the task. I noticed that DIMEHEAD's NAM player, with its quad-core ARM A72, does seem to support the standard version of NAM. What would be the bare minimum needed to run NAM?
About latency: Any idea of the latency between input and output on your Seed builds? The DSP code might need adapting to get low latency.
 
Good questions! I measured the latency on the Seed at one point, I think it was around 3ms.
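For reference, that ~3 ms figure is in the right ballpark for small-block audio processing. A quick back-of-the-envelope calculation, assuming the Daisy's default 48 kHz sample rate and a hypothetical block size of 48 samples (the actual block size in my builds may differ):

```python
SAMPLE_RATE = 48_000   # Daisy Seed default sample rate (Hz)
BLOCK_SIZE = 48        # hypothetical audio callback block size (samples)

# One block must be buffered on the input side and one on the output
# side, so buffering alone costs roughly two blocks of delay.
block_ms = BLOCK_SIZE / SAMPLE_RATE * 1000   # 1.0 ms per block
round_trip_ms = 2 * block_ms                 # ~2 ms of buffering
# Codec ADC/DAC group delay adds a bit more, landing near ~3 ms total.
```

Shrinking the block size lowers latency but raises per-block overhead, so there's a practical floor.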

To run NAM on the Daisy Seed (or any other H7 microcontroller device) would take some doing, and the standard models would likely be much too slow on it. I tried adapting the NAM code for the Daisy to see what could run, and I got it to compile and load on the Daisy Seed, but I never got past it crashing on startup, even with super tiny custom models. That doesn't mean it's impossible; there are optimizations for STM32 devices that are beyond me.

I've been keeping up with the NAM player, very cool stuff! Yes, the A72 is much more powerful; it's what the Raspberry Pi 4 uses. I was able to run my full Proteus models on the Raspberry Pi 4 with room to spare, but to run my models on the Daisy Seed I had to shrink them by about a factor of 10. Still, you can get surprisingly good results with small models if the training is done properly. If you've experimented with the smaller NAM models, like feather or nano, it's hard to tell any difference most of the time.
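To give a sense of what "a factor of 10" means for a GRU-based capture model, here's a rough parameter count for a single-layer GRU plus a dense output layer, the general shape these models take. The hidden sizes below are hypothetical, just to show how fast the recurrent weights grow:

```python
def gru_param_count(hidden_size, input_size=1):
    """Approximate parameter count for a single-layer GRU
    followed by a dense output layer."""
    gates = 3 * (hidden_size * input_size      # input weights
                 + hidden_size * hidden_size   # recurrent weights
                 + hidden_size)                # biases
    dense = hidden_size + 1                    # output weights + bias
    return gates + dense

# Hypothetical sizes: a "full" model vs. a Seed-friendly one.
full = gru_param_count(64)    # 12737 parameters
small = gru_param_count(20)   # 1341 parameters
ratio = full / small          # roughly a 9-10x reduction
```

Because the recurrent term is quadratic in hidden size, a modest cut in hidden units buys a large cut in per-sample compute.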

Even though the small models sound good, the appeal of the NAM player, in my opinion, is leveraging the insane amount of work the community has put into creating models for it, most of which are probably the "standard" size. So the short answer is no: NAM probably can't run on the Daisy Seed without serious rework of the real-time code, and maybe the training code too.
 