Question about Funbox Earth Octave plus Mars Neural

cbcbd

New member
So this is kind of a question for @keyth72. I have been working on porting the Funbox planets to the Hothouse. I recently ported the Mars and Earth, and I thought, wouldn't it be awesome to have the Earth's octave on a footswitch with the Mars amp modeling. Obviously there were all kinds of issues with the sample rate between the two, but I think the thing that really killed me is I seemed to overload the processor when combining the two.

So the question I have is:

1. Did you ever get something like this to work, and should I just try harder?
2. Does anyone have a good lightweight octave up and down method?

The DaisySP built-in just doesn't cut it, and there doesn't seem to be an octave down at all. I have tried a bunch of other methods, but haven't had much success.
 
@cbcbd Good question, I have not tried combining those two effects on a single Daisy Seed, although I have paired those two effects on my pedalboard by using two separate Daisy Seed pedals, and it does sound really cool. There are a couple things you can try/check to see if the Daisy Seed can handle it:

1. Make sure you're compiling at the highest optimization level, -Ofast in the Makefile.
2. Make sure you have the Daisy Seed processor boost on by passing "true" to the hardware initialization line. This lets the Daisy Seed use the full 480MHz instead of 400MHz on the ARM processor. "hw.Init(true)" in the main function is where that goes (see the sketch after this list).
3. Use the highest audio block size you can; I'm pretty sure that is 256.
4. Mars uses both the neural model and an impulse response, and I pushed the Daisy Seed to about the limit to do that, although I don't remember if I used all of the above optimizations in the original code. You might try taking out the impulse response processing and see if it can handle the neural model with the octave.
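For reference, here is a minimal sketch of where points 2 and 3 go. The callback is just a passthrough placeholder, not the Funbox code, and the -Ofast from point 1 lives in the Makefile (the OPT variable, if your project follows the usual libDaisy layout):

```cpp
#include "daisy_seed.h"

using namespace daisy;

DaisySeed hw;

// Placeholder passthrough callback; the actual effect processing goes here.
void AudioCallback(AudioHandle::InterleavingInputBuffer  in,
                   AudioHandle::InterleavingOutputBuffer out,
                   size_t size)
{
    for(size_t i = 0; i < size; i += 2)
    {
        out[i]     = in[i];     // left
        out[i + 1] = in[i + 1]; // right
    }
}

int main(void)
{
    hw.Init(true);             // true = boost the ARM clock to 480MHz
    hw.SetAudioBlockSize(256); // biggest block size, least per-block overhead
    hw.StartAudio(AudioCallback);
    while(1) {}
}
```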

If I remember right I did a weird 6-sample buffer to make the octave code work. There is probably a better way to do this. That octave code is from @Steve Schulteis by the way, he may have some input. Here is the original code: https://github.com/schult/terrarium-poly-octave
 
I took out the IR to reduce the load and made it just a tone hack. You did use the 6-sample buffer, which effectively puts the octave processing at 8kHz (48kHz / 6). I tried to work with that, but I didn't find a better way, and trying to get it all working together produced bitcrushed mush.

On another note, I have ported the Mars, Earth, and Venus so far. Great job with everything you do; your stuff sounds great. I just want you to start selling a PCB so I don't have to get a bunch made ;-). It would be nice to have the expression input, MIDI, and DIP switches.

Thanks for the advice; I am going to try those optimizations. Have you ever found a good way to do a lightweight octave that sounds good on the Daisy? The Poly Octave stuff sounds great, but it takes a lot of processing.
 
2. Does anyone have a good lightweight octave up and down method?
If you're ok with being limited to single notes (no chords), you probably want one of the variations on the "overlap-add" (OLA) algorithm. The time domain portions of each of these videos give an overview of how it works, and they link to example source code in their descriptions.
[embedded video links]
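Purely to illustrate the time-domain idea (this is not the code from those videos), here is a bare-bones sketch of the classic two-tap delay-line variant, which is essentially OLA with two overlapping, triangular-windowed grains. All of the names, the window length, and the linear interpolation are just placeholder choices:

```cpp
#include <cmath>
#include <cstddef>

class SimpleShifter
{
  public:
    void Init(float ratio)
    {
        ratio_ = ratio; // 2.0f = octave up, 0.5f = octave down
        write_ = 0;
        phase_ = 0.0f;
        for(size_t i = 0; i < kBufSize; i++)
            buf_[i] = 0.0f;
    }

    float Process(float in)
    {
        buf_[write_] = in;

        // Two read taps, half a window apart in phase, crossfaded so each one
        // is silent at the moment it jumps back to the start of its sweep.
        float phaseB = Wrap(phase_ + 0.5f * kWindow);
        float out    = Read(phase_) * Fade(phase_) + Read(phaseB) * Fade(phaseB);

        // The taps drift by (1 - ratio) samples per input sample, which is
        // what produces the pitch shift.
        phase_ = Wrap(phase_ + (1.0f - ratio_));
        write_ = (write_ + 1) % kBufSize;
        return out;
    }

  private:
    static constexpr size_t kBufSize = 4096;   // circular buffer length
    static constexpr float  kWindow  = 2048.f; // grain length in samples

    static float Wrap(float p)
    {
        while(p < 0.0f) p += kWindow;
        while(p >= kWindow) p -= kWindow;
        return p;
    }

    // Linear-interpolated read `delay` samples behind the write position.
    float Read(float delay) const
    {
        float pos = (float)write_ - delay;
        while(pos < 0.0f) pos += (float)kBufSize;
        size_t i0   = (size_t)pos;
        float  frac = pos - (float)i0;
        size_t i1   = (i0 + 1) % kBufSize;
        return buf_[i0] + frac * (buf_[i1] - buf_[i0]);
    }

    // Triangular window: zero where a tap wraps, one in the middle of a sweep.
    static float Fade(float phase)
    {
        return 1.0f - std::fabs(2.0f * phase / kWindow - 1.0f);
    }

    float  buf_[kBufSize];
    size_t write_;
    float  phase_;
    float  ratio_;
};
```

For up and down at the same time you would run two instances (ratios 2.0 and 0.5) and mix them in. It is cheap (one buffer write and four interpolated reads per sample), but it is strictly monophonic-sounding and has the characteristic warble, which is exactly what the fancier poly octave approach avoids.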
 
Here are a few ideas for ways to reduce the processing power required by the poly octave code:
  • Remove the "down2" octave from BandShifter if you can live without it. Don't forget to also drop its sign calculation that happens in the update_down1 function.
  • Reduce the number of analysis filters (and increase the space between their center frequencies). This affects the quality of the results, but you should be able to get away with at least a small change before it becomes too noticeable. FYI, you may find some weird behaviors as you adjust this - I vaguely recall the algorithm being sensitive to center frequency and filter overlap changes in ways that were not always easy to predict.
  • Replace the resampling filters with versions that have fewer taps (see the generic FIR sketch after this list for why the tap count is what you pay for).
  • Drop the two eq filters. They're not essential.
  • Take a look at my experimental arbitrary-shift branch. I haven't done an apples-to-apples comparison with the octaves-only implementation, but I did put some effort into optimizing this more generalized version of the algorithm. Maybe there's something in there that helps. Note that the resampling in that branch is only by a factor of 2, rather than a factor of 6, so the analysis filters are running three times as often.
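One note on the tap counts: with a plain direct-form FIR the per-sample cost is basically one multiply-accumulate per tap, so halving the taps roughly halves that filter's load. A generic sketch of what that cost looks like (illustration only, not the actual filters from the repo):

```cpp
#include <cstddef>

// Generic direct-form FIR: NUM_TAPS multiply-accumulates per output sample,
// so the tap count is directly what you pay for.
template <size_t NUM_TAPS>
class Fir
{
  public:
    void Init(const float* coeffs)
    {
        for(size_t i = 0; i < NUM_TAPS; i++)
        {
            coeffs_[i]  = coeffs[i];
            history_[i] = 0.0f;
        }
        pos_ = 0;
    }

    float Process(float in)
    {
        history_[pos_] = in; // newest sample
        float  acc = 0.0f;
        size_t idx = pos_;
        for(size_t i = 0; i < NUM_TAPS; i++) // one MAC per tap
        {
            acc += coeffs_[i] * history_[idx];
            idx = (idx == 0) ? NUM_TAPS - 1 : idx - 1;
        }
        pos_ = (pos_ + 1) % NUM_TAPS;
        return acc;
    }

  private:
    float  coeffs_[NUM_TAPS];
    float  history_[NUM_TAPS];
    size_t pos_;
};
```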

Regarding the 6-sample buffer: I'm applying the eq filters at the full 48 kHz sample rate. You can replace those with your other effect(s), feeding them from out_chunk, which is already a 6-sample buffer (returned by interpolate).
 
Thanks, I will try that. I think it is still going to be too much combined with the neural model, which already seems to be pushing things. I would probably need to figure out how to optimize everything else too, or just build a box with two Daisy Seeds. It is good to know, because I really like the way Poly Octave sounds, so I am going to be using it in the future with other things. Always good to have some tricks up my sleeve to make it work!
 
On another note, do you guys have any recommendations for a good source of GRU models for use with RTNeural?
 
On another note, do you guys have any recommendations for a good source of GRU models for use with RTNeural?
As far as I know, no one has really made a bunch of these and made them available; the only ones I've made are included in the GitHub repos.
 
The challenge is that I don't have access to all of the source equipment that I would like to model, otherwise I would just build my own. I noticed that in one of your Medium articles you mentioned "distillation" to train a small model from a large one. I was wondering if there is any viable way to take, say, an LSTM size-20 model and "distill" it down to a GRU small enough to be viable within a microcontroller's limitations.
 
Another option: I asked someone nicely (and sent a small donation as a thank you) for their original captures, then retrained the models with lowered fidelity/parameters for an amp I was interested in. https://github.com/bkshepherd/DaisySeedProjects/pull/69

I considered distillation and actually had a few email exchanges with the NAM author about it, but decided I didn't really have any interest in going that route at this time due to other interests/commitments.
 
Another option: I asked someone nicely (and sent a small donation as a thank you) for their original captures, then retrained the models with lowered fidelity/parameters for an amp I was interested in. https://github.com/bkshepherd/DaisySeedProjects/pull/69

I considered distillation and actually had a few email exchanges with the NAM author about it, but decided I didn't really have any interest in going that route at this time due to other interests/commitments.
Well, that is certainly a reasonable way to go about it. I was checking out your JCM800 sims. I used to have an 80's JCM800 that sounded amazing. I left it in a friend's studio, and it disappeared into the ether. Would have loved to have a good sim of that amp.
 
Well, that is certainly a reasonable way to go about it. I was checking out your JCM800 sims. I used to have an 80's JCM800 that sounded amazing. I left it in a friend's studio, and it disappeared into the ether. Would have loved to have a good sim of that amp.
I have all of the captures from it (https://www.tone3000.com/tones/marshall-jcm800-2203x-27847). I wasn't super into it compared to the BE-100, and it didn't seem to train as nicely with the low fidelity/parameters either.

I did the SansAmp BDDI myself, just because I have one and wanted to do the process end to end.

I considered buying a Sonicake Pocket Master to play around with, but doing these ^ scratched the itch instead. Now I basically just use my Tube Screamer, RAT, and Big Muff anyway 🤣
 
I have all of the captures from it (https://www.tone3000.com/tones/marshall-jcm800-2203x-27847). I wasn't super into it compared to the BE-100, and it didn't seem to train as nicely with the low fidelity/parameters either.
Yeah, the tone3000 site is a great resource for captures of different gear. You can always try making a GRU capture from the NAM plugin using a particular model you want to emulate. Is that what you mean by "distilling"?

I did at one point get a "Nano" size NAM model to run on the Daisy Seed, which sounds great to me in terms of accuracy, but the vast majority of models on tone3000 are the "standard" size NAM, much too heavy to run on the Daisy Seed without some kind of reduction. I'd be curious to know the process that products like the Valeton GP-5 use to reduce standard NAM models to something more processor friendly.
 
This is a bit off topic, but are you guys familiar with the Coral AI boards?
They have a microcontroller board with a TPU that I reckon should be able to run larger neural models (of course, there's the problem of the ADC/DAC, which is already solved on the Seed).
 
Yeah, the tone3000 site is a great resource for captures of different gear. You can always try making a GRU capture from the NAM plugin using a particular model you want to emulate. Is that what you mean by "distilling"?

I did at one point get a "Nano" size NAM model to run on the Daisy Seed, which sounds great to me in terms of accuracy, but the vast majority of models on tone3000 are the "standard" size NAM, much too heavy to run on the Daisy Seed without some kind of reduction. I'd be curious to know the process that products like the Valeton GP-5 use to reduce standard NAM models to something more processor friendly.
Yeah, I was considering an automated pipeline for recapturing/retraining from existing models like you mentioned, using NAM for playback.
 
On another note, do you guys have any recommendations for a good source of GRU models for use with RTNeural?

No idea if these are compatible with RTNeural, but Gru models are available...

[Image: GRU 3Despicable Me.jpg]

Says "FREE3D" but they want $30 for it!



Then there's these others that are cheaper, I think...

[Image: Gru figurine.webp] WITH FREEZEGUN!


[Image: GRU FIGURINE SMILING.jpg]
 