Performing IMSRG calculations with RNNs

Calculational details:

  1. We used a simple pairing model with four particles in four doubly degenerate levels; see LNP 936, chapters 8, 10, and 11.
  2. We ran an IMSRG calculation (Magnus expansion with \( s_{\mathrm{max}}=10 \), step size \( ds=0.001 \), and the White generator).
  3. The training was done for \( s\in [0,0.5] \) and testing for \( s \in [0.5,10] \), with evenly spaced training data.
  4. The activation functions were tanh and ReLU, and we used both a plain RNN and an LSTM (long short-term memory) network. Both had the same number of units in each layer, so the two networks shared the same architecture.
  5. We used MSE as the loss function and Adam as the optimizer. Both networks were trained for 5000 epochs with a batch size of 28.
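The pairing model of item 1 can be set up explicitly in the seniority-zero (pair) basis, following the construction in LNP 936, chapter 8. The sketch below is an assumption about the standard form of that model: level spacing `delta`, pairing strength `g`, and the chosen values are illustrative, not the parameters actually used.

```python
import itertools
import numpy as np

def pairing_hamiltonian(n_levels=4, delta=1.0, g=0.5):
    """Seniority-zero pairing Hamiltonian for two pairs in n_levels
    doubly degenerate levels (cf. LNP 936, ch. 8).  Basis states are
    labelled by the pair of occupied levels (p, q) with p < q."""
    basis = list(itertools.combinations(range(1, n_levels + 1), 2))
    dim = len(basis)
    H = np.zeros((dim, dim))
    for i, (p, q) in enumerate(basis):
        # single-particle part: two particles in level p, two in level q,
        # minus the diagonal pairing contribution
        H[i, i] = 2.0 * delta * (p - 1) + 2.0 * delta * (q - 1) - g
        for j, (r, s) in enumerate(basis):
            # the pairing interaction moves one pair to a new level
            if i != j and len({p, q} & {r, s}) == 1:
                H[i, j] = -g / 2.0
    return H, basis

H, basis = pairing_hamiltonian(delta=1.0, g=0.5)
E0 = np.linalg.eigvalsh(H).min()   # exact ground-state energy
```

With four levels the pair basis has six states, so the exact ground-state energy comes from diagonalizing a 6x6 matrix; this exact result is what the IMSRG flow should converge to.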
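The flow in item 2 can be illustrated schematically. The sketch below integrates the SRG flow equation \( dH/ds = [\eta, H] \) with the White generator using a plain Euler step on a small symmetric matrix; the actual calculation used the Magnus expansion, and the test matrix and stopping point here are my own assumptions, chosen only to show the off-diagonal suppression.

```python
import numpy as np

def white_generator(H):
    """White generator: eta_ij = H_ij / (H_ii - H_jj) for i != j.
    Antisymmetric for symmetric H with distinct diagonal entries."""
    d = np.diag(H)
    denom = d[:, None] - d[None, :]
    np.fill_diagonal(denom, 1.0)   # avoid division by zero on the diagonal
    eta = H / denom
    np.fill_diagonal(eta, 0.0)
    return eta

def flow(H, ds=0.001, s_max=10.0):
    """Euler integration of dH/ds = [eta, H].  (A simplified stand-in
    for the Magnus-expansion evolution used in the actual calculation.)"""
    H = H.copy()
    for _ in range(int(round(s_max / ds))):
        eta = white_generator(H)
        H = H + ds * (eta @ H - H @ eta)
    return H

# illustrative symmetric test matrix, not the pairing Hamiltonian itself
H0 = np.array([[2.0, -0.25, -0.25],
               [-0.25, 4.0, -0.25],
               [-0.25, -0.25, 6.0]])
Hs = flow(H0, ds=0.001, s_max=10.0)
max_offdiag = np.abs(Hs - np.diag(np.diag(Hs))).max()
```

Because the right-hand side is a commutator, the trace (and, for an exact integrator, the full spectrum) is preserved while the off-diagonal elements decay, which is the sense in which the flow "diagonalizes" the Hamiltonian.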
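The recurrence behind the plain RNN of item 4, together with the MSE loss of item 5, can be written out in a few lines of NumPy. This is a forward-pass sketch only: the layer sizes are illustrative assumptions, the input grid merely stands in for the training interval \( s \in [0, 0.5] \), and the actual training (Adam, 5000 epochs, batch size 28) would in practice be done with a deep-learning library such as Keras or PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(x, W_in, W_rec, W_out, b_h, b_out):
    """Vanilla tanh RNN: h_t = tanh(W_in x_t + W_rec h_{t-1} + b_h),
    y_t = W_out h_t + b_out.  Input x has shape (T, n_in)."""
    T = x.shape[0]
    h = np.zeros(W_rec.shape[0])
    y = np.zeros((T, W_out.shape[0]))
    for t in range(T):
        h = np.tanh(W_in @ x[t] + W_rec @ h + b_h)
        y[t] = W_out @ h + b_out
    return y

def mse(y_pred, y_true):
    """Mean-squared-error loss, as used for training."""
    return np.mean((y_pred - y_true) ** 2)

n_in, n_hidden, n_out, T = 1, 16, 1, 50   # illustrative sizes
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
b_h = np.zeros(n_hidden)
b_out = np.zeros(n_out)

# flow-parameter grid standing in for the training interval s in [0, 0.5]
s = np.linspace(0.0, 0.5, T).reshape(-1, 1)
y = rnn_forward(s, W_in, W_rec, W_out, b_h, b_out)
loss = mse(y, np.zeros_like(y))
```

An LSTM replaces the single tanh update above with gated cell and hidden states but is trained against the same MSE objective; using the same number of units per layer in both networks keeps the comparison between them fair.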