Performing IMSRG calculations with RNNs
Computational details:
- We used a simple pairing model with four particles and four doubly degenerate levels; see LNP 936, chapters 8, 10, and 11 (a minimal Hamiltonian sketch follows this list).
- We ran an IMSRG calculation (Magnus expansion with \( s_{\mathrm{max}} = 10 \), step size \( ds = 0.001 \), and the White generator); see the toy flow sketch below.
- Training was done for \( s \in [0, 0.5] \) and testing for \( s \in [0.5, 10] \); the training data were evenly spaced in \( s \) (see the data-split sketch below).
- The activation functions were tanh and ReLU, and we used both a plain RNN and an LSTM (long short-term memory) network. Both had the same number of units in each layer, so the networks were structured like this:
- input --> recurrent hidden layer (1000 units) --> recurrent hidden layer (100 units) --> dense hidden layer (10 units) --> output (1 unit)
- We used MSE as the loss function and Adam as the optimizer. Both networks were trained for 5000 epochs with a batch size of 28 (a Keras sketch of the model and training follows).
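A minimal sketch of the pairing-model Hamiltonian, assuming the standard LNP 936 setup: level spacing \( d \), pairing strength \( g \), and a basis of the six pair-preserving Slater determinants. The values of \( d \) and \( g \) below are illustrative placeholders, not necessarily those used in our runs.

```python
import numpy as np

d, g = 1.0, 0.5  # illustrative level spacing and pairing strength (assumed values)

# Basis: the six Slater determinants with both pairs intact,
# labelled by the two occupied (0-indexed) levels.
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

H = np.zeros((6, 6))
for a, (p, q) in enumerate(pairs):
    # Diagonal: single-particle energies of the two pairs, minus one pairing unit.
    H[a, a] = 2 * d * (p + q) - g
    for b, (r, s) in enumerate(pairs):
        # Off-diagonal: -g/2 couples determinants that share exactly one pair.
        if a != b and len({p, q} & {r, s}) == 1:
            H[a, b] = -g / 2

# Exact ground-state energy from full diagonalization, for reference.
print(np.linalg.eigvalsh(H)[0])
```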
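A toy matrix-space sketch of the Magnus-expansion flow with a White generator, applied to the \( 6 \times 6 \) matrix `H` from the previous sketch. This is a leading-order Magnus update for illustration only, not the full IMSRG(2) machinery; the function names `white_generator` and `magnus_flow` are ours.

```python
import numpy as np
from scipy.linalg import expm

def white_generator(H):
    """White-style generator for a toy matrix flow:
    eta_ij = H_ij / (H_ii - H_jj) on the off-diagonal part."""
    diag = np.diag(H)
    denom = diag[:, None] - diag[None, :]
    eta = np.zeros_like(H)
    # Guard against (near-)degenerate diagonal entries, where the White
    # generator is ill-defined; those matrix elements are left undriven.
    mask = ~np.eye(len(H), dtype=bool) & (np.abs(denom) > 1e-12)
    eta[mask] = H[mask] / denom[mask]
    return eta

def magnus_flow(H0, s_max=10.0, ds=0.001):
    """Leading-order Magnus evolution: dOmega/ds ~ eta(s),
    with H(s) = e^Omega H(0) e^-Omega."""
    Omega = np.zeros_like(H0)
    H = H0.copy()
    s_vals, E_vals = [], []
    s = 0.0
    while s < s_max:
        Omega += ds * white_generator(H)
        U = expm(Omega)          # orthogonal, since Omega is antisymmetric
        H = U @ H0 @ U.T
        s += ds
        s_vals.append(s)
        E_vals.append(H[0, 0])   # flowing energy of the lowest (reference) state
    return np.array(s_vals), np.array(E_vals)

s_vals, E_vals = magnus_flow(H)
```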
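One plausible way to prepare the training and testing sets from the flow data, assuming the network maps the flow parameter \( s \) to the flowing energy \( E(s) \); the exact input/target framing is an assumption.

```python
# Split at s = 0.5: the evenly spaced ds = 0.001 grid gives 500 training points.
train = s_vals <= 0.5
s_train, E_train = s_vals[train], E_vals[train]
s_test, E_test = s_vals[~train], E_vals[~train]

# Keras recurrent layers expect 3D input: (samples, time steps, features).
X_train, y_train = s_train.reshape(-1, 1, 1), E_train
X_test, y_test = s_test.reshape(-1, 1, 1), E_test
```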
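A Keras sketch of the stated architecture and training setup. The placement of the activations (tanh in the recurrent layers, ReLU in the dense layer) and the use of `SimpleRNN` for the plain-RNN variant are our assumptions.

```python
import tensorflow as tf

def build_model(recurrent=tf.keras.layers.SimpleRNN):
    """Input -> recurrent (1000) -> recurrent (100) -> dense (10) -> output (1).
    Pass recurrent=tf.keras.layers.LSTM for the LSTM variant."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(1, 1)),  # (time steps, features)
        recurrent(1000, activation="tanh", return_sequences=True),
        recurrent(100, activation="tanh"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

model = build_model()
model.compile(loss="mse", optimizer="adam")
model.fit(X_train, y_train, epochs=5000, batch_size=28, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))  # MSE on the extrapolation region
```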