Conversation
Had some time to dig a little bit deeper into that, and it would appear that the main performance hurdles are due to temporary allocations caused by the indexing operations in `uptake`. What I did was to profile the memory usage as well; for context, the peak memory usage was only ~170 MB.

EDIT: There is a dtype conversion (float32 to float64) taking place. Switching here:

```diff
- Max_Uptake_array = np.zeros((self.n_monomers*self.gridsize,self.n_taxa), dtype='float32')
+ Max_Uptake_array = np.zeros((self.n_monomers*self.gridsize,self.n_taxa), dtype='float64')
```

removes the conversion, and with it the majority of the temporary copies. On my machine the runtime of the test case improved from ~870 s to ~110 s.
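For context on why the dtype switch removes the temporaries, here is a minimal sketch (with illustrative names and sizes, not the model's actual code): combining a float32 buffer with float64 pandas data forces an upcast, i.e. a full-size temporary float64 copy, on every operation, whereas matching dtypes avoids it.

```python
import numpy as np
import pandas as pd

# Illustrative sizes only -- not the model's real dimensions.
n_rows, n_cols = 200_000, 100

# pandas creates float64 columns by default.
df = pd.DataFrame(np.random.rand(n_rows, n_cols))

# float32 buffer: arithmetic with the float64 frame first upcasts this
# array to float64, allocating a temporary copy of the full size.
buf32 = np.zeros((n_rows, n_cols), dtype='float32')
out = df * buf32          # implicit upcast of buf32 -> temporary float64 array

# float64 buffer: dtypes already match, so no conversion copy is made.
buf64 = np.zeros((n_rows, n_cols), dtype='float64')
out = df * buf64          # no upcast, no extra temporary
```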

I have added the pytest-profiling dependency and a profiling test. This can be run by calling `pytest --profile-svg -m "profiling"`, which creates a call chart showing the bottlenecks in `prof/combined.svg`. It is also possible to visualise the results as a flamegraph by calling `flameprof prof/combined.prof > prof/flamegraph.svg`.

Closes #22

The result of the profiling is shown below. The majority (90%) of the time is spent in the `uptake` function, most of which is spent in pandas indexing.
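As a rough illustration of how such a test can be wired up with pytest-profiling (this is a self-contained stand-in, not the repository's actual test), a marked test like the one below is enough for `-m "profiling"` to select it and for `--profile-svg` to write `prof/combined.prof` and `prof/combined.svg`:

```python
import numpy as np
import pandas as pd
import pytest

# The "profiling" marker should be registered (pytest.ini / setup.cfg /
# pyproject.toml) so `-m "profiling"` selects only this test without warnings.
@pytest.mark.profiling
def test_profile_pandas_indexing():
    # Stand-in workload for the expensive loop: repeated label-based pandas
    # indexing, the pattern the profile above points at.
    df = pd.DataFrame(np.random.rand(1_000, 50))
    total = 0.0
    for i in range(1_000):
        total += df.loc[i].sum()   # row-by-row .loc access is the slow pattern
    assert total > 0

# Run with:
#   pytest --profile-svg -m "profiling"                    # writes prof/combined.prof and prof/combined.svg
#   flameprof prof/combined.prof > prof/flamegraph.svg     # optional flamegraph view
```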