Hello all,
I use NEST with OpenMPI on a high-performance cluster to run plasticity-related network simulations.
So far I have divided the whole job into two phases:
phase 1: data collection (senders, times, sources, and targets for each time point)
phase 2: data analysis (spike-count and connectivity calculations)
I face an issue with data handling. In phase 1, the data for each of the four variables is saved rank-wise into X different files (X = number of virtual processes). The total number of files generated is therefore n(time-points) * X * 4, which exceeds the per-user limit on the number of files we may store on the cluster. Each file is an ndarray saved as a *.npy file.
Is there a way to retrieve the data from each of the X processes during collection, concatenate it, and then save a single file per variable instead of X files? This probably involves having a single process collect, concatenate, and save the data, but I am not quite sure how to do this with NEST. Any help would be highly appreciated!
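
Concretely, something like the sketch below is what I have in mind: gather the per-rank arrays onto rank 0 and write one file per variable. I am assuming mpi4py here, since the script is already launched under MPI for NEST; local_senders and local_times are hypothetical placeholders for the arrays each rank currently writes out (e.g. the events of its local spike recorder).

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # ... run the simulation; afterwards each rank holds its local data,
    # here stubbed out with empty placeholder arrays:
    local_senders = np.asarray([], dtype=np.int64)
    local_times = np.asarray([], dtype=np.float64)

    # comm.gather pickles each rank's array and returns the list of all
    # ranks' arrays on the root rank only (None on every other rank)
    all_senders = comm.gather(local_senders, root=0)
    all_times = comm.gather(local_times, root=0)

    if rank == 0:
        # one file per variable per time point, instead of one per rank
        np.save("senders.npy", np.concatenate(all_senders))
        np.save("times.npy", np.concatenate(all_times))

Does this approach play well with NEST's parallelism, or is there a built-in way to achieve the same thing?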
Thanks in advance!
Best,
Swathi