add documentation

Robin, 2 years ago
parent commit 0e762c5c35

+ 7 - 8
README.md

@@ -1,12 +1,11 @@
 
-# Eigenangles: evaluating the statistical similarity of neural network simulations via eigenvector angles
-Code and data repository accompanying the publication by Gutzen et al. (2022) https://doi.org/...
+# Eigenangles: evaluating the statistical similarity of neural network activity and connectivity via eigenvector angles
+Code and data repository accompanying the publication by Gutzen et al. (2022) https://doi.org/...
 
-<RRID> <zenodo doi>
+<RRID> [![DOI](https://zenodo.org/badge/DOI/XXXX/zenodo.XXXX.svg)](https://doi.org/XXXX/zenodo.XXXX)
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/git/https%3A%2F%2Fgin.g-node.org%2FINM-6%2Feigenangles/HEAD?labpath=eigenangle_basics.ipynb)
 
-## Abstract
-
-## Folder Structure
-
-## Cite
+## Content
+The different applications and testing scenarios of the eigenangle test are separated into the folders `balanced_network`, `stochastic_activity`, and `polychrony_network`, which contain the corresponding workflows (see the respective `README.md` for details). The top-level folder `scripts` contains a general code base used by each of the workflows.
+The folder `paper_figures` contains the figures from the publication, as generated by the notebooks or scripts in the respective application folders. Figures 1 and 2 are produced by the corresponding notebooks within this folder.
+The interactive Jupyter notebook `eigenangle_basics.ipynb` presents a step-by-step construction and explanation of the eigenangle test and can be executed via the MyBinder badge above.
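As a rough illustration of the quantity the test is built on (the full test is developed step by step in `eigenangle_basics.ipynb` and in the publication), the sketch below computes the angles between matched eigenvectors of two correlation matrices. The toy data and the function name are made up for this example and are not part of the repository.

```python
import numpy as np

def eigenvector_angles(corr_a, corr_b):
    """Angles (radians) between matched eigenvectors of two symmetric matrices.

    Illustrative only: the published eigenangle test builds its statistic and
    p-value on top of such angles; this sketch stops at the raw angles.
    """
    # np.linalg.eigh returns eigenvalues in ascending order; flip to largest-first
    _, eigvecs_a = np.linalg.eigh(corr_a)
    _, eigvecs_b = np.linalg.eigh(corr_b)
    eigvecs_a = eigvecs_a[:, ::-1]
    eigvecs_b = eigvecs_b[:, ::-1]
    # cosine of the angle between the i-th eigenvector of each matrix
    # (absolute value makes the result independent of the eigenvectors' sign)
    cosines = np.abs(np.sum(eigvecs_a * eigvecs_b, axis=0))
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# toy example: correlation matrices of two independent white-noise data sets
rng = np.random.default_rng(0)
corr_a = np.corrcoef(rng.normal(size=(100, 1000)))
corr_b = np.corrcoef(rng.normal(size=(100, 1000)))
print(eigenvector_angles(corr_a, corr_b)[:5])  # angles for the 5 leading eigenvectors
```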

+ 13 - 0
balanced_network/README.md

@@ -0,0 +1,13 @@
+
+# Evaluating network rewiring in balanced random network models
+This folder contains the workflow to create and simulate balanced random neural networks, rewire their connectivity by given protocols, compare networks with the eigenangle test, and visualize the results.
+The network configurations are specified via the `config.py` file. Wildcards in the file paths can update the parameters of the config file, following the pattern `N{N}_f{f}_mu{mu}_p{epsilon}_eta{eta}_sex{sex}_sin{sin}_{syndist}` (referred to below as `{network_specs}`; see the example after the list below). The main functionalities of the workflow are:
+
+* `snakemake rewire_and_redraw` creates and simulates all (rewired) networks with the parameter ranges specified in the config file.
+* `snakemake rewire_comparisons` compares the original network (weight and correlation matrix) with the rewired networks for each initialization, i.e. random seed.
+* `snakemake redraw_comparisons` compares the different network initializations, i.e. across seeds.
+* `snakemake results/{network_specs}/rewiring_results.csv` processes and aggregates all results in that folder into a dataframe.
+* `snakemake simulation_output/{network_specs}/seed_{seed}/{protocol}/<weights/correlations/spikes_{t_start}-{t_stop}ms>.png` plots the corresponding weight matrix, correlation matrix, or spike raster plot.
+* `snakemake images/eigenspectrum/{network_specs}_{protocol}.png` plots the corresponding eigenvalue and eigenangle distributions based on 8 random initializations.
+* `snakemake images/pvalue_overview/{network_specs}-{protocol}.png` plots the p-value swarm plots for the weight and correlation comparisons, their ratio, and the firing rate correlations.
+* `snakemake images/pvalue_trend/{network_specs}_{protocol}.png` plots the p-value trends, their ratio, and the firing rate correlations with respect to the corresponding protocol parameters.
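As an example of how the `{network_specs}` wildcards map onto concrete target paths, the minimal sketch below formats the pattern with example values. The values for `N` and `f` are hypothetical (they do not appear in the config excerpt of this commit), `sex`/`sin` are assumed to correspond to `sigma_ex`/`sigma_in`, and the exact float formatting is determined by the Snakemake workflow, not by this sketch.

```python
# Illustrative only: how a concrete {network_specs} string could look.
params = dict(N=1000, f=0.8,                  # hypothetical, not in the config excerpt
              mu=0.1, epsilon=0.1, eta=0.9,   # from config.py
              sex=0.12, sin=0.1,              # assumed to be sigma_ex / sigma_in
              syndist='lognormal')

network_specs = ('N{N}_f{f}_mu{mu}_p{epsilon}_eta{eta}'
                 '_sex{sex}_sin{sin}_{syndist}').format(**params)
print(network_specs)
# -> N1000_f0.8_mu0.1_p0.1_eta0.9_sex0.12_sin0.1_lognormal
# e.g. used in a target such as results/<network_specs>/rewiring_results.csv
```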

+ 23 - 4
balanced_network/config.py

@@ -10,8 +10,9 @@ mu = 0.1
 epsilon = 0.1
 # external rate relative to threshold rate
 eta = 0.9
-# variance of weight sample distributions
+# type of distribution for synaptic weights
 syndist = 'lognormal' # 'truncated-normal' 'lognormal' 'normal'
+# variance of weight sample distributions
 sigma_ex = 0.12
 sigma_in = 0.1
 # simulation time
@@ -20,12 +21,30 @@ simtime = 61000 #ms
 ###### rewiring parameters ######
 shuffle_source = ['E', 'I']
 shuffle_target = ['E', 'I']
-shuffle_frac = [1.0, 3500]
+shuffle_frac = 1.0
 
 add_source = 'E'
 add_target = 'E'
-add_source_frac = [0.2, 12800]
-add_target_frac = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
+add_source_frac = [0.2, 10000]
+# add_target_frac > p/(1-p) * add_source_frac
+add_target_frac = [0.02, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
+
+cluster_pop = 'E'
+cluster_number = 3
+cluster_size = [0.02, 0.04, 0.06, 0.08]
+cluster_epsilon = [0.2, 0.3, 0.4, 0.5, 0.6]
+
+hub_pop = 'E'
+hub_size = [0.04, 0.06, 0.08, 0.10]
+hub_strength = [0.2, 0.3, 0.4, 0.5]
+hub_epsilon = 0.5
+
+chain_pop = 'E'
+chain_size = [0.05, 0.10, 0.15, 0.20]
+chain_strength = [0.3, 0.4, 0.5, 0.6]
+chain_epsilon = 0.6
+chain_length = 3
+connectors = [0, 30]
 
 ###### derived & fixed network parameters ######
 J_ex = mu
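The comment on `add_target_frac` above encodes a lower bound. The quick check below shows which of the listed values satisfy it, under the assumption that `p` refers to the connection probability `epsilon` (an assumption; the comment itself does not define `p`).

```python
# Check which add_target_frac values satisfy the commented constraint
#   add_target_frac > p/(1-p) * add_source_frac,
# assuming p is the connection probability epsilon (not stated explicitly).
epsilon = 0.1
add_source_frac = 0.2                     # fractional entry of add_source_frac above
add_target_frac = [0.02, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]

bound = epsilon / (1 - epsilon) * add_source_frac   # ~0.022
for frac in add_target_frac:
    print(f"{frac}: {'ok' if frac > bound else 'below bound'}")
```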

+ 2 - 6
paper_figures/figure_2/figure_2.ipynb

@@ -15,16 +15,12 @@
     "from mpl_toolkits.mplot3d import Axes3D\n",
     "from matplotlib.patches import ArrowStyle\n",
     "import seaborn as sns\n",
+    "import sys\n",
+    "from pathlib import Path\n",
     "sys.path.append(str(Path.cwd().parents[1] / 'scripts'))\n",
     "\n",
-    "\n",
     "sns.set(style='ticks', context='talk')\n",
     "sns.set_palette('rocket')\n",
-    "# sns.set_color_codes('colorblind')\n",
-    "# plt.rcParams.update({\n",
-    "#   \"text.usetex\": True,\n",
-    "#   \"font.family\": \"Helvetica\"\n",
-    "# })\n",
     "plt.rcParams.update({\n",
     "    \"text.usetex\": True,\n",
     "    \"font.family\": \"sans-serif\",\n",

+ 6 - 2
polychrony_network/README.md

@@ -1,4 +1,8 @@
 
-# Simulator Comparison
-This comparison evaluates the similarity of simulated spiking activity of the polychronization network model ([Izhikevich2006](https://doi.org/10.1162/089976606775093882)) generated with two different Simulators (C, SpiNNaker).
+# Comparing the correlation structure of a network model across simulators
+This folder contains the workflow to create correlation matrices from previously simulated spiking activity of the polychronization model ([Izhikevich 2006](https://doi.org/10.1162/089976606775093882)) generated with two different simulators (C, SpiNNaker).
 The comparison use-case and the corresponding data are taken from [Trensch et al. 2018](https://doi.org/10.3389/fninf.2018.00081) and [Gutzen et al. 2018](https://doi.org/10.3389/fninf.2018.00090), with the corresponding public data repository [https://gin.g-node.org/INM-6/network_validation](https://gin.g-node.org/INM-6/network_validation).
+
+The comparisons are performed with the eigenangle test and the Kolmogorov-Smirnov test. The main functionalities of the workflow are:
+* `snakemake scan` performs the 5 comparisons across simulators.
+* `snakemake results/simulator_comparison.csv` aggregates the results in a dataframe.
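The correlation matrices mentioned above are built from binned spiking activity. The following is a minimal numpy sketch of that step, using the binning parameters from `config.py` below; it is not the repository's actual implementation, and the spike data here are toy data standing in for the simulated activity.

```python
import numpy as np

# Minimal sketch: correlation matrix from spike times with 2 ms bins over 60 s
# (the values in config.py below). Not the repository's actual implementation.
t_start, t_stop, bin_size = 0, 60000, 2            # ms
bins = np.arange(t_start, t_stop + bin_size, bin_size)

rng = np.random.default_rng(0)
# toy data: one array of spike times (ms) per neuron
spiketrains = [np.sort(rng.uniform(t_start, t_stop, size=600)) for _ in range(50)]

# bin each spike train into spike counts and correlate the count vectors
binned = np.stack([np.histogram(st, bins=bins)[0] for st in spiketrains])
corr_matrix = np.corrcoef(binned)
print(corr_matrix.shape)                            # (50, 50)
```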

+ 3 - 3
polychrony_network/config.py

@@ -1,7 +1,7 @@
 N = 1000
-t_start = 0
-t_stop = 60000
-bin_size = 2
+t_start = 0 #ms
+t_stop = 60000 #ms
+bin_size = 2 #ms
 bin_num = int((t_stop - t_start) / bin_size)
 recording_window = [1,2,3,4,5]
 test = ['eigenangle_test', 'ks_test']

+ 5 - 0
stochastic_activity/README.md

@@ -0,0 +1,5 @@
+
+# Application of the eigenangle test to stochastic calibration scenarios
+This folder contains the workflow to create stochastic spiking activity using Poisson and compound Poisson processes, calculate their correlation matrices, and compare them with the eigenangle test and the Kolmogorov-Smirnov test. The parameters for the stochastic processes are specified via the `config.py` file. Wildcards in the file paths can update the parameters of the config file, following the pattern `N{N}_t{t_start}-{t_stop}ms_bin{binsize}ms_rate{rate}Hz_hub{assembly_sizes}_corr{correlations}_bkgr{bkgr_correlation}_{corr_method}` (referred to below as `{process_specs}`). The main functionality of the workflow is:
+
+* `snakemake results/stochastic_activity_comparison.csv` creates the correlation matrices for all processes with the parameter ranges specified in the config file, performs the comparisons between initializations (i.e. random seeds), and aggregates the results in a dataframe.
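The compound Poisson process (CPP) mentioned above injects correlations by letting several spike trains share the spikes of a common mother process. The following is a minimal, self-contained sketch of that idea with toy amplitude values; it is not the repository's generator, and the co-active trains are drawn at random here rather than from fixed hubs.

```python
import numpy as np

# Minimal sketch of a compound Poisson process (CPP): spikes of a common
# "mother" Poisson process are copied into randomly chosen subsets of child
# spike trains, which introduces positive pairwise correlations.
# Toy amplitude distribution; not the repository's generator.
rng = np.random.default_rng(0)

N = 100                      # number of spike trains (config.py: N=100)
t_stop = 30.0                # s (config.py: t_stop=30000 ms)
rate = 10.0                  # target rate per train in Hz (config.py: rate=10)
amplitudes = [1, 6]          # a mother spike hits either 1 train or 6 trains
probs = [0.95, 0.05]         # toy amplitude distribution

mean_amp = np.dot(amplitudes, probs)
mother_rate = N * rate / mean_amp      # keeps the per-train rate close to `rate`
n_events = rng.poisson(mother_rate * t_stop)
event_times = np.sort(rng.uniform(0, t_stop, n_events))

spiketrains = [[] for _ in range(N)]
for t, amp in zip(event_times, rng.choice(amplitudes, p=probs, size=n_events)):
    for idx in rng.choice(N, size=amp, replace=False):
        spiketrains[idx].append(t)

print(np.mean([len(st) / t_stop for st in spiketrains]))   # ~10 Hz per train
```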

+ 16 - 6
stochastic_activity/config.py

@@ -1,17 +1,27 @@
 import itertools
 
+# number of spiketrains
 N=100
+# starting time
 t_start=0 #ms
+# end time
 t_stop=30000 #ms
+# mean firing rate
 rate=10 #Hz
+# bin size for calculating correlations
 binsize=2 #ms
-assembly_sizes=[1,2,3,4,5,6]
-# assembly_sizes='6,8'
-correlations=[0.,0.05,0.1,0.15,0.2,0.3,0.4]
-# correlations='0.3,0.1'
+# number of neurons in a hub
+assembly_sizes=[1,2,3,4,5,6]  # alternatively a string, e.g. '6,8'
+# mean correlation within a hub
+correlations=[0.,0.05,0.1,0.15,0.2,0.3,0.4]  # alternatively a string, e.g. '0.3,0.1'
+# mean correlation between all spiketrain pairs
 bkgr_correlation=0.
+# method to insert correlations
 corr_method=['CPP', 'CPP'] #'pairwise_equivalent']
-test=['ks_test']
-seed=[0,1,2,3,4,5] #,6,7,8,9,10,11,12,13,14]
+# name of the statistical test for comparison
+test=['eigenangle_test', 'ks_test']
+# random seeds
+seed=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14]
 seed_pairs = [f'{i[0]}-{i[1]}' for i in itertools.combinations(seed,2)]
+# number of bins
 bin_num = int((t_stop-t_start) / binsize)
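For reference, the two derived quantities above evaluate as follows for the listed defaults (a quick standalone check, nothing repository-specific).

```python
import itertools

# Quick check of the derived quantities for the default parameters above.
seed = list(range(15))                               # seeds 0..14
seed_pairs = [f'{a}-{b}' for a, b in itertools.combinations(seed, 2)]
print(len(seed_pairs))                               # 105 seed pairs

t_start, t_stop, binsize = 0, 30000, 2               # ms
print(int((t_stop - t_start) / binsize))             # 15000 bins
```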