Julia Sprenger, 3 years ago
Parent
Commit
d9b1b9cd7f
100 changed files with 17,723 additions and 12,901 deletions
  1. + 4 - 33      code/data_overview_1.py
  2. + 16 - 2      code/elephant/.gitignore
  3. + 78 - 39     code/elephant/.travis.yml
  4. + 0 - 1       code/elephant/AUTHORS.txt
  5. + 1 - 1       code/elephant/LICENSE.txt
  6. + 16 - 6      code/elephant/MANIFEST.in
  7. + 0 - 23      code/elephant/README.rst
  8. + 0 - 135     code/elephant/continuous_integration/install.sh
  9. + 0 - 23      code/elephant/continuous_integration/test_script.sh
  10. + 22 - 1     code/elephant/doc/authors.rst
  11. + 58 - 14    code/elephant/doc/conf.py
  12. + 46 - 197   code/elephant/doc/developers_guide.rst
  13. + 0 - 34     code/elephant/doc/environment.yml
  14. BIN          code/elephant/doc/images/elephant_structure.png
  15. BIN          code/elephant/doc/images/tutorials/tutorial_1_figure_1.png
  16. BIN          code/elephant/doc/images/tutorials/tutorial_1_figure_2.png
  17. + 28 - 14    code/elephant/doc/index.rst
  18. + 96 - 67    code/elephant/doc/install.rst
  19. + 21 - 12    code/elephant/doc/modules.rst
  20. + 0 - 113    code/elephant/doc/overview.rst
  21. + 10 - 2     code/elephant/doc/reference/asset.rst
  22. + 3 - 3      code/elephant/doc/reference/conversion.rst
  23. + 1 - 1      code/elephant/doc/reference/cubic.rst
  24. + 0 - 1      code/elephant/doc/reference/kernels.rst
  25. + 3 - 3      code/elephant/doc/reference/neo_tools.rst
  26. + 3 - 3      code/elephant/doc/reference/spike_train_dissimilarity.rst
  27. + 0 - 12     code/elephant/doc/reference/sta.rst
  28. + 3 - 4      code/elephant/doc/reference/statistics.rst
  29. + 21 - 4     code/elephant/doc/reference/unitary_event_analysis.rst
  30. + 169 - 7    code/elephant/doc/release_notes.rst
  31. + 0 - 6      code/elephant/doc/requirements.txt
  32. + 0 - 85     code/elephant/doc/tutorial.rst
  33. + 20 - 4     code/elephant/elephant/__init__.py
  34. + 1563 - 1381  code/elephant/elephant/asset.py
  35. + 227 - 198  code/elephant/elephant/cell_assembly_detection.py
  36. + 163 - 154  code/elephant/elephant/change_point_detection.py
  37. + 615 - 356  code/elephant/elephant/conversion.py
  38. + 62 - 60    code/elephant/elephant/cubic.py
  39. + 111 - 91   code/elephant/elephant/current_source_density.py
  40. + 1 - 1      code/elephant/elephant/current_source_density_src/KCSD.py
  41. + 5 - 3      code/elephant/elephant/current_source_density_src/utility_functions.py
  42. + 671 - 283  code/elephant/elephant/kernels.py
  43. + 85 - 56    code/elephant/elephant/neo_tools.py
  44. + 13 - 7     code/elephant/elephant/pandas_bridge.py
  45. + 59 - 38    code/elephant/elephant/phase_analysis.py
  46. + 732 - 250  code/elephant/elephant/signal_processing.py
  47. + 1881 - 1165  code/elephant/elephant/spade.py
  48. + 0 - 2      code/elephant/elephant/spade_src/__init__.py
  49. + 246 - 1097  code/elephant/elephant/spade_src/fast_fca.py
  50. + 319 - 326  code/elephant/elephant/spectral.py
  51. + 810 - 488  code/elephant/elephant/spike_train_correlation.py
  52. + 128 - 98   code/elephant/elephant/spike_train_dissimilarity.py
  53. + 814 - 502  code/elephant/elephant/spike_train_generation.py
  54. + 1129 - 259  code/elephant/elephant/spike_train_surrogates.py
  55. + 22 - 18    code/elephant/elephant/sta.py
  56. + 771 - 771  code/elephant/elephant/statistics.py
  57. + 57 - 56    code/elephant/elephant/test/make_spike_extraction_test_data.py
  58. + 408 - 110  code/elephant/elephant/test/test_asset.py
  59. + 53 - 42    code/elephant/elephant/test/test_cell_assembly_detection.py
  60. + 59 - 33    code/elephant/elephant/test/test_change_point_detection.py
  61. + 211 - 85   code/elephant/elephant/test/test_conversion.py
  62. + 21 - 13    code/elephant/elephant/test/test_cubic.py
  63. + 514 - 527  code/elephant/elephant/test/test_icsd.py
  64. + 9 - 5      code/elephant/elephant/test/test_kcsd.py
  65. + 291 - 39   code/elephant/elephant/test/test_kernels.py
  66. + 216 - 176  code/elephant/elephant/test/test_neo_tools.py
  67. + 23 - 11    code/elephant/elephant/test/test_pandas_bridge.py
  68. + 21 - 5     code/elephant/elephant/test/test_phase_analysis.py
  69. + 546 - 53   code/elephant/elephant/test/test_signal_processing.py
  70. + 576 - 89   code/elephant/elephant/test/test_spade.py
  71. + 143 - 120  code/elephant/elephant/test/test_spectral.py
  72. + 317 - 87   code/elephant/elephant/test/test_spike_train_correlation.py
  73. + 331 - 321  code/elephant/elephant/test/test_spike_train_dissimilarity.py
  74. + 693 - 300  code/elephant/elephant/test/test_spike_train_generation.py
  75. + 552 - 164  code/elephant/elephant/test/test_spike_train_surrogates.py
  76. + 130 - 92   code/elephant/elephant/test/test_sta.py
  77. + 431 - 213  code/elephant/elephant/test/test_statistics.py
  78. + 279 - 278  code/elephant/elephant/test/test_unitary_event_analysis.py
  79. + 550 - 509  code/elephant/elephant/unitary_event_analysis.py
  80. + 21 - 1     code/elephant/readthedocs.yml
  81. + 0 - 2      code/elephant/requirements-docs.txt
  82. + 0 - 2      code/elephant/requirements-extras.txt
  83. + 0 - 1      code/elephant/requirements-tests.txt
  84. + 0 - 5      code/elephant/requirements.txt
  85. + 49 - 35    code/elephant/setup.py
  86. + 23 - 19    code/example.py
  87. + 0 - 877    code/neo_utils.py
  88. + 10 - 65    code/python-neo/.circleci/config.yml
  89. + 1 - 1      code/python-neo/.circleci/requirements_testing.txt
  90. + 28 - 5     code/python-neo/.travis.yml
  91. + 1 - 1      code/python-neo/CITATION.txt
  92. + 1 - 1      code/python-neo/LICENSE.txt
  93. + 1 - 1      code/python-neo/README.rst
  94. + 1 - 1      code/python-neo/doc/source/api_reference.rst
  95. + 28 - 5     code/python-neo/doc/source/authors.rst
  96. + 6 - 7      code/python-neo/doc/source/conf.py
  97. + 42 - 36    code/python-neo/doc/source/core.rst
  98. + 31 - 53    code/python-neo/doc/source/developers_guide.rst
  99. + 3 - 1      code/python-neo/doc/source/examples.rst
  100. + 0 - 0     code/python-neo/doc/source/images/base_schematic.png

+ 4 - 33
code/data_overview_1.py

@@ -141,42 +141,13 @@ session = reachgraspio.ReachGraspIO(
     odml_directory=datasetdir,
     verbose=False)
 
-bl_lfp = session.read_block(
-    index=None,
-    name=None,
-    description=None,
-    nsx_to_load=nsx_lfp[monkey],
-    n_starts=None,
-    n_stops=None,
-    channels='all',
-    units=chosen_units[monkey],
-    load_waveforms=False,
-    load_events=True,
-    scaling='voltage',
-    lazy=False,
-    cascade=True)
-
-bl_raw = session.read_block(
-    index=None,
-    name=None,
-    description=None,
-    nsx_to_load=nsx_raw[monkey],
-    n_starts=None,
-    n_stops=None,
-    channels=chosen_el[monkey],
-    units=chosen_units[monkey],
-    load_waveforms=True,
-    load_events=True,
-    scaling='voltage',
-    lazy=False,
-    cascade=True)
-
-seg_raw = bl_raw.segments[0]
-seg_lfp = bl_lfp.segments[0]
+bl = session.read_block(lazy=False, load_waveforms=False, scaling='voltage')
+
+seg = bl.segments[0]
 
 # Displaying loaded data structure as string output
 print("\nBlock")
-print('Attributes ', bl_raw.__dict__.keys())
+print('Attributes ', bl.__dict__.keys())
 print('Annotations', bl_raw.annotations)
 print("\nSegment")
 print('Attributes ', seg_raw.__dict__.keys())

+ 16 - 2
code/elephant/.gitignore

@@ -15,7 +15,10 @@ nosetests.xml
 .pydevproject
 .settings
 *.tmp*
-.idea
+.idea/
+venv/
+env/
+.pytest_cache/
 
 # Compiled source #
 ###################
@@ -41,6 +44,10 @@ lib
 lib64
 # sphinx build directory
 doc/_build
+doc/reference/toctree/*
+!doc/reference/toctree/asset/elephant.asset.ASSET.rst
+!doc/reference/toctree/kernels
+*.h5
 # setup.py dist directory
 dist
 sdist
@@ -63,10 +70,17 @@ cover
 ######################
 .directory
 .gdb_history
-.DS_Store?
+.DS_Store
 ehthumbs.db
 Icon?
 Thumbs.db
 
 # Things specific to this project #
 ###################################
+# ignored folder for fast prototyping
+ignored/
+
+.ipynb_checkpoints/
+
+# data
+*.nix

+ 78 - 39
code/elephant/.travis.yml

@@ -1,44 +1,83 @@
-dist: precise
+dist: xenial
 language: python
 sudo: false
 
 addons:
    apt:
-      packages:
-      - libatlas3gf-base
-      - libatlas-dev
-      - libatlas-base-dev
-      - liblapack-dev
-      - gfortran
-      - python-scipy
-
-python:
-  - 2.7.13     
-      
-env:
-  matrix:
-    # This environment tests the newest supported anaconda env
-    - DISTRIB="conda" PYTHON_VERSION="2.7" INSTALL_MKL="true"
-      NUMPY_VERSION="1.15.1" SCIPY_VERSION="1.1.0" PANDAS_VERSION="0.23.4"
-      SIX_VERSION="1.10.0" COVERAGE="true"
-    - DISTRIB="conda" PYTHON_VERSION="3.5" INSTALL_MKL="true"
-      NUMPY_VERSION="1.15.1" SCIPY_VERSION="1.1.0" PANDAS_VERSION="0.23.4"
-      SIX_VERSION="1.10.0" COVERAGE="true"
-    # This environment tests minimal dependency versions
-    - DISTRIB="conda_min" PYTHON_VERSION="2.7" INSTALL_MKL="false"
-      SIX_VERSION="1.10.0" NUMPY_VERSION="1.8.2" SCIPY_VERSION="0.14.0" COVERAGE="true"
-    - DISTRIB="conda_min" PYTHON_VERSION="3.4" INSTALL_MKL="false"
-      SIX_VERSION="1.10.0" NUMPY_VERSION="1.8.2" SCIPY_VERSION="0.14.0" COVERAGE="true"
-    # basic Ubuntu build environment
-    - DISTRIB="ubuntu" PYTHON_VERSION="2.7" INSTALL_ATLAS="true"
-      COVERAGE="true"
-    # This environment tests for mpi
-    - DISTRIB="mpi" PYTHON_VERSION="3.5" INSTALL_MKL="false"
-      NUMPY_VERSION="1.15.1" SCIPY_VERSION="1.1.0" SIX_VERSION="1.10.0"
-      MPI_VERSION="2.0.0" COVERAGE="true" MPI="true"
-
-install: source continuous_integration/install.sh
-script: bash continuous_integration/test_script.sh
-after_success:
-    - if [[ "$COVERAGE" == "true" ]]; then coveralls || echo "failed"; fi
-cache: apt
+     update: true
+
+
+matrix:
+  include:
+    - name: "conda 2.7"
+      python: 2.7
+      env: DISTRIB="conda"
+      before_install: sed -i 's/conda-forge/conda/g' requirements/environment.yml
+
+    - name: "pip 2.7"
+      python: 2.7
+      env: DISTRIB="pip"
+
+    - name: "pip 3.5"
+      python: 3.5
+      env: DISTRIB="pip"
+
+    - name: "pip 3.6 requirements-extras"
+      python: 3.6
+      env: DISTRIB="pip"
+      before_install: sudo apt install -y libopenmpi-dev openmpi-bin
+      before_script: pip install -r requirements/requirements-extras.txt
+      script: mpiexec -n 1 python -m mpi4py.futures -m nose --with-coverage --cover-package=elephant
+      after_success: coveralls || echo "coveralls failed"
+
+    - name: "conda 3.7"
+      python: 3.7
+      env: DISTRIB="conda"
+
+    - name: "conda 3.8"
+      python: 3.8
+      env: DISTRIB="conda"
+
+    - name: "pip 3.8"
+      python: 3.8
+      env: DISTRIB="pip"
+
+    - name: "docs"
+      python: 3.6
+      env: DISTRIB="conda"
+      before_install: sudo apt install -y libopenmpi-dev openmpi-bin
+      before_script:
+        - conda install -c conda-forge pandoc
+        - pip install -r requirements/requirements-docs.txt
+        - pip install -r requirements/requirements-tutorials.txt
+        - pip install -r requirements/requirements-extras.txt
+        - sed -i -E "s/nbsphinx_execute *=.*/nbsphinx_execute = 'always'/g" doc/conf.py
+      script: cd doc && make html
+
+install:
+  - if [[ "${DISTRIB}" == "conda" ]];
+    then
+      py_major=${TRAVIS_PYTHON_VERSION:0:1};
+      wget https://repo.continuum.io/miniconda/Miniconda${py_major}-latest-Linux-x86_64.sh -O miniconda.sh;
+      bash miniconda.sh -b -p $HOME/miniconda;
+      source "$HOME/miniconda/etc/profile.d/conda.sh";
+      conda config --set always_yes yes;
+      conda update conda;
+      sed -i "s/python>=[0-9]\.[0-9]/python=${TRAVIS_PYTHON_VERSION}/g" requirements/environment.yml;
+      conda env create -f requirements/environment.yml;
+      conda activate elephant;
+      conda uninstall -y mpi4py;
+      pip list;
+    else
+      pip install -r requirements/requirements.txt;
+    fi
+
+  - pip -V
+  - pip install coverage coveralls nose
+  - python setup.py install
+  - python -c "from elephant.spade import HAVE_FIM; assert HAVE_FIM"
+  - pip list
+  - python --version
+
+script:
+  nosetests --with-coverage --cover-package=elephant

+ 0 - 1
code/elephant/AUTHORS.txt

@@ -1 +0,0 @@
-See doc/authors.rst

+ 1 - 1
code/elephant/LICENSE.txt

@@ -1,4 +1,4 @@
-Copyright (c) 2014-2018, Elephant authors and contributors
+Copyright (c) 2014-2019, Elephant authors and contributors
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

+ 16 - 6
code/elephant/MANIFEST.in

@@ -1,8 +1,18 @@
-# Include requirements
-include requirement*.txt
-include README.rst
+recursive-include elephant *.py
+include requirements/*
+include README.md
 include LICENSE.txt
-include AUTHORS.txt
-include elephant/test/spike_extraction_test_data.npz
+include CITATION.txt
+include elephant/VERSION
+include elephant/current_source_density_src/README.md
+include elephant/current_source_density_src/test_data.mat
+include elephant/spade_src/LICENSE
+recursive-include elephant/spade_src *.so *.pyd
+include elephant/test/spike_extraction_test_data.txt
 recursive-include doc *
-prune doc/build
+prune doc/_build
+prune doc/tutorials/.ipynb_checkpoints
+prune doc/reference/toctree
+include doc/reference/toctree/kernels/*
+recursive-exclude * *.h5
+recursive-exclude * *~

+ 0 - 23
code/elephant/README.rst

@@ -1,23 +0,0 @@
-Elephant - Electrophysiology Analysis Toolkit
-=============================================
-
-Elephant is a package for the analysis of neurophysiology data, based on Neo.
-
-Code status
------------
-
-.. image:: https://travis-ci.org/NeuralEnsemble/elephant.png?branch=master
-   :target: https://travis-ci.org/NeuralEnsemble/elephant
-   :alt: Unit Test Status
-.. image:: https://coveralls.io/repos/NeuralEnsemble/elephant/badge.png
-   :target: https://coveralls.io/r/NeuralEnsemble/elephant
-   :alt: Unit Test Coverage
-.. image:: https://requires.io/github/NeuralEnsemble/elephant/requirements.png?branch=master
-   :target: https://requires.io/github/NeuralEnsemble/elephant/requirements/?branch=master
-   :alt: Requirements Status
-.. image:: https://readthedocs.org/projects/elephant/badge/?version=latest
-   :target: https://readthedocs.org/projects/elephant/?badge=latest
-   :alt: Documentation Status
-
-:copyright: Copyright 2014-2018 by the Elephant team, see AUTHORS.txt.
-:license: Modified BSD License, see LICENSE.txt for details.

+ 0 - 135
code/elephant/continuous_integration/install.sh

@@ -1,135 +0,0 @@
-#!/bin/bash
-# Based on a script from scikit-learn
-
-# This script is meant to be called by the "install" step defined in
-# .travis.yml. See http://docs.travis-ci.com/ for more details.
-# The behavior of the script is controlled by environment variabled defined
-# in the .travis.yml in the top level folder of the project.
-
-set -e
-
-# Fix the compilers to workaround avoid having the Python 3.4 build
-# lookup for g++44 unexpectedly.
-export CC=gcc
-export CXX=g++
-
-if [[ "$DISTRIB" == "conda_min" ]]; then
-    # Deactivate the travis-provided virtual environment and setup a
-    # conda-based environment instead
-    deactivate
-
-    # Use the miniconda installer for faster download / install of conda
-    # itself
-    wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh \
-        -O miniconda.sh
-    chmod +x miniconda.sh && ./miniconda.sh -b -p $HOME/miniconda
-    export PATH=/home/travis/miniconda/bin:$PATH
-    conda config --set always_yes yes
-    conda update --yes conda
-
-    # Configure the conda environment and put it in the path using the
-    # provided versions
-    conda create -n testenv --yes python=$PYTHON_VERSION pip nose coverage \
-        six=$SIX_VERSION numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION
-    source activate testenv
-    conda install libgfortran=1
-
-    if [[ "$INSTALL_MKL" == "true" ]]; then
-        # Make sure that MKL is used
-        conda install --yes --no-update-dependencies mkl
-    else
-        # Make sure that MKL is not used
-        conda remove --yes --features mkl || echo "MKL not installed"
-    fi
-
-elif [[ "$DISTRIB" == "conda" ]]; then
-    # Deactivate the travis-provided virtual environment and setup a
-    # conda-based environment instead
-    deactivate
-
-    # Use the miniconda installer for faster download / install of conda
-    # itself
-    wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh \
-        -O miniconda.sh
-    chmod +x miniconda.sh && ./miniconda.sh -b -p $HOME/miniconda
-    export PATH=/home/travis/miniconda/bin:$PATH
-    conda config --set always_yes yes
-    conda update --yes conda
-
-    # Configure the conda environment and put it in the path using the
-    # provided versions
-    conda create -n testenv --yes python=$PYTHON_VERSION pip nose coverage six=$SIX_VERSION \
-        numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION pandas=$PANDAS_VERSION scikit-learn
-    source activate testenv
-
-    if [[ "$INSTALL_MKL" == "true" ]]; then
-        # Make sure that MKL is used
-        conda install --yes --no-update-dependencies mkl
-    else
-        # Make sure that MKL is not used
-        conda remove --yes --features mkl || echo "MKL not installed"
-    fi
-
-    if [[ "$COVERAGE" == "true" ]]; then
-        pip install coveralls
-    fi
-
-    python -c "import pandas; import os; assert os.getenv('PANDAS_VERSION') == pandas.__version__"
-
-elif [[ "$DISTRIB" == "mpi" ]]; then
-    # Deactivate the travis-provided virtual environment and setup a
-    # conda-based environment instead
-    deactivate
-
-    # Use the miniconda installer for faster download / install of conda
-    # itself
-    wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh \
-        -O miniconda.sh
-    chmod +x miniconda.sh && ./miniconda.sh -b -p $HOME/miniconda
-    export PATH=/home/travis/miniconda/bin:$PATH
-    conda config --set always_yes yes
-    conda update --yes conda
-
-    # Configure the conda environment and put it in the path using the
-    # provided versions
-    conda create -n testenv --yes python=$PYTHON_VERSION pip nose coverage six=$SIX_VERSION \
-        numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION scikit-learn mpi4py=$MPI_VERSION
-    source activate testenv
-
-    if [[ "$INSTALL_MKL" == "true" ]]; then
-        # Make sure that MKL is used
-        conda install --yes --no-update-dependencies mkl
-    else
-        # Make sure that MKL is not used
-        conda remove --yes --features mkl || echo "MKL not installed"
-    fi
-
-    if [[ "$COVERAGE" == "true" ]]; then
-        pip install coveralls
-    fi
-
-elif [[ "$DISTRIB" == "ubuntu" ]]; then
-    # deactivate
-    # Create a new virtualenv using system site packages for numpy and scipy
-    # virtualenv --system-site-packages testenv
-    # source testenv/bin/activate
-    pip install -r requirements.txt    
-fi
-
-if [[ "$COVERAGE" == "true" ]]; then
-    pip install coveralls
-fi
-
-# pip install neo==0.3.3
-wget https://github.com/NeuralEnsemble/python-neo/archive/master.tar.gz
-tar -xzvf master.tar.gz
-pushd python-neo-master
-python setup.py install
-popd
-
-pip install .
-
-if ! [[ "$DISTRIB" == "ubuntu" ]]; then
-    python -c "import numpy; import os; assert os.getenv('NUMPY_VERSION') == numpy.__version__, 'Numpy versions do not match: {0} - {1}'.format(os.getenv('NUMPY_VERSION'), numpy.__version__)"
-    python -c "import scipy; import os; assert os.getenv('SCIPY_VERSION') == scipy.__version__, 'Scipy versions do not match: {0} - {1}'.format(os.getenv('SCIPY_VERSION'), scipy.__version__)"
-fi

+ 0 - 23
code/elephant/continuous_integration/test_script.sh

@@ -1,23 +0,0 @@
-#!/bin/bash
-# Based on a script from scikit-learn
-
-# This script is meant to be called by the "script" step defined in
-# .travis.yml. See http://docs.travis-ci.com/ for more details.
-# The behavior of the script is controlled by environment variables defined
-# in the .travis.yml in the top level folder of the project.
-
-set -e
-
-python --version
-python -c "import numpy; print('numpy %s' % numpy.__version__)"
-python -c "import scipy; print('scipy %s' % scipy.__version__)"
-
-if [[ "$COVERAGE" == "true" ]]; then
-    if [[ "$MPI" == "true" ]]; then
-	mpiexec -n 1 nosetests --with-coverage --cover-package=elephant
-    else
-	nosetests --with-coverage --cover-package=elephant
-    fi
-else
-    nosetests
-fi

+ 22 - 1
code/elephant/doc/authors.rst

@@ -8,12 +8,17 @@ The following people have contributed code and/or ideas to the current version
 of Elephant. The institutional affiliations are those at the time of the
 contribution, and may not be the current affiliation of a contributor.
 
+Do you want to contribute to Elephant? Please refer to the
+:ref:`developers_guide`.
+
 * Alper Yegenoglu [1]
 * Andrew Davison [2]
+* Björn Müller [1]
 * Detlef Holstein [2]
 * Eilif Muller [3, 4]
 * Emiliano Torre [1]
 * Espen Hagen [1]
+* Jeffrey Gill [11]
 * Jan Gosmann [6, 8]
 * Julia Sprenger [1]
 * Junji Ito [1]
@@ -32,6 +37,20 @@ contribution, and may not be the current affiliation of a contributor.
 * Michael von Papen [1]
 * Robin Gutzen [1]
 * Felipe Méndez [10]
+* Simon Essink [1]
+* Alessandra Stella [1]
+* Peter Bouss [1]
+* Alexander van Meegen [1]
+* Aitor Morales-Gregorio [1]
+* Cristiano Köhler [1]
+* Paulina Dąbrowska [1]
+* Jan Lewen [1]
+* Alexander Kleinjohann [1]
+* Danylo Ulianych [1]
+* Anno Kurth [1]
+* Regimantas Jurkus [1]
+* Philipp Steigerwald [12]
+* Manuel Ciba [12]
 
 1. Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience & Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience, Jülich Research Centre and JARA, Jülich, Germany
 2. Unité de Neurosciences, Information et Complexité, CNRS UPR 3293, Gif-sur-Yvette, France
@@ -42,6 +61,8 @@ contribution, and may not be the current affiliation of a contributor.
 7. Arizona State University School of Life Sciences, USA
 8. Computational Neuroscience Research Group (CNRG), Waterloo Centre for Theoretical Neuroscience, Waterloo, Canada
 9. Nencki Institute of Experimental Biology, Warsaw, Poland
-10.  Instituto de Neurobiología, Universidad Nacional Autónoma de México, Mexico City, Mexico
+10. Instituto de Neurobiología, Universidad Nacional Autónoma de México, Mexico City, Mexico
+11. Case Western Reserve University (CWRU), Cleveland, OH, USA
+12. BioMEMS Lab, TH Aschaffenburg University of applied sciences, Germany
 
 If we've somehow missed you off the list we're very sorry - please let us know.

+ 58 - 14
code/elephant/doc/conf.py

@@ -11,8 +11,9 @@
 # All configuration values have a default; values that are commented out
 # serve to show the default.
 
-import sys
 import os
+import sys
+from datetime import date
 
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
@@ -28,12 +29,19 @@ sys.path.insert(0, '..')
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
     'sphinx.ext.autodoc',
+    'sphinx.ext.autosummary',
     'sphinx.ext.doctest',
     'sphinx.ext.intersphinx',
     'sphinx.ext.todo',
     'sphinx.ext.imgmath',
     'sphinx.ext.viewcode',
-    'numpydoc']
+    'sphinx.ext.mathjax',
+    'sphinxcontrib.bibtex',
+    'matplotlib.sphinxext.plot_directive',
+    'numpydoc',
+    'nbsphinx',
+    'sphinx_tabs.tabs',
+]
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -50,16 +58,21 @@ master_doc = 'index'
 # General information about the project.
 project = u'Elephant'
 authors = u'Elephant authors and contributors'
-copyright = u'2014-2018, ' + authors
+copyright = u"2014-{this_year}, {authors}".format(this_year=date.today().year,
+                                                  authors=authors)
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
 # built documents.
 #
+
+root_dir = os.path.dirname(os.path.dirname(__file__))
+with open(os.path.join(root_dir, 'elephant', 'VERSION')) as version_file:
+    # The full version, including alpha/beta/rc tags.
+    release = version_file.read().strip()
+
 # The short X.Y version.
-version = '0.6'
-# The full version, including alpha/beta/rc tags.
-release = '0.6.0'
+version = '.'.join(release.split('.')[:-1])
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
@@ -73,7 +86,11 @@ release = '0.6.0'
 
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
-exclude_patterns = ['_build']
+exclude_patterns = [
+    '_build',
+    '**.ipynb_checkpoints',
+    'maintainers_guide.rst',  # should not be visible for users
+]
 
 # The reST default role (used for this markup: `text`) to use for all documents.
 #default_role = None
@@ -95,12 +112,30 @@ pygments_style = 'sphinx'
 # A list of ignored prefixes for module index sorting.
 #modindex_common_prefix = []
 
+# Only execute Jupyter notebooks that have no evaluated cells
+nbsphinx_execute = 'auto'
+# Kernel to use for execution
+nbsphinx_kernel_name = 'python3'
+# Cancel compile on errors in notebooks
+nbsphinx_allow_errors = False
+
+# Required to automatically create a summary page for each function listed in
+# the autosummary fields of each module.
+autosummary_generate = True
+
+# don't overwrite our custom toctree/*.rst
+autosummary_generate_overwrite = False
 
 # -- Options for HTML output ---------------------------------------------
 
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
-html_theme = 'sphinxdoc'
+html_theme = 'alabaster'
+html_theme_options = {
+    'font_family': 'Arial',
+    'page_width': '1200px',  # default is 940
+    'sidebar_width': '280px',  # default is 220
+}
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further.  For a list of options available for each theme, see the
@@ -129,7 +164,7 @@ html_favicon = 'images/elephant_favicon.ico'
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+# html_static_path = ['_static']
 
 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
 # using the given strftime format.
@@ -159,10 +194,10 @@ html_use_index = True
 #html_show_sourcelink = True
 
 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
-#html_show_sphinx = True
+html_show_sphinx = False
 
 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
-#html_show_copyright = True
+html_show_copyright = True
 
 # If true, an OpenSearch description file will be output, and all pages will
 # contain a <link> tag referring to it.  The value of this option must be the
@@ -175,18 +210,27 @@ html_use_index = True
 # Output file base name for HTML help builder.
 htmlhelp_basename = 'elephantdoc'
 
+# Suppresses  wrong numpy doc warnings
+# see here https://github.com/phn/pytpm/issues/3#issuecomment-12133978
+numpydoc_show_class_members = False
+
+# A fix for Alabaster theme for no space between a citation reference
+# and citation text
+# https://github.com/sphinx-doc/sphinx/issues/6705#issuecomment-536197438
+html4_writer = True
+
 
 # -- Options for LaTeX output --------------------------------------------
 
 latex_elements = {
     # The paper size ('letterpaper' or 'a4paper').
-    #'papersize': 'letterpaper',
+    # 'papersize': 'letterpaper',
 
     # The font size ('10pt', '11pt' or '12pt').
-    #'pointsize': '10pt',
+    # 'pointsize': '10pt',
 
     # Additional stuff for the LaTeX preamble.
-    #'preamble': '',
+    # 'preamble': '',
 }
 
 # Grouping the document tree into LaTeX files. List of tuples

+ 46 - 197
code/elephant/doc/developers_guide.rst

@@ -1,226 +1,75 @@
+.. _developers_guide:
+
 =================
-Developers' guide
+Developers' Guide
 =================
 
-These instructions are for developing on a Unix-like platform, e.g. Linux or
-Mac OS X, with the bash shell. If you develop on Windows, please get in touch.
-
-
-Mailing lists
--------------
-
-General discussion of Elephant development takes place in the `NeuralEnsemble Google
-group`_.
-
-Discussion of issues specific to a particular ticket in the issue tracker should
-take place on the tracker.
-
-
-Using the issue tracker
------------------------
+.. note:: The documentation guide (how to write good documentation, naming
+          conventions, docstring examples) is in :ref:`documentation_guide`.
 
-If you find a bug in Elephant, please create a new ticket on the `issue tracker`_,
-setting the type to "defect".
-Choose a name that is as specific as possible to the problem you've found, and
-in the description give as much information as you think is necessary to
-recreate the problem. The best way to do this is to create the shortest possible
-Python script that demonstrates the problem, and attach the file to the ticket.
 
-If you have an idea for an improvement to Elephant, create a ticket with type
-"enhancement". If you already have an implementation of the idea, open a pull request.
+1. Follow the instructions in :ref:`prerequisites` to set up a clean conda
+   environment. To be safe, run::
 
+    $ pip uninstall elephant
 
-Requirements
-------------
+   to uninstall ``elephant`` in case you've installed it previously as a pip
+   package.
 
-See :doc:`install`. We strongly recommend using virtualenv_ or similar.
-
-
-Getting the source code
------------------------
-
-We use the Git version control system. The best way to contribute is through
-GitHub_. You will first need a GitHub account, and you should then fork the
-repository at https://github.com/NeuralEnsemble/elephant
-(see http://help.github.com/fork-a-repo/).
-
-To get a local copy of the repository::
-
-    $ cd /some/directory
-    $ git clone git@github.com:<username>/elephant.git
-    
-Now you need to make sure that the ``elephant`` package is on your PYTHONPATH.
-You can do this by installing Elephant::
+2. Fork `Elephant <https://github.com/NeuralEnsemble/elephant>`_ as described
+   in `Fork a repo <https://help.github.com/en/github/getting-started-with-github/fork-a-repo>`_.
+   Download Elephant source code from your forked repo::
 
+    $ git clone git://github.com/<your-github-profile>/elephant.git
     $ cd elephant
-    $ python setup.py install
-    $ python3 setup.py install
-
-but if you do this, you will have to re-run ``setup.py install`` any time you make
-changes to the code. A better solution is to install Elephant with the *develop* option,
-this avoids reinstalling when there are changes in the code::
-
-    $ python setup.py develop
-
-or::
-
-    $ pip install -e .
-
-To update to the latest version from the repository::
-
-    $ git pull
-
-
-Running the test suite
-----------------------
-
-Before you make any changes, run the test suite to make sure all the tests pass
-on your system::
-
-    $ cd elephant/test
-
-With Python 2.7 or 3.x::
-
-    $ python -m unittest discover
-    $ python3 -m unittest discover
-
-If you have nose installed::
-
-    $ nosetests
-
-At the end, if you see "OK", then all the tests
-passed (or were skipped because certain dependencies are not installed),
-otherwise it will report on tests that failed or produced errors.
-
 
-Writing tests
--------------
+3. Install the dependencies listed in requirements.txt, (optionally)
+   requirements-extras.txt, and requirements-tests.txt::
 
-You should try to write automated tests for any new code that you add. If you
-have found a bug and want to fix it, first write a test that isolates the bug
-(and that therefore fails with the existing codebase). Then apply your fix and
-check that the test now passes.
+    $ pip install -r requirements/requirements.txt
+    $ pip install -r requirements/requirements-extras.txt  # optional
+    $ pip install -r requirements/requirements-tests.txt
 
-To see how well the tests cover the code base, run::
+4. Before you make any changes, run the test suite to make sure all the tests
+   pass on your system::
 
-    $ nosetests --with-coverage --cover-package=elephant --cover-erase
+    $ nosetests .
 
+   You can specify a particular module to test, for example
+   ``test_statistics.py``::
 
-Working on the documentation
-----------------------------
+    $ nosetests elephant/test/test_statistics.py
 
-The documentation is written in `reStructuredText`_, using the `Sphinx`_
-documentation system. To build the documentation::
+   At the end, if you see "OK", then all the tests passed (or were skipped
+   because certain dependencies are not installed), otherwise it will report
+   on tests that failed or produced errors.
 
-    $ cd elephant/doc
-    $ make html
-    
-Then open `some/directory/elephant/doc/_build/html/index.html` in your browser.
-Docstrings should conform to the `NumPy docstring standard`_.
+5. **Implement the functionality you want to add to Elephant**. This includes
+   (any of the following):
 
-To check that all example code in the documentation is correct, run::
+   * fixing a bug;
+   * improving the documentation;
+   * adding new functionality.
 
-    $ make doctest
+6. If you added new functionality, please write:
 
-To check that all URLs in the documentation are correct, run::
+   - documentation (refer to :ref:`documentation_guide`);
+   - tests to cover your new functions as much as possible.
 
-    $ make linkcheck
+7. Run the tests again as described in step 4.
 
+8. Commit your changes::
 
-Committing your changes
------------------------
-
-Once you are happy with your changes, **run the test suite again to check
-that you have not introduced any new bugs**. Then you can commit them to your
-local repository::
-
-    $ git commit -m 'informative commit message'
-    
-If this is your first commit to the project, please add your name and
-affiliation/employer to :file:`doc/source/authors.rst`
-
-You can then push your changes to your online repository on GitHub::
-
+    $ git add .
+    $ git commit -m "informative commit message"
     $ git push
-    
-Once you think your changes are ready to be included in the main Elephant repository,
-open a pull request on GitHub (see https://help.github.com/articles/using-pull-requests).
-
-
-Python 3
---------
-
-Elephant should work with Python 2.7 and Python 3.
-
-So far, we have managed to write code that works with both Python 2 and 3.
-Mainly this involves avoiding the ``print`` statement (use ``logging.info``
-instead), and putting ``from __future__ import division`` at the beginning of
-any file that uses division.
-
-If in doubt, `Porting to Python 3`_ by Lennart Regebro is an excellent resource.
-
-The most important thing to remember is to run tests with at least one version
-of Python 2 and at least one version of Python 3. There is generally no problem
-in having multiple versions of Python installed on your computer at once: e.g.,
-on Ubuntu Python 2 is available as `python` and Python 3 as `python3`, while
-on Arch Linux Python 2 is `python2` and Python 3 `python`. See `PEP394`_ for
-more on this.
-
-
-Coding standards and style
---------------------------
-
-All code should conform as much as possible to `PEP 8`_, and should run with
-Python 2.7 and 3.2-3.5.
-
-
-Making a release
-----------------
-
-.. TODO: discuss branching/tagging policy.
-
-.. Add a section in /doc/releases/<version>.rst for the release.
-
-First, check that the version string (in :file:`elephant/__init__.py`, :file:`setup.py`,
-:file:`doc/conf.py`, and :file:`doc/install.rst`) is correct.
-
-Second, check that the copyright statement (in :file:`LICENCE.txt`, :file:`README.md`, and :file:`doc/conf.py`) is correct.
-
-To build a source package::
-
-    $ python setup.py sdist
-
-To upload the package to `PyPI`_ (if you have the necessary permissions)::
-
-    $ python setup.py sdist upload
-
-.. should we also distribute via software.incf.org
-
-Finally, tag the release in the Git repository and push it::
-
-    $ git tag <version>
-    $ git push --tags upstream
-
-Here, version should be of the form `vX.Y.Z`.
 
-.. make a release branch
+   If this is your first commit to the project, please add your name and
+   affiliation/employer to :file:`doc/authors.rst`.
 
+9. Open a `pull request <https://github.com/NeuralEnsemble/elephant/pulls>`_.
+   Then we'll merge your code in Elephant.
 
 
-.. _Python: http://www.python.org
-.. _nose: http://somethingaboutorange.com/mrl/projects/nose/
-.. _neo: http://neuralensemble.org/neo
-.. _coverage: http://nedbatchelder.com/code/coverage/
-.. _`PEP 8`: http://www.python.org/dev/peps/pep-0008/
-.. _`issue tracker`: https://github.com/NeuralEnsemble/elephant/issues
-.. _`Porting to Python 3`: http://python3porting.com/
-.. _`NeuralEnsemble Google group`: http://groups.google.com/group/neuralensemble
-.. _reStructuredText: http://docutils.sourceforge.net/rst.html
-.. _Sphinx: http://sphinx.pocoo.org/
-.. _numpy: http://www.numpy.org/
-.. _quantities: http://pypi.python.org/pypi/quantities
-.. _PEP394: http://www.python.org/dev/peps/pep-0394/
-.. _PyPI: http://pypi.python.org
-.. _GitHub: http://github.com
-.. _`NumPy docstring standard`: https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
-.. _`virtualenv`: https://virtualenv.pypa.io/en/latest/
+.. note:: If you experience a problem during one of the steps above, please
+          contact us via :ref:`get_in_touch`.
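
For illustration of step 6's request for tests, here is a minimal unittest-style sketch in the spirit of the suite under elephant/test — the function ``mean_firing_rate`` exists in ``elephant.statistics``, while the test class name and the example numbers are invented::

    import unittest

    import neo
    import quantities as pq

    from elephant.statistics import mean_firing_rate


    class MeanFiringRateTestCase(unittest.TestCase):
        def test_three_spikes_in_ten_seconds(self):
            # Three spikes in a 10 s observation window -> 0.3 Hz.
            spiketrain = neo.SpikeTrain([1., 2., 3.] * pq.s,
                                        t_start=0. * pq.s, t_stop=10. * pq.s)
            rate = mean_firing_rate(spiketrain)
            self.assertAlmostEqual(float(rate.rescale(pq.Hz).magnitude), 0.3)


    if __name__ == '__main__':
        unittest.main()

Such a test runs under both ``nosetests`` and ``python -m unittest``, matching step 4 above.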

+ 0 - 34
code/elephant/doc/environment.yml

@@ -1,34 +0,0 @@
-name: elephant
-dependencies:
-- libgfortran=1.0=0
-- alabaster=0.7.7=py35_0
-- babel=2.2.0=py35_0
-- docutils
-- jinja2=2.8=py35_0
-- markupsafe=0.23=py35_0
-- mkl=11.3.1=0
-- numpy
-- numpydoc
-- openssl=1.0.2g=0
-- pip=8.1.1=py35_0
-- pygments=2.1.1=py35_0
-- python=3.5.1=0
-- pytz=2016.2=py35_0
-- readline=6.2=2
-- scipy=0.17.0=np110py35_0
-- setuptools=20.3=py35_0
-- six=1.10.0=py35_0
-- scikit-learn==0.17.1
-- snowballstemmer=1.2.1=py35_0
-- sphinx=1.3.5=py35_0
-- sphinx_rtd_theme=0.1.9=py35_0
-- sqlite=3.9.2=0
-- tk=8.5.18=0
-- wheel=0.29.0=py35_0
-- xz=5.0.5=1
-- zlib=1.2.8=0
-- pip:
-  - neo
-  - quantities
-  - sphinx-rtd-theme
- 

BIN
code/elephant/doc/images/elephant_structure.png


BIN
code/elephant/doc/images/tutorials/tutorial_1_figure_1.png


BIN
code/elephant/doc/images/tutorials/tutorial_1_figure_2.png


+ 28 - 14
code/elephant/doc/index.rst

@@ -1,33 +1,42 @@
-.. Elephant documentation master file, created by
-   sphinx-quickstart on Thu Aug 22 08:39:42 2013.
-
-
 *********************************************
 Elephant - Electrophysiology Analysis Toolkit
 *********************************************
 
-Synopsis
---------
-    
+*Elephant* (Electrophysiology Analysis Toolkit) is an emerging open-source,
+community-centered library for the analysis of electrophysiological data in
+the Python programming language.
 
-*Elephant* is a toolbox for the analysis of electrophysiological data based on the Neo_ framework. This manual covers the installation of Elephant in an existing Python environment, several tutorials to help get you started, information on the structure and conventions of the library, a list of modules, and help for future contributors to Elephant.
+The focus of Elephant is on generic analysis functions for spike train data and
+time series recordings from electrodes, such as the local field potentials
+(LFP) or intracellular voltages. In addition to providing a common platform for
+analysis codes from different laboratories, the Elephant project aims to
+provide a consistent and homogeneous analysis framework that is built on a
+modular foundation. Elephant is the direct successor to Neurotools_ and
+maintains ties to complementary projects such as ephyviewer_ and
+neurotic_ for raw data visualization.
+
+The input-output data format is either a Neo_ object, a Quantity_ array, or a
+plain Numpy_ array. Quantity is a Numpy wrapper for handling physical
+quantities such as seconds, milliseconds, Hz, or volts; it is used in both
+Neo and Elephant.
 
-	
 Table of Contents
 -----------------
 
 .. toctree::
     :maxdepth: 1
 
-    overview
     install
-    tutorial
+    tutorials
     modules
     developers_guide
     authors
-    release_notes	       
+    release_notes
+    get_in_touch
+    acknowledgments
+    citation
+
 
-   
 
 .. Indices and tables
 .. ==================
@@ -37,7 +46,12 @@ Table of Contents
 .. * :ref:`search`
 
 
-.. _`Neo`: https://github.com/NeuralEnsemble/python-neo
+.. _Neurotools:  http://neuralensemble.org/NeuroTools/
+.. _ephyviewer:  https://ephyviewer.readthedocs.io/en/latest/
+.. _neurotic:  https://neurotic.readthedocs.io/en/latest/
+.. _Neo: http://neuralensemble.org/neo/
+.. _Numpy: http://www.numpy.org/
+.. _Quantity: https://python-quantities.readthedocs.io/en/latest/
 
 
 .. |date| date::

+ 96 - 67
code/elephant/doc/install.rst

@@ -1,107 +1,136 @@
 .. _install:
 
-****************************
-Prerequisites / Installation
-****************************
+************
+Installation
+************
 
-Elephant is a pure Python package so that it should be easy to install on any system.
+The easiest way to install Elephant is by creating a conda environment, followed by ``pip install elephant``.
+Below is the explanation of how to proceed with these two steps.
 
 
-Dependencies
-============
+.. _prerequisites:
 
-The following packages are required to use Elephant:
-    * Python_ >= 2.7
-    * numpy_ >= 1.8.2
-    * scipy_ >= 0.14.0
-    * quantities_ >= 0.10.1
-    * neo_ >= 0.5.0
+Prerequisites
+=============
 
-The following packages are optional in order to run certain parts of Elephant:
-    * For using the pandas_bridge module: 
-        * pandas >= 0.14.1
-    * For using the ASSET analysis
-    * scikit-learn >= 0.15.1
-    * For building the documentation:
-        * numpydoc >= 0.5
-        * sphinx >= 1.2.2
-    * For running tests:
-        * nose >= 1.3.3
+Elephant requires Python_ 2.7, 3.5, 3.6, 3.7, or 3.8.
+
+.. tabs::
+
+
+    .. tab:: (recommended) Conda (Linux/MacOS/Windows)
+
+        1. Create your conda environment (e.g., `elephant_env`):
 
-All dependencies can be found on the Python package index (PyPI).
+           .. code-block:: sh
 
+              conda create --name elephant_env python=3.7 numpy scipy tqdm
 
-Debian/Ubuntu
--------------
-For Debian/Ubuntu, we recommend to install numpy and scipy as system packages using apt-get::
-    
-    $ apt-get install python-numpy python-scipy python-pip python-six
+        2. Activate your environment:
 
-Further packages are found on the Python package index (pypi) and should be installed with pip_::
-    
-    $ pip install quantities
-    $ pip install neo
+           .. code-block:: sh
 
-We highly recommend to install these packages using a virtual environment provided by virtualenv_ or locally in the home directory using the ``--user`` option of pip (e.g., ``pip install --user quantities``), neither of which require administrator privileges.
+              conda activate elephant_env
 
-Windows/Mac OS X
-----------------
 
-On non-Linux operating systems we recommend using the Anaconda_ Python distribution, and installing all dependencies in a `Conda environment`_, e.g.::
+    .. tab:: Debian/Ubuntu
+
+        Open a terminal and run:
+
+        .. code-block:: sh
+
           sudo apt-get install python-pip python-numpy python-scipy python-six python-tqdm
 
-    $ conda create -n neuroscience python numpy scipy pip six
-    $ source activate neuroscience
-    $ pip install quantities
-    $ pip install neo
 
 
 Installation
 ============
 
-Automatic installation from PyPI
---------------------------------
+.. tabs::
+
+
+    .. tab:: Stable release version
+
+        The easiest way to install Elephant is via pip_:
+
+           .. code-block:: sh
+
+              pip install elephant
 
-The easiest way to install Elephant is via pip_::
+        To upgrade to a newer release use the ``--upgrade`` flag:
 
-    $ pip install elephant    
+           .. code-block:: sh
 
+              pip install --upgrade elephant
 
-Manual installation from pypi
------------------------------
+        If you do not have permission to install software systemwide, you can
+        install into your user directory using the ``--user`` flag:
 
-To download and install manually, download the latest package from http://pypi.python.org/pypi/elephant
+           .. code-block:: sh
 
-Then::
+              pip install --user elephant
 
-    $ tar xzf elephant-0.6.0.tar.gz
-    $ cd elephant-0.6.0
-    $ python setup.py install
-    
-or::
+        To install Elephant with all extra packages, do:
 
-    $ python3 setup.py install
-    
-depending on which version of Python you are using.
+           .. code-block:: sh
 
+              pip install elephant[extras]
 
-Installation of the latest build from source
---------------------------------------------
 
-To install the latest version of Elephant from the Git repository::
+    .. tab:: Development version
 
-    $ git clone git://github.com/NeuralEnsemble/elephant.git
-    $ cd elephant
-    $ python setup.py install
+        If you have `Git <https://git-scm.com/>`_ installed on your system,
+        it is also possible to install the development version of Elephant.
+
+        1. Before installing the development version, you may need to uninstall
+           the previously installed version of Elephant:
+
+           .. code-block:: sh
+
+              pip uninstall elephant
+
+        2. Clone the repository and install the local version:
+
+           .. code-block:: sh
+
+              git clone git://github.com/NeuralEnsemble/elephant.git
+              cd elephant
+              pip install -e .
+
+
+
+Dependencies
+------------
+
+The following packages are required to use Elephant (refer to requirements_ for the exact package versions):
+
+    * numpy_ - fast array computations
+    * scipy_ - scientific library for Python
+    * quantities_ - support for physical quantities with units (mV, ms, etc.)
+    * neo_ - electrophysiology data manipulations
+    * tqdm_ - progress bar
+    * six_ - Python 2 and 3 compatibility utilities
+
+These packages are automatically installed when you run ``pip install elephant``.
+
+The following packages are optional in order to run certain parts of Elephant:
 
+    * `pandas <https://pypi.org/project/pandas/>`_ - for the :doc:`pandas_bridge <reference/pandas_bridge>` module
+    * `scikit-learn <https://pypi.org/project/scikit-learn/>`_ - for the :doc:`ASSET <reference/asset>` analysis
+    * `nose <https://pypi.org/project/nose/>`_ - for running tests
+    * `numpydoc <https://pypi.org/project/numpydoc/>`_ and `sphinx <https://pypi.org/project/Sphinx/>`_ - for building the documentation
 
+These, as well as the packages above, are automatically installed when you run ``pip install elephant[extras]``.
 
 .. _`Python`: http://python.org/
 .. _`numpy`: http://www.numpy.org/
-.. _`scipy`: http://scipy.org/scipylib/
+.. _`scipy`: https://www.scipy.org/
 .. _`quantities`: http://pypi.python.org/pypi/quantities
 .. _`neo`: http://pypi.python.org/pypi/neo
 .. _`pip`: http://pypi.python.org/pypi/pip
-.. _`virtualenv`: https://virtualenv.pypa.io/en/latest/
-.. _`this snapshot`: https://github.com/NeuralEnsemble/python-neo/archive/snapshot-20150821.zip
-.. _Anaconda: http://continuum.io/downloads
-.. _`Conda environment`: http://conda.pydata.org/docs/faq.html#creating-new-environments
+.. _Anaconda: https://docs.anaconda.com/anaconda/install/
+.. _`Conda environment`: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html
+.. _`tqdm`: https://pypi.org/project/tqdm/
+.. _`six`: https://pypi.org/project/six/
+.. _requirements: https://github.com/NeuralEnsemble/elephant/blob/master/requirements/requirements.txt
+.. _PyPI: https://pypi.org/
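
After either installation route, a quick smoke test from a Python prompt confirms that the package is importable (recent releases expose the version string read from :file:`elephant/VERSION`)::

    import elephant
    print(elephant.__version__)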

+ 21 - 12
code/elephant/doc/modules.rst

@@ -5,22 +5,31 @@ Function Reference by Module
 .. toctree::
    :maxdepth: 1
 
-   reference/statistics
-   reference/signal_processing
-   reference/spectral
-   reference/current_source_density
-   reference/kernels
-   reference/spike_train_dissimilarity
-   reference/sta
-   reference/spike_train_correlation
-   reference/unitary_event_analysis
-   reference/cubic
    reference/asset
+   reference/causality
    reference/cell_assembly_detection
-   reference/spike_train_generation
-   reference/spike_train_surrogates
+   reference/change_point_detection
    reference/conversion
+   reference/cubic
+   reference/current_source_density
+   reference/gpfa
+   reference/kernels
    reference/neo_tools
    reference/pandas_bridge
+   reference/parallel
+   reference/phase_analysis
+   reference/signal_processing
+   reference/spade
+   reference/spectral
+   reference/spike_train_generation
+   reference/spike_train_surrogates
+   reference/sta
+   reference/statistics
+   reference/unitary_event_analysis
+   reference/waveform_features
 
 
+.. toctree::
+   :maxdepth: 2
+
+   reference/_spike_train_processing

+ 0 - 113
code/elephant/doc/overview.rst

@@ -1,113 +0,0 @@
-********
-Overview
-********
-
-What is Elephant?
-=====================
-
-As a result of the complexity inherent in modern recording technologies that yield massively parallel data streams and in advanced analysis methods to explore such rich data sets, the need for more reproducible research in the neurosciences can no longer be ignored. Reproducibility rests on building workflows that may allow users to transparently trace their analysis steps from data acquisition to final publication. A key component of such a workflow is a set of defined analysis methods to perform the data processing.
-
-Elephant (Electrophysiology Analysis Toolkit) is an emerging open-source, community centered library for the analysis of electrophysiological data in the Python programming language. The focus of Elephant is on generic analysis functions for spike train data and time series recordings from electrodes, such as the local field potentials (LFP) or intracellular voltages. In addition to providing a common platform for analysis codes from different laboratories, the Elephant project aims to provide a consistent and homogeneous analysis framework that is built on a modular foundation. Elephant is the direct successor to Neurotools [#f1]_ and maintains ties to complementary projects such as OpenElectrophy [#f2]_ and spykeviewer [#f3]_.
-
-* Analysis functions use consistent data formats and conventions as input arguments and outputs. Electrophysiological data will generally be represented by data models defined by the Neo_ [#f4]_ project.
-* Library functions are based on a set of core functions for commonly used operations, such as sliding windows, converting data to alternate representations, or the generation of surrogates for hypothesis testing.
-* Accepted analysis functions must be equipped with a range of unit tests to ensure a high standard of code quality.
-
-
-Elephant library structure
-==========================
-
-Elephant is a standard python package and is structured into a number of submodules. The following is a sketch of the layout of the Elephant library (0.3.0 release).
-
-.. figure:: images/elephant_structure.png
-    :width: 400 px
-    :align: center
-    :figwidth: 80 %
-    
-    Modules of the Elephant library. Modules containing analysis functions are colored in blue shades, core functionality in green shades.
-   
-
-Conceptually, modules of the Elephant library can be divided into those related to a specific category of analysis methods, and supporting modules that provide a layer of various core utility functions. All available modules are available directly on the the top level of the Elephant package in the ``elephant`` subdirectory to avoid unnecessary hierarchical clutter. Unit tests for all functions are located in the ``elephant/test`` subdirectory and are named according the module name. This documentation is located in the top level ``doc`` subdirectory.
-
-In the following we provide a brief overview of the modules available in Elephant.
-
-
-Analysis modules
-----------------
-
-``statistics``
-^^^^^^^^^^^^^^
-Statistical measures of spike trains (e.g., Fano factor) and functions to estimate firing rates.
-
-``signal_processing``
-^^^^^^^^^^^^^^^^^^^^^
-Basic processing procedures for analog signals (e.g., performing a z-score of a signal, or filtering a signal).
-
-``spectral``
-^^^^^^^^^^^^
-Identification of spectral properties in analog signals (e.g., the power spectrum)
-
-``kernels``
-^^^^^^^^^^^^^^
-A class that provides representations for commonly used kernel functions.
-
-``spike_train_dissimilarity_measures``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Spike train metrics (e.g., the Victor-Purpura measure) to measure the (dis-)similarity between spike trains.
-
-``sta``
-^^^^^^^
-Calculate the spike-triggered average and spike-field-coherence of an analog signal.
-
-``spike_train_correlation``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Functions to quantify correlations between sets of spike trains.
-
-``unitary_event_analysis``
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-Determine periods where neurons synchronize their activity beyond chance level.
-
-``cubic``
-^^^^^^^^^
-Implements the method Cumulant Based Inference of higher-order Correlation (CuBIC) to detect the presence of higher-order correlations in massively parallel data based on its complexity distribution.
-
-``asset``
-^^^^^^^^^
-Implementation of the Analysis of Sequences of Synchronous EvenTs (ASSET) to detect, in particular, syn-fire chain like activity.
-
-``csd``
-^^^^^^^
-Inverse and standard methods to estimate of current source density (CSD) of laminar LFP recordings.
-
-
-Supporting modules
-------------------
-
-``conversion``
-^^^^^^^^^^^^^^
-This module allows to convert standard data representations (e.g., a spike train stored as Neo ``SpikeTrain`` object) into other representations useful to perform calculations on the data. An example is the representation of a spike train as a sequence of 0-1 values (*binned spike train*). 
-
-``spike_train_generation``
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-This module provides functions to generate spike trains according to prescribed stochastic models (e.g., a Poisson spike train). 
-
-``spike_train_surrogates``
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-This module provides functionality to generate surrogate spike trains from given spike train data. This is particularly useful in the context of determining the significance of analysis results via Monte-Carlo methods.
-
-``neo_tools``
-^^^^^^^^^^^^^
-Provides useful convenience functions to work efficiently with Neo objects.
-
-``pandas_bridge``
-^^^^^^^^^^^^^^^^^
-Bridge from Elephant to the pandas library.
-
-
-References
-==========
-.. [#f1]  http://neuralensemble.org/NeuroTools/
-.. [#f2]  http://neuralensemble.org/OpenElectrophy/
-.. [#f3]  http://spykeutils.readthedocs.org/en/0.4.1/
-.. [#f4]  Garcia et al. (2014) Front.~Neuroinform. 8:10
-.. _`Neo`: http://neuralensemble.org/neo/

+ 10 - 2
code/elephant/doc/reference/asset.rst

@@ -1,6 +1,14 @@
 ===================================================
-Analysis of Sequences of Synchronous EvenTs (ASSET) 
+Analysis of Sequences of Synchronous EvenTs (ASSET)
 ===================================================
 
 .. automodule:: elephant.asset
-   :members:
+
+
+References
+----------
+
+.. bibliography:: ../bib/elephant.bib
+   :labelprefix: as
+   :keyprefix: asset-
+   :style: unsrt

+ 3 - 3
code/elephant/doc/reference/conversion.rst

@@ -1,6 +1,6 @@
-=======================
-Data format conversions
-=======================
+=============================
+BinnedSpikeTrain (conversion)
+=============================
 
 .. automodule:: elephant.conversion
    :members:

+ 1 - 1
code/elephant/doc/reference/cubic.rst

@@ -1,5 +1,5 @@
 ============================================================
-Cumulant Based Inference of higher-order Correlation (CuBIC) 
+Cumulant Based Inference of higher-order Correlation (CuBIC)
 ============================================================
 
 .. automodule:: elephant.cubic

+ 0 - 1
code/elephant/doc/reference/kernels.rst

@@ -3,4 +3,3 @@ Kernels
 =======
 
 .. automodule:: elephant.kernels
-   :members:

+ 3 - 3
code/elephant/doc/reference/neo_tools.rst

@@ -1,6 +1,6 @@
-===========================================
-Utility functions to manipulate Neo objects
-===========================================
+=====================
+Neo objects utilities
+=====================
 
 .. automodule:: elephant.neo_tools
    :members:

+ 3 - 3
code/elephant/doc/reference/spike_train_dissimilarity.rst

@@ -1,6 +1,6 @@
-=================================================
-Spike train dissimilarity / spike train synchrony
-=================================================
+=========================
+Spike train dissimilarity
+=========================
 
 
 .. automodule:: elephant.spike_train_dissimilarity

+ 0 - 12
code/elephant/doc/reference/sta.rst

@@ -2,17 +2,5 @@
 Spike-triggered average
 =======================
 
-.. testsetup::
-
-   import numpy as np
-   import neo
-   from quantities import ms
-   from elephant.sta import spike_triggered_average
-
-   signal1 = np.arange(1000.0)
-   signal2 = np.arange(1, 1001.0)
-   spiketrain1 = neo.SpikeTrain([10.12, 20.23, 30.45], units=ms, t_stop=50*ms)
-   spiketrain2 = neo.SpikeTrain([10.34, 20.56, 30.67], units=ms, t_stop=50*ms)
-
 .. automodule:: elephant.sta
    :members:

+ 3 - 4
code/elephant/doc/reference/statistics.rst

@@ -1,6 +1,5 @@
-======================
-Spike train statistics
-======================
+==========================
+Statistics of spike trains
+==========================
 
 .. automodule:: elephant.statistics
-   :members:

+ 21 - 4
code/elephant/doc/reference/unitary_event_analysis.rst

@@ -1,6 +1,23 @@
-===========================
-Unitary Event (UE) Analysis
-===========================
+======================
+Unitary Event Analysis
+======================
 
 .. automodule:: elephant.unitary_event_analysis
-   :members:
+
+Author Contributions
+--------------------
+
+- Vahid Rostami (VR)
+- Sonja Gruen (SG)
+- Markus Diesmann (MD)
+
+VR implemented the method, SG and MD provided guidance.
+
+
+References
+----------
+
+.. bibliography:: ../bib/elephant.bib
+   :labelprefix: ue
+   :keyprefix: unitary_event_analysis-
+   :style: unsrt

+ 169 - 7
code/elephant/doc/release_notes.rst

@@ -2,6 +2,168 @@
 Release Notes
 *************
 
+Elephant 0.8.0 release notes
+============================
+
+New features
+------------
+* The `parallel` module is a new experimental module (https://github.com/NeuralEnsemble/elephant/pull/307) to run Python functions concurrently. It supports the native (pythonic) `ProcessPoolExecutor` and MPI, and is not limited to Elephant functions.
+* Added an optional `refractory_period` argument, set to None by default, to the `dither_spikes` function (https://github.com/NeuralEnsemble/elephant/pull/297); see the sketch after this list.
+* Added `cdf` and `icdf` functions to the Kernel class to correctly estimate the median index, needed by the `instantaneous_rate` function in statistics.py (https://github.com/NeuralEnsemble/elephant/pull/313).
+* Added an optional `center_kernel` argument (set to True by default, to reproduce the behavior of Elephant < 0.8.0) to the `instantaneous_rate` function in statistics.py (https://github.com/NeuralEnsemble/elephant/pull/313).
+
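+A minimal usage sketch of the new `refractory_period` argument of
+`dither_spikes` (the argument values are illustrative only):
+
+.. code-block:: python
+
+    import quantities as pq
+    from elephant.spike_train_generation import homogeneous_poisson_process
+    from elephant.spike_train_surrogates import dither_spikes
+
+    spiketrain = homogeneous_poisson_process(rate=10 * pq.Hz, t_stop=10 * pq.s)
+    # dither each spike within +-15 ms while keeping consecutive surrogate
+    # spikes at least ~4 ms apart
+    surrogates = dither_spikes(spiketrain, dither=15 * pq.ms,
+                               refractory_period=4 * pq.ms)
+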
+New tutorials
+-------------
+* Analysis of Sequences of Synchronous EvenTs (ASSET) tutorial: https://elephant.readthedocs.io/en/latest/tutorials/asset.html
+* Parallel module tutorial: https://elephant.readthedocs.io/en/latest/tutorials/parallel.html
+
+Optimization
+------------
+* Optimized ASSET runtime by a factor of 10 and more (https://github.com/NeuralEnsemble/elephant/pull/259, https://github.com/NeuralEnsemble/elephant/pull/333).
+
+Python 2.7 and 3.5 deprecation
+------------------------------
+Python 2.7 and 3.5 are deprecated and will no longer be maintained after the end of 2020. Please switch to Python 3.6+.
+
+Breaking changes
+----------------
+* Naming convention changes (`binsize` -> `bin_size`, etc.) in almost all Elephant functions (https://github.com/NeuralEnsemble/elephant/pull/316); see the sketch below.
+
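+For example (a minimal sketch; `BinnedSpikeTrain` is one affected entry point,
+and `spiketrains` is assumed to be a list of `neo.SpikeTrain` objects):
+
+.. code-block:: python
+
+    import quantities as pq
+    from elephant.conversion import BinnedSpikeTrain
+
+    # before 0.8.0: BinnedSpikeTrain(spiketrains, binsize=10 * pq.ms)
+    bst = BinnedSpikeTrain(spiketrains, bin_size=10 * pq.ms)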
+
+Elephant 0.7.0 release notes
+============================
+
+Breaking changes
+----------------
+* [gpfa] GPFA dimensionality reduction method is rewritten in easy-to-use scikit-learn class style format (https://github.com/NeuralEnsemble/elephant/pull/287):
+
+.. code-block:: python
+
+    import quantities as pq
+    from elephant.gpfa import GPFA
+
+    gpfa = GPFA(bin_size=20*pq.ms, x_dim=8)
+    results = gpfa.fit_transform(spiketrains, returned_data=['xorth', 'xsm'])
+
+New tutorials
+-------------
+* GPFA dimensionality reduction method: https://elephant.readthedocs.io/en/latest/tutorials/gpfa.html
+* Unitary Event Analysis of coordinated spiking activity: https://elephant.readthedocs.io/en/latest/tutorials/unitary_event_analysis.html
+* (Introductory) statistics module: https://elephant.readthedocs.io/en/latest/tutorials/statistics.html
+
+Deprecations
+------------
+* **Python 2.7 support will be dropped on Dec 31, 2020.** Please switch to Python 3.6, 3.7, or 3.8.
+* [spike train generation] `homogeneous_poisson_process_with_refr_period()`, introduced in v0.6.4, is deprecated and will be deleted in v0.8.0. Use `homogeneous_poisson_process(refractory_period=...)` instead; see the sketch after this list.
+* [pandas bridge] pandas\_bridge module is deprecated and will be deleted in v0.8.0.
+
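+A minimal sketch of the replacement call (rate, duration, and refractory
+period values are illustrative):
+
+.. code-block:: python
+
+    import quantities as pq
+    from elephant.spike_train_generation import homogeneous_poisson_process
+
+    # replaces the deprecated homogeneous_poisson_process_with_refr_period()
+    spiketrain = homogeneous_poisson_process(
+        rate=10 * pq.Hz, t_stop=10 * pq.s, refractory_period=3 * pq.ms)
+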
+New features
+------------
+* New documentation style, guidelines, tutorials, and more (https://github.com/NeuralEnsemble/elephant/pull/294).
+* Python 3.8 support (https://github.com/NeuralEnsemble/elephant/pull/282).
+* [spike train generation] Added `refractory_period` flag in `homogeneous_poisson_process()` (https://github.com/NeuralEnsemble/elephant/pull/292) and `inhomogeneous_poisson_process()` (https://github.com/NeuralEnsemble/elephant/pull/295) functions. The default is `refractory_period=None`, meaning no refractoriness.
+* [spike train correlation] `cross_correlation_histogram()` supports different t_start and t_stop of input spiketrains.
+* [waveform features] The `waveform_width()` function extracts the trough-to-peak (TTP) width of a waveform (https://github.com/NeuralEnsemble/elephant/pull/279).
+* [signal processing] Added a `scaleopt` flag in `pairwise_cross_correlation()` to mimic the behavior of Matlab's `xcorr()` function (https://github.com/NeuralEnsemble/elephant/pull/277). The default is `scaleopt='unbiased'` to be consistent with the previous versions of Elephant.
+* [spike train surrogates] Joint-ISI dithering method via `JointISI` class (https://github.com/NeuralEnsemble/elephant/pull/275).
+
+Bug fixes
+---------
+* [spike train correlation] Fixed the CCH border correction (https://github.com/NeuralEnsemble/elephant/pull/298). Now, the border correction in `cross_correlation_histogram()` correctly reflects the number of bins used for the calculation at each lag; the correction factor is unity at full overlap (see the sketch after this list).
+* [phase analysis] Fixed incorrect behavior of `spike_triggered_phase()` when the spike train and the analog signal had different time units (https://github.com/NeuralEnsemble/elephant/pull/270).
+
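+A sketch of the corrected border scaling (our reading of the fix, not a
+verbatim excerpt of the implementation): with `n_bins` time bins, the raw
+counts at lag `k` are rescaled by `n_bins / (n_bins - |k|)`, which equals 1
+at full overlap:
+
+.. code-block:: python
+
+    import numpy as np
+
+    def border_correction_factor(n_bins, lags):
+        # fewer bin pairs overlap at larger |lag|, so the counts there are
+        # scaled up; at lag 0 (full overlap) the factor equals 1
+        return n_bins / (n_bins - np.abs(np.asarray(lags)))
+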
+Performance
+-----------
+* [spade] 7x SPADE speedup (https://github.com/NeuralEnsemble/elephant/pull/280, https://github.com/NeuralEnsemble/elephant/pull/285, https://github.com/NeuralEnsemble/elephant/pull/286). Moreover, SPADE is now able to handle all surrogate types available in Elephant, as well as more types of statistical corrections.
+* [conversion] Fast & memory-efficient `covariance()` and Pearson `corrcoef()` (https://github.com/NeuralEnsemble/elephant/pull/274). The flag `fast=True` was added and is the default in both functions; see the sketch after this list.
+* [conversion] Use fast fftconvolve instead of np.correlate in `cross_correlation_histogram()` (https://github.com/NeuralEnsemble/elephant/pull/273).
+
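+A minimal sketch of the faster correlation path (`sts` is assumed to be a
+list of `neo.SpikeTrain` objects; the second argument of `BinnedSpikeTrain`
+is the bin size):
+
+.. code-block:: python
+
+    import quantities as pq
+    from elephant.conversion import BinnedSpikeTrain
+    from elephant.spike_train_correlation import corrcoef
+
+    binned = BinnedSpikeTrain(sts, 5 * pq.ms)
+    corr_matrix = corrcoef(binned, fast=True)  # fast=True is the new default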
+
+Elephant 0.6.4 release notes
+============================
+
+This release has been made for the "1st Elephant User Workshop" (https://www.humanbrainproject.eu/en/education/participatecollaborate/infrastructure-events-trainings/1st-elephant-user-workshop-accelerate-structured-and-reproducibl).
+
+
+Main features
+-------------
+* neo v0.8.0 compatible
+
+
+New modules
+-----------
+* GPFA - Gaussian-process factor analysis - dimensionality reduction method for neural trajectory visualization (https://github.com/NeuralEnsemble/elephant/pull/233). *Note: the API could change in the future.*
+
+
+Bug fixes
+---------
+* [signal processing] Keep `array_annotations` in the output of signal processing functions (https://github.com/NeuralEnsemble/elephant/pull/258).
+* [SPADE] Fixed the calculation of the duration of a pattern in the output (https://github.com/NeuralEnsemble/elephant/pull/254).
+* [statistics] Fixed automatic kernel selection yielding incorrect values (https://github.com/NeuralEnsemble/elephant/pull/246).
+
+
+Improvements
+------------
+* Vectorized the `spike_time_tiling_coefficient()` function - got rid of a double for-loop (https://github.com/NeuralEnsemble/elephant/pull/244).
+* Reduced the number of warnings during the tests (https://github.com/NeuralEnsemble/elephant/pull/238).
+* Removed unused debug code in `spade/fast_fca.py` (https://github.com/NeuralEnsemble/elephant/pull/249).
+* Improved doc string of `covariance()` and `corrcoef()` (https://github.com/NeuralEnsemble/elephant/pull/260).
+
+
+
+Elephant 0.6.3 release notes
+============================
+July 22nd 2019
+
+The release v0.6.3 is mostly about improving maintenance.
+
+New functions
+-------------
+* `waveform_features` module
+    * Waveform signal-to-noise ratio (https://github.com/NeuralEnsemble/elephant/pull/219).
+* Added support for Butterworth `sosfiltfilt` - numerically stable (in particular, higher order) filtering (https://github.com/NeuralEnsemble/elephant/pull/234).
+
+Bug fixes
+---------
+* Fixed neo version typo in requirements file (https://github.com/NeuralEnsemble/elephant/pull/218)
+* Fixed broken docs (https://github.com/NeuralEnsemble/elephant/pull/230, https://github.com/NeuralEnsemble/elephant/pull/232)
+* Fixed issue with 32-bit arch (https://github.com/NeuralEnsemble/elephant/pull/229)
+
+Other changes
+-------------
+* Added issue templates (https://github.com/NeuralEnsemble/elephant/pull/226)
+* Single VERSION file (https://github.com/NeuralEnsemble/elephant/pull/231)
+
+Elephant 0.6.2 release notes
+============================
+April 23rd 2019
+
+New functions
+-------------
+* `signal_processing` module
+    * New functions to calculate the area under a time series and the derivative of a time series.
+
+Other changes
+-------------
+* Added support to initialize binned spike train representations with a matrix
+* Multiple bug fixes
+
+
+Elephant 0.6.1 release notes
+============================
+April 1st 2019
+
+New functions
+-------------
+* `signal_processing` module
+    * New function to calculate the cross-correlation function for analog signals.
+* `spade` module
+    * Spatio-temporal spike pattern detection now includes the option to assess significance also based on the time-lags of patterns, in addition to pattern size and frequency (referred to as the 3D pattern spectrum).
+
+Other changes
+-------------
+* This release fixes a number of compatibility issues in relation to API breaking changes in the Neo library.
+* Fixed error in STTC calculation (spike time tiling coefficient)
+* Minor bug fixes
+
+
 Elephant 0.6.0 release notes
 ============================
 October 12th 2018
@@ -17,7 +179,7 @@ Other changes
 -------------
 * Switched to multiple `requirements.txt` files which are directly read into the `setup.py`
 * `instantaneous_rate()` accepts now list of spiketrains
-* Minor bug fixes  
+* Minor bug fixes
 
 
 Elephant 0.5.0 release notes
@@ -34,12 +196,12 @@ New functions
     * New function to extract spike-triggered phases of an AnalogSignal
 * `unitary_event_analysis` module:
     * Added new unit test to the UE function to verify the method based on data of a recent [Re]Science publication
-  
+
 Other changes
 -------------
 * Minor bug fixes
-  
-  
+
+
 Elephant 0.4.3 release notes
 ============================
 March 2nd 2018
@@ -49,7 +211,7 @@ Other changes
 * Bug fixes in `spade` module:
     * Fixed an incompatibility with the latest version of an external library
 
-  
+
 Elephant 0.4.2 release notes
 ============================
 March 1st 2018
@@ -74,8 +236,8 @@ Other changes
 * Fixed bug in ISI function `isi()`, `statistics.py` module
 * Fixed bug in `dither_spikes()`, `spike_train_surrogates.py`
 * Minor bug fixes
- 
- 
+
+
 Elephant 0.4.1 release notes
 ============================
 March 23rd 2017

+ 0 - 6
code/elephant/doc/requirements.txt

@@ -1,6 +0,0 @@
-# Requirements for building documentation
-numpy>=1.8.2
-quantities>=0.10.1
-neo>=0.5.0
-numpydoc
-sphinx

+ 0 - 85
code/elephant/doc/tutorial.rst

@@ -1,85 +0,0 @@
-*********
-Tutorials
-*********
-
-Getting Started
----------------
-
-In this first tutorial, we will go through a very simple example of how to use Elephant. We will numerically verify that the coefficient of variation (CV), a measure of the variability of inter-spike intervals, of a spike train that is modeled as a random (stochastic) Poisson process is 1.
-
-As a first step, install Elephant and its dependencies as outlined in :ref:`install`. Next, start up your Python shell. Under Windows, you can likely launch a Python shell from the Start menu. Under Linux or Mac, you may start Python by typing::
-
-    $ python
-
-As a first step, we want to generate spike train data modeled as a stochastic Poisson process. For this purpose, we can use the :mod:`elephant.spike_train_generation` module, which provides the :func:`homogeneous_poisson_process` function::
-
-    >>> from elephant.spike_train_generation import homogeneous_poisson_process
-
-Use the :func:`help()` function of Python to display the documentation for this function::
-
-    >>> help(homogeneous_poisson_process)
-
-As you can see, the function requires three parameters: the firing rate of the Poisson process, the start time and the stop time. These three parameters are specified as :class:`Quantity` objects: these are essentially arrays or numbers with a unit of measurement attached. We will see how to use these objects in a second. You can quit the help screen by typing ``q``.
-
-Let us now generate 100 independent Poisson spike trains for 100 seconds each with a rate of 10 Hz for which we later will calculate the CV. For simplicity, we will store the spike trains in a list::
-
-    >>> from quantities import Hz, s, ms
-    >>> spiketrain_list = [
-    ...     homogeneous_poisson_process(rate=10.0*Hz, t_start=0.0*s, t_stop=100.0*s)
-    ...     for i in range(100)]
-
-Notice that the units ``s`` and ``Hz`` have both been imported from the :mod:`quantities` library and can be directly attached to the values by multiplication. The output is a list of 100 Neo :class:`SpikeTrain` objects::
-
-    >>> print(len(spiketrain_list))
-    100
-    >>> print(type(spiketrain_list[0]))
-    <class 'neo.core.spiketrain.SpikeTrain'>
-
-Before we continue, let us (optionally) have a look at the spike trains in a spike raster plot. This can be created, e.g., using the `matplotlib`_ framework (you may need to install this library, as it is not one of the dependencies of Elephant)::
-
-    >>> import matplotlib.pyplot as plt
-    >>> import numpy as np
-    >>> for i, spiketrain in enumerate(spiketrain_list):
-            t = spiketrain.rescale(ms)
-            plt.plot(t, i * np.ones_like(t), 'k.', markersize=2)
-    >>> plt.axis('tight')
-    >>> plt.xlim(0, 1000)
-    >>> plt.xlabel('Time (ms)', fontsize=16)
-    >>> plt.ylabel('Spike Train Index', fontsize=16)
-    >>> plt.gca().tick_params(axis='both', which='major', labelsize=14)
-    >>> plt.show()
-
-Notice how the spike times of each spike train are extracted from each of the spike trains in the for-loop. The :meth:`rescale` operation of the quantities library is used to transform units to milliseconds. In order to aid the visualization, we restrict the plot to the first 1000 ms (:func:`xlim` function). The :func:`show` command plots the spike raster in a new figure window on the screen.
-
-.. figure:: images/tutorials/tutorial_1_figure_1.png
-    :width: 600 px
-    :align: center
-    :figwidth: 80 %
-    
-    Spike raster plot of the 100 Poisson spike trains showing the first second of data.
-
-From the plot you can see the random nature of each Poisson spike train. Let us now calculate the distribution of the 100 CVs obtained from inter-spike intervals (ISIs) of these spike trains. Close the graphics window to get back to the Python prompt. The functions to calculate the list of ISIs and the CV are both located in the :mod:`elephant.statistics` module. Thus, for each spike train in our list, we first call the :func:`isi` function which returns an array of all *N-1* ISIs for the *N* spikes in the input spike train (refer to the online help using ``help(isi)``). We then feed the list of ISIs into the :func:`cv` function, which returns a single value for the coefficient of variation::
-
-    >>> from elephant.statistics import isi, cv
-    >>> cv_list = [cv(isi(spiketrain)) for spiketrain in spiketrain_list]
-
-In a final step, let's plot a histogram of the obtained CVs (again illustrated using the matplotlib framework for plotting)::
-
-    >>> plt.hist(cv_list)
-    >>> plt.xlabel('CV', fontsize=16)
-    >>> plt.ylabel('count', fontsize=16)
-    >>> plt.gca().tick_params(axis='both', which='major', labelsize=14)
-    >>> plt.show()
-
-As predicted by theory, the CV values are clustered around 1. This concludes our first "getting started" tutorial on the use of Elephant. More tutorials will be added soon.
-
-.. figure:: images/tutorials/tutorial_1_figure_2.png
-    :width: 600 px
-    :align: center
-    :figwidth: 80 %
-    
-    Distribution of CV values of the ISIs of 100 Poisson spike trains.
-
-
-
-.. _`matplotlib`: http://matplotlib.org/

+ 20 - 4
code/elephant/elephant/__init__.py

@@ -2,7 +2,7 @@
 """
 Elephant is a package for the analysis of neurophysiology data, based on Neo.
 
-:copyright: Copyright 2014-2018 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2019 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
@@ -22,13 +22,29 @@ from . import (statistics,
                sta,
                conversion,
                neo_tools,
+               cell_assembly_detection,
                spade,
-               cell_assembly_detection)
+               waveform_features,
+               gpfa)
+
+# modules not imported on purpose:
+#   parallel: avoids warnings when elephant is imported
 
 try:
-    from . import pandas_bridge
     from . import asset
+    from . import spade
 except ImportError:
+    # requirements-extras are missing
+    # please install Elephant with `pip install elephant[extras]`
     pass
 
-__version__ = "0.6.0"
+
+def _get_version():
+    import os
+    elephant_dir = os.path.dirname(__file__)
+    with open(os.path.join(elephant_dir, 'VERSION')) as version_file:
+        version = version_file.read().strip()
+    return version
+
+
+__version__ = _get_version()

The file diff is not shown because it is too large
+ 1563 - 1381
code/elephant/elephant/asset.py


+ 227 - 198
code/elephant/elephant/cell_assembly_detection.py

@@ -6,7 +6,7 @@ between spikes in a pattern), and at multiple time scales,
 e.g. from synchronous patterns to firing rate co-modulations.
 
 CAD consists of a statistical parametric testing done on the level of pairs
-of neurons, followed by an agglomerative recursive algorithm, in order to 
+of neurons, followed by an agglomerative recursive algorithm, in order to
 detect and test statistically precise repetitions of spikes in the data.
 In particular, pairs of neurons are tested for significance under the null
 hypothesis of independence, and then the significant pairs are agglomerated
@@ -16,12 +16,12 @@ The method was published in Russo et al. 2017 [1]. The original
 code is in Matlab language.
 
 Given a list of discretized (binned) spike trains by a given temporal
-scale (binsize), assumed to be recorded in parallel, the CAD analysis can be
+scale (bin_size), assumed to be recorded in parallel, the CAD analysis can be
 applied as demonstrated in this short toy example of 5 parallel spike trains
 that exhibit fully synchronous events of order 5.
 
-Example
--------
+Examples
+--------
 >>> import matplotlib.pyplot as plt
 >>> import elephant.conversion as conv
 >>> import elephant.spike_train_generation
@@ -29,23 +29,23 @@ Example
 >>> import numpy as np
 >>> import elephant.cell_assembly_detection as cad
 >>> np.random.seed(30)
->>> # Generate correlated data and bin it with a binsize of 10ms
+>>> # Generate correlated data and bin it with a bin_size of 10ms
 >>> sts = elephant.spike_train_generation.cpp(
 >>>     rate=15*pq.Hz, A=[0]+[0.95]+[0]*4+[0.05], t_stop=10*pq.s)
->>> binsize = 10*pq.ms
->>> spM = conv.BinnedSpikeTrain(sts, binsize=binsize)
+>>> bin_size = 10*pq.ms
+>>> spM = conv.BinnedSpikeTrain(sts, bin_size=bin_size)
 >>> # Call of the method
->>> patterns = cad.cell_assembly_detection(spM=spM, maxlag=2)[0]
+>>> patterns = cad.cell_assembly_detection(spM=spM, max_lag=2)[0]
 >>> # Plotting
 >>> plt.figure()
 >>> for neu in patterns['neurons']:
 >>>     if neu == 0:
 >>>         plt.plot(
->>>             patterns['times']*binsize, [neu]*len(patterns['times']),
+>>>             patterns['times']*bin_size, [neu]*len(patterns['times']),
 >>>             'ro', label='pattern')
 >>>     else:
 >>>         plt.plot(
->>>             patterns['times']*binsize, [neu] * len(patterns['times']),
+>>>             patterns['times']*bin_size, [neu] * len(patterns['times']),
 >>>             'ro')
 >>> # Raster plot of the data
 >>> for st_idx, st in enumerate(sts):
@@ -68,34 +68,47 @@ Elife, 6.
 
 """
 
-import numpy as np
+from __future__ import division, print_function, unicode_literals
+
 import copy
 import math
-import elephant.conversion as conv
-from scipy.stats import f
 import time
 
+import numpy as np
+from scipy.stats import f
+
+import elephant.conversion as conv
+from elephant.utils import deprecated_alias
 
-def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
-                            min_occ=1, size_chunks=100, max_spikes=np.inf,
-                            significance_pruning=True, subgroup_pruning=True,
-                            same_config_cut=False, bool_times_format=False,
-                            verbose=False):
+__all__ = [
+    "cell_assembly_detection"
+]
+
+
+@deprecated_alias(data='binned_spiketrain', maxlag='max_lag',
+                  min_occ='min_occurrences',
+                  same_config_cut='same_configuration_pruning')
+def cell_assembly_detection(binned_spiketrain, max_lag, reference_lag=2,
+                            alpha=0.05, min_occurrences=1, size_chunks=100,
+                            max_spikes=np.inf, significance_pruning=True,
+                            subgroup_pruning=True,
+                            same_configuration_pruning=False,
+                            bool_times_format=False, verbose=False):
 
     """
     The function performs the CAD analysis for the binned (discretized) spike
     trains given in input. The method looks for candidate
     significant patterns with lags (number of bins between successive spikes
-    in the pattern) going from `-maxlag` and `maxlag` (second parameter of the
+    in the pattern) going from `-max_lag` to `max_lag` (second parameter of the
     function). Thus, between two successive spikes in the pattern there can
-    be at most `maxlag`*`binsize` units of time.
+    be at most `max_lag`*`bin_size` units of time.
 
     The method agglomerates pairs of units (or a unit and a preexisting
     assembly), tests their significance by a statistical test
     and stops when the detected assemblies reach their maximal dimension
     (parameter `max_spikes`).
 
-    At every agglomeration size step (ex. from triplets to quadruplets), the
+    At every agglomeration size step (e.g. from triplets to quadruplets), the
     method filters patterns having the same neurons involved, and keeps only
     the most significant one. This pruning is optional and the choice is
     identified by the parameter 'significance_pruning'.
@@ -105,126 +118,138 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
 
     Parameters
     ----------
-    data : BinnedSpikeTrain object
-        binned spike trains containing data to be analysed
-    maxlag: int
-        maximal lag to be tested. For a binning dimension of binsize the
+    binned_spiketrain : elephant.conversion.BinnedSpikeTrain
+        Binned spike trains containing data to be analyzed.
+    max_lag : int
+        Maximal lag to be tested. For a binning dimension of bin_size the
         method will test all pair configurations with a time
-        shift between '-maxlag' and 'maxlag'
-    reference_lag : int
-        reference lag (in bins) for the non-stationarity correction in the
-        statistical test
-        Default value : 2
-    alpha : float
-        significance level for the statistical test
-        Default : 0.05
-    min_occ : int
-        minimal number of occurrences required for an assembly
+        shift between '-max_lag' and 'max_lag'.
+    reference_lag : int, optional
+        Reference lag (in bins) for the non-stationarity correction in the
+        statistical test.
+        Default: 2.
+    alpha : float, optional
+        Significance level for the statistical test.
+        Default: 0.05.
+    min_occurrences : int, optional
+        Minimal number of occurrences required for an assembly
         (all assemblies, even if significant, with fewer occurrences
         than min_occurrences are discarded).
-        Default : 0.
-    size_chunks : int
-        size (in bins) of chunks in which the spike trains is divided
+        Default: 1.
+    size_chunks : int, optional
+        Size (in bins) of chunks in which the spike trains are divided
         to compute the variance (to reduce non stationarity effects
-        on variance estimation)
-        Default : 100.
-    max_spikes : int
-        maximal assembly order (the algorithm will return assemblies of
-        composed by maximum max_spikes elements).
-        Default : numpy.inf
-    significance_pruning : bool
-        if True the method performs significance pruning among
-        the detected assemblies
-        Default: True
-    subgroup_pruning : bool
-        if True the method performs subgroup pruning among
-        the detected assemblies
-        Default: True
-    same_config_cut: bool
-        if True performs pruning (not present in the original code and more
+        on variance estimation).
+        Default: 100.
+    max_spikes : int, optional
+        Maximal assembly order (the algorithm will return assemblies
+        composed of maximum `max_spikes` elements).
+        Default: `np.inf`.
+    significance_pruning : bool, optional
+        If True, the method performs significance pruning among
+        the detected assemblies.
+        Default: True.
+    subgroup_pruning : bool, optional
+        If True, the method performs subgroup pruning among
+        the detected assemblies.
+        Default: True.
+    same_configuration_pruning : bool, optional
+        If True, performs pruning (not present in the original code and more
         efficient), not testing assemblies already formed
-        if they appear in the very same configuration
-        Default: False
-    bool_times_format: bool
-        if True the activation time series is a list of 0/1 elements, where
-        1 indicates the first spike of the pattern
+        if they appear in the very same configuration.
+        Default: False.
+    bool_times_format : bool, optional
+        If True, the activation time series is a list of 0/1 elements, where
+        1 indicates the first spike of the pattern.
         Otherwise, the activation times of the assemblies are indicated by the
         indices of the bins in which the first spike of the pattern
-        is happening
-        Default: False
-    verbose: bool
+        occurs.
+        Default: False.
+    verbose : bool, optional
         Regulates the number of prints given by the method. If True, all
         prints are given; otherwise, the method does not print anything.
-        Default: False
+        Default: False.
 
     Returns
     -------
-    assembly_bin : list
-        contains the assemblies detected for the binsize chosen
-        each assembly is a dictionary with attributes:
-        'neurons' : vector of units taking part to the assembly
-                    (unit order correspond to the agglomeration order)
-        'lag' : vector of time lags `lag[z]` is the activation delay between
-                `neurons[1]` and `neurons[z+1]`
-        'pvalue' : vector of pvalues. `pvalue[z]` is the p-value of the
-                   statistical test between performed adding
-                   `neurons[z+1]` to the `neurons[1:z]`
-        'times' : assembly activation time. It reports how many times the
-                  complete assembly activates in that bin.
-                  time always refers to the activation of the first listed
-                  assembly element (`neurons[1]`), that doesn't necessarily
-                  corresponds to the first unit firing.
-                  The format is  identified by the variable bool_times_format.
-        'signature' : array of two entries `(z,c)`. The first is the number of
-                      neurons participating in the assembly (size),
-                      the second is number of assembly occurrences.
+    assembly : list of dict
+        Contains the assemblies detected for the bin size chosen. Each
+        assembly is a dictionary with attributes:
+
+        'neurons' : list
+            Vector of units taking part in the assembly (the unit order
+            corresponds to the agglomeration order).
+        'lag' : list
+            Vector of time lags.
+            `lag[z]` is the activation delay between `neurons[1]` and
+            `neurons[z+1]`.
+        'pvalue' : list
+            Vector containing p-values.
+            `pvalue[z]` is the p-value of the statistical test performed when
+            adding `neurons[z+1]` to `neurons[1:z]`.
+        'times' : list
+            Assembly activation time. It reports how many times the
+            complete assembly activates in that bin. Time always refers to the
+            activation of the first listed assembly element (`neurons[1]`),
+            which doesn't necessarily correspond to the first unit firing.
+            The format is identified by the variable `bool_times_format`.
+        'signature' : list of list
+            Array of two entries `(z,c)`. The first is the number of neurons
+            participating in the assembly (size), and the second is the
+            number of assembly occurrences.
 
     Raises
     ------
     TypeError
-        if the data is not an elephant.conv.BinnedSpikeTrain object
+        If `binned_spiketrain` is not an instance of
+        `elephant.conv.BinnedSpikeTrain`.
     ValueError
-        if the parameters are out of bounds
+        If the parameters are out of bounds.
+
+    Notes
+    -----
+    Alias: cad
+
+    References
+    ----------
+    [1] Russo, E., & Durstewitz, D. (2017). Cell assemblies at multiple time
+    scales with arbitrary lag constellations. Elife, 6.
 
     Examples
-    -------
+    --------
     >>> import elephant.conversion as conv
     >>> import elephant.spike_train_generation
     >>> import quantities as pq
     >>> import numpy as np
     >>> import elephant.cell_assembly_detection as cad
+    ...
     >>> np.random.seed(30)
-    >>> # Generate correlated data and bin it with a binsize of 10ms
+    ...
+    >>> # Generate correlated data and bin it with a bin_size of 10ms
     >>> sts = elephant.spike_train_generation.cpp(
     >>>     rate=15*pq.Hz, A=[0]+[0.95]+[0]*4+[0.05], t_stop=10*pq.s)
-    >>> binsize = 10*pq.ms
-    >>> spM = conv.BinnedSpikeTrain(sts, binsize=binsize)
+    >>> bin_size = 10*pq.ms
+    >>> spM = conv.BinnedSpikeTrain(sts, bin_size=bin_size)
+    ...
     >>> # Call of the method
-    >>> patterns = cad.cell_assembly_detection(spM=spM, maxlag=2)[0]
-
-
-    References
-    ----------
-    [1] Russo, E., & Durstewitz, D. (2017).
-    Cell assemblies at multiple time scales with arbitrary lag constellations.
-    Elife, 6.
+    >>> patterns = cad.cell_assembly_detection(spM=spM, max_lag=2)[0]
 
     """
     initial_time = time.time()
 
     # check parameter input and raise errors if necessary
-    _raise_errors(data=data,
-                  maxlag=maxlag,
+    _raise_errors(binned_spiketrain=binned_spiketrain,
+                  max_lag=max_lag,
                   alpha=alpha,
-                  min_occ=min_occ,
+                  min_occurrences=min_occurrences,
                   size_chunks=size_chunks,
                   max_spikes=max_spikes)
 
     # transform the binned spiketrain into array
-    data = data.to_array()
+    binned_spiketrain = binned_spiketrain.to_array()
 
     # zero order
-    n_neurons = len(data)
+    n_neurons = len(binned_spiketrain)
 
     # initialize empty assembly
 
@@ -241,15 +266,15 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
         assembly_in[w1]['neurons'] = [w1]
         assembly_in[w1]['lags'] = []
         assembly_in[w1]['pvalue'] = []
-        assembly_in[w1]['times'] = data[w1]
-        assembly_in[w1]['signature'] = [[1, sum(data[w1])]]
+        assembly_in[w1]['times'] = binned_spiketrain[w1]
+        assembly_in[w1]['signature'] = [[1, sum(binned_spiketrain[w1])]]
 
     # first order = test over pairs
 
     # denominator of the Bonferroni correction
     # divide alpha by the number of tests performed in the first
     # pairwise testing loop
-    number_test_performed = n_neurons * (n_neurons - 1) * (2 * maxlag + 1)
+    number_test_performed = n_neurons * (n_neurons - 1) * (2 * max_lag + 1)
     alpha = alpha * 2 / float(number_test_performed)
     if verbose:
         print('actual significance_level', alpha)
@@ -271,20 +296,21 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
     # for loop for the pairwise testing
     for w1 in range(n_neurons - 1):
         for w2 in range(w1 + 1, n_neurons):
-            spiketrain2 = data[w2]
+            spiketrain2 = binned_spiketrain[w2]
             n2 = w2
             assembly_flag = 0
 
             # call of the function that does the pairwise testing
-            call_tp = _test_pair(ensemble=assembly_in[w1],
-                                 spiketrain2=spiketrain2,
-                                 n2=n2,
-                                 maxlag=maxlag,
-                                 size_chunks=size_chunks,
-                                 reference_lag=reference_lag,
-                                 existing_patterns=existing_patterns,
-                                 same_config_cut=same_config_cut)
-            if same_config_cut:
+            call_tp = _test_pair(
+                ensemble=assembly_in[w1],
+                spiketrain2=spiketrain2,
+                n2=n2,
+                max_lag=max_lag,
+                size_chunks=size_chunks,
+                reference_lag=reference_lag,
+                existing_patterns=existing_patterns,
+                same_configuration_pruning=same_configuration_pruning)
+            if same_configuration_pruning:
                 assem_tp = call_tp[0]
             else:
                 assem_tp = call_tp
@@ -292,13 +318,13 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
             # if the assembly given in output is significant and the number
             # of occurrences is higher than the minimum requested number
             if assem_tp['pvalue'][-1] < alpha and \
-                    assem_tp['signature'][-1][1] > min_occ:
+                    assem_tp['signature'][-1][1] > min_occurrences:
                 # save the assembly in the output
                 assembly.append(assem_tp)
                 sign_pairs_matrix[w1][w2] = 1
                 assembly_flag = 1  # flag : it is indeed an assembly
                 # put the item_candidate into the existing_patterns list
-                if same_config_cut:
+                if same_configuration_pruning:
                     item_candidate = call_tp[1]
                     if not existing_patterns:
                         existing_patterns = [item_candidate]
@@ -359,25 +385,26 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
         if w2_to_test:
 
             # bonferroni correction only for the tests actually performed
-            alpha = alpha / float(len(w2_to_test) * n_as * (2 * maxlag + 1))
+            alpha = alpha / float(len(w2_to_test) * n_as * (2 * max_lag + 1))
 
             # testing for the element in w2_to_test
             for ww2 in range(len(w2_to_test)):
                 w2 = w2_to_test[ww2]
-                spiketrain2 = data[w2]
+                spiketrain2 = binned_spiketrain[w2]
                 assembly_flag = 0
                 pop_flag = max(assembly_flag, 0)
                 # testing for the assembly and the new neuron
 
-                call_tp = _test_pair(ensemble=assembly[w1],
-                                     spiketrain2=spiketrain2,
-                                     n2=w2,
-                                     maxlag=maxlag,
-                                     size_chunks=size_chunks,
-                                     reference_lag=reference_lag,
-                                     existing_patterns=existing_patterns,
-                                     same_config_cut=same_config_cut)
-                if same_config_cut:
+                call_tp = _test_pair(
+                    ensemble=assembly[w1],
+                    spiketrain2=spiketrain2,
+                    n2=w2,
+                    max_lag=max_lag,
+                    size_chunks=size_chunks,
+                    reference_lag=reference_lag,
+                    existing_patterns=existing_patterns,
+                    same_configuration_pruning=same_configuration_pruning)
+                if same_configuration_pruning:
                     assem_tp = call_tp[0]
                 else:
                     assem_tp = call_tp
@@ -386,7 +413,7 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
                 # the number of occurrences is sufficient and
                 # the length of the assembly is less than the input limit
                 if assem_tp['pvalue'][-1] < alpha and \
-                        assem_tp['signature'][-1][1] > min_occ and \
+                        assem_tp['signature'][-1][1] > min_occurrences and \
                         assem_tp['signature'][-1][0] <= max_spikes:
                     # the assembly is saved in the output list of
                     # assemblies
@@ -405,7 +432,7 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
                             assembly, n_filtered_assemblies = \
                                 _significance_pruning_step(
                                     pre_pruning_assembly=assembly)
-                    if same_config_cut:
+                    if same_configuration_pruning:
                         item_candidate = call_tp[1]
                         existing_patterns.append(item_candidate)
                 if assembly_flag:
@@ -457,7 +484,7 @@ def cell_assembly_detection(data, maxlag, reference_lag=2, alpha=0.05,
     return assembly
 
 
-def _chunking(binned_pair, size_chunks, maxlag, best_lag):
+def _chunking(binned_pair, size_chunks, max_lag, best_lag):
     """
     Chunking the object binned_pair into parts with the same bin length
 
@@ -467,8 +494,8 @@ def _chunking(binned_pair, size_chunks, maxlag, best_lag):
         vector of the binned spike trains for the pair being analyzed
     size_chunks : int
         size of chunks desired
-    maxlag : int
-        max number of lags for the binsize chosen
+    max_lag : int
+        max number of lags for the bin_size chosen
     best_lag : int
         lag with the highest number of coincidences
 
@@ -483,10 +510,10 @@ def _chunking(binned_pair, size_chunks, maxlag, best_lag):
     length = len(binned_pair[0], )
 
     # number of chunks
-    n_chunks = math.ceil((length - maxlag) / size_chunks)
+    n_chunks = math.ceil((length - max_lag) / size_chunks)
 
     # new chunk size, this is to have all chunks of roughly the same size
-    size_chunks = math.floor((length - maxlag) / n_chunks)
+    size_chunks = math.floor((length - max_lag) / n_chunks)
 
     n_chunks = np.int(n_chunks)
     size_chunks = np.int(size_chunks)
@@ -495,21 +522,21 @@ def _chunking(binned_pair, size_chunks, maxlag, best_lag):
 
     # cut the time series according to best_lag
 
-    binned_pair_cut = np.array([np.zeros(length - maxlag, dtype=np.int),
-                                np.zeros(length - maxlag, dtype=np.int)])
+    binned_pair_cut = np.array([np.zeros(length - max_lag, dtype=np.int),
+                                np.zeros(length - max_lag, dtype=np.int)])
 
     # choose which entries to consider according to the best lag chosen
     if best_lag == 0:
-        binned_pair_cut[0] = binned_pair[0][0:length - maxlag]
-        binned_pair_cut[1] = binned_pair[1][0:length - maxlag]
+        binned_pair_cut[0] = binned_pair[0][0:length - max_lag]
+        binned_pair_cut[1] = binned_pair[1][0:length - max_lag]
     elif best_lag > 0:
-        binned_pair_cut[0] = binned_pair[0][0:length - maxlag]
+        binned_pair_cut[0] = binned_pair[0][0:length - max_lag]
         binned_pair_cut[1] = binned_pair[1][
-                             best_lag:length - maxlag + best_lag]
+                             best_lag:length - max_lag + best_lag]
     else:
         binned_pair_cut[0] = binned_pair[0][
-                             -best_lag:length - maxlag - best_lag]
-        binned_pair_cut[1] = binned_pair[1][0:length - maxlag]
+                             -best_lag:length - max_lag - best_lag]
+        binned_pair_cut[1] = binned_pair[1][0:length - max_lag]
 
     # put the cut data into the chunked object
     for iii in range(n_chunks - 1):
@@ -527,7 +554,7 @@ def _chunking(binned_pair, size_chunks, maxlag, best_lag):
     return chunked, n_chunks
 
 
-def _assert_same_pattern(item_candidate, existing_patterns, maxlag):
+def _assert_same_pattern(item_candidate, existing_patterns, max_lag):
     """
     Tests if a particular pattern has already been tested and retrieved as
     significant.
@@ -539,7 +566,7 @@ def _assert_same_pattern(item_candidate, existing_patterns, maxlag):
         in the second there are the corresponding lags
     existing_patterns: list
         list of the already significant patterns
-    maxlag: int
+    max_lag: int
         maximum lag to be tested
 
     Returns
@@ -549,16 +576,16 @@ def _assert_same_pattern(item_candidate, existing_patterns, maxlag):
     """
     # unique representation of pattern in terms of lags, maxlag and neurons
     # participating
-    item_candidate = sorted(item_candidate[0] * 2 * maxlag +
-                            item_candidate[1] + maxlag)
+    item_candidate = sorted(item_candidate[0] * 2 * max_lag +
+                            item_candidate[1] + max_lag)
     if item_candidate in existing_patterns:
         return True
     else:
         return False
 
 
-def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
-               existing_patterns, same_config_cut):
+def _test_pair(ensemble, spiketrain2, n2, max_lag, size_chunks, reference_lag,
+               existing_patterns, same_configuration_pruning):
     """
     Tests if two spike trains have repetitive patterns occurring more
     frequently than chance.
@@ -572,7 +599,7 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
         (candidate to be a new assembly member)
     n2 : int
         new unit tested
-    maxlag : int
+    max_lag : int
         maximum lag to be tested
     size_chunks : int
         size (in bins) of chunks in which the spike trains is divided
@@ -582,7 +609,7 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
         lag of reference; if zero or negative reference lag=-l
     existing_patterns: list
         list of the already significant patterns
-    same_config_cut: bool
+    same_configuration_pruning: bool
         if True (not present in the original code and more
         efficient), does not test assemblies already formed
         if they appear in the very same configuration
@@ -615,10 +642,10 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
     # list with the binned spike trains of the two neurons
     binned_pair = [ensemble['times'], spiketrain2]
 
-    # For large binsizes, the binned spike counts may potentially fluctuate
+    # For large bin_sizes, the binned spike counts may potentially fluctuate
     # around a high mean level and never fall below some minimum count
     # considerably larger than zero for the whole time series.
-    # Entries up to this minimum count would contribute 
+    # Entries up to this minimum count would contribute
     # to the coincidence count although they are completely
     # uninformative, so we subtract the minima.
 
@@ -649,23 +676,25 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
     # we select the one corresponding to the highest count
 
     # structure with the coincidence counts for each lag
-    fwd_coinc_count = np.array([0 for _ in range(maxlag + 1)])
-    bwd_coinc_count = np.array([0 for _ in range(maxlag + 1)])
+    fwd_coinc_count = np.array([0 for _ in range(max_lag + 1)])
+    bwd_coinc_count = np.array([0 for _ in range(max_lag + 1)])
 
-    for l in range(maxlag + 1):
+    for lag in range(max_lag + 1):
         time_fwd_cc = np.array([binned_pair[0][
-                                0:len(binned_pair[0]) - maxlag],
+                                0:len(binned_pair[0]) - max_lag],
                                 binned_pair[1][
-                                l:len(binned_pair[1]) - maxlag + l]])
+                                lag:len(binned_pair[1]) - max_lag + lag]])
 
         time_bwd_cc = np.array([binned_pair[0][
-                                l:len(binned_pair[0]) - maxlag + l],
+                                lag:len(binned_pair[0]) - max_lag + lag],
                                 binned_pair[1][
-                                0:len(binned_pair[1]) - maxlag]])
+                                0:len(binned_pair[1]) - max_lag]])
 
         # taking the minimum, place by place for the coincidences
-        fwd_coinc_count[l] = np.sum(np.minimum(time_fwd_cc[0], time_fwd_cc[1]))
-        bwd_coinc_count[l] = np.sum(np.minimum(time_bwd_cc[0], time_bwd_cc[1]))
+        fwd_coinc_count[lag] = np.sum(np.minimum(time_fwd_cc[0],
+                                                 time_fwd_cc[1]))
+        bwd_coinc_count[lag] = np.sum(np.minimum(time_bwd_cc[0],
+                                                 time_bwd_cc[1]))
 
     # choice of the best lag, taking into account the reference lag
     if reference_lag <= 0:
@@ -685,7 +714,7 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
         # reverse the ctAB_ object and not take into account the first entry
         bwd_coinc_count_rev = bwd_coinc_count[1:len(bwd_coinc_count)][::-1]
         hab_l = np.append(bwd_coinc_count_rev, fwd_coinc_count)
-        lags = range(-maxlag, maxlag + 1)
+        lags = range(-max_lag, max_lag + 1)
         max_coinc_count = np.amax(hab_l)
         best_lag = lags[np.argmax(hab_l)]
         if best_lag < 0:
@@ -715,26 +744,25 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
     lags_candidate = list(lags_candidate)
     item_candidate = [[pattern_candidate], [lags_candidate]]
 
-    if same_config_cut:
+    if same_configuration_pruning:
         if _assert_same_pattern(item_candidate=item_candidate,
                                 existing_patterns=existing_patterns,
-                                maxlag=maxlag):
-                en_neurons = copy.copy(ensemble['neurons'])
-                en_neurons.append(n2)
-                en_lags = copy.copy(ensemble['lags'])
-                en_lags.append(np.inf)
-                en_pvalue = copy.copy(ensemble['pvalue'])
-                en_pvalue.append(1)
-                en_n_occ = copy.copy(ensemble['signature'])
-                en_n_occ.append([0, 0])
-                item_candidate = []
-                assembly = {'neurons': en_neurons,
-                            'lags': en_lags,
-                            'pvalue': en_pvalue,
-                            'times': [],
-                            'signature': en_n_occ}
-                return assembly, item_candidate
-
+                                max_lag=max_lag):
+            en_neurons = copy.copy(ensemble['neurons'])
+            en_neurons.append(n2)
+            en_lags = copy.copy(ensemble['lags'])
+            en_lags.append(np.inf)
+            en_pvalue = copy.copy(ensemble['pvalue'])
+            en_pvalue.append(1)
+            en_n_occ = copy.copy(ensemble['signature'])
+            en_n_occ.append([0, 0])
+            item_candidate = []
+            assembly = {'neurons': en_neurons,
+                        'lags': en_lags,
+                        'pvalue': en_pvalue,
+                        'times': [],
+                        'signature': en_n_occ}
+            return assembly, item_candidate
     else:
         # I go on with the testing
 
@@ -757,7 +785,7 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
                         'pvalue': en_pvalue,
                         'times': [],
                         'signature': en_n_occ}
-            if same_config_cut:
+            if same_configuration_pruning:
                 item_candidate = []
                 return assembly, item_candidate
             else:
@@ -844,7 +872,7 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
 
         chunked, nch = _chunking(binned_pair=binned_pair,
                                  size_chunks=size_chunks,
-                                 maxlag=maxlag,
+                                 max_lag=max_lag,
                                  best_lag=best_lag)
 
         marginal_counts = np.zeros((nch, maxrate, 2), dtype=np.int)
@@ -889,7 +917,7 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
 
         # calculation of variance for each chunk
 
-        n = ntp - maxlag  # used in the calculation of the p-value
+        n = ntp - max_lag  # used in the calculation of the p-value
         var_x = [np.zeros((2, 2)) for _ in range(nch)]
         var_tot = 0
         cov_abab = [0 for _ in range(nch)]
@@ -999,7 +1027,7 @@ def _test_pair(ensemble, spiketrain2, n2, maxlag, size_chunks, reference_lag,
                     'pvalue': en_pvalue,
                     'times': activation_series,
                     'signature': en_n_occ}
-        if same_config_cut:
+        if same_configuration_pruning:
             return assembly, item_candidate
         else:
             return assembly
@@ -1010,9 +1038,9 @@ def _significance_pruning_step(pre_pruning_assembly):
     Between two assemblies with the same unit set arranged into different
     configurations the most significant one is chosen.
 
-    Parameters:
+    Parameters
     ----------
-    assembly : list
+    pre_pruning_assembly : list
         contains the whole set of significant assemblies (unfiltered)
 
     Returns
@@ -1110,21 +1138,22 @@ def _subgroup_pruning_step(pre_pruning_assembly):
     return assembly
 
 
-def _raise_errors(data, maxlag, alpha, min_occ, size_chunks, max_spikes):
+def _raise_errors(binned_spiketrain, max_lag, alpha, min_occurrences,
+                  size_chunks, max_spikes):
     """
     Returns errors if the parameters given in input are not correct.
 
     Parameters
     ----------
-    data : BinnedSpikeTrain object
+    binned_spiketrain : BinnedSpikeTrain object
         binned spike trains containing data to be analysed
-    maxlag: int
-        maximal lag to be tested. For a binning dimension of binsize the
+    max_lag: int
+        maximal lag to be tested. For a binning dimension of bin_size the
         method will test all pair configurations with a time
-        shift between -maxlag and maxlag
+        shift between -max_lag and max_lag
     alpha : float
         alpha level.
-    min_occ : int
+    min_occurrences : int
         minimal number of occurrences required for an assembly
         (all assemblies, even if significant, with fewer occurrences
         than min_occurrences are discarded).
@@ -1145,23 +1174,23 @@ def _raise_errors(data, maxlag, alpha, min_occ, size_chunks, max_spikes):
         if the significance level is not in [0,1]
         if the minimal number of occurrences for an assembly is less than 1
         if the length of the chunks for the variance computation is 1 or less
-        if the maximal assembly order is not between 2 
+        if the maximal assembly order is not between 2
         and the number of neurons
         if the time series is too short (less than 100 bins)
 
     """
 
-    if not isinstance(data, conv.BinnedSpikeTrain):
+    if not isinstance(binned_spiketrain, conv.BinnedSpikeTrain):
         raise TypeError(
             'data must be in BinnedSpikeTrain format')
 
-    if maxlag < 2:
-        raise ValueError('maxlag value cant be less than 2')
+    if max_lag < 2:
+        raise ValueError('max_lag value cannot be less than 2')
 
     if alpha < 0 or alpha > 1:
         raise ValueError('significance level has to be in interval [0,1]')
 
-    if min_occ < 1:
+    if min_occurrences < 1:
         raise ValueError('minimal number of occurrences for an assembly '
                          'must be at least 1')
 
@@ -1171,7 +1200,7 @@ def _raise_errors(data, maxlag, alpha, min_occ, size_chunks, max_spikes):
     if max_spikes < 2:
         raise ValueError('maximal assembly order must be at least 2')
 
-    if data.matrix_columns - maxlag < 100:
+    if binned_spiketrain.matrix_columns - max_lag < 100:
         raise ValueError('The time series is too short, consider '
                          'taking a longer portion of spike train '
                          'or diminish the bin size to be tested')

+ 163 - 154
code/elephant/elephant/change_point_detection.py

@@ -1,30 +1,28 @@
 # -*- coding: utf-8 -*-
 
 """
-This algorithm determines if a spike train `spk` can be considered as stationary
-process (constant firing rate) or not as stationary process (i.e. presence of
-one or more points at which the rate increases or decreases). In case of
-non-stationarity, the output is a list of detected Change Points (CPs).
+This algorithm determines whether a spike train `spk` can be considered a
+stationary process (constant firing rate) or a non-stationary process (i.e.
+there are one or more points at which the rate increases or decreases). In
+case of non-stationarity, the output is a list of detected Change Points (CPs).
 Essentially, a set of two-sided windows of width `h` (`_filter(t, h, spk)`)
 slides over the spike train within the time `[h, t_final-h]`. This generates a
-`_filter_process(dt, h, spk)` that assigns at each time `t` the difference 
-between a spike lying in the right and left window. If at any time `t` this 
-difference is large 'enough' is assumed the presence of a rate Change Point in 
-a neighborhood of `t`. A threshold `test_quantile` for the maximum of 
-the filter_process (max difference of spike count between the left and right 
-window) is derived based on asymptotic considerations. The procedure is repeated 
-for an arbitrary set of windows, with different size `h`.
-
+`_filter_process(time_step, h, spk)` that assigns to each time `t` the
+difference between the number of spikes in the right and in the left window.
+If at any time `t` this difference is large enough, the presence of a rate
+Change Point in a neighborhood of `t` is assumed. A threshold `test_quantile`
+for the maximum of the filter_process (the maximal difference of spike counts
+between the left and right windows) is derived based on asymptotic
+considerations. The procedure is repeated for an arbitrary set of windows
+with different sizes `h`.
 
 Examples
 --------
+The following applies `multiple_filter_test` to a spike train.
+The following applies multiple_filter_test to a spike trains.
 
     >>> import quantities as pq
     >>> import neo
     >>> from elephant.change_point_detection import multiple_filter_test
-
-    
+    ...
     >>> test_array = [1.1,1.2,1.4,   1.6,1.7,1.75,1.8,1.85,1.9,1.95]
     >>> st = neo.SpikeTrain(test_array, units='s', t_stop = 2.1)
     >>> window_size = [0.5]*pq.s
@@ -32,16 +30,14 @@ The following applies multiple_filter_test to a spike trains.
     >>> alpha = 5.0
     >>> num_surrogates = 10000
     >>> change_points = multiple_filter_test(window_size, st, t_fin, alpha,
-                        num_surrogates, dt = 0.5*pq.s)
-
-
+    ...                 num_surrogates, time_step=0.5 * pq.s)
 
 References
 ----------
-Messer, M., Kirchner, M., Schiemann, J., Roeper, J., Neininger, R., & Schneider,
-G. (2014). A multiple filter test for the detection of rate changes in renewal
-processes with varying variance. The Annals of Applied Statistics, 8(4),2027-2067.
-
+Messer, M., Kirchner, M., Schiemann, J., Roeper, J., Neininger, R., &
+Schneider, G. (2014). A multiple filter test for the detection of rate changes
+in renewal processes with varying variance. The Annals of Applied Statistics,
+8(4),2027-2067.
 
 Original code
 -------------
@@ -50,74 +46,84 @@ DOI: 10.1214/14-AOAS782SUPP;.r
 
 """
 
+from __future__ import division, print_function, unicode_literals
+
 import numpy as np
 import quantities as pq
 
+from elephant.utils import deprecated_alias
 
-def multiple_filter_test(window_sizes, spiketrain, t_final, alpha, n_surrogates,
-                         test_quantile=None, test_param=None, dt=None):
+__all__ = [
+    "multiple_filter_test",
+    "empirical_parameters"
+]
+
+
+@deprecated_alias(dt='time_step')
+def multiple_filter_test(window_sizes, spiketrain, t_final, alpha,
+                         n_surrogates, test_quantile=None, test_param=None,
+                         time_step=None):
     """
     Detects change points.
 
-    This function returns the detected change points, that correspond to the 
-    maxima of the `_filter_processes`. These are the processes generated by 
-    sliding the windows of step `dt`; at each step the difference between spike
-    on the right and left window is calculated.
+    This function returns the detected change points, which correspond to the
+    maxima of the `_filter_processes`. These are the processes generated by
+    sliding the windows in steps of `time_step`; at each step the difference
+    between the spike counts in the right and the left window is calculated.
 
     Parameters
     ----------
-        window_sizes : list of quantity objects
-                    list that contains windows sizes
-        spiketrain : neo.SpikeTrain, numpy array or list
-            spiketrain objects to analyze
-        t_final : quantity
-            final time of the spike train which is to be analysed
-        alpha : float
-            alpha-quantile in range [0, 100] for the set of maxima of the limit
-            processes
-        n_surrogates : integer
-            numbers of simulated limit processes
-        test_quantile : float
-            threshold for the maxima of the filter derivative processes, if any 
-            of these maxima is larger than this value, it is assumed the 
-            presence of a cp at the time corresponding to that maximum
-        dt : quantity
-          resolution, time step at which the windows are slided
-        test_param : np.array of shape (3, num of window),
-            first row: list of `h`, second and third rows: empirical means and
-            variances of the limit process correspodning to `h`. This will be 
-            used to normalize the `filter_process` in order to give to the every
-            maximum the same impact on the global statistic.
-           
-
-    Returns:
-    --------
-        cps : list of lists
-           one list for each window size `h`, containing the points detected with 
-           the corresponding `filter_process`. N.B.: only cps whose h-neighborhood 
-           does not include previously detected cps (with smaller window h) are
-           added to the list.
+    window_sizes : list of quantity objects
+        list that contains window sizes
+    spiketrain : neo.SpikeTrain, numpy array or list
+        spiketrain objects to analyze
+    t_final : quantity
+        final time of the spike train which is to be analysed
+    alpha : float
+        alpha-quantile in range [0, 100] for the set of maxima of the limit
+        processes
+    n_surrogates : integer
+        number of simulated limit processes
+    test_quantile : float
+        threshold for the maxima of the filter derivative processes; if any
+        of these maxima is larger than this value, the presence of a CP at
+        the time corresponding to that maximum is assumed
+    time_step : quantity
+        resolution, the time step at which the windows are slid
+    test_param : np.array of shape (3, number of windows)
+        first row: list of `h`; second and third rows: empirical means and
+        variances of the limit process corresponding to `h`. This will be
+        used to normalize the `filter_process` in order to give every
+        maximum the same impact on the global statistic.
+
+    Returns
+    -------
+    cps : list of lists
+        one list for each window size `h`, containing the change points
+        detected with the corresponding `filter_process`. N.B.: only CPs
+        whose h-neighbourhood does not include previously detected CPs
+        (with smaller window h) are added to the list.
     """
 
     if (test_quantile is None) and (test_param is None):
         test_quantile, test_param = empirical_parameters(window_sizes, t_final,
                                                          alpha, n_surrogates,
-                                                         dt)
+                                                         time_step)
     elif test_quantile is None:
         test_quantile = empirical_parameters(window_sizes, t_final, alpha,
-                                             n_surrogates, dt)[0]
+                                             n_surrogates, time_step)[0]
     elif test_param is None:
         test_param = empirical_parameters(window_sizes, t_final, alpha,
-                                          n_surrogates, dt)[1]
-                                          
+                                          n_surrogates, time_step)[1]
+
     spk = spiketrain
-    
+
     #  List of lists of detected change points (CPs), to be returned
-    cps = []  
-    
+    cps = []
+
     for i, h in enumerate(window_sizes):
-        # automatic setting of dt
-        dt_temp = h / 20 if dt is None else dt
+        # automatic setting of time_step
+        dt_temp = h / 20 if time_step is None else time_step
         # filter_process for window of size h
         t, differences = _filter_process(dt_temp, h, spk, t_final, test_param)
         time_index = np.arange(len(differences))
@@ -126,14 +132,13 @@ def multiple_filter_test(window_sizes, spiketrain, t_final, alpha, n_surrogates,
         while np.max(differences) > test_quantile:
             cp_index = np.argmax(differences)
             # from index to time
-            cp = cp_index * dt_temp + h  
-            #print("detected point {0}".format(cp), "with filter {0}".format(h))
+            cp = cp_index * dt_temp + h
             # before repeating the procedure, the h-neighbourhood of detected CP
-            # are discarded, because rate changes into it are alrady explained 
+            # is discarded, because rate changes within it are already explained
             mask_fore = time_index > cp_index - int((h / dt_temp).simplified)
             mask_back = time_index < cp_index + int((h / dt_temp).simplified)
             differences[mask_fore & mask_back] = 0
-            # check if the neighbourhood of detected cp does not contain cps 
+            # check if the neighbourhood of detected cp does not contain cps
             # detected with other windows
             neighbourhood_free = True
             # iterate on lists of cps detected with smaller window
@@ -154,23 +159,24 @@ def multiple_filter_test(window_sizes, spiketrain, t_final, alpha, n_surrogates,
     return cps
 
 
-def _brownian_motion(t_in, t_fin, x_in, dt):
+def _brownian_motion(t_in, t_fin, x_in, time_step):
     """
     Generate a Brownian Motion.
 
     Parameters
     ----------
-        t_in : quantities,
-            initial time
-        t_fin : quantities,
-             final time
-        x_in : float,
-            initial point of the process: _brownian_motio(0) = x_in
-        dt : quantities,
-          resolution, time step at which brownian increments are summed
+    t_in : quantities,
+        initial time
+    t_fin : quantities,
+        final time
+    x_in : float,
+        initial point of the process: _brownian_motion(0) = x_in
+    time_step : quantities,
+        resolution, the time step at which Brownian increments are summed
     Returns
     -------
-    Brownian motion on [t_in, t_fin], with resolution dt and initial state x_in
+    Brownian motion on [t_in, t_fin], with resolution time_step and initial
+    state x_in
     """
 
     u = 1 * pq.s
@@ -183,17 +189,17 @@ def _brownian_motion(t_in, t_fin, x_in, dt):
     except ValueError:
         raise ValueError("t_fin must be a time quantity")
     try:
-        dt_sec = dt.rescale(u).magnitude
+        dt_sec = time_step.rescale(u).magnitude
     except ValueError:
         raise ValueError("dt must be a time quantity")
 
-    x = np.random.normal(0, np.sqrt(dt_sec), size=int((t_fin_sec - t_in_sec) 
-                                                                     / dt_sec))
+    x = np.random.normal(0, np.sqrt(dt_sec),
+                         size=int((t_fin_sec - t_in_sec) / dt_sec))
     s = np.cumsum(x)
     return s + x_in
 
 
-def _limit_processes(window_sizes, t_final, dt):
+def _limit_processes(window_sizes, t_final, time_step):
     """
     Generate the limit processes (depending only on t_final and h), one for
     each window size `h` in H. The distribution of maxima of these processes
@@ -205,14 +211,14 @@ def _limit_processes(window_sizes, t_final, dt):
             set of window sizes
         t_final : quantity object
             end of limit process
-        dt : quantity object
+        time_step : quantity object
             resolution, the time step at which the windows are slid
 
     Returns
     -------
         limit_processes : list of numpy array
             each entry contains the limit process for each h,
-            evaluated in [h,T-h] with steps dt
+            evaluated in [h,T-h] with steps time_step
     """
 
     limit_processes = []
@@ -223,20 +229,20 @@ def _limit_processes(window_sizes, t_final, dt):
     except ValueError:
         raise ValueError("window_sizes must be a list of times")
     try:
-        dt_sec = dt.rescale(u).magnitude
+        dt_sec = time_step.rescale(u).magnitude
     except ValueError:
-        raise ValueError("dt must be a time quantity")
-    
-    w = _brownian_motion(0 * u, t_final, 0, dt)
-    
+        raise ValueError("time_step must be a time quantity")
+
+    w = _brownian_motion(0 * u, t_final, 0, time_step)
+
     for h in window_sizes_sec:
         # BM on [h,T-h], shifted in time t-->t+h
-        brownian_right = w[int(2 * h/dt_sec):]
-        # BM on [h,T-h], shifted in time t-->t-h                     
-        brownian_left = w[:int(-2 * h/dt_sec)]
-        # BM on [h,T-h]                       
-        brownian_center = w[int(h/dt_sec):int(-h/dt_sec)]  
-        
+        brownian_right = w[int(2 * h / dt_sec):]
+        # BM on [h,T-h], shifted in time t-->t-h
+        brownian_left = w[:int(-2 * h / dt_sec)]
+        # BM on [h,T-h]
+        brownian_center = w[int(h / dt_sec):int(-h / dt_sec)]
+
         modul = np.abs(brownian_right + brownian_left - 2 * brownian_center)
         limit_process_h = modul / (np.sqrt(2 * h))
         limit_processes.append(limit_process_h)
@@ -244,13 +250,15 @@ def _limit_processes(window_sizes, t_final, dt):
     return limit_processes
 
 
-def empirical_parameters(window_sizes, t_final, alpha, n_surrogates, dt = None):
+@deprecated_alias(dt='time_step')
+def empirical_parameters(window_sizes, t_final, alpha, n_surrogates,
+                         time_step=None):
     """
     This function generates the threshold and the null parameters.
-    The`_filter_process_h` has been proved to converge (for t_fin, h-->infinity)
-    to a continuous functional of a Brownaian motion ('limit_process').
-    Using a MonteCarlo technique, maxima of these limit_processes are
-    collected.
+    The `_filter_process_h` has been proved to converge (for t_fin,
+    h-->infinity) to a continuous functional of a Brownian motion
+    ('limit_process'). Using a Monte Carlo technique, maxima of
+    these limit_processes are collected.
 
     The threshold is defined as the alpha quantile of this set of maxima.
     Namely:
@@ -259,29 +267,29 @@ def empirical_parameters(window_sizes, t_final, alpha, n_surrogates, dt = None):
 
     Parameters
     ----------
-        window_sizes : list of quantity objects
-            set of windows' size
-        t_final : quantity object
-            final time of the spike
-        alpha : float
-            alpha-quantile in range [0, 100]
-        n_surrogates : integer
-            numbers of simulated limit processes
-        dt : quantity object
-            resolution, time step at which the windows are slided
+    window_sizes : list of quantity objects
+        set of window sizes
+    t_final : quantity object
+        final time of the spike train
+    alpha : float
+        alpha-quantile in range [0, 100]
+    n_surrogates : integer
+        number of simulated limit processes
+    time_step : quantity object
+        resolution, the time step at which the windows are slid
 
     Returns
     -------
-        test_quantile : float
-            threshold for the maxima of the filter derivative processes, if any 
-            of these maxima is larger than this value, it is assumed the 
-            presence of a cp at the time corresponding to that maximum
-            
-        test_param : np.array 3 * num of window,
-            first row: list of `h`, second and third rows: empirical means and
-            variances of the limit process correspodning to `h`. This will be 
-            used to normalize the `filter_process` in order to give to the every
-            maximum the same impact on the global statistic.
+    test_quantile : float
+        threshold for the maxima of the filter derivative processes; if any
+        of these maxima is larger than this value, the presence of a CP at
+        the time corresponding to that maximum is assumed
+
+    test_param : np.array of shape (3, number of windows)
+        first row: list of `h`; second and third rows: empirical means and
+        variances of the limit process corresponding to `h`. This will be
+        used to normalize the `filter_process` in order to give every
+        maximum the same impact on the global statistic.
     """
 
     # try:
@@ -301,8 +309,8 @@ def empirical_parameters(window_sizes, t_final, alpha, n_surrogates, dt = None):
         raise ValueError("t_final must be a time quantity")
     if not isinstance(n_surrogates, int):
         raise TypeError("n_surrogates must be an integer")
-    if not (isinstance(dt, pq.Quantity) or (dt is None)):
-        raise ValueError("dt must be a time quantity")
+    if not (isinstance(time_step, pq.Quantity) or (time_step is None)):
+        raise ValueError("time_step must be a time quantity")
 
     if t_final <= 0:
         raise ValueError("t_final needs to be strictly positive")
@@ -312,11 +320,11 @@ def empirical_parameters(window_sizes, t_final, alpha, n_surrogates, dt = None):
         raise ValueError("window size needs to be strictly positive")
     if np.max(window_sizes) >= t_final / 2:
         raise ValueError("window size too large")
-    if dt is not None:
+    if time_step is not None:
         for h in window_sizes:
-            if int(h.rescale('us')) % int(dt.rescale('us')) != 0:
+            if int(h.rescale('us')) % int(time_step.rescale('us')) != 0:
                 raise ValueError(
-                    "Every window size h must be a multiple of dt")
+                    "Every window size h must be a multiple of time_step")
 
     # Generate a matrix M*: n X m where n = n_surrogates is the number of
     # simulated limit processes and m is the number of chosen window sizes.
@@ -326,12 +334,13 @@ def empirical_parameters(window_sizes, t_final, alpha, n_surrogates, dt = None):
 
     for i in range(n_surrogates):
         # mh_star = []
-        simu = _limit_processes(window_sizes, t_final, dt)
+        simu = _limit_processes(window_sizes, t_final, time_step)
         # for i, h in enumerate(window_sizes_mag):
         #     # max over time of the limit process generated with window h
         #     m_h = np.max(simu[i])
         #     mh_star.append(m_h)
-        mh_star = [np.max(x) for x in simu]  # max over time of the limit process generated with window h
+        # max over time of the limit process generated with window h
+        mh_star = [np.max(x) for x in simu]
         maxima_matrix.append(mh_star)
 
     maxima_matrix = np.asanyarray(maxima_matrix)
@@ -353,41 +362,41 @@ def empirical_parameters(window_sizes, t_final, alpha, n_surrogates, dt = None):
     return test_quantile, test_param
 
 
-def _filter(t, h, spk):
+def _filter(t_center, window, spiketrain):
     """
-    This function calculates the difference of spike counts in the left and right
-    side of a window of size h centered in t and normalized by its variance.
-    The variance of this count can be expressed as a combination of mean and var
-    of the I.S.I. lying inside the window.
+    This function calculates the difference of spike counts in the left and
+    right half of a window of size `h` centered at `t`, normalized by its
+    variance. The variance of this count can be expressed as a combination of
+    the mean and variance of the ISIs lying inside the window.
 
     Parameters
     ----------
-        h : quantity
-            window's size
-        t : quantity
-            time on which the window is centered
-        spk : list, numpy array or SpikeTrain
-            spike train to analyze
+    t_center : quantity
+        time on which the window is centered
+    window : quantity
+        window's size
+    spiketrain : list, numpy array or SpikeTrain
+        spike train to analyze
 
     Returns
     -------
-        difference : float,
-          difference of spike count normalized by its variance
+    difference : float,
+        difference of spike count normalized by its variance
     """
 
     u = 1 * pq.s
     try:
-        t_sec = t.rescale(u).magnitude
+        t_sec = t_center.rescale(u).magnitude
     except AttributeError:
         raise ValueError("t must be a quantities object")
     # tm = t_sec.magnitude
     try:
-        h_sec = h.rescale(u).magnitude
+        h_sec = window.rescale(u).magnitude
     except AttributeError:
         raise ValueError("h must be a time quantity")
     # hm = h_sec.magnitude
     try:
-        spk_sec = spk.rescale(u).magnitude
+        spk_sec = spiketrain.rescale(u).magnitude
     except AttributeError:
         raise ValueError(
             "spiketrain must be a list (array) of times or a neo spiketrain")
@@ -401,7 +410,7 @@ def _filter(t, h, spk):
     # spike count in the left side
     count_left = train_left.size
     # from spikes to ISIs
-    isi_right = np.diff(train_right)  
+    isi_right = np.diff(train_right)
     isi_left = np.diff(train_left)
 
     if isi_right.size == 0:
@@ -433,11 +442,11 @@ def _filter(t, h, spk):
     return difference
 
 
-def _filter_process(dt, h, spk, t_final, test_param):
+def _filter_process(time_step, h, spk, t_final, test_param):
     """
     Given a spike train `spk` and a window size `h`, this function generates
     the `filter derivative process` by evaluating the function `_filter`
-    in steps of `dt`.
+    in steps of `time_step`.
 
     Parameters
     ----------
@@ -447,7 +456,7 @@ def _filter_process(dt, h, spk, t_final, test_param):
            time on which the window is centered
         spk : list, array or SpikeTrain
            spike train to analyze
-        dt : quantity object, time step at which the windows are slided
+        time_step : quantity object, time step at which the windows are slid
           resolution
         test_param : np.array of shape (3, number of windows);
                     first row: list of `h`, second row: empirical means,
                     and third row: variances of
@@ -473,9 +482,9 @@ def _filter_process(dt, h, spk, t_final, test_param):
     except AttributeError:
         raise ValueError("t_final must be a time quanity")
     try:
-        dt_sec = dt.rescale(u).magnitude
+        dt_sec = time_step.rescale(u).magnitude
     except AttributeError:
-        raise ValueError("dt must be a time quantity")
+        raise ValueError("time_step must be a time quantity")
     # domain of the process
     time_domain = np.arange(h_sec, t_final_sec - h_sec, dt_sec)
     filter_trajectrory = []
@@ -484,7 +493,7 @@ def _filter_process(dt, h, spk, t_final, test_param):
     emp_var_h = test_param[2][test_param[0] == h]
 
     for t in time_domain:
-        filter_trajectrory.append(_filter(t*u, h, spk))
+        filter_trajectrory.append(_filter(t * u, h, spk))
 
     filter_trajectrory = np.asanyarray(filter_trajectrory)
     # ordered normalization to give each process the same impact on the max

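The old `dt` keyword survives through `deprecated_alias`. A minimal sketch of both spellings (not part of this commit; it assumes the decorator remaps the old keyword and emits a `DeprecationWarning`, which is what its use above implies):

import warnings
import neo
import quantities as pq
from elephant.change_point_detection import multiple_filter_test

st = neo.SpikeTrain([1.1, 1.2, 1.4, 1.6, 1.7, 1.75, 1.8, 1.85, 1.9, 1.95],
                    units='s', t_stop=2.1)
kwargs = dict(window_sizes=[0.5] * pq.s, spiketrain=st, t_final=2.1 * pq.s,
              alpha=5.0, n_surrogates=100)

cps_new = multiple_filter_test(time_step=0.5 * pq.s, **kwargs)  # new spelling
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    cps_old = multiple_filter_test(dt=0.5 * pq.s, **kwargs)     # old spelling
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
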
The file diff is not shown because of its large size
+ 615 - 356
code/elephant/elephant/conversion.py


+ 62 - 60
code/elephant/elephant/cubic.py

@@ -7,57 +7,67 @@ Given a list sts of SpikeTrains, the analysis comprises the following
 steps:
 
 1) compute the population histogram (PSTH) with the desired bin size
-       >>> binsize = 5 * pq.ms
-       >>> pop_count = elephant.statistics.time_histogram(sts, binsize)
+       >>> bin_size = 5 * pq.ms
+       >>> pop_count = elephant.statistics.time_histogram(sts, bin_size)
 
 2) apply CuBIC to the population count
        >>> alpha = 0.05  # significance level of the tests used
-       >>> xi, p_val, k = cubic(data, ximax=100, alpha=0.05, errorval=4.):
+       >>> xi, p_val, kappa, test_aborted = cubic(pop_count,
+       ...     max_iterations=100, alpha=0.05)
 
-:copyright: Copyright 2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2016 by the Elephant team, see `doc/authors.rst`.
 :license: BSD, see LICENSE.txt for details.
 '''
 # -*- coding: utf-8 -*-
-from __future__ import division
+
+from __future__ import division, print_function, unicode_literals
+
 import scipy.stats
 import scipy.special
 import math
 import warnings
 
+from elephant.utils import deprecated_alias
+
+__all__ = [
+    "cubic"
+]
+
 
 # Based on matlab code by Benjamin Staude
 # Adaptation to python by Pietro Quaglio and Emiliano Torre
 
 
-def cubic(data, ximax=100, alpha=0.05):
-    '''
-    Performs the CuBIC analysis [1] on a population histogram, calculated from
-    a population of spiking neurons.
-
-    The null hypothesis :math:`H_0: k_3(data)<=k^*_{3,\\xi}` is iteratively
-    tested with increasing correlation order :math:`\\xi` (correspondent to
-    variable xi) until it is possible to accept, with a significance level alpha,
-    that :math:`\\hat{\\xi}` (corresponding to variable xi_hat) is the minimum
-    order of correlation necessary to explain the third cumulant
+@deprecated_alias(data='histogram', ximax='max_iterations')
+def cubic(histogram, max_iterations=100, alpha=0.05):
+    r"""
+    Performs the CuBIC analysis [1]_ on a population histogram, calculated
+    from a population of spiking neurons.
+
+    The null hypothesis :math:`H_0: k_3(data)<=k^*_{3,\xi}` is iteratively
+    tested with increasing correlation order :math:`\xi` until it is possible
+    to accept, with a significance level `alpha`, that :math:`\hat{\xi}` is
+    the minimum order of correlation necessary to explain the third cumulant
     :math:`k_3(data)`.
 
-    :math:`k^*_{3,\\xi}` is the maximized third cumulant, supposing a Compund
-    Poisson Process (CPP) model for correlated spike trains (see [1])
-    with maximum order of correlation equal to :math:`\\xi`.
+    :math:`k^*_{3,\xi}` is the maximized third cumulant, supposing a Compound
+    Poisson Process (CPP) model for correlated spike trains (see [1]_)
+    with maximum order of correlation equal to :math:`\xi`.
 
     Parameters
     ----------
-    data : neo.AnalogSignal
+    histogram : neo.AnalogSignal
         The population histogram (count of spikes per time bin) of the entire
         population of neurons.
-    ximax : int
-         The maximum number of iteration of the hypothesis test:
-         if it is not possible to compute the :math:`\\hat{\\xi}` before ximax
-         iteration the CuBIC procedure is aborted.
-         Default: 100
-    alpha : float
-         The significance level of the hypothesis tests perfomed.
-         Default: 0.05
+    max_iterations : int, optional
+         The maximum number of iterations of the hypothesis test. Corresponds
+         to the :math:`\hat{\xi_{\text{max}}}` in [1]_. If it is not possible
+         to compute the :math:`\hat{\xi}` before `max_iterations` iteration,
+         the CuBIC procedure is aborted.
+         Default: 100.
+    alpha : float, optional
+         The significance level of the hypothesis tests performed.
+         Default: 0.05.
 
     Returns
     -------
@@ -65,38 +75,38 @@ def cubic(data, ximax=100, alpha=0.05):
         The minimum correlation order estimated by CuBIC, necessary to
         explain the value of the third cumulant calculated from the population.
     p : list
-        The ordred list of all the p-values of the hypothesis tests that have
-        been performed. If the maximum number of iteration ximax is reached the
-        last p-value is set to -4
+        The ordered list of all the p-values of the hypothesis tests that have
+        been performed. If the maximum number of iteration `max_iterations` is
+        reached, the last p-value is set to -4.
     kappa : list
         The list of the first three cumulants of the data.
     test_aborted : bool
-        Wheter the test was aborted because reached the maximum number of
-        iteration ximax
+        Whether the test was aborted because it reached the maximum number
+        of iterations, `max_iterations`.
 
     References
     ----------
-    [1]Staude, Rotter, Gruen, (2009) J. Comp. Neurosci
-    '''
+    .. [1] Staude, Rotter, Gruen, (2009) J. Comp. Neurosci
+
+    """
     # alpha is in the interval [0,1]
     if alpha < 0 or alpha > 1:
         raise ValueError(
             'the significance level alpha (= %s) has to be in [0,1]' % alpha)
 
-    if not isinstance(ximax, int) or ximax < 0:
-        raise ValueError(
-            'The maximum number of iterations ximax(= %i) has to be a positive'
-            % alpha + ' integer')
+    if not isinstance(max_iterations, int) or max_iterations < 0:
+        raise ValueError("'max_iterations' ({}) has to be a positive integer"
+                         .format(max_iterations))
 
     # dict of all possible rate functions
     try:
-        data = data.magnitude
+        histogram = histogram.magnitude
     except AttributeError:
         pass
-    L = len(data)
+    L = len(histogram)
 
     # compute first three cumulants
-    kappa = _kstat(data)
+    kappa = _kstat(histogram)
     xi_hat = 1
     xi = 1
     pval = 0.
@@ -106,8 +116,9 @@ def cubic(data, ximax=100, alpha=0.05):
     # compute xi_hat iteratively
     while pval < alpha:
         xi_hat = xi
-        if xi > ximax:
-            warnings.warn('Test aborted, xihat= %i > ximax= %i' % (xi, ximax))
+        if xi > max_iterations:
+            warnings.warn('Test aborted, xi_hat=%i > max_iterations=%i' % (
+                xi, max_iterations))
             test_aborted = True
             break
 
@@ -143,11 +154,9 @@ def _H03xi(kappa, xi, L):
 
     # Check the order condition of the cumulants necessary to perform CuBIC
     if kappa[1] < kappa[0]:
-        # p = errorval
-        kstar = [0]
         raise ValueError(
             'H_0 can not be tested:'
-            'kappa(2)= %f<%f=kappa(1)!!!' % (kappa[1], kappa[0]))
+            'kappa(2) = %f < %f = kappa(1)!!!' % (kappa[1], kappa[0]))
     else:
         # computation of the maximized cumulants
         kstar = [_kappamstar(kappa[:2], i, xi) for i in range(2, 7)]
@@ -158,7 +167,7 @@ def _H03xi(kappa, xi, L):
             kstar[4] / L + 9 * (kstar[2] * kstar[0] + kstar[1] ** 2) /
             (L - 1) + 6 * L * kstar[0] ** 3 / ((L - 1) * (L - 2)))
         # computation of the p-value (the third cumulant is supposed to
-        # be gaussian istribuited)
+        # be gaussian distributed)
         p = 1 - scipy.stats.norm(k3star, sigmak3star).cdf(kappa[2])
         return p
 
@@ -199,23 +208,16 @@ def _kstat(data):
 
     Parameters
     ----------
-    data : numpy.aray
+    data : numpy.ndarray
         The population histogram of the population on which are computed
         the cumulants
 
     Returns
     -------
-    kappa : list
-        The first three cumulants of the population count
+    moments : list
+        The first three unbiased cumulants of the population count
     '''
-    L = len(data)
-    if L == 0:
+    if len(data) == 0:
         raise ValueError('The input data must be a non-empty array')
-    S = [(data ** r).sum() for r in range(1, 4)]
-    kappa = []
-    kappa.append(S[0] / float(L))
-    kappa.append((L * S[1] - S[0] ** 2) / (L * (L - 1)))
-    kappa.append(
-        (2 * S[0] ** 3 - 3 * L * S[0] * S[1] + L ** 2 * S[2]) / (
-            L * (L - 1) * (L - 2)))
-    return kappa
+    moments = [scipy.stats.kstat(data, n=n) for n in [1, 2, 3]]
+    return moments

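The rewritten `_kstat` delegates to `scipy.stats.kstat`, which computes the same unbiased cumulant estimators (k-statistics) as the deleted closed-form expressions. A quick check of that equivalence (not part of this commit):

import numpy as np
import scipy.stats

# Any 1-D sample works; a Poisson count mimics a population histogram.
data = np.random.RandomState(0).poisson(lam=3.0, size=1000).astype(float)
L = len(data)
S = [(data ** r).sum() for r in range(1, 4)]

# The closed-form unbiased cumulants that _kstat used to compute by hand.
k1 = S[0] / L
k2 = (L * S[1] - S[0] ** 2) / (L * (L - 1))
k3 = (2 * S[0] ** 3 - 3 * L * S[0] * S[1] + L ** 2 * S[2]) / (
    L * (L - 1) * (L - 2))

for manual, n in zip((k1, k2, k3), (1, 2, 3)):
    assert np.isclose(manual, scipy.stats.kstat(data, n=n))
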
+ 111 - 91
code/elephant/elephant/current_source_density.py

@@ -35,17 +35,21 @@ CC implemented the kCSD methods, kCSD1D(MC and CC)
 CC and EH developed the interface to elephant.
 """
 
-from __future__ import division
+from __future__ import division, print_function, unicode_literals
 
 import neo
-import quantities as pq
 import numpy as np
-from scipy import io
+import quantities as pq
 from scipy.integrate import simps
 
-from elephant.current_source_density_src import KCSD
-from elephant.current_source_density_src import icsd
 import elephant.current_source_density_src.utility_functions as utils
+from elephant.current_source_density_src import KCSD, icsd
+from elephant.utils import deprecated_alias
+
+__all__ = [
+    "estimate_csd",
+    "generate_lfp"
+]
 
 utils.patch_quantities()
 
@@ -59,23 +63,25 @@ icsd_methods = ['DeltaiCSD', 'StepiCSD', 'SplineiCSD']
 py_iCSD_toolbox = ['StandardCSD'] + icsd_methods
 
 
-def estimate_csd(lfp, coords=None, method=None,
+@deprecated_alias(coords='coordinates')
+def estimate_csd(lfp, coordinates=None, method=None,
                  process_estimate=True, **kwargs):
     """
-    Fuction call to compute the current source density (CSD) from extracellular
-    potential recordings(local-field potentials - LFP) using laminar electrodes
-    or multi-contact electrodes with 2D or 3D geometries.
+    Function call to compute the current source density (CSD) from
+    extracellular potential recordings (local field potentials, LFP) using
+    laminar electrodes or multi-contact electrodes with 2D or 3D geometries.
 
     Parameters
     ----------
     lfp : neo.AnalogSignal
         positions of electrodes can be added as neo.RecordingChannel
         coordinate or sent externally as a func argument (See coords)
-    coords : [Optional] corresponding spatial coordinates of the electrodes
+    coordinates : list, optional
+        Corresponding spatial coordinates of the electrodes.
         Defaults to None
-        Otherwise looks for RecordingChannels coordinate
+        If None, looks for the ChannelIndex coordinates of `lfp`.
     method : string
-        Pick a method corresonding to the setup, in this implementation
+        Pick a method corresponding to the setup, in this implementation
         For Laminar probe style (1D), use 'KCSD1D' or 'StandardCSD',
         or 'DeltaiCSD' or 'StepiCSD' or 'SplineiCSD'
         For MEA probe style (2D),  use 'KCSD2D', or 'MoIKCSD'
@@ -110,25 +116,25 @@ def estimate_csd(lfp, coords=None, method=None,
     """
     if not isinstance(lfp, neo.AnalogSignal):
         raise TypeError('Parameter `lfp` must be a neo.AnalogSignal object')
-    if coords is None:
-        coords = lfp.channel_index.coordinates
+    if coordinates is None:
+        coordinates = lfp.channel_index.coordinates
     else:
         scaled_coords = []
-        for coord in coords:
+        for coord in coordinates:
             try:
                 scaled_coords.append(coord.rescale(pq.mm))
             except AttributeError:
                 raise AttributeError('No units given for electrode spatial \
                 coordinates')
-        coords = scaled_coords
+        coordinates = scaled_coords
     if method is None:
         raise ValueError('Must specify a method of CSD implementation')
-    if len(coords) != lfp.shape[1]:
+    if len(coordinates) != lfp.shape[1]:
         raise ValueError('Number of signals and coordinates is not the same')
-    for ii in coords:  # CHECK for Dimensionality of electrodes
+    for ii in coordinates:  # CHECK for Dimensionality of electrodes
         if len(ii) > 3:
             raise ValueError('Invalid number of coordinate positions')
-    dim = len(coords[0])  # TODO : Generic co-ordinates!
+    dim = len(coordinates[0])  # TODO : Generic co-ordinates!
     if dim == 1 and (method not in available_1d):
         raise ValueError('Invalid method, Available options are:',
                          available_1d)
@@ -145,7 +151,7 @@ def estimate_csd(lfp, coords=None, method=None,
         kernel_method = getattr(KCSD, method)  # fetch the class 'KCSD1D'
         lambdas = kwargs.pop('lambdas', None)
         Rs = kwargs.pop('Rs', None)
-        k = kernel_method(np.array(coords), input_array.T, **kwargs)
+        k = kernel_method(np.array(coordinates), input_array.T, **kwargs)
         if process_estimate:
             k.cross_validate(lambdas, Rs)
         estm_csd = k.values()
@@ -163,81 +169,91 @@ def estimate_csd(lfp, coords=None, method=None,
                             z_coords=k.estm_z)
     elif method in py_iCSD_toolbox:
 
-        coords = np.array(coords) * coords[0].units
+        coordinates = np.array(coordinates) * coordinates[0].units
 
         if method in icsd_methods:
             try:
-                coords = coords.rescale(kwargs['diam'].units)
+                coordinates = coordinates.rescale(kwargs['diam'].units)
             except KeyError:  # Then why specify as a default in icsd?
-                              # All iCSD methods explicitly assume a source
-                              # diameter in contrast to the stdCSD  that
-                              # implicitly assume infinite source radius
+                # All iCSD methods explicitly assume a source
+                # diameter in contrast to the stdCSD  that
+                # implicitly assume infinite source radius
                 raise ValueError("Parameter diam must be specified for iCSD \
                                   methods: {}".format(", ".join(icsd_methods)))
 
         if 'f_type' in kwargs:
-            if (kwargs['f_type'] is not 'identity') and  \
+            if (kwargs['f_type'] != 'identity') and  \
                (kwargs['f_order'] is None):
                 raise ValueError("The order of {} filter must be \
                                   specified".format(kwargs['f_type']))
 
         lfp = neo.AnalogSignal(np.asarray(lfp).T, units=lfp.units,
-                                    sampling_rate=lfp.sampling_rate)
+                               sampling_rate=lfp.sampling_rate)
         csd_method = getattr(icsd, method)  # fetch class from icsd.py file
         csd_estimator = csd_method(lfp=lfp.magnitude * lfp.units,
-                                   coord_electrode=coords.flatten(),
+                                   coord_electrode=coordinates.flatten(),
                                    **kwargs)
         csd_pqarr = csd_estimator.get_csd()
 
         if process_estimate:
             csd_pqarr_filtered = csd_estimator.filter_csd(csd_pqarr)
             output = neo.AnalogSignal(csd_pqarr_filtered.T,
-                                           t_start=lfp.t_start,
-                                           sampling_rate=lfp.sampling_rate)
+                                      t_start=lfp.t_start,
+                                      sampling_rate=lfp.sampling_rate)
         else:
             output = neo.AnalogSignal(csd_pqarr.T, t_start=lfp.t_start,
-                                           sampling_rate=lfp.sampling_rate)
-        output.annotate(x_coords=coords)
+                                      sampling_rate=lfp.sampling_rate)
+        output.annotate(x_coords=coordinates)
     return output
 
 
-def generate_lfp(csd_profile, ele_xx, ele_yy=None, ele_zz=None,
-                 xlims=[0., 1.], ylims=[0., 1.], zlims=[0., 1.], res=50):
-    """Forward modelling for the getting the potentials for testing CSD
-
-        Parameters
-        ----------
-        csd_profile : fuction that computes True CSD profile
-            Available options are (see ./csd/utility_functions.py)
-            1D : gauss_1d_dipole
-            2D : large_source_2D and small_source_2D
-            3D : gauss_3d_dipole
-        ele_xx : np.array
-            Positions of the x coordinates of the electrodes
-        ele_yy : np.array
-            Positions of the y coordinates of the electrodes
-            Defaults ot None, use in 2D or 3D cases only
-        ele_zz : np.array
-            Positions of the z coordinates of the electrodes
-            Defaults ot None, use in 3D case only
-        x_lims : [start, end]
-            The starting spatial coordinate and the ending for integration
-            Defaults to [0.,1.]
-        y_lims : [start, end]
-            The starting spatial coordinate and the ending for integration
-            Defaults to [0.,1.], use only in 2D and 3D case
-        z_lims : [start, end]
-            The starting spatial coordinate and the ending for integration
-            Defaults to [0.,1.], use only in 3D case
-        res : int
-            The resolution of the integration
-            Defaults to 50
-
-        Returns
-        -------
-        LFP : neo.AnalogSignal object
-           The potentials created by the csd profile at the electrode positions
-           The electrode postions are attached as RecordingChannel's coordinate
+@deprecated_alias(ele_xx='x_positions', ele_yy='y_positions',
+                  ele_zz='z_positions', xlims='x_limits', ylims='y_limits',
+                  zlims='z_limits', res='resolution')
+def generate_lfp(csd_profile, x_positions, y_positions=None, z_positions=None,
+                 x_limits=[0., 1.], y_limits=[0., 1.], z_limits=[0., 1.],
+                 resolution=50):
+    """
+    Forward modelling for getting the potentials for testing Current Source
+    Density (CSD).
+
+    Parameters
+    ----------
+    csd_profile : callable
+        A function that computes the true CSD profile.
+        Available options are (see ./csd/utility_functions.py)
+        1D : gauss_1d_dipole
+        2D : large_source_2D and small_source_2D
+        3D : gauss_3d_dipole
+    x_positions : np.ndarray
+        Positions of the x coordinates of the electrodes
+    y_positions : np.ndarray, optional
+        Positions of the y coordinates of the electrodes
+        Defaults to None, use in 2D or 3D cases only
+    z_positions : np.ndarray, optional
+        Positions of the z coordinates of the electrodes
+        Defaults to None, use in 3D case only
+    x_limits : list, optional
+        A list of [start, end].
+        The starting spatial coordinate and the ending for integration
+        Defaults to [0.,1.]
+    y_limits : list, optional
+        A list of [start, end].
+        The starting spatial coordinate and the ending for integration
+        Defaults to [0.,1.], use only in 2D and 3D case
+    z_limits : list, optional
+        A list of [start, end].
+        The starting spatial coordinate and the ending for integration
+        Defaults to [0.,1.], use only in 3D case
+    resolution : int, optional
+        The resolution of the integration
+        Defaults to 50
+
+    Returns
+    -------
+    LFP : neo.AnalogSignal
+        The potentials created by the CSD profile at the electrode positions.
+        The electrode positions are attached as the RecordingChannel's
+        coordinate.
     """
     def integrate_1D(x0, csd_x, csd, h):
         m = np.sqrt((csd_x - x0)**2 + h**2) - abs(csd_x - x0)
@@ -272,49 +288,53 @@ def generate_lfp(csd_profile, ele_xx, ele_yy=None, ele_zz=None,
         F = simps(Iy, xlin)
         return F
     dim = 1
-    if ele_zz is not None:
+    if z_positions is not None:
         dim = 3
-    elif ele_yy is not None:
+    elif y_positions is not None:
         dim = 2
-    x = np.linspace(xlims[0], xlims[1], res)
+    x = np.linspace(x_limits[0], x_limits[1], resolution)
     if dim >= 2:
-        y = np.linspace(ylims[0], ylims[1], res)
+        y = np.linspace(y_limits[0], y_limits[1], resolution)
     if dim == 3:
-        z = np.linspace(zlims[0], zlims[1], res)
+        z = np.linspace(z_limits[0], z_limits[1], resolution)
     sigma = 1.0
     h = 50.
-    pots = np.zeros(len(ele_xx))
+    pots = np.zeros(len(x_positions))
     if dim == 1:
-        chrg_x = np.linspace(xlims[0], xlims[1], res)
+        chrg_x = np.linspace(x_limits[0], x_limits[1], resolution)
         csd = csd_profile(chrg_x)
-        for ii in range(len(ele_xx)):
-            pots[ii] = integrate_1D(ele_xx[ii], chrg_x, csd, h)
+        for ii in range(len(x_positions)):
+            pots[ii] = integrate_1D(x_positions[ii], chrg_x, csd, h)
         pots /= 2. * sigma  # eq.: 26 from Potworowski et al
-        ele_pos = ele_xx
+        ele_pos = x_positions
     elif dim == 2:
-        chrg_x, chrg_y = np.mgrid[xlims[0]:xlims[1]:np.complex(0, res),
-                                  ylims[0]:ylims[1]:np.complex(0, res)]
+        chrg_x, chrg_y = np.mgrid[
+                         x_limits[0]:x_limits[1]:np.complex(0, resolution),
+                         y_limits[0]:y_limits[1]:np.complex(0, resolution)]
         csd = csd_profile(chrg_x, chrg_y)
-        for ii in range(len(ele_xx)):
-            pots[ii] = integrate_2D(ele_xx[ii], ele_yy[ii],
+        for ii in range(len(x_positions)):
+            pots[ii] = integrate_2D(x_positions[ii], y_positions[ii],
                                     x, y, csd, h, chrg_x, chrg_y)
         pots /= 2 * np.pi * sigma
-        ele_pos = np.vstack((ele_xx, ele_yy)).T
+        ele_pos = np.vstack((x_positions, y_positions)).T
     elif dim == 3:
-        chrg_x, chrg_y, chrg_z = np.mgrid[xlims[0]:xlims[1]:np.complex(0, res),
-                                          ylims[0]:ylims[1]:np.complex(0, res),
-                                          zlims[0]:zlims[1]:np.complex(0, res)]
+        chrg_x, chrg_y, chrg_z = np.mgrid[
+            x_limits[0]:x_limits[1]:np.complex(0, resolution),
+            y_limits[0]:y_limits[1]:np.complex(0, resolution),
+            z_limits[0]:z_limits[1]:np.complex(0, resolution)
+        ]
         csd = csd_profile(chrg_x, chrg_y, chrg_z)
         xlin = chrg_x[:, 0, 0]
         ylin = chrg_y[0, :, 0]
         zlin = chrg_z[0, 0, :]
-        for ii in range(len(ele_xx)):
-            pots[ii] = integrate_3D(ele_xx[ii], ele_yy[ii], ele_zz[ii],
-                                    xlims, ylims, zlims, csd,
+        for ii in range(len(x_positions)):
+            pots[ii] = integrate_3D(x_positions[ii], y_positions[ii],
+                                    z_positions[ii],
+                                    x_limits, y_limits, z_limits, csd,
                                     xlin, ylin, zlin,
                                     chrg_x, chrg_y, chrg_z)
         pots /= 4 * np.pi * sigma
-        ele_pos = np.vstack((ele_xx, ele_yy, ele_zz)).T
+        ele_pos = np.vstack((x_positions, y_positions, z_positions)).T
     pots = np.reshape(pots, (-1, 1)) * pq.mV
     ele_pos = ele_pos * pq.mm
     lfp = []

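A minimal sketch of the renamed forward-modelling API (not part of this commit; `gauss_1d_dipole` is the 1D profile named in the docstring above, while the `StandardCSD` call and the one-element `coordinates` entries are assumptions about the surrounding interface):

import numpy as np
import quantities as pq
from elephant.current_source_density import generate_lfp, estimate_csd
from elephant.current_source_density_src.utility_functions import \
    gauss_1d_dipole

x_positions = np.linspace(0.1, 0.9, 16)               # was `ele_xx`
lfp = generate_lfp(gauss_1d_dipole, x_positions,
                   x_limits=[0., 1.], resolution=50)  # was `xlims`, `res`
# one coordinate per signal channel, with units   # was `coords`
coordinates = [[x] * pq.mm for x in x_positions]
csd = estimate_csd(lfp, coordinates=coordinates, method='StandardCSD')
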
+ 1 - 1
code/elephant/elephant/current_source_density_src/KCSD.py

@@ -48,7 +48,7 @@ class CSD(object):
         if ele_pos.shape[0] < 1 + ele_pos.shape[1]:  # Dim+1
             raise Exception("Number of electrodes must be at least %d"
                             % (1 + ele_pos.shape[1]))
-        if utils.check_for_duplicated_electrodes(ele_pos) is False:
+        if utils.contains_duplicated_electrodes(ele_pos):
             raise Exception("Error! Duplicated electrode!")
 
     def sanity(self, true_csd, pos_csd):

+ 5 - 3
code/elephant/elephant/current_source_density_src/utility_functions.py

@@ -35,17 +35,19 @@ def patch_quantities():
         lastdefinition = definition
     return
 
-def check_for_duplicated_electrodes(elec_pos):
+
+def contains_duplicated_electrodes(elec_pos):
     """Checks for duplicate electrodes
     Parameters
     ----------
     elec_pos : np.array
+
     Returns
     -------
     has_duplicated_elec : bool
     """
-    unique_elec_pos = np.vstack({tuple(row) for row in elec_pos})
-    has_duplicated_elec = unique_elec_pos.shape == elec_pos.shape
+    unique_elec_pos = set(map(tuple, elec_pos))
+    has_duplicated_elec = len(unique_elec_pos) < len(elec_pos)
     return has_duplicated_elec
 
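The rename above also fixes an inverted predicate: the old shape comparison answered "are all electrodes unique?" although the name promised the opposite. An illustration (not part of this commit):

import numpy as np

elec_pos = np.array([[0.0, 0.0], [0.5, 0.5], [0.5, 0.5]])  # one duplicate

# old logic: shapes differ whenever duplicates exist, so this is False here
unique_rows = np.vstack(sorted(set(map(tuple, elec_pos))))
old_answer = unique_rows.shape == elec_pos.shape            # False (wrong)

# new logic: fewer unique rows than rows means duplicates, True here
new_answer = len(set(map(tuple, elec_pos))) < len(elec_pos)  # True (correct)

assert not old_answer and new_answer
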
 

The file diff is not shown because of its large size
+ 671 - 283
code/elephant/elephant/kernels.py


+ 85 - 56
code/elephant/elephant/neo_tools.py

@@ -2,56 +2,78 @@
 """
 Tools to manipulate Neo objects.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
-from __future__ import division, print_function
+from __future__ import division, print_function, unicode_literals
+import warnings
 
 from itertools import chain
 
 from neo.core.container import unique_objs
+from elephant.utils import deprecated_alias
 
+__all__ = [
+    "extract_neo_attributes",
+    "get_all_spiketrains",
+    "get_all_events",
+    "get_all_epochs"
+]
 
-def extract_neo_attrs(obj, parents=True, child_first=True,
-                      skip_array=False, skip_none=False):
-    """Given a neo object, return a dictionary of attributes and annotations.
+
+@deprecated_alias(obj='neo_object')
+def extract_neo_attributes(neo_object, parents=True, child_first=True,
+                           skip_array=False, skip_none=False):
+    """
+    Given a Neo object, return a dictionary of attributes and annotations.
 
     Parameters
     ----------
-
-    obj : neo object
+    neo_object : neo.BaseNeo
+        Object to get attributes and annotations.
     parents : bool, optional
-              Also include attributes and annotations from parent neo
-              objects (if any).
+        If True, also include attributes and annotations from parent Neo
+        objects (if any).
+        Default: True.
     child_first : bool, optional
-                  If True (default True), values of child attributes are used
-                  over parent attributes in the event of a name conflict.
-                  If False, parent attributes are used.
-                  This parameter does nothing if `parents` is False.
+        If True, values of child attributes are used over parent attributes in
+        the event of a name conflict.
+        If False, parent attributes are used.
+        This parameter does nothing if `parents` is False.
+        Default: True.
     skip_array : bool, optional
-                 If True (default False), skip attributes that store non-scalar
-                 array values.
+        If True, skip attributes that store non-scalar array values.
+        Default: False.
     skip_none : bool, optional
-                If True (default False), skip annotations and attributes that
-                have a value of `None`.
+        If True, skip annotations and attributes that have a value of None.
+        Default: False.
 
     Returns
     -------
-
     dict
         A dictionary where the keys are annotations or attribute names and
         the values are the corresponding annotation or attribute value.
 
     """
-    attrs = obj.annotations.copy()
-    for attr in obj._necessary_attrs + obj._recommended_attrs:
+    attrs = neo_object.annotations.copy()
+    if not skip_array and hasattr(neo_object, "array_annotations"):
+        # Exclude labels and durations, and any other fields that should not
+        # be a part of array_annotation.
+        required_keys = set(neo_object.array_annotations).difference(
+            dir(neo_object))
+        for a in required_keys:
+            if "array_annotations" not in attrs:
+                attrs["array_annotations"] = {}
+            attrs["array_annotations"][a] = \
+                neo_object.array_annotations[a].copy()
+    for attr in neo_object._necessary_attrs + neo_object._recommended_attrs:
         if skip_array and len(attr) >= 3 and attr[2]:
             continue
         attr = attr[0]
-        if attr == getattr(obj, '_quantity_attr', None):
+        if attr == getattr(neo_object, '_quantity_attr', None):
             continue
-        attrs[attr] = getattr(obj, attr, None)
+        attrs[attr] = getattr(neo_object, attr, None)
 
     if skip_none:
         for attr, value in attrs.copy().items():
@@ -61,13 +83,13 @@ def extract_neo_attrs(obj, parents=True, child_first=True,
     if not parents:
         return attrs
 
-    for parent in getattr(obj, 'parents', []):
+    for parent in getattr(neo_object, 'parents', []):
         if parent is None:
             continue
-        newattr = extract_neo_attrs(parent, parents=True,
-                                    child_first=child_first,
-                                    skip_array=skip_array,
-                                    skip_none=skip_none)
+        newattr = extract_neo_attributes(parent, parents=True,
+                                         child_first=child_first,
+                                         skip_array=skip_array,
+                                         skip_none=skip_none)
         if child_first:
             newattr.update(attrs)
             attrs = newattr
@@ -77,54 +99,65 @@ def extract_neo_attrs(obj, parents=True, child_first=True,
     return attrs
 
 
-def _get_all_objs(container, classname):
-    """Get all `neo` objects of a given type from a container.
+def extract_neo_attrs(*args, **kwargs):
+    warnings.warn("'extract_neo_attrs' function is deprecated; "
+                  "use 'extract_neo_attributes'", DeprecationWarning)
+    return extract_neo_attributes(*args, **kwargs)
+
+
+def _get_all_objs(container, class_name):
+    """
+    Get all Neo objects of a given type from a container.
 
     The objects can be any list, dict, or other iterable or mapping containing
-    neo objects of a particular class, as well as any neo object that can hold
+    Neo objects of a particular class, as well as any Neo object that can hold
     the object.
     Objects are searched recursively, so the objects can be nested (such as a
     list of blocks).
 
     Parameters
     ----------
-
-    container : list, tuple, iterable, dict, neo container
-                The container for the neo objects.
-    classname : str
+    container : list, tuple, iterable, dict, neo.Container
+                The container for the Neo objects.
+    class_name : str
                 The name of the class, with proper capitalization
-                (so `SpikeTrain`, not `Spiketrain` or `spiketrain`)
+                (i.e., 'SpikeTrain', not 'Spiketrain' or 'spiketrain').
 
     Returns
     -------
-
     list
-        A list of unique `neo` objects
+        A list of unique Neo objects.
+
+    Raises
+    ------
+    ValueError
+        If the type of the passed `container` cannot be handled.
 
     """
-    if container.__class__.__name__ == classname:
+    if container.__class__.__name__ == class_name:
         return [container]
-    classholder = classname.lower() + 's'
+    classholder = class_name.lower() + 's'
     if hasattr(container, classholder):
         vals = getattr(container, classholder)
     elif hasattr(container, 'list_children_by_class'):
-        vals = container.list_children_by_class(classname)
+        vals = container.list_children_by_class(class_name)
     elif hasattr(container, 'values') and not hasattr(container, 'ndim'):
         vals = container.values()
     elif hasattr(container, '__iter__') and not hasattr(container, 'ndim'):
         vals = container
     else:
         raise ValueError('Cannot handle object of type %s' % type(container))
-    res = list(chain.from_iterable(_get_all_objs(obj, classname)
+    res = list(chain.from_iterable(_get_all_objs(obj, class_name)
                                    for obj in vals))
     return unique_objs(res)
 
 
 def get_all_spiketrains(container):
-    """Get all `neo.Spiketrain` objects from a container.
+    """
+    Get all `neo.SpikeTrain` objects from a container.
 
     The objects can be any list, dict, or other iterable or mapping containing
-    spiketrains, as well as any neo object that can hold spiketrains:
+    spiketrains, as well as any Neo object that can hold spiketrains:
     `neo.Block`, `neo.ChannelIndex`, `neo.Unit`, and `neo.Segment`.
 
     Containers are searched recursively, so the objects can be nested
@@ -132,14 +165,12 @@ def get_all_spiketrains(container):
 
     Parameters
     ----------
-
-    container : list, tuple, iterable, dict,
-                neo Block, neo Segment, neo Unit, neo ChannelIndex
-                The container for the spiketrains.
+    container : list, tuple, iterable, dict, neo.Block, neo.Segment, neo.Unit,
+        neo.ChannelIndex
+        The container for the spiketrains.
 
     Returns
     -------
-
     list
         A list of the unique `neo.SpikeTrain` objects in `container`.
 
@@ -148,7 +179,8 @@ def get_all_spiketrains(container):
 
 
 def get_all_events(container):
-    """Get all `neo.Event` objects from a container.
+    """
+    Get all `neo.Event` objects from a container.
 
     The objects can be any list, dict, or other iterable or mapping containing
     events, as well as any neo object that can hold events:
@@ -159,13 +191,11 @@ def get_all_events(container):
 
     Parameters
     ----------
-
-    container : list, tuple, iterable, dict, neo Block, neo Segment
+    container : list, tuple, iterable, dict, neo.Block, neo.Segment
                 The container for the events.
 
     Returns
     -------
-
     list
         A list of the unique `neo.Event` objects in `container`.
 
@@ -174,7 +204,8 @@ def get_all_events(container):
 
 
 def get_all_epochs(container):
-    """Get all `neo.Epoch` objects from a container.
+    """
+    Get all `neo.Epoch` objects from a container.
 
     The objects can be any list, dict, or other iterable or mapping containing
     epochs, as well as any neo object that can hold epochs:
@@ -185,13 +216,11 @@ def get_all_epochs(container):
 
     Parameters
     ----------
-
-    container : list, tuple, iterable, dict, neo Block, neo Segment
+    container : list, tuple, iterable, dict, neo.Block, neo.Segment
                 The container for the epochs.
 
     Returns
     -------
-
     list
         A list of the unique `neo.Epoch` objects in `container`.
 

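A minimal sketch of the renamed helpers on a nested container (not part of this commit; plain neo API, with `parents=True` relying on whatever parent links the container provides):

import neo
import quantities as pq
from elephant.neo_tools import extract_neo_attributes, get_all_spiketrains

block = neo.Block(name='session-0')
segment = neo.Segment(name='trial-0')
segment.spiketrains.append(
    neo.SpikeTrain([0.1, 0.5, 1.2] * pq.s, t_stop=2.0 * pq.s, name='unit-0'))
block.segments.append(segment)

# recurses through the list and the block down to the spike trains
print(get_all_spiketrains([block]))

# child attributes win over parent attributes on name conflicts
attrs = extract_neo_attributes(segment.spiketrains[0],
                               parents=True, skip_array=True)
print(attrs['name'])  # 'unit-0'
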
+ 13 - 7
code/elephant/elephant/pandas_bridge.py

@@ -2,7 +2,7 @@
 """
 Bridge to the pandas library.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
@@ -10,12 +10,18 @@ from __future__ import division, print_function, unicode_literals
 
 import numpy as np
 import pandas as pd
+import warnings
 import quantities as pq
 
-from elephant.neo_tools import (extract_neo_attrs, get_all_epochs,
+from elephant.neo_tools import (extract_neo_attributes, get_all_epochs,
                                 get_all_events, get_all_spiketrains)
 
 
+warnings.simplefilter('once', DeprecationWarning)
+warnings.warn("pandas_bridge module will be removed in Elephant v0.8.x",
+              DeprecationWarning)
+
+
 def _multiindex_from_dict(inds):
     """Given a dictionary, return a `pandas.MultiIndex`.
 
@@ -60,7 +66,7 @@ def _sort_inds(obj, axis=0):
         return obj
 
     obj = obj.reorder_levels(sorted(obj.axes[axis].names), axis=axis)
-    return obj.sortlevel(0, axis=axis, sort_remaining=True)
+    return obj.sort_index(level=0, axis=axis, sort_remaining=True)
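
`DataFrame.sortlevel` was deprecated and later removed in pandas, which is
why the hunk above switches to the equivalent `sort_index(level=...)` call.
A standalone sketch of the replacement, using plain pandas only:

    >>> import pandas as pd
    >>> df = pd.DataFrame(
    ...     {'x': [1, 2, 3, 4]},
    ...     index=pd.MultiIndex.from_tuples(
    ...         [('b', 2), ('a', 1), ('b', 1), ('a', 2)],
    ...         names=['k1', 'k2']))
    >>> out = df.sort_index(level=0, sort_remaining=True)
    >>> out.index.is_monotonic_increasing
    True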
 
 
 def _extract_neo_attrs_safe(obj, parents=True, child_first=True):
@@ -89,8 +95,8 @@ def _extract_neo_attrs_safe(obj, parents=True, child_first=True):
         the values are the corresponding annotation or attribute value.
 
     """
-    res = extract_neo_attrs(obj, skip_array=True, skip_none=True,
-                            parents=parents, child_first=child_first)
+    res = extract_neo_attributes(obj, skip_array=True, skip_none=True,
+                                 parents=parents, child_first=child_first)
     for key, value in res.items():
         res[key] = _convert_value_safe(value)
         key2 = _convert_value_safe(key)
@@ -576,8 +582,8 @@ def slice_spiketrain(pdobj, t_start=None, t_stop=None):
     pdobj : scalar, pandas Series, DataFrame, or Panel
             The returned data type is the same as the type of `pdobj`
 
-    Note
-    ----
+    Notes
+    -----
 
     The order of the index and/or column levels of the returned object may
     differ from the order of the original.

+ 59 - 38
code/elephant/elephant/phase_analysis.py

@@ -2,67 +2,87 @@
 """
 Methods for performing phase analysis.
 
-:copyright: Copyright 2014-2018 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2018 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
+from __future__ import division, print_function, unicode_literals
+
 import numpy as np
 import quantities as pq
 
+__all__ = [
+    "spike_triggered_phase"
+]
+
 
 def spike_triggered_phase(hilbert_transform, spiketrains, interpolate):
     """
-    Calculate the set of spike-triggered phases of an AnalogSignal.
+    Calculate the set of spike-triggered phases of a `neo.AnalogSignal`.
 
     Parameters
     ----------
-    hilbert_transform : AnalogSignal or list of AnalogSignal
-        AnalogSignal of the complex analytic signal (e.g., returned by the
-        elephant.signal_processing.hilbert()). All spike trains are compared to
-        this signal, if only one signal is given. Otherwise, length of
-        hilbert_transform must match the length of spiketrains.
-    spiketrains : Spiketrain or list of Spiketrain
-        Spiketrains on which to trigger hilbert_transform extraction
+    hilbert_transform : neo.AnalogSignal or list of neo.AnalogSignal
+        `neo.AnalogSignal` of the complex analytic signal (e.g., returned by
+        the `elephant.signal_processing.hilbert` function).
+        If `hilbert_transform` is only one signal, all spike trains are
+        compared to this signal. Otherwise, length of `hilbert_transform` must
+        match the length of `spiketrains`.
+    spiketrains : neo.SpikeTrain or list of neo.SpikeTrain
+        Spike trains on which to trigger `hilbert_transform` extraction.
     interpolate : bool
-        If True, the phases and amplitudes of hilbert_transform for spikes
-        falling between two samples of signal is interpolated. Otherwise, the
-        closest sample of hilbert_transform is used.
+        If True, the phases and amplitudes of `hilbert_transform` for spikes
+        falling between two samples of the signal are interpolated.
+        If False, the closest sample of `hilbert_transform` is used.
 
     Returns
     -------
-    phases : list of arrays
+    phases : list of np.ndarray
         Spike-triggered phases. Entries in the list correspond to the
-        SpikeTrains in spiketrains. Each entry contains an array with the
-        spike-triggered angles (in rad) of the signal.
-    amp : list of arrays
+        `neo.SpikeTrain`s in `spiketrains`. Each entry contains an array with
+        the spike-triggered angles (in rad) of the signal.
+    amp : list of pq.Quantity
         Corresponding spike-triggered amplitudes.
-    times : list of arrays
-        A list of times corresponding to the signal
-        Corresponding times (corresponds to the spike times).
-
-    Example
-    -------
+    times : list of pq.Quantity
+        A list of times corresponding to the signal. They match the spike
+        times of the `neo.SpikeTrain` referred to by the list item.
+
+    Raises
+    ------
+    ValueError
+        If the number of spike trains and the number of phase signals don't
+        match, and neither of the two is a single signal.
+
+    Examples
+    --------
     Create a 20 Hz oscillatory signal sampled at 1 kHz and a random Poisson
-    spike train:
-
+    spike train, then calculate spike-triggered phases and amplitudes of the
+    oscillation:
+
+    >>> import neo
+    >>> import elephant
+    >>> import quantities as pq
+    >>> import numpy as np
+    ...
     >>> f_osc = 20. * pq.Hz
     >>> f_sampling = 1 * pq.ms
     >>> tlen = 100 * pq.s
+    ...
     >>> time_axis = np.arange(
-            0, tlen.magnitude,
-            f_sampling.rescale(pq.s).magnitude) * pq.s
-    >>> analogsignal = AnalogSignal(
-            np.sin(2 * np.pi * (f_osc * time_axis).simplified.magnitude),
-            units=pq.mV, t_start=0 * pq.ms, sampling_period=f_sampling)
-    >>> spiketrain = elephant.spike_train_generation.
-            homogeneous_poisson_process(
-                50 * pq.Hz, t_start=0.0 * ms, t_stop=tlen.rescale(pq.ms))
-
-    Calculate spike-triggered phases and amplitudes of the oscillation:
+    ...     0, tlen.magnitude,
+    ...     f_sampling.rescale(pq.s).magnitude) * pq.s
+    >>> analogsignal = neo.AnalogSignal(
+    ...     np.sin(2 * np.pi * (f_osc * time_axis).simplified.magnitude),
+    ...     units=pq.mV, t_start=0*pq.ms, sampling_period=f_sampling)
+    >>> spiketrain = (elephant.spike_train_generation.
+    ...     homogeneous_poisson_process(
+    ...     50 * pq.Hz, t_start=0.0*pq.ms, t_stop=tlen.rescale(pq.ms)))
+    ...
     >>> phases, amps, times = elephant.phase_analysis.spike_triggered_phase(
-            elephant.signal_processing.hilbert(analogsignal),
-            spiketrain,
-            interpolate=True)
+    ...     elephant.signal_processing.hilbert(analogsignal),
+    ...     spiketrain,
+    ...     interpolate=True)
+
     """
 
     # Convert inputs to lists
@@ -108,7 +128,8 @@ def spike_triggered_phase(hilbert_transform, spiketrains, interpolate):
         # Find index into signal for each spike
         ind_at_spike = np.round(
             (spiketrain[sttimeind] - hilbert_transform[phase_i].t_start) /
-            hilbert_transform[phase_i].sampling_period).magnitude.astype(int)
+            hilbert_transform[phase_i].sampling_period). \
+            simplified.magnitude.astype(int)
 
         # Extract times for speed reasons
         times = hilbert_transform[phase_i].times

File diff suppressed because it is too large
+ 732 - 250
code/elephant/elephant/signal_processing.py


File diff suppressed because it is too large
+ 1881 - 1165
code/elephant/elephant/spade.py


+ 0 - 2
code/elephant/elephant/spade_src/__init__.py

@@ -1,2 +0,0 @@
-# -*- coding: utf-8 -*-
-from . import fast_fca

File diff suppressed because it is too large
+ 246 - 1097
code/elephant/elephant/spade_src/fast_fca.py


+ 319 - 326
code/elephant/elephant/spectral.py

@@ -3,254 +3,170 @@
 Identification of spectral properties in analog signals (e.g., the power
 spectrum).
 
-:copyright: Copyright 2015-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2015-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
-import warnings
+from __future__ import division, print_function, unicode_literals
 
+import neo
+import warnings
 import numpy as np
-import scipy.signal
-import scipy.fftpack as fftpack
-import scipy.signal.signaltools as signaltools
-from scipy.signal.windows import get_window
-from six import string_types
 import quantities as pq
-import neo
+import scipy.signal
+
+from elephant.utils import deprecated_alias
+
+__all__ = [
+    "welch_psd",
+    "welch_coherence"
+]
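
`deprecated_alias`, imported above from `elephant.utils`, maps old keyword
names onto their new spellings while emitting a `DeprecationWarning`. Its
implementation is not part of this diff; a hypothetical sketch of such a
decorator (names chosen here for illustration) could look like:

    >>> import functools
    >>> import warnings
    >>> def deprecated_alias_sketch(**aliases):  # hypothetical helper
    ...     def decorator(func):
    ...         @functools.wraps(func)
    ...         def wrapper(*args, **kwargs):
    ...             for old, new in aliases.items():
    ...                 if old in kwargs:  # rename the keyword and warn
    ...                     warnings.warn(
    ...                         "'{}' is deprecated; use '{}'".format(old, new),
    ...                         DeprecationWarning)
    ...                     kwargs[new] = kwargs.pop(old)
    ...             return func(*args, **kwargs)
    ...         return wrapper
    ...     return decorator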
 
 
-def _welch(x, y, fs=1.0, window='hanning', nperseg=256, noverlap=None,
-          nfft=None, detrend='constant', scaling='density', axis=-1):
+@deprecated_alias(num_seg='n_segments', len_seg='len_segment',
+                  freq_res='frequency_resolution')
+def welch_psd(signal, n_segments=8, len_segment=None,
+              frequency_resolution=None, overlap=0.5, fs=1.0, window='hanning',
+              nfft=None, detrend='constant', return_onesided=True,
+              scaling='density', axis=-1):
     """
-    A helper function to estimate cross spectral density using Welch's method.
-    This function is a slightly modified version of `scipy.signal.welch()` with
-    modifications based on `matplotlib.mlab._spectral_helper()`.
+    Estimates the power spectral density (PSD) of a given `neo.AnalogSignal`
+    using Welch's method.
 
-    Welch's method [1]_ computes an estimate of the cross spectral density
-    by dividing the data into overlapping segments, computing a modified
-    periodogram for each segment and averaging the cross-periodograms.
+    The PSD is obtained through the following steps:
+
+    1. Cut the given data into several overlapping segments. The degree of
+       overlap can be specified by parameter `overlap` (default is 0.5,
+       i.e. segments overlap by half of their length).
+       The number and the length of the segments are determined according
+       to the parameters `n_segments`, `len_segment` or `frequency_resolution`.
+       By default, the data is cut into 8 segments;
+
+    2. Apply a window function to each segment. Hanning window is used by
+       default. This can be changed by giving a window function or an
+       array as parameter `window` (see Notes [2]);
+
+    3. Compute the periodogram of each segment;
+
+    4. Average the obtained periodograms to yield the PSD estimate.
 
     Parameters
     ----------
-    x, y : array_like
-        Time series of measurement values
-    fs : float, optional
-        Sampling frequency of the `x` and `y` time series in units of Hz.
-        Defaults to 1.0.
-    window : str or tuple or array_like, optional
-        Desired window to use. See `get_window` for a list of windows and
-        required parameters. If `window` is array_like it will be used
-        directly as the window and its length will be used for nperseg.
-        Defaults to 'hanning'.
-    nperseg : int, optional
-        Length of each segment.  Defaults to 256.
-    noverlap: int, optional
-        Number of points to overlap between segments. If None,
-        ``noverlap = nperseg / 2``.  Defaults to None.
+    signal : neo.AnalogSignal or pq.Quantity or np.ndarray
+        Time series data, of which PSD is estimated. When `signal` is
+        `pq.Quantity` or `np.ndarray`, sampling frequency should be given
+        through the keyword argument `fs`. Otherwise, the default value is
+        used (`fs` = 1.0).
+    n_segments : int, optional
+        Number of segments. The length of segments is adjusted so that
+        overlapping segments cover the entire stretch of the given data. This
+        parameter is ignored if `len_segment` or `frequency_resolution` is
+        given.
+        Default: 8.
+    len_segment : int, optional
+        Length of segments. This parameter is ignored if `frequency_resolution`
+        is given. If None, it will be determined from other parameters.
+        Default: None.
+    frequency_resolution : pq.Quantity or float, optional
+        Desired frequency resolution of the obtained PSD estimate in terms of
+        the interval between adjacent frequency bins. When given as a `float`,
+        it is taken as frequency in Hz.
+        If None, it will be determined from other parameters.
+        Default: None.
+    overlap : float, optional
+        Overlap between segments represented as a float number between 0 (no
+        overlap) and 1 (complete overlap).
+        Default: 0.5 (half-overlapped).
+    fs : pq.Quantity or float, optional
+        Specifies the sampling frequency of the input time series. When the
+        input is given as a `neo.AnalogSignal`, the sampling frequency is
+        taken from its attribute and this parameter is ignored.
+        Default: 1.0.
+    window : str or tuple or np.ndarray, optional
+        Desired window to use.
+        See Notes [2].
+        Default: 'hanning'.
     nfft : int, optional
-        Length of the FFT used, if a zero padded FFT is desired.  If None,
-        the FFT length is `nperseg`. Defaults to None.
-    detrend : str or function, optional
-        Specifies how to detrend each segment. If `detrend` is a string,
-        it is passed as the ``type`` argument to `detrend`. If it is a
-        function, it takes a segment and returns a detrended segment.
-        Defaults to 'constant'.
-    scaling : { 'density', 'spectrum' }, optional
-        Selects between computing the power spectral density ('density')
-        where Pxx has units of V**2/Hz if x is measured in V and computing
-        the power spectrum ('spectrum') where Pxx has units of V**2 if x is
-        measured in V. Defaults to 'density'.
+        Length of the FFT used.
+        See Notes [2].
+        Default: None.
+    detrend : str or function or False, optional
+        Specifies how to detrend each segment.
+        See Notes [2].
+        Default: 'constant'.
+    return_onesided : bool, optional
+        If True, return a one-sided spectrum for real data.
+        If False return a two-sided spectrum.
+        See Notes [2].
+        Default: True.
+    scaling : {'density', 'spectrum'}, optional
+        If 'density', computes the power spectral density where Pxx has units
+        of V**2/Hz. If 'spectrum', computes the power spectrum where Pxx has
+        units of V**2, if `signal` is measured in V and `fs` is measured in
+        Hz.
+        See Notes [2].
+        Default: 'density'.
     axis : int, optional
-        Axis along which the periodogram is computed; the default is over
-        the last axis (i.e. ``axis=-1``).
+        Axis along which the periodogram is computed.
+        See Notes [2].
+        Default: last axis (-1).
 
     Returns
     -------
-    f : ndarray
-        Array of sample frequencies.
-    Pxy : ndarray
-        Cross spectral density or cross spectrum of x and y.
+    freqs : pq.Quantity or np.ndarray
+        Frequencies associated with the power estimates in `psd`.
+        `freqs` is always a vector irrespective of the shape of the input
+        data in `signal`.
+        If `signal` is `neo.AnalogSignal` or `pq.Quantity`, a `pq.Quantity`
+        array is returned.
+        Otherwise, a `np.ndarray` containing frequency in Hz is returned.
+    psd : pq.Quantity or np.ndarray
+        PSD estimates of the time series in `signal`.
+        If `signal` is `neo.AnalogSignal`, a `pq.Quantity` array is returned.
+        Otherwise, the return is a `np.ndarray`.
 
-    Notes
-    -----
-    An appropriate amount of overlap will depend on the choice of window
-    and on your requirements.  For the default 'hanning' window an
-    overlap of 50% is a reasonable trade off between accurately estimating
-    the signal power, while not over counting any of the data.  Narrower
-    windows may require a larger overlap.
+    Raises
+    ------
+    ValueError
+        If `overlap` is not in the interval `[0, 1)`.
 
-    If `noverlap` is 0, this method is equivalent to Bartlett's method [2]_.
+        If `frequency_resolution` is not positive.
 
-    References
-    ----------
-    .. [1] P. Welch, "The use of the fast Fourier transform for the
-           estimation of power spectra: A method based on time averaging
-           over short, modified periodograms", IEEE Trans. Audio
-           Electroacoust. vol. 15, pp. 70-73, 1967.
-    .. [2] M.S. Bartlett, "Periodogram Analysis and Continuous Spectra",
-           Biometrika, vol. 37, pp. 1-16, 1950.
-    """
-    # TODO: This function should be replaced by `scipy.signal.csd()`, which
-    # will appear in SciPy 0.16.0.
-
-    # The checks for if y is x are so that we can use the same function to
-    # obtain both power spectrum and cross spectrum without doing extra
-    # calculations.
-    same_data = y is x
-    # Make sure we're dealing with a numpy array. If y and x were the same
-    # object to start with, keep them that way
-    x = np.asarray(x)
-    if same_data:
-        y = x
-    else:
-        if x.shape != y.shape:
-            raise ValueError("x and y must be of the same shape.")
-        y = np.asarray(y)
-
-    if x.size == 0:
-        return np.empty(x.shape), np.empty(x.shape)
-
-    if axis != -1:
-        x = np.rollaxis(x, axis, len(x.shape))
-        if not same_data:
-            y = np.rollaxis(y, axis, len(y.shape))
-
-    if x.shape[-1] < nperseg:
-        warnings.warn('nperseg = %d, is greater than x.shape[%d] = %d, using '
-                      'nperseg = x.shape[%d]'
-                      % (nperseg, axis, x.shape[axis], axis))
-        nperseg = x.shape[-1]
-
-    if isinstance(window, string_types) or type(window) is tuple:
-        win = get_window(window, nperseg)
-    else:
-        win = np.asarray(window)
-        if len(win.shape) != 1:
-            raise ValueError('window must be 1-D')
-        if win.shape[0] > x.shape[-1]:
-            raise ValueError('window is longer than x.')
-        nperseg = win.shape[0]
-
-    if scaling == 'density':
-        scale = 1.0 / (fs * (win * win).sum())
-    elif scaling == 'spectrum':
-        scale = 1.0 / win.sum()**2
-    else:
-        raise ValueError('Unknown scaling: %r' % scaling)
-
-    if noverlap is None:
-        noverlap = nperseg // 2
-    elif noverlap >= nperseg:
-        raise ValueError('noverlap must be less than nperseg.')
-
-    if nfft is None:
-        nfft = nperseg
-    elif nfft < nperseg:
-        raise ValueError('nfft must be greater than or equal to nperseg.')
-
-    if not hasattr(detrend, '__call__'):
-        detrend_func = lambda seg: signaltools.detrend(seg, type=detrend)
-    elif axis != -1:
-        # Wrap this function so that it receives a shape that it could
-        # reasonably expect to receive.
-        def detrend_func(seg):
-            seg = np.rollaxis(seg, -1, axis)
-            seg = detrend(seg)
-            return np.rollaxis(seg, axis, len(seg.shape))
-    else:
-        detrend_func = detrend
+        If `frequency_resolution` is too high for the given data size.
 
-    step = nperseg - noverlap
-    indices = np.arange(0, x.shape[-1] - nperseg + 1, step)
+        If `frequency_resolution` is None and `len_segment` is not a positive
+        number.
 
-    for k, ind in enumerate(indices):
-        x_dt = detrend_func(x[..., ind:ind + nperseg])
-        xft = fftpack.fft(x_dt * win, nfft)
-        if same_data:
-            yft = xft
-        else:
-            y_dt = detrend_func(y[..., ind:ind + nperseg])
-            yft = fftpack.fft(y_dt * win, nfft)
-        if k == 0:
-            Pxy = (xft * yft.conj())
-        else:
-            Pxy *= k / (k + 1.0)
-            Pxy += (xft * yft.conj()) / (k + 1.0)
-    Pxy *= scale
-    f = fftpack.fftfreq(nfft, 1.0 / fs)
+        If `frequency_resolution` is None and `len_segment` is greater than the
+        length of data at `axis`.
+
+        If both `frequency_resolution` and `len_segment` are None and
+        `n_segments` is not a positive number.
 
-    if axis != -1:
-        Pxy = np.rollaxis(Pxy, -1, axis)
+        If both `frequency_resolution` and `len_segment` are None and
+        `n_segments` is greater than the length of data at `axis`.
 
-    return f, Pxy
+    Notes
+    -----
+    1. The computation steps used in this function are implemented in the
+       `scipy.signal` module, and this function is a wrapper which provides
+       a proper set of parameters to the `scipy.signal.welch` function.
+    2. The parameters `window`, `nfft`, `detrend`, `return_onesided`,
+       `scaling`, and `axis` are directly passed to the `scipy.signal.welch`
+       function. See the respective descriptions in the docstring of
+       `scipy.signal.welch` for usage.
+    3. When only `n_segments` is given, the parameter `nperseg` of the
+       `scipy.signal.welch` function is determined according to the expression
 
+       `signal.shape[axis] / (n_segments - overlap * (n_segments - 1))`
 
-def welch_psd(signal, num_seg=8, len_seg=None, freq_res=None, overlap=0.5,
-              fs=1.0, window='hanning', nfft=None, detrend='constant',
-              return_onesided=True, scaling='density', axis=-1):
-    """
-    Estimates power spectrum density (PSD) of a given AnalogSignal using
-    Welch's method, which works in the following steps:
-        1. cut the given data into several overlapping segments. The degree of
-            overlap can be specified by parameter *overlap* (default is 0.5,
-            i.e. segments are overlapped by the half of their length).
-            The number and the length of the segments are determined according
-            to parameter *num_seg*, *len_seg* or *freq_res*. By default, the
-            data is cut into 8 segments.
-        2. apply a window function to each segment. Hanning window is used by
-            default. This can be changed by giving a window function or an
-            array as parameter *window* (for details, see the docstring of
-            `scipy.signal.welch()`)
-        3. compute the periodogram of each segment
-        4. average the obtained periodograms to yield PSD estimate
-    These steps are implemented in `scipy.signal`, and this function is a
-    wrapper which provides a proper set of parameters to
-    `scipy.signal.welch()`. Some parameters for scipy.signal.welch(), such as
-    `nfft`, `detrend`, `window`, `return_onesided` and `scaling`, also works
-    for this function.
+       converted to integer.
 
-    Parameters
-    ----------
-    signal: Neo AnalogSignal or Quantity array or Numpy ndarray
-        Time series data, of which PSD is estimated. When a Quantity array or
-        Numpy ndarray is given, sampling frequency should be given through the
-        keyword argument `fs`, otherwise the default value (`fs=1.0`) is used.
-    num_seg: int, optional
-        Number of segments. The length of segments is adjusted so that
-        overlapping segments cover the entire stretch of the given data. This
-        parameter is ignored if *len_seg* or *freq_res* is given. Default is 8.
-    len_seg: int, optional
-        Length of segments. This parameter is ignored if *freq_res* is given.
-        Default is None (determined from other parameters).
-    freq_res: Quantity or float, optional
-        Desired frequency resolution of the obtained PSD estimate in terms of
-        the interval between adjacent frequency bins. When given as a float, it
-        is taken as frequency in Hz. Default is None (determined from other
-        parameters).
-    overlap: float, optional
-        Overlap between segments represented as a float number between 0 (no
-        overlap) and 1 (complete overlap). Default is 0.5 (half-overlapped).
-    fs: Quantity array or float, optional
-        Specifies the sampling frequency of the input time series. When the
-        input is given as an AnalogSignal, the sampling frequency is taken
-        from its attribute and this parameter is ignored. Default is 1.0.
-    window, nfft, detrend, return_onesided, scaling, axis: optional
-        These arguments are directly passed on to scipy.signal.welch(). See the
-        respective descriptions in the docstring of `scipy.signal.welch()` for
-        usage.
+    See Also
+    --------
+    scipy.signal.welch
+    welch_coherence
 
-    Returns
-    -------
-    freqs: Quantity array or Numpy ndarray
-        Frequencies associated with the power estimates in `psd`. `freqs` is
-        always a 1-dimensional array irrespective of the shape of the input
-        data. Quantity array is returned if `signal` is AnalogSignal or
-        Quantity array. Otherwise Numpy ndarray containing frequency in Hz is
-        returned.
-    psd: Quantity array or Numpy ndarray
-        PSD estimates of the time series in `signal`. Quantity array is
-        returned if `data` is AnalogSignal or Quantity array. Otherwise
-        Numpy ndarray is returned.
     """
 
     # initialize a parameter dict (to be given to scipy.signal.welch()) with
@@ -280,31 +196,36 @@ def welch_psd(signal, num_seg=8, len_seg=None, freq_res=None, overlap=0.5,
 
     # determine the length of segments (i.e. *nperseg*) according to given
     # parameters
-    if freq_res is not None:
-        if freq_res <= 0:
-            raise ValueError("freq_res must be positive")
-        dF = freq_res.rescale('Hz').magnitude \
-            if isinstance(freq_res, pq.quantity.Quantity) else freq_res
+    if frequency_resolution is not None:
+        if frequency_resolution <= 0:
+            raise ValueError("frequency_resolution must be positive")
+        if isinstance(frequency_resolution, pq.quantity.Quantity):
+            dF = frequency_resolution.rescale('Hz').magnitude
+        else:
+            dF = frequency_resolution
         nperseg = int(params['fs'] / dF)
         if nperseg > data.shape[axis]:
-            raise ValueError("freq_res is too high for the given data size")
-    elif len_seg is not None:
-        if len_seg <= 0:
+            raise ValueError("frequency_resolution is too high for the given "
+                             "data size")
+    elif len_segment is not None:
+        if len_segment <= 0:
             raise ValueError("len_seg must be a positive number")
-        elif data.shape[axis] < len_seg:
+        elif data.shape[axis] < len_segment:
             raise ValueError("len_seg must be shorter than the data length")
-        nperseg = len_seg
+        nperseg = len_segment
     else:
-        if num_seg <= 0:
-            raise ValueError("num_seg must be a positive number")
-        elif data.shape[axis] < num_seg:
-            raise ValueError("num_seg must be smaller than the data length")
-        # when only *num_seg* is given, *nperseg* is determined by solving the
-        # following equation:
-        #  num_seg * nperseg - (num_seg-1) * overlap * nperseg = data.shape[-1]
-        #  -----------------   ===============================   ^^^^^^^^^^^
-        # summed segment lengths        total overlap            data length
-        nperseg = int(data.shape[axis] / (num_seg - overlap * (num_seg - 1)))
+        if n_segments <= 0:
+            raise ValueError("n_segments must be a positive number")
+        elif data.shape[axis] < n_segments:
+            raise ValueError("n_segments must be smaller than the data length")
+        # when only *n_segments* is given, *nperseg* is determined by solving
+        # the following equation:
+        #  n_segments * nperseg - (n_segments-1) * overlap * nperseg =
+        #     data.shape[-1]
+        #  --------------------   ===============================  ^^^^^^^^^^^
+        # summed segment lengths        total overlap              data length
+        nperseg = int(data.shape[axis] / (n_segments - overlap * (
+            n_segments - 1)))
     params['nperseg'] = nperseg
     params['noverlap'] = int(nperseg * overlap)
 
@@ -312,7 +233,7 @@ def welch_psd(signal, num_seg=8, len_seg=None, freq_res=None, overlap=0.5,
 
     # attach proper units to return values
     if isinstance(signal, pq.quantity.Quantity):
-        if 'scaling' in params and params['scaling'] is 'spectrum':
+        if 'scaling' in params and params['scaling'] == 'spectrum':
             psd = psd * signal.units * signal.units
         else:
             psd = psd * signal.units * signal.units / pq.Hz
@@ -321,94 +242,157 @@ def welch_psd(signal, num_seg=8, len_seg=None, freq_res=None, overlap=0.5,
     return freqs, psd
 
 
-def welch_cohere(x, y, num_seg=8, len_seg=None, freq_res=None, overlap=0.5,
-           fs=1.0, window='hanning', nfft=None, detrend='constant',
-           scaling='density', axis=-1):
-    """
-    Estimates coherence between a given pair of analog signals. The estimation
-    is performed with Welch's method: the given pair of data are cut into short
-    segments, cross-spectra are calculated for each pair of segments, and the
-    cross-spectra are averaged and normalized by respective auto_spectra. By
-    default the data are cut into 8 segments with 50% overlap between
+@deprecated_alias(x='signal_i', y='signal_j', num_seg='n_segments',
+                  len_seg='len_segment', freq_res='frequency_resolution')
+def welch_coherence(signal_i, signal_j, n_segments=8, len_segment=None,
+                    frequency_resolution=None, overlap=0.5, fs=1.0,
+                    window='hanning', nfft=None, detrend='constant',
+                    scaling='density', axis=-1):
+    r"""
+    Estimates coherence between a given pair of analog signals.
+
+    The estimation is performed with Welch's method: the given pair of data
+    are cut into short segments, cross-spectra are calculated for each pair of
+    segments, and the cross-spectra are averaged and normalized by respective
+    auto-spectra.
+
+    By default, the data are cut into 8 segments with 50% overlap between
     neighboring segments. These numbers can be changed through respective
     parameters.
 
     Parameters
     ----------
-    x, y: Neo AnalogSignal or Quantity array or Numpy ndarray
-        A pair of time series data, between which coherence is computed. The
-        shapes and the sampling frequencies of `x` and `y` must be identical.
-        When `x` and `y` are not of AnalogSignal, sampling frequency
-        should be specified through the keyword argument `fs`, otherwise the
-        default value (`fs=1.0`) is used.
-    num_seg: int, optional
+    signal_i : neo.AnalogSignal or pq.Quantity or np.ndarray
+        First time series data of the pair between which coherence is
+        computed.
+    signal_j : neo.AnalogSignal or pq.Quantity or np.ndarray
+        Second time series data of the pair between which coherence is
+        computed.
+        The shapes and the sampling frequencies of `signal_i` and `signal_j`
+        must be identical. When `signal_i` and `signal_j` are not
+        `neo.AnalogSignal`, sampling frequency should be specified through the
+        keyword argument `fs`. Otherwise, the default value is used
+        (`fs` = 1.0).
+    n_segments : int, optional
         Number of segments. The length of segments is adjusted so that
         overlapping segments cover the entire stretch of the given data. This
-        parameter is ignored if *len_seg* or *freq_res* is given. Default is 8.
-    len_seg: int, optional
-        Length of segments. This parameter is ignored if *freq_res* is given.
-        Default is None (determined from other parameters).
-    freq_res: Quantity or float, optional
+        parameter is ignored if `len_segment` or `frequency_resolution` is
+        given.
+        Default: 8.
+    len_segment : int, optional
+        Length of segments. This parameter is ignored if `frequency_resolution`
+        is given. If None, it is determined from other parameters.
+        Default: None.
+    frequency_resolution : pq.Quantity or float, optional
         Desired frequency resolution of the obtained coherence estimate in
         terms of the interval between adjacent frequency bins. When given as a
-        float, it is taken as frequency in Hz. Default is None (determined from
-        other parameters).
-    overlap: float, optional
+        `float`, it is taken as frequency in Hz.
+        If None, it is determined from other parameters.
+        Default: None.
+    overlap : float, optional
         Overlap between segments represented as a float number between 0 (no
-        overlap) and 1 (complete overlap). Default is 0.5 (half-overlapped).
-    fs: Quantity array or float, optional
+        overlap) and 1 (complete overlap).
+        Default: 0.5 (half-overlapped).
+    fs : pq.Quantity or float, optional
         Specifies the sampling frequency of the input time series. When the
-        input time series are given as AnalogSignal, the sampling
+        input time series are given as `neo.AnalogSignal`, the sampling
         frequency is taken from their attribute and this parameter is ignored.
-        Default is 1.0.
-    window, nfft, detrend, scaling, axis: optional
-        These arguments are directly passed on to a helper function
-        `elephant.spectral._welch()`. See the respective descriptions in the
-        docstring of `elephant.spectral._welch()` for usage.
+        Default: 1.0.
+    window : str or tuple or np.ndarray, optional
+        Desired window to use.
+        See Notes [2].
+        Default: 'hanning'.
+    nfft : int, optional
+        Length of the FFT used.
+        See Notes [2].
+        Default: None.
+    detrend : str or function or False, optional
+        Specifies how to detrend each segment.
+        See Notes [2].
+        Default: 'constant'.
+    scaling : {'density', 'spectrum'}, optional
+        If 'density', computes the power spectral density where Pxx has units
+        of V**2/Hz. If 'spectrum', computes the power spectrum where Pxx has
+        units of V**2, if `signal` is measured in V and `fs` is measured in
+        Hz.
+        See Notes [2].
+        Default: 'density'.
+    axis : int, optional
+        Axis along which the periodogram is computed.
+        See Notes [2].
+        Default: last axis (-1).
 
     Returns
     -------
-    freqs: Quantity array or Numpy ndarray
+    freqs : pq.Quantity or np.ndarray
         Frequencies associated with the estimates of coherency and phase lag.
-        `freqs` is always a 1-dimensional array irrespective of the shape of
-        the input data. Quantity array is returned if `x` and `y` are of
-        AnalogSignal or Quantity array. Otherwise Numpy ndarray containing
-        frequency in Hz is returned.
-    coherency: Numpy ndarray
-        Estimate of coherency between the input time series. For each frequency
-        coherency takes a value between 0 and 1, with 0 or 1 representing no or
-        perfect coherence, respectively. When the input arrays `x` and `y` are
-        multi-dimensional, `coherency` is of the same shape as the inputs and
-        frequency is indexed along either the first or the last axis depending
-        on the type of the input: when the input is AnalogSignal, the
-        first axis indexes frequency, otherwise the last axis does.
-    phase_lag: Quantity array or Numpy ndarray
-        Estimate of phase lag in radian between the input time series. For each
-        frequency phase lag takes a value between -PI and PI, positive values
-        meaning phase precession of `x` ahead of `y` and vice versa. Quantity
-        array is returned if `x` and `y` are of AnalogSignal or Quantity
-        array. Otherwise Numpy ndarray containing phase lag in radian is
-        returned. The axis for frequency index is determined in the same way as
-        for `coherency`.
+        `freqs` is always a vector irrespective of the shape of the input
+        data. If `signal_i` and `signal_j` are `neo.AnalogSignal` or
+        `pq.Quantity`, a `pq.Quantity` array is returned. Otherwise, a
+        `np.ndarray` containing frequency in Hz is returned.
+    coherency : np.ndarray
+        Estimate of coherency between the input time series. For each
+        frequency, coherency takes a value between 0 and 1, with 0 or 1
+        representing no or perfect coherence, respectively.
+        When the input arrays `signal_i` and `signal_j` are multi-dimensional,
+        `coherency` is of the same shape as the inputs, and the frequency is
+        indexed depending on the type of the input. If the input is
+        `neo.AnalogSignal`, the first axis indexes frequency. Otherwise,
+        frequency is indexed by the last axis.
+    phase_lag : pq.Quantity or np.ndarray
+        Estimate of phase lag in radian between the input time series. For
+        each frequency, phase lag takes a value between :math:`-\pi` and
+        :math:`\pi`, with positive values meaning phase precession of
+        `signal_i` ahead of `signal_j`, and vice versa. If `signal_i` and
+        `signal_j` are `neo.AnalogSignal` or `pq.Quantity`, a `pq.Quantity`
+        array is returned. Otherwise, a `np.ndarray` containing phase lag in
+        radian is returned. The axis for frequency index is determined in the
+        same way as for `coherency`.
+
+    Raises
+    ------
+    ValueError
+        Same as in :func:`welch_psd`.
+
+    Notes
+    -----
+    1. The computation steps used in this function are implemented in the
+       `scipy.signal` module, and this function is a wrapper which provides
+       a proper set of parameters to the `scipy.signal.welch` and
+       `scipy.signal.csd` functions.
+    2. The parameters `window`, `nfft`, `detrend`, `scaling`, and `axis` are
+       directly passed to the `scipy.signal.welch` and `scipy.signal.csd`
+       functions. See the respective descriptions in their docstrings for
+       usage.
+    3. When only `n_segments` is given, the parameter `nperseg` is determined
+       according to the expression
+
+       `signal.shape[axis] / (n_segments - overlap * (n_segments - 1))`
+
+       converted to integer.
+
+    See Also
+    --------
+    welch_psd
+
     """
 
-    # initialize a parameter dict (to be given to _welch()) with
-    # the parameters directly passed on to _welch()
+    # TODO: code duplication with welch_psd()
+
+    # initialize a parameter dict for scipy.signal.csd()
     params = {'window': window, 'nfft': nfft,
               'detrend': detrend, 'scaling': scaling, 'axis': axis}
 
     # When the input is AnalogSignal, the axis for time index is rolled to
     # the last
-    xdata = np.asarray(x)
-    ydata = np.asarray(y)
-    if isinstance(x, neo.AnalogSignal):
+    xdata = np.asarray(signal_i)
+    ydata = np.asarray(signal_j)
+    if isinstance(signal_i, neo.AnalogSignal):
         xdata = np.rollaxis(xdata, 0, len(xdata.shape))
         ydata = np.rollaxis(ydata, 0, len(ydata.shape))
 
     # if the data is given as AnalogSignal, use its attribute to specify
     # the sampling frequency
-    if hasattr(x, 'sampling_rate'):
-        params['fs'] = x.sampling_rate.rescale('Hz').magnitude
+    if hasattr(signal_i, 'sampling_rate'):
+        params['fs'] = signal_i.sampling_rate.rescale('Hz').magnitude
     else:
         params['fs'] = fs
 
@@ -419,49 +403,58 @@ def welch_cohere(x, y, num_seg=8, len_seg=None, freq_res=None, overlap=0.5,
 
     # determine the length of segments (i.e. *nperseg*) according to given
     # parameters
-    if freq_res is not None:
-        if freq_res <= 0:
-            raise ValueError("freq_res must be positive")
-        dF = freq_res.rescale('Hz').magnitude \
-            if isinstance(freq_res, pq.quantity.Quantity) else freq_res
+    if frequency_resolution is not None:
+        if frequency_resolution <= 0:
+            raise ValueError("frequency_resolution must be positive")
+        if isinstance(frequency_resolution, pq.quantity.Quantity):
+            dF = frequency_resolution.rescale('Hz').magnitude
+        else:
+            dF = frequency_resolution
         nperseg = int(params['fs'] / dF)
         if nperseg > xdata.shape[axis]:
-            raise ValueError("freq_res is too high for the given data size")
-    elif len_seg is not None:
-        if len_seg <= 0:
+            raise ValueError("frequency_resolution is too high for the given"
+                             "data size")
+    elif len_segment is not None:
+        if len_segment <= 0:
             raise ValueError("len_seg must be a positive number")
-        elif xdata.shape[axis] < len_seg:
+        elif xdata.shape[axis] < len_segment:
             raise ValueError("len_seg must be shorter than the data length")
-        nperseg = len_seg
+        nperseg = len_segment
     else:
-        if num_seg <= 0:
-            raise ValueError("num_seg must be a positive number")
-        elif xdata.shape[axis] < num_seg:
-            raise ValueError("num_seg must be smaller than the data length")
-        # when only *num_seg* is given, *nperseg* is determined by solving the
-        # following equation:
-        #  num_seg * nperseg - (num_seg-1) * overlap * nperseg = data.shape[-1]
-        #  -----------------   ===============================   ^^^^^^^^^^^
-        # summed segment lengths        total overlap            data length
-        nperseg = int(xdata.shape[axis] / (num_seg - overlap * (num_seg - 1)))
+        if n_segments <= 0:
+            raise ValueError("n_segments must be a positive number")
+        elif xdata.shape[axis] < n_segments:
+            raise ValueError("n_segments must be smaller than the data length")
+        # when only *n_segments* is given, *nperseg* is determined by solving
+        # the following equation:
+        #  n_segments * nperseg - (n_segments-1) * overlap * nperseg =
+        #      data.shape[-1]
+        #  -------------------    ===============================  ^^^^^^^^^^^
+        # summed segment lengths        total overlap              data length
+        nperseg = int(xdata.shape[axis] / (n_segments - overlap * (
+            n_segments - 1)))
     params['nperseg'] = nperseg
     params['noverlap'] = int(nperseg * overlap)
 
-    freqs, Pxy = _welch(xdata, ydata, **params)
-    freqs, Pxx = _welch(xdata, xdata, **params)
-    freqs, Pyy = _welch(ydata, ydata, **params)
-    coherency = np.abs(Pxy)**2 / (np.abs(Pxx) * np.abs(Pyy))
+    freqs, Pxx = scipy.signal.welch(xdata, **params)
+    _, Pyy = scipy.signal.welch(ydata, **params)
+    _, Pxy = scipy.signal.csd(xdata, ydata, **params)
+
+    coherency = np.abs(Pxy) ** 2 / (Pxx * Pyy)
     phase_lag = np.angle(Pxy)
 
     # attach proper units to return values
-    if isinstance(x, pq.quantity.Quantity):
+    if isinstance(signal_i, pq.quantity.Quantity):
         freqs = freqs * pq.Hz
         phase_lag = phase_lag * pq.rad
 
     # When the input is AnalogSignal, the axis for frequency index is
     # rolled to the first to comply with the Neo convention about time axis
-    if isinstance(x, neo.AnalogSignal):
+    if isinstance(signal_i, neo.AnalogSignal):
         coherency = np.rollaxis(coherency, -1)
         phase_lag = np.rollaxis(phase_lag, -1)
 
     return freqs, coherency, phase_lag
+
+
+def welch_cohere(*args, **kwargs):
+    warnings.warn("'welch_cohere' is deprecated; use 'welch_coherence'",
+                  DeprecationWarning)
+    return welch_coherence(*args, **kwargs)
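
A usage sketch of the renamed `welch_coherence` on two noisy signals sharing
a 20 Hz component (signal values and the random seed are arbitrary):

    >>> import numpy as np
    >>> import quantities as pq
    >>> import neo
    >>> from elephant.spectral import welch_coherence
    >>> t = np.arange(0, 10, 0.001)          # 10 s sampled at 1 kHz
    >>> common = np.sin(2 * np.pi * 20 * t)  # shared 20 Hz component
    >>> rng = np.random.RandomState(0)
    >>> sig_i = neo.AnalogSignal(common + rng.randn(t.size), units='mV',
    ...                          sampling_rate=1 * pq.kHz)
    >>> sig_j = neo.AnalogSignal(common + rng.randn(t.size), units='mV',
    ...                          sampling_rate=1 * pq.kHz)
    >>> freqs, coherency, phase_lag = welch_coherence(sig_i, sig_j)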

File diff suppressed because it is too large
+ 810 - 488
code/elephant/elephant/spike_train_correlation.py


+ 128 - 98
code/elephant/elephant/spike_train_dissimilarity.py

@@ -9,30 +9,34 @@ of spike train dissimilarity measures are the Victor-Purpura distance and the
 Van Rossum distance implemented in this module, which both are metrics in the
 mathematical sense and time-scale dependent.
 
-:copyright: Copyright 2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
-import quantities as pq
+from __future__ import division, print_function, unicode_literals
+
+import warnings
+
 import numpy as np
+import quantities as pq
 import scipy as sp
-import elephant.kernels as kernels
 from neo.core import SpikeTrain
 
-# Problem of conversion from Python 2 to Python 3:
-# 'xrange' in Python 2 is 'range' in Python 3.
-try:
-    xrange
-except NameError:
-    xrange = range
+import elephant.kernels as kernels
+from elephant.utils import deprecated_alias
+
+__all__ = [
+    "victor_purpura_distance",
+    "van_rossum_distance"
+]
 
 
 def _create_matrix_from_indexed_function(
         shape, func, symmetric_2d=False, **func_params):
     mat = np.empty(shape)
     if symmetric_2d:
-        for i in xrange(shape[0]):
-            for j in xrange(i, shape[1]):
+        for i in range(shape[0]):
+            for j in range(i, shape[1]):
                 mat[i, j] = mat[j, i] = func(i, j, **func_params)
     else:
         for idx in np.ndindex(*shape):
@@ -40,8 +44,9 @@ def _create_matrix_from_indexed_function(
     return mat
 
 
-def victor_purpura_dist(
-        trains, q=1.0 * pq.Hz, kernel=None, sort=True, algorithm='fast'):
+@deprecated_alias(trains='spiketrains', q='cost_factor')
+def victor_purpura_distance(spiketrains, cost_factor=1.0 * pq.Hz, kernel=None,
+                            sort=True, algorithm='fast'):
     """
     Calculates the Victor-Purpura's (VP) distance. It is often denoted as
     :math:`D^{\\text{spike}}[q]`.
@@ -64,16 +69,16 @@ def victor_purpura_dist(
 
     Parameters
     ----------
-    trains : Sequence of :class:`neo.core.SpikeTrain` objects of
-        which the distance will be calculated pairwise.
-    q: Quantity scalar
-        Cost factor for spike shifts as inverse time scalar.
+    spiketrains : list of neo.SpikeTrain
+        Spike trains to calculate pairwise distance.
+    cost_factor : pq.Quantity
+        A cost factor :math:`q` for spike shifts as inverse time scalar.
         Extreme values :math:`q=0` meaning no cost for any shift of
         spikes, or :math:`q=np.inf` meaning infinite cost for any
         spike shift and hence exclusion of spike shifts, are explicitly
         allowed. If `kernel` is not `None`, :math:`q` will be ignored.
         Default: 1.0 * pq.Hz
-    kernel: :class:`.kernels.Kernel`
+    kernel : kernels.Kernel
         Kernel to use in the calculation of the distance. If `kernel` is
         `None`, an unnormalized triangular kernel with standard deviation
         of :math:`2.0/(q \\cdot \\sqrt{6.0})` corresponding to a half width of
@@ -97,42 +102,45 @@ def victor_purpura_dist(
 
     Returns
     -------
-        2-D array
-        Matrix containing the VP distance of all pairs of spike trains.
-
-    Example
-    -------
-        import elephant.spike_train_dissimilarity_measures as stdm
-        q   = 1.0 / (10.0 * pq.ms)
-        st_a = SpikeTrain([10, 20, 30], units='ms', t_stop= 1000.0)
-        st_b = SpikeTrain([12, 24, 30], units='ms', t_stop= 1000.0)
-        vp_f = stdm.victor_purpura_dist([st_a, st_b], q)[0, 1]
-        vp_i = stdm.victor_purpura_dist(
-                   [st_a, st_b], q, algorithm='intuitive')[0, 1]
+    np.ndarray
+        2-D Matrix containing the VP distance of all pairs of spike trains.
+
+    Examples
+    --------
+    >>> import quantities as pq
+    >>> from neo import SpikeTrain
+    >>> from elephant.spike_train_dissimilarity import victor_purpura_distance
+    >>> q = 1.0 / (10.0 * pq.ms)
+    >>> st_a = SpikeTrain([10, 20, 30], units='ms', t_stop=1000.0)
+    >>> st_b = SpikeTrain([12, 24, 30], units='ms', t_stop=1000.0)
+    >>> vp_f = victor_purpura_distance([st_a, st_b], q)[0, 1]
+    >>> vp_i = victor_purpura_distance([st_a, st_b], q,
+    ...                                algorithm='intuitive')[0, 1]
     """
-    for train in trains:
+    for train in spiketrains:
         if not (isinstance(train, (pq.quantity.Quantity, SpikeTrain)) and
                 train.dimensionality.simplified ==
                 pq.Quantity(1, "s").dimensionality.simplified):
             raise TypeError("Spike trains must have a time unit.")
 
-    if not (isinstance(q, pq.quantity.Quantity) and
-            q.dimensionality.simplified ==
+    if not (isinstance(cost_factor, pq.quantity.Quantity) and
+            cost_factor.dimensionality.simplified ==
             pq.Quantity(1, "Hz").dimensionality.simplified):
-        raise TypeError("q must be a rate quantity.")
+        raise TypeError("cost_factor must be a rate quantity.")
 
     if kernel is None:
-        if q == 0.0:
-            num_spikes = np.atleast_2d([st.size for st in trains])
+        if cost_factor == 0.0:
+            num_spikes = np.atleast_2d([st.size for st in spiketrains])
             return np.absolute(num_spikes.T - num_spikes)
-        elif q == np.inf:
-            num_spikes = np.atleast_2d([st.size for st in trains])
+        elif cost_factor == np.inf:
+            num_spikes = np.atleast_2d([st.size for st in spiketrains])
             return num_spikes.T + num_spikes
         else:
-            kernel = kernels.TriangularKernel(2.0 / (np.sqrt(6.0) * q))
+            kernel = kernels.TriangularKernel(
+                sigma=2.0 / (np.sqrt(6.0) * cost_factor))
 
     if sort:
-        trains = [np.sort(st.view(type=pq.Quantity)) for st in trains]
+        spiketrains = [np.sort(st.view(type=pq.Quantity))
+                       for st in spiketrains]
 
     def compute(i, j):
         if i == j:
@@ -140,19 +148,25 @@ def victor_purpura_dist(
         else:
             if algorithm == 'fast':
                 return _victor_purpura_dist_for_st_pair_fast(
-                    trains[i], trains[j], kernel)
+                    spiketrains[i], spiketrains[j], kernel)
             elif algorithm == 'intuitive':
                 return _victor_purpura_dist_for_st_pair_intuitive(
-                    trains[i], trains[j], q)
+                    spiketrains[i], spiketrains[j], cost_factor)
             else:
                 raise NameError("algorithm must be either 'fast' "
                                 "or 'intuitive'.")
 
     return _create_matrix_from_indexed_function(
-        (len(trains), len(trains)), compute, kernel.is_symmetric())
+        (len(spiketrains), len(spiketrains)), compute, kernel.is_symmetric())
+
+
+def victor_purpura_dist(*args, **kwargs):
+    warnings.warn("'victor_purpura_dist' funcion is deprecated; "
+                  "use 'victor_purpura_distance'", DeprecationWarning)
+    return victor_purpura_distance(*args, **kwargs)
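
A quick check of the limiting cases documented above (assuming `neo` and
`quantities` are importable): with `cost_factor = 0` the distance reduces to
the spike-count difference, with `cost_factor = inf` to the spike-count sum:

    >>> import numpy as np
    >>> import quantities as pq
    >>> from neo import SpikeTrain
    >>> from elephant.spike_train_dissimilarity import victor_purpura_distance
    >>> st_a = SpikeTrain([10, 20, 30], units='ms', t_stop=1000.0)
    >>> st_b = SpikeTrain([15, 25], units='ms', t_stop=1000.0)
    >>> int(victor_purpura_distance([st_a, st_b], 0 * pq.Hz)[0, 1])
    1
    >>> int(victor_purpura_distance([st_a, st_b], np.inf * pq.Hz)[0, 1])
    5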
 
 
-def _victor_purpura_dist_for_st_pair_fast(train_a, train_b, kernel):
+def _victor_purpura_dist_for_st_pair_fast(spiketrain_a, spiketrain_b, kernel):
     """
     The algorithm used is based on the one given in
 
@@ -190,37 +204,37 @@ def _victor_purpura_dist_for_st_pair_fast(train_a, train_b, kernel):
 
     Parameters
     ----------
-    train_a, train_b : :class:`neo.core.SpikeTrain` objects of
+    spiketrain_a, spiketrain_b : :class:`neo.core.SpikeTrain` objects of
         which the Victor-Purpura distance will be calculated pairwise.
     kernel: :class:`.kernels.Kernel`
         Kernel to use in the calculation of the distance.
 
     Returns
     -------
-        float
+    float
-        The Victor-Purpura distance of train_a and train_b
+        The Victor-Purpura distance of spiketrain_a and spiketrain_b.
     """
 
-    if train_a.size <= 0 or train_b.size <= 0:
-        return max(train_a.size, train_b.size)
+    if spiketrain_a.size <= 0 or spiketrain_b.size <= 0:
+        return max(spiketrain_a.size, spiketrain_b.size)
 
-    if train_a.size < train_b.size:
-        train_a, train_b = train_b, train_a
+    if spiketrain_a.size < spiketrain_b.size:
+        spiketrain_a, spiketrain_b = spiketrain_b, spiketrain_a
 
-    min_dim, max_dim = train_b.size, train_a.size + 1
+    min_dim, max_dim = spiketrain_b.size, spiketrain_a.size + 1
     cost = np.asfortranarray(np.tile(np.arange(float(max_dim)), (2, 1)))
     decreasing_sequence = np.asfortranarray(cost[:, ::-1])
-    kern = kernel((np.atleast_2d(train_a).T.view(type=pq.Quantity) -
-                   train_b.view(type=pq.Quantity)))
+    kern = kernel((np.atleast_2d(spiketrain_a).T.view(type=pq.Quantity) -
+                   spiketrain_b.view(type=pq.Quantity)))
     as_fortran = np.asfortranarray(
         ((np.sqrt(6.0) * kernel.sigma) * kern).simplified)
     k = 1 - 2 * as_fortran
 
-    for i in xrange(min_dim):
+    for i in range(min_dim):
         # determine G[i, i] == accumulated_min[:, 0]
         accumulated_min = cost[:, :-i - 1] + k[i:, i]
-        accumulated_min[1, :train_b.size - i] = \
-            cost[1, :train_b.size - i] + k[i, i:]
+        accumulated_min[1, :spiketrain_b.size - i] = \
+            cost[1, :spiketrain_b.size - i] + k[i, i:]
         accumulated_min = np.minimum(
             accumulated_min,  # shift
             cost[:, 1:max_dim - i])  # insert
@@ -234,8 +248,8 @@ def _victor_purpura_dist_for_st_pair_fast(train_a, train_b, kernel):
     return cost[0, -min_dim - 1]
 
 
-def _victor_purpura_dist_for_st_pair_intuitive(
-                                             train_a, train_b, q=1.0 * pq.Hz):
+def _victor_purpura_dist_for_st_pair_intuitive(spiketrain_a, spiketrain_b,
+                                               cost_factor=1.0 * pq.Hz):
     """
     Function to calculate the Victor-Purpura distance between two spike trains
     described in *J. D. Victor and K. P. Purpura, Nature and precision of
@@ -258,40 +272,43 @@ def _victor_purpura_dist_for_st_pair_intuitive(
 
     Parameters
     ----------
-    train_a, train_b : :class:`neo.core.SpikeTrain` objects of
+    spiketrain_a, spiketrain_b : :class:`neo.core.SpikeTrain` objects of
         which the Victor-Purpura distance will be calculated pairwise.
-    q : Quantity scalar of rate dimension
+    cost_factor : Quantity scalar of rate dimension
         The cost parameter.
         Default: 1.0 * pq.Hz
 
     Returns
     -------
-        float
+    float
-        The Victor-Purpura distance of train_a and train_b
+        The Victor-Purpura distance of spiketrain_a and spiketrain_b.
     """
-    nspk_a = len(train_a)
-    nspk_b = len(train_b)
+    nspk_a = len(spiketrain_a)
+    nspk_b = len(spiketrain_b)
     scr = np.zeros((nspk_a+1, nspk_b+1))
-    scr[:, 0] = xrange(0, nspk_a+1)
-    scr[0, :] = xrange(0, nspk_b+1)
+    scr[:, 0] = range(0, nspk_a+1)
+    scr[0, :] = range(0, nspk_b+1)
 
     if nspk_a > 0 and nspk_b > 0:
-        for i in xrange(1, nspk_a+1):
-            for j in xrange(1, nspk_b+1):
+        for i in range(1, nspk_a+1):
+            for j in range(1, nspk_b+1):
                 scr[i, j] = min(scr[i-1, j]+1, scr[i, j-1]+1)
-                scr[i, j] = min(scr[i, j], scr[i-1, j-1] + np.float64((
-                               q*abs(train_a[i-1]-train_b[j-1])).simplified))
+                scr[i, j] = min(scr[i, j], scr[i-1, j-1] +
+                                np.float64((
+                                    cost_factor * abs(
+                                        spiketrain_a[i - 1] -
+                                        spiketrain_b[j - 1])).simplified))
     return scr[nspk_a, nspk_b]
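
The double loop above is the classic edit-distance recurrence: deleting or
inserting a spike costs 1, shifting a spike costs
`cost_factor * |t_a - t_b|`. A plain-float sketch of the same recurrence
(times in seconds, `q` in Hz; written here for illustration only):

    >>> def vp_intuitive_sketch(ta, tb, q):
    ...     scr = [[0.0] * (len(tb) + 1) for _ in range(len(ta) + 1)]
    ...     for i in range(len(ta) + 1):
    ...         scr[i][0] = float(i)
    ...     for j in range(len(tb) + 1):
    ...         scr[0][j] = float(j)
    ...     for i in range(1, len(ta) + 1):
    ...         for j in range(1, len(tb) + 1):
    ...             scr[i][j] = min(scr[i - 1][j] + 1,  # delete a spike
    ...                             scr[i][j - 1] + 1,  # insert a spike
    ...                             scr[i - 1][j - 1]   # shift a spike
    ...                             + q * abs(ta[i - 1] - tb[j - 1]))
    ...     return scr[-1][-1]
    >>> round(vp_intuitive_sketch([0.010, 0.020, 0.030],
    ...                           [0.012, 0.024, 0.030], 100.0), 3)
    0.6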
 
 
-def van_rossum_dist(trains, tau=1.0 * pq.s, sort=True):
+@deprecated_alias(trains='spiketrains', tau='time_constant')
+def van_rossum_distance(spiketrains, time_constant=1.0 * pq.s, sort=True):
     """
     Calculates the van Rossum distance.
 
     It is defined as Euclidean distance of the spike trains convolved with a
     causal decaying exponential smoothing filter. A detailed description can
-    be found in *Rossum, M. C. W. (2001). A novel spike distance. Neural
-    Computation, 13(4), 751-763.* This implementation is normalized to yield
+    be found in [1]_. This implementation is normalized to yield
     a distance of 1.0 for the distance between an empty spike train and a
     spike train with a single spike. Divide the result by sqrt(2.0) to get
     the normalization used in the cited paper.
@@ -301,13 +318,14 @@ def van_rossum_dist(trains, tau=1.0 * pq.s, sort=True):
 
     Parameters
     ----------
-    trains : Sequence of :class:`neo.core.SpikeTrain` objects of
+    spiketrains : Sequence of :class:`neo.core.SpikeTrain` objects of
         which the van Rossum distance will be calculated pairwise.
-    tau : Quantity scalar
+    time_constant : Quantity scalar
         Decay rate of the exponential function as time scalar. Controls for
-        which time scale the metric will be sensitive. This parameter will
-        be ignored if `kernel` is not `None`. May also be :const:`scipy.inf`
-        which will lead to only measuring differences in spike count.
+        which time scale the metric will be sensitive. Denoted as :math:`t_c`
+        in [1]_. May also be :const:`scipy.inf`, which will lead to only
+        measuring differences in spike count.
         Default: 1.0 * pq.s
     sort : bool
         Spike trains with sorted spike times might be needed for the
@@ -317,38 +335,44 @@ def van_rossum_dist(trains, tau=1.0 * pq.s, sort=True):
 
     Returns
     -------
-        2-D array
-        Matrix containing the van Rossum distances for all pairs of
+    np.ndarray
+        2-D Matrix containing the van Rossum distances for all pairs of
         spike trains.
 
-    Example
-    -------
-        import elephant.spike_train_dissimilarity_measures as stdm
-        tau = 10.0 * pq.ms
-        st_a = SpikeTrain([10, 20, 30], units='ms', t_stop= 1000.0)
-        st_b = SpikeTrain([12, 24, 30], units='ms', t_stop= 1000.0)
-        vr   = stdm.van_rossum_dist([st_a, st_b], tau)[0, 1]
+    References
+    ----------
+    .. [1] van Rossum, M. C. W. (2001). A novel spike distance. Neural
+        Computation, 13(4), 751-763.
+
+    Examples
+    --------
+    >>> import quantities as pq
+    >>> from neo import SpikeTrain
+    >>> from elephant.spike_train_dissimilarity import van_rossum_distance
+    >>> tau = 10.0 * pq.ms
+    >>> st_a = SpikeTrain([10, 20, 30], units='ms', t_stop=1000.0)
+    >>> st_b = SpikeTrain([12, 24, 30], units='ms', t_stop=1000.0)
+    >>> vr = van_rossum_distance([st_a, st_b], tau)[0, 1]
     """
-    for train in trains:
+    for train in spiketrains:
         if not (isinstance(train, (pq.quantity.Quantity, SpikeTrain)) and
                 train.dimensionality.simplified ==
                 pq.Quantity(1, "s").dimensionality.simplified):
             raise TypeError("Spike trains must have a time unit.")
 
-    if not (isinstance(tau, pq.quantity.Quantity) and
-            tau.dimensionality.simplified ==
+    if not (isinstance(time_constant, pq.quantity.Quantity) and
+            time_constant.dimensionality.simplified ==
             pq.Quantity(1, "s").dimensionality.simplified):
         raise TypeError("tau must be a time quantity.")
 
-    if tau == 0:
-        spike_counts = [st.size for st in trains]
+    if time_constant == 0:
+        spike_counts = [st.size for st in spiketrains]
         return np.sqrt(spike_counts + np.atleast_2d(spike_counts).T)
-    elif tau == np.inf:
-        spike_counts = [st.size for st in trains]
+    elif time_constant == np.inf:
+        spike_counts = [st.size for st in spiketrains]
         return np.absolute(spike_counts - np.atleast_2d(spike_counts).T)
 
     k_dist = _summed_dist_matrix(
-        [st.view(type=pq.Quantity) for st in trains], tau, not sort)
+        [st.view(type=pq.Quantity)
+         for st in spiketrains], time_constant, not sort)
     vr_dist = np.empty_like(k_dist)
     for i, j in np.ndindex(k_dist.shape):
         vr_dist[i, j] = (
@@ -356,6 +380,12 @@ def van_rossum_dist(trains, tau=1.0 * pq.s, sort=True):
     return sp.sqrt(vr_dist)
 
 
+def van_rossum_dist(*args, **kwargs):
+    warnings.warn("'van_rossum_dist' function is deprecated; "
+                  "use 'van_rossum_distance'", DeprecationWarning)
+    return van_rossum_distance(*args, **kwargs)
+
+
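A quick sanity sketch of the two limiting cases handled above (a sketch, assuming the module-level imports np and pq as used in this file):

>>> import numpy as np
>>> import quantities as pq
>>> from neo import SpikeTrain
>>> from elephant.spike_train_dissimilarity import van_rossum_distance
>>> st_a = SpikeTrain([10, 20, 30], units='ms', t_stop=1000.0)
>>> st_b = SpikeTrain([12, 24], units='ms', t_stop=1000.0)
>>> d0 = van_rossum_distance([st_a, st_b], time_constant=0 * pq.s)[0, 1]
>>> # d0 == sqrt(3 + 2): with no smoothing, every spike contributes
>>> # independently
>>> dinf = van_rossum_distance([st_a, st_b],
...                            time_constant=np.inf * pq.s)[0, 1]
>>> # dinf == |3 - 2|: only the spike-count difference remains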
 def _summed_dist_matrix(spiketrains, tau, presorted=False):
     # The algorithm underlying this implementation is described in
     # Houghton, C., & Kreuz, T. (2012). On the efficient calculation of van
@@ -384,9 +414,9 @@ def _summed_dist_matrix(spiketrains, tau, presorted=False):
 
     exp_diffs = np.exp(values[:, :-1] - values[:, 1:])
     markage = np.zeros(values.shape)
-    for u in xrange(len(spiketrains)):
+    for u in range(len(spiketrains)):
         markage[u, 0] = 0
-        for i in xrange(sizes[u] - 1):
+        for i in range(sizes[u] - 1):
             markage[u, i + 1] = (markage[u, i] + 1.0) * exp_diffs[u, i]
 
     # Same spiketrain terms
@@ -394,9 +424,9 @@ def _summed_dist_matrix(spiketrains, tau, presorted=False):
     D[np.diag_indices_from(D)] = sizes + 2.0 * np.sum(markage, axis=1)
 
     # Cross spiketrain terms
-    for u in xrange(D.shape[0]):
+    for u in range(D.shape[0]):
         all_ks = np.searchsorted(values[u], values, 'left') - 1
-        for v in xrange(u):
+        for v in range(u):
             js = np.searchsorted(values[v], values[u], 'right') - 1
             ks = all_ks[v]
             slice_j = np.s_[np.searchsorted(js, 0):sizes[u]]

File diff suppressed because it is too large
+ 814 - 502
code/elephant/elephant/spike_train_generation.py


File diff suppressed because it is too large
+ 1129 - 259
code/elephant/elephant/spike_train_surrogates.py


+ 22 - 18
code/elephant/elephant/sta.py

@@ -3,11 +3,12 @@
 Functions to calculate spike-triggered average and spike-field coherence of
 analog signals.
 
-:copyright: Copyright 2015-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2015-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 '''
 
-from __future__ import division
+from __future__ import division, print_function, unicode_literals
+
 import numpy as np
 import scipy.signal
 import quantities as pq
@@ -15,6 +16,11 @@ from neo.core import AnalogSignal, SpikeTrain
 import warnings
 from .conversion import BinnedSpikeTrain
 
+__all__ = [
+    "spike_triggered_average",
+    "spike_field_coherence"
+]
+
 
 def spike_triggered_average(signal, spiketrains, window):
     """
@@ -33,8 +39,8 @@ def spike_triggered_average(signal, spiketrains, window):
     window : tuple of 2 Quantity objects with dimensions of time.
         'window' is the start time and the stop time, relative to a spike, of
         the time interval for signal averaging.
-        If the window size is not a multiple of the sampling interval of the 
-        signal the window will be extended to the next multiple. 
+        If the window size is not a multiple of the sampling interval of the
+        signal the window will be extended to the next multiple.
 
     Returns
     -------
@@ -47,7 +53,7 @@ def spike_triggered_average(signal, spiketrains, window):
         no spike was either given or all given spikes had to be ignored
         because of a too large averaging interval, the corresponding returned
         analog signal has all entries as nan. The number of used spikes and
-        unused spikes for each analog signal are returned as annotations to 
+        unused spikes for each analog signal are returned as annotations to
         the returned AnalogSignal object.
 
     Examples
@@ -201,12 +207,10 @@ def spike_field_coherence(signal, spiketrain, **kwargs):
     signal : neo AnalogSignal object
         'signal' contains n analog signals.
     spiketrain : SpikeTrain or BinnedSpikeTrain
-        Single spike train to perform the analysis on. The binsize of the
+        Single spike train to perform the analysis on. The bin_size of the
         binned spike train must match the sampling_rate of signal.
-
-    KWArgs
-    ------
-    All KWArgs are passed to scipy.signal.coherence().
+    **kwargs:
+        All kwargs are passed to `scipy.signal.coherence()`.
 
     Returns
     -------
@@ -218,8 +222,8 @@ def spike_field_coherence(signal, spiketrain, **kwargs):
         contains the frequency values corresponding to the first dimension of
         the 'coherence' array
 
-    Example
-    -------
+    Examples
+    --------
 
     Plot the SFC between a regular spike train at 20 Hz, and two sinusoidal
     time series at 20 Hz and 23 Hz, respectively.
@@ -278,7 +282,7 @@ def spike_field_coherence(signal, spiketrain, **kwargs):
     # bin spiketrain if necessary
     if isinstance(spiketrain, SpikeTrain):
         spiketrain = BinnedSpikeTrain(
-            spiketrain, binsize=signal.sampling_period)
+            spiketrain, bin_size=signal.sampling_period)
 
     # check the start and stop times of signal and spike trains
     if spiketrain.t_start < signal.t_start:
@@ -289,18 +293,18 @@ def spike_field_coherence(signal, spiketrain, **kwargs):
             "The spiketrain stops later than the analog signal.")
 
     # check equal time resolution for both signals
-    if spiketrain.binsize != signal.sampling_period:
+    if spiketrain.bin_size != signal.sampling_period:
         raise ValueError(
             "The spiketrain and signal must have a "
-            "common sampling frequency / binsize")
+            "common sampling frequency / bin_size")
 
     # calculate how many bins to add on the left of the binned spike train
     delta_t = spiketrain.t_start - signal.t_start
-    if delta_t % spiketrain.binsize == 0:
-        left_edge = int((delta_t / spiketrain.binsize).magnitude)
+    if delta_t % spiketrain.bin_size == 0:
+        left_edge = int((delta_t / spiketrain.bin_size).magnitude)
     else:
         raise ValueError("Incompatible binning of spike train and LFP")
-    right_edge = int(left_edge + spiketrain.num_bins)
+    right_edge = int(left_edge + spiketrain.n_bins)
 
     # duplicate spike trains
     spiketrain_array = np.zeros((1, len_signals))
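A sketch of the binning contract enforced above: a plain SpikeTrain is binned at the signal's sampling_period, so pre-binned input must already match that resolution (toy data; extra kwargs such as nperseg are forwarded to scipy.signal.coherence):

>>> import numpy as np
>>> import quantities as pq
>>> import neo
>>> from elephant.sta import spike_field_coherence
>>> from elephant.conversion import BinnedSpikeTrain
>>> signal = neo.AnalogSignal(np.random.randn(1000, 1), units='mV',
...                           sampling_rate=1 * pq.kHz)
>>> st = neo.SpikeTrain(np.arange(50, 1000, 50) * pq.ms, t_stop=1 * pq.s)
>>> binned = BinnedSpikeTrain(st, bin_size=signal.sampling_period)
>>> coh, freqs = spike_field_coherence(signal, binned, nperseg=256)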

File diff suppressed because it is too large
+ 771 - 771
code/elephant/elephant/statistics.py


+ 57 - 56
code/elephant/elephant/test/make_spike_extraction_test_data.py

@@ -1,64 +1,65 @@
-def main(): # pragma: no cover
-  from brian2 import start_scope,mvolt,ms,NeuronGroup,StateMonitor,run
-  import matplotlib.pyplot as plt
-  import neo
-  import quantities as pq
+def main():  # pragma: no cover
+    from brian2 import start_scope, mvolt, ms, NeuronGroup, StateMonitor, run
+    import matplotlib.pyplot as plt
+    import neo
+    import quantities as pq
 
-  start_scope()
-  
-  # Izhikevich neuron parameters.  
-  a = 0.02/ms
-  b = 0.2/ms
-  c = -65*mvolt
-  d = 6*mvolt/ms
-  I = 4*mvolt/ms
-  
-  # Standard Izhikevich neuron equations.  
-  eqs = '''
+    start_scope()
+
+    # Izhikevich neuron parameters.
+    a = 0.02 / ms
+    b = 0.2 / ms
+    c = -65 * mvolt
+    d = 6 * mvolt / ms
+    I = 4 * mvolt / ms
+
+    # Standard Izhikevich neuron equations.
+    eqs = '''
   dv/dt = 0.04*v**2/(ms*mvolt) + (5/ms)*v + 140*mvolt/ms - u + I : volt
   du/dt = a*((b*v) - u) : volt/second
   '''
-  
-  reset = '''
+
+    reset = '''
   v = c
   u += d
   '''
-  
-  # Setup and run simulation.  
-  G = NeuronGroup(1, eqs, threshold='v>30*mvolt', reset='v = -70*mvolt')
-  G.v = -65*mvolt
-  G.u = b*G.v
-  M = StateMonitor(G, 'v', record=True)
-  run(300*ms)
-  
-  # Store results in neo format.  
-  vm = neo.core.AnalogSignal(M.v[0], units=pq.V, sampling_period=0.1*pq.ms)
-  
-  # Plot results.  
-  plt.figure()
-  plt.plot(vm.times*1000,vm*1000) # Plot mV and ms instead of V and s.  
-  plt.xlabel('Time (ms)')
-  plt.ylabel('mv')
-  
-  # Save results.  
-  iom = neo.io.PyNNNumpyIO('spike_extraction_test_data')
-  block = neo.core.Block()
-  segment = neo.core.Segment()
-  segment.analogsignals.append(vm)
-  block.segments.append(segment)
-  iom.write(block)
-  
-  # Load results.  
-  iom2 = neo.io.PyNNNumpyIO('spike_extraction_test_data.npz')
-  data = iom2.read()
-  vm = data[0].segments[0].analogsignals[0]
-  
-  # Plot results. 
-  # The two figures should match.   
-  plt.figure()
-  plt.plot(vm.times*1000,vm*1000) # Plot mV and ms instead of V and s.  
-  plt.xlabel('Time (ms)')
-  plt.ylabel('mv')
-  
+
+    # Setup and run simulation.
+    G = NeuronGroup(1, eqs, threshold='v>30*mvolt', reset='v = -70*mvolt')
+    G.v = -65 * mvolt
+    G.u = b * G.v
+    M = StateMonitor(G, 'v', record=True)
+    run(300 * ms)
+
+    # Store results in neo format.
+    vm = neo.core.AnalogSignal(M.v[0], units=pq.V, sampling_period=0.1 * pq.ms)
+
+    # Plot results.
+    plt.figure()
+    plt.plot(vm.times * 1000, vm * 1000)  # Plot mV and ms instead of V and s.
+    plt.xlabel('Time (ms)')
+    plt.ylabel('mV')
+
+    # Save results.
+    iom = neo.io.PyNNNumpyIO('spike_extraction_test_data')
+    block = neo.core.Block()
+    segment = neo.core.Segment()
+    segment.analogsignals.append(vm)
+    block.segments.append(segment)
+    iom.write(block)
+
+    # Load results.
+    iom2 = neo.io.PyNNNumpyIO('spike_extraction_test_data.npz')
+    data = iom2.read()
+    vm = data[0].segments[0].analogsignals[0]
+
+    # Plot results.
+    # The two figures should match.
+    plt.figure()
+    plt.plot(vm.times * 1000, vm * 1000)  # Plot mV and ms instead of V and s.
+    plt.xlabel('Time (ms)')
+    plt.ylabel('mV')
+
+
 if __name__ == '__main__':
-  main()
+    main()

+ 408 - 110
code/elephant/elephant/test/test_asset.py

@@ -2,15 +2,22 @@
 """
 Unit tests for the ASSET analysis.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
+import random
 import unittest
+import itertools
+
+import neo
 import numpy as np
-import scipy.spatial
 import quantities as pq
-import neo
+import scipy.spatial
+from numpy.testing import assert_array_almost_equal, assert_array_equal
+
+from elephant import statistics, kernels
+from elephant.spike_train_generation import homogeneous_poisson_process
 
 try:
     import sklearn
@@ -18,9 +25,9 @@ except ImportError:
     HAVE_SKLEARN = False
 else:
     import elephant.asset as asset
+
     HAVE_SKLEARN = True
     stretchedmetric2d = asset._stretched_metric_2d
-    cluster = asset.cluster_matrix_entries
 
 
 @unittest.skipUnless(HAVE_SKLEARN, 'requires sklearn')
@@ -46,7 +53,7 @@ class AssetTestCase(unittest.TestCase):
         y = (1, 2, 0)
         stretch = 10
         D = stretchedmetric2d(x, y, stretch=stretch, ref_angle=45)
-        np.testing.assert_array_almost_equal(D, D.T, decimal=12)
+        assert_array_almost_equal(D, D.T, decimal=12)
 
     def test_stretched_metric_2d_equals_euclidean_if_stretch_1(self):
         x = np.arange(10)
@@ -58,32 +65,7 @@ class AssetTestCase(unittest.TestCase):
         points = np.vstack([x, y]).T
         E = scipy.spatial.distance_matrix(points, points)
         # assert D == E
-        np.testing.assert_array_almost_equal(D, E, decimal=12)
-
-    def test_cluster_correct(self):
-        mat = np.zeros((6, 6))
-        mat[[2, 4, 5], [0, 0, 1]] = 1
-        mat_clustered = cluster(mat, eps=4, min=2, stretch=6)
-
-        mat_correct = np.zeros((6, 6))
-        mat_correct[[4, 5], [0, 1]] = 1
-        mat_correct[2, 0] = -1
-        np.testing.assert_array_equal(mat_clustered, mat_correct)
-
-    def test_cluster_symmetric(self):
-        x = [0, 1, 2, 5, 6, 7]
-        y = [3, 4, 5, 1, 2, 3]
-        mat = np.zeros((10, 10))
-        mat[x, y] = 1
-        mat = mat + mat.T
-        # compute stretched distance matrix
-        mat_clustered = cluster(mat, eps=4, min=2, stretch=6)
-        mat_equals_m1 = (mat_clustered == -1)
-        mat_equals_0 = (mat_clustered == 0)
-        mat_larger_0 = (mat_clustered > 0)
-        np.testing.assert_array_equal(mat_equals_m1, mat_equals_m1.T)
-        np.testing.assert_array_equal(mat_equals_0, mat_equals_0.T)
-        np.testing.assert_array_equal(mat_larger_0, mat_larger_0.T)
+        assert_array_almost_equal(D, E, decimal=12)
 
     def test_sse_difference(self):
         a = {(1, 2): set([1, 2, 3]), (3, 4): set([5, 6]), (6, 7): set([0, 1])}
@@ -93,13 +75,17 @@ class AssetTestCase(unittest.TestCase):
         diff_ab_linkwise = {(1, 2): set([3]), (3, 4): set([5, 6])}
         diff_ba_linkwise = {(1, 2): set([5]), (5, 6): set([0, 2])}
         self.assertEqual(
-            asset.sse_difference(a, b, 'pixelwise'), diff_ab_pixelwise)
+            asset.synchronous_events_difference(a, b, 'pixelwise'),
+            diff_ab_pixelwise)
         self.assertEqual(
-            asset.sse_difference(b, a, 'pixelwise'), diff_ba_pixelwise)
+            asset.synchronous_events_difference(b, a, 'pixelwise'),
+            diff_ba_pixelwise)
         self.assertEqual(
-            asset.sse_difference(a, b, 'linkwise'), diff_ab_linkwise)
+            asset.synchronous_events_difference(a, b, 'linkwise'),
+            diff_ab_linkwise)
         self.assertEqual(
-            asset.sse_difference(b, a, 'linkwise'), diff_ba_linkwise)
+            asset.synchronous_events_difference(b, a, 'linkwise'),
+            diff_ba_linkwise)
 
     def test_sse_intersection(self):
         a = {(1, 2): set([1, 2, 3]), (3, 4): set([5, 6]), (6, 7): set([0, 1])}
@@ -109,109 +95,421 @@ class AssetTestCase(unittest.TestCase):
         inters_ab_linkwise = {(1, 2): set([1, 2]), (6, 7): set([0, 1])}
         inters_ba_linkwise = {(1, 2): set([1, 2]), (6, 7): set([0, 1])}
         self.assertEqual(
-            asset.sse_intersection(a, b, 'pixelwise'), inters_ab_pixelwise)
+            asset.synchronous_events_intersection(a, b, 'pixelwise'),
+            inters_ab_pixelwise)
         self.assertEqual(
-            asset.sse_intersection(b, a, 'pixelwise'), inters_ba_pixelwise)
+            asset.synchronous_events_intersection(b, a, 'pixelwise'),
+            inters_ba_pixelwise)
         self.assertEqual(
-            asset.sse_intersection(a, b, 'linkwise'), inters_ab_linkwise)
+            asset.synchronous_events_intersection(a, b, 'linkwise'),
+            inters_ab_linkwise)
         self.assertEqual(
-            asset.sse_intersection(b, a, 'linkwise'), inters_ba_linkwise)
+            asset.synchronous_events_intersection(b, a, 'linkwise'),
+            inters_ba_linkwise)
 
     def test_sse_relations(self):
         a = {(1, 2): set([1, 2, 3]), (3, 4): set([5, 6]), (6, 7): set([0, 1])}
         b = {(1, 2): set([1, 2, 5]), (5, 6): set([0, 2]), (6, 7): set([0, 1])}
         c = {(5, 6): set([0, 2])}
         d = {(3, 4): set([0, 1]), (5, 6): set([0, 1, 2])}
-        self.assertTrue(asset.sse_isequal({}, {}))
-        self.assertTrue(asset.sse_isequal(a, a))
-        self.assertFalse(asset.sse_isequal(b, c))
-        self.assertTrue(asset.sse_isdisjoint(a, c))
-        self.assertTrue(asset.sse_isdisjoint(a, d))
-        self.assertFalse(asset.sse_isdisjoint(a, b))
-        self.assertTrue(asset.sse_issub(c, b))
-        self.assertTrue(asset.sse_issub(c, d))
-        self.assertFalse(asset.sse_issub(a, b))
-        self.assertTrue(asset.sse_issuper(b, c))
-        self.assertTrue(asset.sse_issuper(d, c))
-        self.assertFalse(asset.sse_issuper(a, b))
-        self.assertTrue(asset.sse_overlap(a, b))
-        self.assertFalse(asset.sse_overlap(c, d))
+        self.assertTrue(asset.synchronous_events_identical({}, {}))
+        self.assertTrue(asset.synchronous_events_identical(a, a))
+        self.assertFalse(asset.synchronous_events_identical(b, c))
+        self.assertTrue(asset.synchronous_events_no_overlap(a, c))
+        self.assertTrue(asset.synchronous_events_no_overlap(a, d))
+        self.assertFalse(asset.synchronous_events_no_overlap(a, b))
+        self.assertFalse(asset.synchronous_events_no_overlap({}, {}))
+        self.assertTrue(asset.synchronous_events_contained_in(c, b))
+        self.assertTrue(asset.synchronous_events_contained_in(c, d))
+        self.assertFalse(asset.synchronous_events_contained_in(a, d))
+        self.assertFalse(asset.synchronous_events_contained_in(a, b))
+        self.assertTrue(asset.synchronous_events_contains_all(b, c))
+        self.assertTrue(asset.synchronous_events_contains_all(d, c))
+        self.assertFalse(asset.synchronous_events_contains_all(a, b))
+        self.assertTrue(asset.synchronous_events_overlap(a, b))
+        self.assertFalse(asset.synchronous_events_overlap(c, d))
 
     def test_mask_matrix(self):
         mat1 = np.array([[0, 1], [1, 2]])
         mat2 = np.array([[2, 1], [1, 3]])
-        mask_1_2 = asset.mask_matrices([mat1, mat2], [1, 2])
+
+        mask_1_2 = asset.ASSET.mask_matrices([mat1, mat2], [1, 2])
         mask_1_2_correct = np.array([[False, False], [False, True]])
         self.assertTrue(np.all(mask_1_2 == mask_1_2_correct))
         self.assertIsInstance(mask_1_2[0, 0], np.bool_)
 
+        self.assertRaises(ValueError, asset.ASSET.mask_matrices, [], [])
+        self.assertRaises(ValueError, asset.ASSET.mask_matrices,
+                          [np.arange(5)], [])
+
     def test_cluster_matrix_entries(self):
-        mat = np.array([[False, False, True, False],
-                        [False, True, False, False],
-                        [True, False, False, True],
-                        [False, False, True, False]])
-        clustered1 = asset.cluster_matrix_entries(
-            mat, eps=1.5, min=2, stretch=1)
-        clustered2 = asset.cluster_matrix_entries(
-            mat, eps=1.5, min=3, stretch=1)
-        clustered1_correctA = np.array([[0, 0, 1, 0],
-                                       [0, 1, 0, 0],
-                                       [1, 0, 0, 2],
-                                       [0, 0, 2, 0]])
-        clustered1_correctB = np.array([[0, 0, 2, 0],
-                                       [0, 2, 0, 0],
-                                       [2, 0, 0, 1],
-                                       [0, 0, 1, 0]])
-        clustered2_correct = np.array([[0, 0, 1, 0],
-                                       [0, 1, 0, 0],
-                                       [1, 0, 0, -1],
-                                       [0, 0, -1, 0]])
-        self.assertTrue(np.all(clustered1 == clustered1_correctA) or
-                        np.all(clustered1 == clustered1_correctB))
-        self.assertTrue(np.all(clustered2 == clustered2_correct))
+        # test with symmetric matrix
+        mat = np.array([[0, 0, 1, 0],
+                        [0, 0, 0, 1],
+                        [1, 0, 0, 0],
+                        [0, 1, 0, 0]])
+
+        clustered = asset.ASSET.cluster_matrix_entries(
+            mat, max_distance=1.5, min_neighbors=2, stretch=1)
+        correct = np.array([[0, 0, 1, 0],
+                            [0, 0, 0, 1],
+                            [2, 0, 0, 0],
+                            [0, 2, 0, 0]])
+        assert_array_equal(clustered, correct)
+
+        # test with non-symmetric matrix
+        mat = np.array([[0, 1, 0, 0],
+                        [0, 0, 1, 0],
+                        [1, 0, 0, 1],
+                        [0, 1, 0, 0]])
+        clustered = asset.ASSET.cluster_matrix_entries(
+            mat, max_distance=1.5, min_neighbors=3, stretch=1)
+        correct = np.array([[0, 1, 0, 0],
+                            [0, 0, 1, 0],
+                            [-1, 0, 0, 1],
+                            [0, -1, 0, 0]])
+        assert_array_equal(clustered, correct)
+
+        # test with lowered min_neighbors
+        mat = np.array([[0, 1, 0, 0],
+                        [0, 0, 1, 0],
+                        [1, 0, 0, 1],
+                        [0, 1, 0, 0]])
+        clustered = asset.ASSET.cluster_matrix_entries(
+            mat, max_distance=1.5, min_neighbors=2, stretch=1)
+        correct = np.array([[0, 1, 0, 0],
+                            [0, 0, 1, 0],
+                            [2, 0, 0, 1],
+                            [0, 2, 0, 0]])
+        assert_array_equal(clustered, correct)
+
+        mat = np.zeros((4, 4))
+        clustered = asset.ASSET.cluster_matrix_entries(
+            mat, max_distance=1.5, min_neighbors=2, stretch=1)
+        correct = mat
+        assert_array_equal(clustered, correct)
 
     def test_intersection_matrix(self):
-        st1 = neo.SpikeTrain([1, 2, 4]*pq.ms, t_stop=6*pq.ms)
-        st2 = neo.SpikeTrain([1, 3, 4]*pq.ms, t_stop=6*pq.ms)
-        st3 = neo.SpikeTrain([2, 5]*pq.ms, t_start=1*pq.ms, t_stop=6*pq.ms)
-        st4 = neo.SpikeTrain([1, 3, 6]*pq.ms, t_stop=8*pq.ms)
-        binsize = 1 * pq.ms
+        st1 = neo.SpikeTrain([1, 2, 4] * pq.ms, t_stop=6 * pq.ms)
+        st2 = neo.SpikeTrain([1, 3, 4] * pq.ms, t_stop=6 * pq.ms)
+        st3 = neo.SpikeTrain([2, 5] * pq.ms, t_start=1 * pq.ms,
+                             t_stop=6 * pq.ms)
+        bin_size = 1 * pq.ms
+
+        asset_obj_same_t_start_stop = asset.ASSET(
+            [st1, st2], bin_size=bin_size, t_stop_i=5 * pq.ms,
+            t_stop_j=5 * pq.ms)
 
         # Check that the routine works for correct input...
         # ...same t_start, t_stop on both time axes
-        imat_1_2, xedges, yedges = asset.intersection_matrix(
-            [st1, st2], binsize, dt=5*pq.ms)
-        trueimat_1_2 = np.array([[0.,  0.,  0.,  0.,  0.],
-                                 [0.,  2.,  1.,  1.,  2.],
-                                 [0.,  1.,  1.,  0.,  1.],
-                                 [0.,  1.,  0.,  1.,  1.],
-                                 [0.,  2.,  1.,  1.,  2.]])
-        self.assertTrue(np.all(xedges == np.arange(6)*pq.ms))  # correct bins
-        self.assertTrue(np.all(yedges == np.arange(6)*pq.ms))  # correct bins
-        self.assertTrue(np.all(imat_1_2 == trueimat_1_2))  # correct matrix
+        imat_1_2 = asset_obj_same_t_start_stop.intersection_matrix()
+        trueimat_1_2 = np.array([[0., 0., 0., 0., 0.],
+                                 [0., 2., 1., 1., 2.],
+                                 [0., 1., 1., 0., 1.],
+                                 [0., 1., 0., 1., 1.],
+                                 [0., 2., 1., 1., 2.]])
+        assert_array_equal(asset_obj_same_t_start_stop.x_edges,
+                           np.arange(6) * pq.ms)  # correct bins
+        assert_array_equal(asset_obj_same_t_start_stop.y_edges,
+                           np.arange(6) * pq.ms)  # correct bins
+        assert_array_equal(imat_1_2, trueimat_1_2)  # correct matrix
         # ...different t_start, t_stop on the two time axes
-        imat_1_2, xedges, yedges = asset.intersection_matrix(
-            [st1, st2], binsize, t_start_y=1*pq.ms, dt=5*pq.ms)
-        trueimat_1_2 = np.array([[0.,  0.,  0.,  0., 0.],
-                                 [2.,  1.,  1.,  2., 0.],
-                                 [1.,  1.,  0.,  1., 0.],
-                                 [1.,  0.,  1.,  1., 0.],
-                                 [2.,  1.,  1.,  2., 0.]])
-        self.assertTrue(np.all(xedges == np.arange(6)*pq.ms))  # correct bins
+        asset_obj_different_t_start_stop = asset.ASSET(
+            [st1, st2], spiketrains_j=[st + 6 * pq.ms for st in [st1, st2]],
+            bin_size=bin_size, t_start_j=6 * pq.ms, t_stop_i=5 * pq.ms,
+            t_stop_j=11 * pq.ms)
+        imat_1_2 = asset_obj_different_t_start_stop.intersection_matrix()
+        assert_array_equal(asset_obj_different_t_start_stop.x_edges,
+                           np.arange(6) * pq.ms)  # correct bins
+        assert_array_equal(asset_obj_different_t_start_stop.y_edges,
+                           np.arange(6, 12) * pq.ms)
         self.assertTrue(np.all(imat_1_2 == trueimat_1_2))  # correct matrix
 
+        # test with normalization='intersection'
+        imat_1_2 = asset_obj_same_t_start_stop.intersection_matrix(
+            normalization='intersection')
+        trueimat_1_2 = np.array([[0., 0., 0., 0., 0.],
+                                 [0., 1., 1., 1., 1.],
+                                 [0., 1., 1., 0., 1.],
+                                 [0., 1., 0., 1., 1.],
+                                 [0., 1., 1., 1., 1.]])
+        assert_array_equal(imat_1_2, trueimat_1_2)
+
+        # test with normalization='mean'
+        imat_1_2 = asset_obj_same_t_start_stop.intersection_matrix(
+            normalization='mean')
+        sq = np.sqrt(2) / 2.
+        trueimat_1_2 = np.array([[0., 0., 0., 0., 0.],
+                                 [0., 1., sq, sq, 1.],
+                                 [0., sq, 1., 0., sq],
+                                 [0., sq, 0., 1., sq],
+                                 [0., 1., sq, sq, 1.]])
+        assert_array_almost_equal(imat_1_2, trueimat_1_2)
+
+        # test with normalization='union'
+        imat_1_2 = asset_obj_same_t_start_stop.intersection_matrix(
+            normalization='union')
+        trueimat_1_2 = np.array([[0., 0., 0., 0., 0.],
+                                 [0., 1., .5, .5, 1.],
+                                 [0., .5, 1., 0., .5],
+                                 [0., .5, 0., 1., .5],
+                                 [0., 1., .5, .5, 1.]])
+        assert_array_almost_equal(imat_1_2, trueimat_1_2)
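The expected matrices above can be checked by hand. Writing n_i and n_j for the spike counts of the two bins and o for their overlap, the entries appear to be normalized as o / min(n_i, n_j) ('intersection'), o / sqrt(n_i * n_j) ('mean') and o / (n_i + n_j - o) ('union') — inferred here from the expected values, not from the implementation:

>>> import numpy as np
>>> n_i, n_j, o = 2., 1., 1.
>>> vals = (o / min(n_i, n_j), o / np.sqrt(n_i * n_j),
...         o / (n_i + n_j - o))
>>> # vals == (1.0, sqrt(2)/2 ~ 0.707, 0.5), matching the nonzero
>>> # off-diagonal entries in the three expected matrices above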
+
         # Check that errors are raised correctly...
-        # ...for dt too large compared to length of spike trains
-        self.assertRaises(ValueError, asset.intersection_matrix,
-                          spiketrains=[st1, st2], binsize=binsize, dt=8*pq.ms)
+        # ...for partially overlapping time intervals
+        self.assertRaises(ValueError, asset.ASSET,
+                          spiketrains_i=[st1, st2], bin_size=bin_size,
+                          t_start_j=1 * pq.ms)
         # ...for different SpikeTrain's t_starts
-        self.assertRaises(ValueError, asset.intersection_matrix,
-                          spiketrains=[st1, st3], binsize=binsize, dt=8*pq.ms)
-        # ...when the analysis is specified for a time span where the
-        # spike trains are not defined (e.g. t_start_x < SpikeTrain.t_start)
-        self.assertRaises(ValueError, asset.intersection_matrix,
-                          spiketrains=[st1, st2], binsize=binsize, dt=8*pq.ms,
-                          t_start_x=-2*pq.ms, t_start_y=-2*pq.ms)
+        self.assertRaises(ValueError, asset.ASSET,
+                          spiketrains_i=[st1, st3], bin_size=bin_size)
+        # ...for different SpikeTrain's t_stops
+        self.assertRaises(ValueError, asset.ASSET,
+                          spiketrains_i=[st1, st2], bin_size=bin_size,
+                          t_stop_j=5 * pq.ms)
+
+    def test_combinations_with_replacement(self):
+        # Test that _combinations_with_replacement yields the same tuples
+        # as in the original implementation with itertools.product(*lists)
+        # and filtering by _wrong_order.
+
+        def _wrong_order(a):
+            if a[-1] > a[0]:
+                return True
+            for i in range(len(a) - 1):
+                if a[i] < a[i + 1]:
+                    return True
+            return False
+
+        for n in range(1, 15):
+            for d in range(1, 6):
+                lists = [range(j, n + 1) for j in range(d, 0, -1)]
+                matrix_entries = list(
+                    asset._combinations_with_replacement(n=n, d=d)
+                )
+                matrix_entries_correct = [
+                    indices for indices in itertools.product(*lists)
+                    if not _wrong_order(indices)
+                ]
+                it_todo = asset._num_iterations(n=n, d=d)
+                self.assertEqual(matrix_entries, matrix_entries_correct)
+                self.assertEqual(it_todo, len(matrix_entries_correct))
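A concrete instance of the equivalence exercised here, using only the itertools formulation from this test (no elephant import needed): for n=3, d=2 the admissible index tuples are the non-increasing pairs drawn from the per-position ranges:

>>> import itertools
>>> n, d = 3, 2
>>> lists = [range(j, n + 1) for j in range(d, 0, -1)]
>>> sorted(t for t in itertools.product(*lists)
...        if all(t[k] >= t[k + 1] for k in range(len(t) - 1)))
[(2, 1), (2, 2), (3, 1), (3, 2), (3, 3)]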
+
+
+@unittest.skipUnless(HAVE_SKLEARN, 'requires sklearn')
+class AssetTestIntegration(unittest.TestCase):
+    def setUp(self):
+        # common for all tests
+        self.bin_size = 3 * pq.ms
+
+    def test_probability_matrix_symmetric(self):
+        np.random.seed(1)
+        kernel_width = 9 * pq.ms
+        rate = 50 * pq.Hz
+        n_spiketrains = 50
+        spiketrains = []
+        spiketrains_copy = []
+        for _ in range(n_spiketrains):
+            st = homogeneous_poisson_process(rate, t_stop=100 * pq.ms)
+            spiketrains.append(st)
+            spiketrains_copy.append(st.copy())
+
+        asset_obj = asset.ASSET(spiketrains, bin_size=self.bin_size)
+        asset_obj_symmetric = asset.ASSET(spiketrains,
+                                          spiketrains_j=spiketrains_copy,
+                                          bin_size=self.bin_size)
+
+        imat = asset_obj.intersection_matrix()
+        pmat = asset_obj.probability_matrix_analytical(
+            kernel_width=kernel_width)
+
+        imat_symm = asset_obj_symmetric.intersection_matrix()
+        pmat_symm = asset_obj_symmetric.probability_matrix_analytical(
+            kernel_width=kernel_width)
+
+        assert_array_almost_equal(pmat, pmat_symm)
+        assert_array_almost_equal(imat, imat_symm)
+        assert_array_almost_equal(asset_obj.x_edges,
+                                  asset_obj_symmetric.x_edges)
+        assert_array_almost_equal(asset_obj.y_edges,
+                                  asset_obj_symmetric.y_edges)
+
+    def _test_integration_subtest(self, spiketrains, spiketrains_y,
+                                  indices_pmat, index_proba, expected_sses):
+        # define parameters
+        random.seed(1)
+        kernel_width = 9 * pq.ms
+        surrogate_dt = 9 * pq.ms
+        alpha = 0.9
+        filter_shape = (5, 1)
+        nr_largest = 3
+        max_distance = 3
+        min_neighbors = 3
+        stretch = 5
+        n_surr = 20
+
+        def _get_rates(_spiketrains):
+            kernel_sigma = kernel_width / 2. / np.sqrt(3.)
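+            # the standard deviation of a rectangular (uniform) kernel
+            # of total width w is w / sqrt(12) = w / (2 * sqrt(3)),
+            # hence this conversion from kernel_width to sigma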
+            kernel = kernels.RectangularKernel(sigma=kernel_sigma)
+            rates = [statistics.instantaneous_rate(
+                st,
+                kernel=kernel,
+                sampling_period=1 * pq.ms)
+                for st in _spiketrains]
+            return rates
+
+        asset_obj = asset.ASSET(spiketrains, spiketrains_y,
+                                bin_size=self.bin_size)
+
+        # calculate the intersection matrix
+        imat = asset_obj.intersection_matrix()
+
+        # calculate probability matrix analytical
+        pmat = asset_obj.probability_matrix_analytical(
+            imat,
+            kernel_width=kernel_width)
+
+        # check if pmat is the same when rates are provided
+        pmat_as_rates = asset_obj.probability_matrix_analytical(
+            imat,
+            firing_rates_x=_get_rates(spiketrains),
+            firing_rates_y=_get_rates(spiketrains_y))
+        assert_array_almost_equal(pmat, pmat_as_rates)
+
+        # calculate probability matrix montecarlo
+        pmat_montecarlo = asset_obj.probability_matrix_montecarlo(
+            n_surrogates=n_surr,
+            imat=imat,
+            surrogate_dt=surrogate_dt,
+            surrogate_method='dither_spikes')
+
+        # test probability matrices
+        assert_array_equal(np.where(pmat > alpha), indices_pmat)
+        assert_array_equal(np.where(pmat_montecarlo > alpha),
+                           indices_pmat)
+        # calculate joint probability matrix
+        jmat = asset_obj.joint_probability_matrix(
+            pmat,
+            filter_shape=filter_shape,
+            n_largest=nr_largest)
+        # test joint probability matrix
+        assert_array_equal(np.where(jmat > 0.98), index_proba['high'])
+        assert_array_equal(np.where(jmat > 0.9), index_proba['medium'])
+        assert_array_equal(np.where(jmat > 0.8), index_proba['low'])
+        # test if all other entries are zeros
+        mask_zeros = np.ones(jmat.shape, bool)
+        mask_zeros[index_proba['low']] = False
+        self.assertTrue(np.all(jmat[mask_zeros] == 0))
+
+        # calculate mask matrix and cluster matrix
+        mmat = asset_obj.mask_matrices([pmat, jmat], [alpha, alpha])
+        cmat = asset_obj.cluster_matrix_entries(
+            mmat,
+            max_distance=max_distance,
+            min_neighbors=min_neighbors,
+            stretch=stretch)
+
+        # extract sses and test them
+        sses = asset_obj.extract_synchronous_events(cmat)
+        self.assertDictEqual(sses, expected_sses)
+
+    def test_integration(self):
+        """
+        The test is written according to the notebook (for developers only):
+        https://github.com/INM-6/elephant-tutorials/blob/master/
+        simple_test_asset.ipynb
+        """
+        # define parameters
+        np.random.seed(1)
+        size_group = 3
+        size_sse = 3
+        T = 60 * pq.ms
+        delay = 9 * pq.ms
+        bins_between_sses = 3
+        time_between_sses = 9 * pq.ms
+        # ground truth for pmats
+        starting_bin_1 = int((delay / self.bin_size).magnitude.item())
+        starting_bin_2 = int(
+            (2 * delay / self.bin_size + time_between_sses / self.bin_size
+             ).magnitude.item())
+        indices_pmat_1 = np.arange(starting_bin_1, starting_bin_1 + size_sse)
+        indices_pmat_2 = np.arange(starting_bin_2,
+                                   starting_bin_2 + size_sse)
+        indices_pmat = (np.concatenate((indices_pmat_1, indices_pmat_2)),
+                        np.concatenate((indices_pmat_2, indices_pmat_1)))
+        # generate spike trains
+        spiketrains = [neo.SpikeTrain([index_spiketrain,
+                                       index_spiketrain +
+                                       size_sse +
+                                       bins_between_sses] * self.bin_size
+                                      + delay + 1 * pq.ms,
+                                      t_stop=T)
+                       for index_group in range(size_group)
+                       for index_spiketrain in range(size_sse)]
+        index_proba = {
+            "high": (np.array([9, 9, 10, 10, 10, 11, 11]),
+                     np.array([3, 4, 3, 4, 5, 4, 5])),
+            "medium": (np.array([8, 8, 9, 9, 9, 10, 10,
+                                 10, 11, 11, 11, 12, 12]),
+                       np.array([2, 3, 2, 3, 4, 3, 4, 5, 4, 5, 6, 5, 6])),
+            "low": (np.array([7, 8, 8, 9, 9, 9, 10, 10, 10,
+                              11, 11, 11, 12, 12, 12, 13, 13]),
+                    np.array([2, 2, 3, 2, 3, 4, 3, 4, 5, 4, 5,
+                              6, 5, 6, 7, 6, 7]))
+        }
+        expected_sses = {1: {(9, 3): {0, 3, 6}, (10, 4): {1, 4, 7},
+                             (11, 5): {2, 5, 8}}}
+        self._test_integration_subtest(spiketrains,
+                                       spiketrains_y=spiketrains,
+                                       indices_pmat=indices_pmat,
+                                       index_proba=index_proba,
+                                       expected_sses=expected_sses)
+
+    def test_integration_nonsymmetric(self):
+        # define parameters
+        np.random.seed(1)
+        random.seed(1)
+        size_group = 3
+        size_sse = 3
+        delay = 18 * pq.ms
+        T = 4 * delay + 2 * size_sse * self.bin_size
+        time_between_sses = 2 * delay
+        # ground truth for pmats
+        starting_bin = int((delay / self.bin_size).magnitude.item())
+        indices_pmat_1 = np.arange(starting_bin, starting_bin + size_sse)
+        indices_pmat = (indices_pmat_1, indices_pmat_1)
+        # generate spike trains
+        spiketrains = [
+            neo.SpikeTrain([index_spiketrain] * self.bin_size + delay,
+                           t_start=0 * pq.ms,
+                           t_stop=2 * delay + size_sse * self.bin_size)
+            for index_group in range(size_group)
+            for index_spiketrain in range(size_sse)]
+        spiketrains_y = [
+            neo.SpikeTrain([index_spiketrain] * self.bin_size + delay +
+                           time_between_sses + size_sse * self.bin_size,
+                           t_start=size_sse * self.bin_size + 2 * delay,
+                           t_stop=T)
+            for index_group in range(size_group)
+            for index_spiketrain in range(size_sse)]
+        index_proba = {
+            "high": ([6, 6, 7, 7, 7, 8, 8],
+                     [6, 7, 6, 7, 8, 7, 8]),
+            "medium": ([5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9],
+                       [5, 6, 5, 6, 7, 6, 7, 8, 7, 8, 9, 8, 9]),
+            "low": ([4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7,
+                     8, 8, 8, 9, 9, 9, 10, 10],
+                    [4, 5, 4, 5, 6, 5, 6, 7, 6, 7, 8,
+                     7, 8, 9, 8, 9, 10, 9, 10])
+        }
+        expected_sses = {1: {(6, 6): {0, 3, 6}, (7, 7): {1, 4, 7},
+                             (8, 8): {2, 5, 8}}}
+        self._test_integration_subtest(spiketrains,
+                                       spiketrains_y=spiketrains_y,
+                                       indices_pmat=indices_pmat,
+                                       index_proba=index_proba,
+                                       expected_sses=expected_sses)
 
 
 def suite():

+ 53 - 42
code/elephant/elephant/test/test_cell_assembly_detection.py

@@ -17,10 +17,10 @@ class CadTestCase(unittest.TestCase):
     def setUp(self):
 
         # Parameters
-        self.binsize = 1*pq.ms
+        self.bin_size = 1 * pq.ms
         self.alpha = 0.05
         self.size_chunks = 100
-        self.maxlag = 10
+        self.max_lag = 10
         self.reference_lag = 2
         self.min_occ = 1
         self.max_spikes = np.inf
@@ -53,32 +53,32 @@ class CadTestCase(unittest.TestCase):
         np.random.seed(1)
         self.patt1_times = neo.SpikeTrain(
             np.random.uniform(0, 1 - max(self.lags1), self.n_occ1) * pq.s,
-            t_start=0*pq.s, t_stop=1*pq.s)
+            t_start=0 * pq.s, t_stop=1 * pq.s)
         self.patt2_times = neo.SpikeTrain(
             np.random.uniform(0, 1 - max(self.lags2), self.n_occ2) * pq.s,
-            t_start=0*pq.s, t_stop=1*pq.s)
+            t_start=0 * pq.s, t_stop=1 * pq.s)
         self.patt3_times = neo.SpikeTrain(
             np.random.uniform(0, 1 - max(self.lags3), self.n_occ3) * pq.s,
-            t_start=0*pq.s, t_stop=1*pq.s)
+            t_start=0 * pq.s, t_stop=1 * pq.s)
 
         # Patterns
         self.patt1 = [self.patt1_times] + [neo.SpikeTrain(
-            self.patt1_times+l * pq.s, t_start=self.t_start * pq.s,
+            self.patt1_times + l * pq.s, t_start=self.t_start * pq.s,
             t_stop=self.t_stop * pq.s) for l in self.lags1]
         self.patt2 = [self.patt2_times] + [neo.SpikeTrain(
-            self.patt2_times+l * pq.s,  t_start=self.t_start * pq.s,
+            self.patt2_times + l * pq.s, t_start=self.t_start * pq.s,
             t_stop=self.t_stop * pq.s) for l in self.lags2]
         self.patt3 = [self.patt3_times] + [neo.SpikeTrain(
-            self.patt3_times+l * pq.s,  t_start=self.t_start * pq.s,
+            self.patt3_times + l * pq.s, t_start=self.t_start * pq.s,
             t_stop=self.t_stop * pq.s) for l in self.lags3]
 
         # Binning spiketrains
         self.bin_patt1 = conv.BinnedSpikeTrain(self.patt1,
-                                               binsize=self.binsize)
+                                               bin_size=self.bin_size)
 
         # Data
         self.msip = self.patt1 + self.patt2 + self.patt3
-        self.msip = conv.BinnedSpikeTrain(self.msip, binsize=self.binsize)
+        self.msip = conv.BinnedSpikeTrain(self.msip, bin_size=self.bin_size)
 
         # Expected results
         self.n_spk1 = len(self.lags1) + 1
@@ -92,11 +92,11 @@ class CadTestCase(unittest.TestCase):
             range(self.n_spk1 + self.n_spk2,
                   self.n_spk1 + self.n_spk2 + self.n_spk3)]
         self.occ1 = np.unique(conv.BinnedSpikeTrain(
-            self.patt1_times, self.binsize).spike_indices[0])
+            self.patt1_times, self.bin_size).spike_indices[0])
         self.occ2 = np.unique(conv.BinnedSpikeTrain(
-            self.patt2_times, self.binsize).spike_indices[0])
+            self.patt2_times, self.bin_size).spike_indices[0])
         self.occ3 = np.unique(conv.BinnedSpikeTrain(
-            self.patt3_times, self.binsize).spike_indices[0])
+            self.patt3_times, self.bin_size).spike_indices[0])
         self.occ_msip = [list(self.occ1), list(self.occ2), list(self.occ3)]
         self.lags_msip = [self.output_lags1,
                           self.output_lags2,
@@ -105,8 +105,8 @@ class CadTestCase(unittest.TestCase):
     # test for single pattern injection input
     def test_cad_single_sip(self):
         # collecting cad output
-        output_single = cad.\
-            cell_assembly_detection(data=self.bin_patt1, maxlag=self.maxlag)
+        output_single = cad.cell_assembly_detection(
+            binned_spiketrain=self.bin_patt1, max_lag=self.max_lag)
         # check neurons in the pattern
         assert_array_equal(sorted(output_single[0]['neurons']),
                            self.elements1)
@@ -120,8 +120,8 @@ class CadTestCase(unittest.TestCase):
     # test with multiple (3) patterns injected in the data
     def test_cad_msip(self):
         # collecting cad output
-        output_msip = cad.\
-            cell_assembly_detection(data=self.msip, maxlag=self.maxlag)
+        output_msip = cad.cell_assembly_detection(
+            binned_spiketrain=self.msip, max_lag=self.max_lag)
 
         elements_msip = []
         occ_msip = []
@@ -149,54 +149,65 @@ class CadTestCase(unittest.TestCase):
         # test error data input format
         self.assertRaises(TypeError, cad.cell_assembly_detection,
                           data=[[1, 2, 3], [3, 4, 5]],
-                          maxlag=self.maxlag)
+                          maxlag=self.max_lag)
         # test error significance level
         self.assertRaises(ValueError, cad.cell_assembly_detection,
                           data=conv.BinnedSpikeTrain(
-                              [neo.SpikeTrain([1, 2, 3]*pq.s, t_stop=5*pq.s),
-                               neo.SpikeTrain([3, 4, 5]*pq.s, t_stop=5*pq.s)],
-                              binsize=self.binsize),
-                          maxlag=self.maxlag,
+                              [neo.SpikeTrain([1, 2, 3] * pq.s,
+                                              t_stop=5 * pq.s),
+                               neo.SpikeTrain([3, 4, 5] * pq.s,
+                                              t_stop=5 * pq.s)],
+                              bin_size=self.bin_size),
+                          maxlag=self.max_lag,
                           alpha=-3)
         # test error minimum number of occurrences
         self.assertRaises(ValueError, cad.cell_assembly_detection,
                           data=conv.BinnedSpikeTrain(
-                              [neo.SpikeTrain([1, 2, 3]*pq.s, t_stop=5*pq.s),
-                               neo.SpikeTrain([3, 4, 5]*pq.s, t_stop=5*pq.s)],
-                              binsize=self.binsize),
-                          maxlag=self.maxlag,
+                              [neo.SpikeTrain([1, 2, 3] * pq.s,
+                                              t_stop=5 * pq.s),
+                               neo.SpikeTrain([3, 4, 5] * pq.s,
+                                              t_stop=5 * pq.s)],
+                              bin_size=self.bin_size),
+                          maxlag=self.max_lag,
                           min_occ=-1)
         # test error minimum number of spikes in a pattern
         self.assertRaises(ValueError, cad.cell_assembly_detection,
                           data=conv.BinnedSpikeTrain(
-                              [neo.SpikeTrain([1, 2, 3]*pq.s, t_stop=5*pq.s),
-                               neo.SpikeTrain([3, 4, 5]*pq.s, t_stop=5*pq.s)],
-                              binsize=self.binsize),
-                          maxlag=self.maxlag,
+                              [neo.SpikeTrain([1, 2, 3] * pq.s,
+                                              t_stop=5 * pq.s),
+                               neo.SpikeTrain([3, 4, 5] * pq.s,
+                                              t_stop=5 * pq.s)],
+                              bin_size=self.bin_size),
+                          maxlag=self.max_lag,
                           max_spikes=1)
         # test error chunk size for variance computation
         self.assertRaises(ValueError, cad.cell_assembly_detection,
                           data=conv.BinnedSpikeTrain(
-                              [neo.SpikeTrain([1, 2, 3]*pq.s, t_stop=5*pq.s),
-                               neo.SpikeTrain([3, 4, 5]*pq.s, t_stop=5*pq.s)],
-                              binsize=self.binsize),
-                          maxlag=self.maxlag,
+                              [neo.SpikeTrain([1, 2, 3] * pq.s,
+                                              t_stop=5 * pq.s),
+                               neo.SpikeTrain([3, 4, 5] * pq.s,
+                                              t_stop=5 * pq.s)],
+                              bin_size=self.bin_size),
+                          maxlag=self.max_lag,
                           size_chunks=1)
         # test error maximum lag
         self.assertRaises(ValueError, cad.cell_assembly_detection,
                           data=conv.BinnedSpikeTrain(
-                              [neo.SpikeTrain([1, 2, 3]*pq.s, t_stop=5*pq.s),
-                               neo.SpikeTrain([3, 4, 5]*pq.s, t_stop=5*pq.s)],
-                              binsize=self.binsize),
+                              [neo.SpikeTrain([1, 2, 3] * pq.s,
+                                              t_stop=5 * pq.s),
+                               neo.SpikeTrain([3, 4, 5] * pq.s,
+                                              t_stop=5 * pq.s)],
+                              bin_size=self.bin_size),
                           maxlag=1)
         # test error minimum length spike train
         self.assertRaises(ValueError, cad.cell_assembly_detection,
                           data=conv.BinnedSpikeTrain(
-                              [neo.SpikeTrain([1, 2, 3]*pq.ms, t_stop=6*pq.ms),
-                               neo.SpikeTrain([3, 4, 5]*pq.ms,
-                                              t_stop=6*pq.ms)],
-                              binsize=1*pq.ms),
-                          maxlag=self.maxlag)
+                              [neo.SpikeTrain([1, 2, 3] * pq.ms,
+                                              t_stop=6 * pq.ms),
+                               neo.SpikeTrain([3, 4, 5] * pq.ms,
+                                              t_stop=6 * pq.ms)],
+                              bin_size=1 * pq.ms),
+                          maxlag=self.max_lag)
 
 
 def suite():

+ 59 - 33
code/elephant/elephant/test/test_change_point_detection.py

@@ -6,9 +6,9 @@ import quantities as pq
 import unittest
 import elephant.change_point_detection as mft
 from numpy.testing.utils import assert_array_almost_equal, assert_allclose
-                                     
-                                     
-#np.random.seed(13)
+
+
+# np.random.seed(13)
 
 class FilterTestCase(unittest.TestCase):
     def setUp(self):
@@ -21,7 +21,7 @@ class FilterTestCase(unittest.TestCase):
         mu_le = (0.1 + 0.15 + 0.05) / 3
         sigma_ri = ((0.25 - 0.15) ** 2 + (0.05 - 0.15) ** 2) / 2
         sigma_le = ((0.1 - 0.1) ** 2 + (0.15 - 0.1) ** 2 + (
-                0.05 - 0.1) ** 2) / 3
+            0.05 - 0.1) ** 2) / 3
         self.targ_t08_h025 = 0
         self.targ_t08_h05 = (3 - 4) / np.sqrt(
             (sigma_ri / mu_ri ** (3)) * 0.5 + (sigma_le / mu_le ** (3)) * 0.5)
@@ -36,7 +36,7 @@ class FilterTestCase(unittest.TestCase):
         self.assertRaises(ValueError, mft._filter, 0.8 * pq.s, 0.5, st)
         self.assertRaises(ValueError, mft._filter, 0.8 * pq.s, 0.5 * pq.s,
                           self.test_array)
-        
+
     # Window Small #
     def test_filter_with_spiketrain_h025(self):
         st = neo.SpikeTrain(self.test_array, units='s', t_stop=2.0)
@@ -55,7 +55,7 @@ class FilterTestCase(unittest.TestCase):
         target = self.targ_t08_h025
         res = mft._filter(0.8 * pq.s, 0.25 * pq.s, st * pq.s)
         assert_array_almost_equal(res, target, decimal=9)
-        
+
     def test_isi_with_quantities_h05(self):
         st = pq.Quantity(self.test_array, units='s')
         target = self.targ_t08_h05
@@ -84,15 +84,15 @@ class FilterProcessTestCase(unittest.TestCase):
         res = mft._filter_process(0.5 * pq.s, 0.5 * pq.s, st, 2.01 * pq.s,
                                   np.array([[0.5], [1.7], [0.4]]))
         assert_array_almost_equal(res[1], target[1], decimal=3)
-        
-        self.assertRaises(ValueError, mft._filter_process, 0.5 , 0.5 * pq.s,
-                              st, 2.01 * pq.s, np.array([[0.5], [1.7], [0.4]]))
+
+        self.assertRaises(ValueError, mft._filter_process, 0.5, 0.5 * pq.s,
+                          st, 2.01 * pq.s, np.array([[0.5], [1.7], [0.4]]))
         self.assertRaises(ValueError, mft._filter_process, 0.5 * pq.s, 0.5,
-                              st, 2.01 * pq.s, np.array([[0.5], [1.7], [0.4]]))
+                          st, 2.01 * pq.s, np.array([[0.5], [1.7], [0.4]]))
         self.assertRaises(ValueError, mft._filter_process, 0.5 * pq.s,
                           0.5 * pq.s, self.test_array, 2.01 * pq.s,
                           np.array([[0.5], [1.7], [0.4]]))
-      
+
     def test_filter_proces_with_quantities_h05(self):
         st = pq.Quantity(self.test_array, units='s')
         target = self.targ_h05
@@ -113,49 +113,68 @@ class MultipleFilterAlgorithmTestCase(unittest.TestCase):
     def setUp(self):
         self.test_array = [1.1, 1.2, 1.4, 1.6, 1.7, 1.75, 1.8, 1.85, 1.9, 1.95]
         self.targ_h05_dt05 = [1.5 * pq.s]
-        
-        # to speed up the test, the following `test_param` and `test_quantile` 
+
+        # to speed up the test, the following `test_param` and `test_quantile`
         # parameters have been calculated offline using the function:
-        # empirical_parameters([10, 25, 50, 75, 100, 125, 150]*pq.s,700*pq.s,5, 
+        # empirical_parameters([10, 25, 50, 75, 100, 125, 150]*pq.s,700*pq.s,5,
         #                                                                10000)
-        # the user should do the same, if the metohd has to be applied to several
-        # spike trains of the same length `T` and with the same set of window.
-        self.test_param = np.array([[10., 25.,  50.,  75.,   100., 125., 150.],
-                            [3.167, 2.955,  2.721, 2.548, 2.412, 2.293, 2.180],
-                            [0.150, 0.185, 0.224, 0.249, 0.269, 0.288, 0.301]])
+        # the user should do the same if the method has to be applied to
+        # several spike trains of the same length `T` and with the same
+        # set of windows.
+        self.test_param = np.array(
+            [[10., 25., 50., 75., 100., 125., 150.],
+             [3.167, 2.955, 2.721, 2.548, 2.412, 2.293, 2.180],
+             [0.150, 0.185, 0.224, 0.249, 0.269, 0.288, 0.301]])
         self.test_quantile = 2.75
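A sketch of the offline precomputation described in the comment above; the `empirical_parameters` call is quoted from that comment, and its return layout (a quantile plus a per-window parameter table) is assumed from how the cached values are used below:

>>> import quantities as pq
>>> from elephant.change_point_detection import empirical_parameters
>>> window_sizes = [10, 25, 50, 75, 100, 125, 150] * pq.s
>>> # expensive Monte Carlo step: run once, cache the results, and
>>> # reuse them for every spike train of length 700 s analysed with
>>> # these windows
>>> test_quantile, test_param = empirical_parameters(
...     window_sizes, 700 * pq.s, 5, 10000)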
 
     def test_MultipleFilterAlgorithm_with_spiketrain_h05(self):
         st = neo.SpikeTrain(self.test_array, units='s', t_stop=2.1)
         target = [self.targ_h05_dt05]
         res = mft.multiple_filter_test([0.5] * pq.s, st, 2.1 * pq.s, 5, 100,
-                                       dt=0.1 * pq.s)
+                                       time_step=0.1 * pq.s)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_MultipleFilterAlgorithm_with_quantities_h05(self):
         st = pq.Quantity(self.test_array, units='s')
         target = [self.targ_h05_dt05]
         res = mft.multiple_filter_test([0.5] * pq.s, st, 2.1 * pq.s, 5, 100,
-                                       dt=0.5 * pq.s)
+                                       time_step=0.5 * pq.s)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_MultipleFilterAlgorithm_with_plain_array_h05(self):
         st = self.test_array
         target = [self.targ_h05_dt05]
         res = mft.multiple_filter_test([0.5] * pq.s, st * pq.s, 2.1 * pq.s, 5,
-                                       100, dt=0.5 * pq.s)
+                                       100, time_step=0.5 * pq.s)
         self.assertNotIsInstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
-	 
+
     def test_MultipleFilterAlgorithm_with_longdata(self):
-        
+
         def gamma_train(k, teta, tmax):
             x = np.random.gamma(k, teta, int(tmax * (k * teta) ** (-1) * 3))
             s = np.cumsum(x)
             idx = np.where(s < tmax)
             s = s[idx]  # gamma process
             return s
-	
+
         def alternative_hypothesis(k1, teta1, c1, k2, teta2, c2, k3, teta3, c3,
                                    k4, teta4, T):
             s1 = gamma_train(k1, teta1, c1)
@@ -169,22 +188,29 @@ class MultipleFilterAlgorithmTestCase(unittest.TestCase):
                                               2, 1 / 33., 200)[0]
 
         window_size = [10, 25, 50, 75, 100, 125, 150] * pq.s
-        self.target_points = [150, 180, 500] 
+        self.target_points = [150, 180, 500]
         target = self.target_points
-                        
-        result = mft.multiple_filter_test(window_size, st * pq.s, 700 * pq.s, 5,
-        10000, test_quantile=self.test_quantile, test_param=self.test_param, 
-                                                                   dt=1 * pq.s)
+
+        result = mft.multiple_filter_test(
+            window_size,
+            st * pq.s,
+            700 * pq.s,
+            5,
+            10000,
+            test_quantile=self.test_quantile,
+            test_param=self.test_param,
+            time_step=1 * pq.s)
         self.assertNotIsInstance(result, pq.Quantity)
 
         result_concatenated = []
         for i in result:
             result_concatenated = np.hstack([result_concatenated, i])
-        result_concatenated = np.sort(result_concatenated)   
+        result_concatenated = np.sort(result_concatenated)
         assert_allclose(result_concatenated[:3], target[:3], rtol=0,
                         atol=5)
         print('detected {0} cps: {1}'.format(len(result_concatenated),
-                                                           result_concatenated))
-                                                
+                                             result_concatenated))
+
+
 if __name__ == '__main__':
     unittest.main()

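The hunks above track a keyword rename in the change point detection API: the scan resolution of `multiple_filter_test` is now passed as `time_step` instead of `dt`. A minimal sketch of a call against the renamed keyword, assuming the module alias used by the tests (`elephant.change_point_detection as mft`); the spike times and window size below are made up for illustration:

    import numpy as np
    import quantities as pq
    import elephant.change_point_detection as mft

    # Toy data: 100 spike times spread over 20 s (illustration only).
    np.random.seed(0)
    spike_times = np.sort(np.random.uniform(0, 20, size=100)) * pq.s

    # Positional arguments ordered as in the tests above: window sizes,
    # spike data, final time, significance level, number of surrogates.
    change_points = mft.multiple_filter_test(
        [0.5] * pq.s, spike_times, 20 * pq.s, 5, 100,
        time_step=0.5 * pq.s)  # formerly dt=0.5 * pq.s
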
+ 211 - 85
code/elephant/elephant/test/test_conversion.py

@@ -2,22 +2,26 @@
 """
 Unit tests for the conversion module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
+import sys
 import unittest
 
 import neo
 import numpy as np
-from numpy.testing.utils import assert_array_almost_equal
 import quantities as pq
+from numpy.testing.utils import (assert_array_almost_equal,
+                                 assert_array_equal)
 
 import elephant.conversion as cv
 
+python_version_major = sys.version_info.major
+
 
 def get_nearest(times, time):
-    return (np.abs(times-time)).argmin()
+    return (np.abs(times - time)).argmin()
 
 
 class binarize_TestCase(unittest.TestCase):
@@ -27,7 +31,7 @@ class binarize_TestCase(unittest.TestCase):
     def test_binarize_with_spiketrain_exact(self):
         st = neo.SpikeTrain(self.test_array_1d, units='ms',
                             t_stop=10.0, sampling_rate=100)
-        times = np.arange(0, 10.+.01, .01)
+        times = np.arange(0, 10. + .01, .01)
         target = np.zeros_like(times).astype('bool')
         for time in self.test_array_1d:
             target[get_nearest(times, time)] = True
@@ -40,7 +44,7 @@ class binarize_TestCase(unittest.TestCase):
     def test_binarize_with_spiketrain_exact_set_ends(self):
         st = neo.SpikeTrain(self.test_array_1d, units='ms',
                             t_stop=10.0, sampling_rate=100)
-        times = np.arange(5., 10.+.01, .01)
+        times = np.arange(5., 10. + .01, .01)
         target = np.zeros_like(times).astype('bool')
         times = pq.Quantity(times, units='ms')
 
@@ -51,7 +55,7 @@ class binarize_TestCase(unittest.TestCase):
     def test_binarize_with_spiketrain_round(self):
         st = neo.SpikeTrain(self.test_array_1d, units='ms',
                             t_stop=10.0, sampling_rate=10.0)
-        times = np.arange(0, 10.+.1, .1)
+        times = np.arange(0, 10. + .1, .1)
         target = np.zeros_like(times).astype('bool')
         for time in np.round(self.test_array_1d, 1):
             target[get_nearest(times, time)] = True
@@ -63,44 +67,44 @@ class binarize_TestCase(unittest.TestCase):
 
     def test_binarize_with_quantities_exact(self):
         st = pq.Quantity(self.test_array_1d, units='ms')
-        times = np.arange(0, 1.23+.01, .01)
+        times = np.arange(0, 1.23 + .01, .01)
         target = np.zeros_like(times).astype('bool')
         for time in self.test_array_1d:
             target[get_nearest(times, time)] = True
         times = pq.Quantity(times, units='ms')
 
         res, tres = cv.binarize(st, return_times=True,
-                                sampling_rate=100.*pq.kHz)
+                                sampling_rate=100. * pq.kHz)
         assert_array_almost_equal(res, target, decimal=9)
         assert_array_almost_equal(tres, times, decimal=9)
 
     def test_binarize_with_quantities_exact_set_ends(self):
         st = pq.Quantity(self.test_array_1d, units='ms')
-        times = np.arange(0, 10.+.01, .01)
+        times = np.arange(0, 10. + .01, .01)
         target = np.zeros_like(times).astype('bool')
         for time in self.test_array_1d:
             target[get_nearest(times, time)] = True
         times = pq.Quantity(times, units='ms')
 
         res, tres = cv.binarize(st, return_times=True, t_stop=10.,
-                                sampling_rate=100.*pq.kHz)
+                                sampling_rate=100. * pq.kHz)
         assert_array_almost_equal(res, target, decimal=9)
         assert_array_almost_equal(tres, times, decimal=9)
 
     def test_binarize_with_quantities_round_set_ends(self):
         st = pq.Quantity(self.test_array_1d, units='ms')
-        times = np.arange(5., 10.+.1, .1)
+        times = np.arange(5., 10. + .1, .1)
         target = np.zeros_like(times).astype('bool')
         times = pq.Quantity(times, units='ms')
 
         res, tres = cv.binarize(st, return_times=True, t_start=5., t_stop=10.,
-                                sampling_rate=10.*pq.kHz)
+                                sampling_rate=10. * pq.kHz)
         assert_array_almost_equal(res, target, decimal=9)
         assert_array_almost_equal(tres, times, decimal=9)
 
     def test_binarize_with_plain_array_exact(self):
         st = self.test_array_1d
-        times = np.arange(0, 1.23+.01, .01)
+        times = np.arange(0, 1.23 + .01, .01)
         target = np.zeros_like(times).astype('bool')
         for time in self.test_array_1d:
             target[get_nearest(times, time)] = True
@@ -111,18 +115,19 @@ class binarize_TestCase(unittest.TestCase):
 
     def test_binarize_with_plain_array_exact_set_ends(self):
         st = self.test_array_1d
-        times = np.arange(0, 10.+.01, .01)
+        times = np.arange(0, 10. + .01, .01)
         target = np.zeros_like(times).astype('bool')
         for time in self.test_array_1d:
             target[get_nearest(times, time)] = True
 
-        res, tres = cv.binarize(st, return_times=True, t_stop=10., sampling_rate=100.)
+        res, tres = cv.binarize(st, return_times=True, t_stop=10.,
+                                sampling_rate=100.)
         assert_array_almost_equal(res, target, decimal=9)
         assert_array_almost_equal(tres, times, decimal=9)
 
     def test_binarize_no_time(self):
         st = self.test_array_1d
-        times = np.arange(0, 1.23+.01, .01)
+        times = np.arange(0, 1.23 + .01, .01)
         target = np.zeros_like(times).astype('bool')
         for time in self.test_array_1d:
             target[get_nearest(times, time)] = True
@@ -154,7 +159,7 @@ class binarize_TestCase(unittest.TestCase):
                           t_stop=pq.Quantity(10, 'ms'),
                           sampling_rate=10.)
         self.assertRaises(TypeError, cv.binarize, st,
-                          sampling_rate=10.*pq.Hz)
+                          sampling_rate=10. * pq.Hz)
 
     def test_binariz_without_sampling_rate_valueerror(self):
         st0 = self.test_array_1d
@@ -172,40 +177,61 @@ class binarize_TestCase(unittest.TestCase):
                           t_start=0., t_stop=pq.Quantity(10, 'ms'))
         self.assertRaises(ValueError, cv.binarize, st1)
 
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
+    def test_bin_edges(self):
+        st = neo.SpikeTrain(times=np.array([2.5]) * pq.s, t_start=0 * pq.s,
+                            t_stop=3 * pq.s)
+        with self.assertWarns(UserWarning):
+            bst = cv.BinnedSpikeTrain(st, bin_size=2 * pq.s, t_start=0 * pq.s,
+                                      t_stop=3 * pq.s)
+        assert_array_equal(bst.bin_edges, [0., 2.] * pq.s)
+        assert_array_equal(bst.spike_indices, [[]])  # no binned spikes
+        self.assertEqual(bst.get_num_of_spikes(), 0)
+
 
-class TimeHistogramTestCase(unittest.TestCase):
+class BinnedSpikeTrainTestCase(unittest.TestCase):
     def setUp(self):
         self.spiketrain_a = neo.SpikeTrain(
             [0.5, 0.7, 1.2, 3.1, 4.3, 5.5, 6.7] * pq.s, t_stop=10.0 * pq.s)
         self.spiketrain_b = neo.SpikeTrain(
             [0.1, 0.7, 1.2, 2.2, 4.3, 5.5, 8.0] * pq.s, t_stop=10.0 * pq.s)
-        self.binsize = 1 * pq.s
-
-    def tearDown(self):
-        self.spiketrain_a = None
-        del self.spiketrain_a
-        self.spiketrain_b = None
-        del self.spiketrain_b
+        self.bin_size = 1 * pq.s
+        self.tolerance = 1e-8
+
+    def test_get_num_of_spikes(self):
+        spiketrains = [self.spiketrain_a, self.spiketrain_b]
+        for spiketrain in spiketrains:
+            binned = cv.BinnedSpikeTrain(spiketrain, n_bins=10,
+                                         bin_size=1 * pq.s, t_start=0 * pq.s)
+            self.assertEqual(binned.get_num_of_spikes(),
+                             len(binned.spike_indices[0]))
+        binned_matrix = cv.BinnedSpikeTrain(spiketrains, n_bins=10,
+                                            bin_size=1 * pq.s)
+        n_spikes_per_row = binned_matrix.get_num_of_spikes(axis=1)
+        n_spikes_per_row_from_indices = list(map(len,
+                                                 binned_matrix.spike_indices))
+        assert_array_equal(n_spikes_per_row, n_spikes_per_row_from_indices)
+        self.assertEqual(binned_matrix.get_num_of_spikes(),
+                         sum(n_spikes_per_row_from_indices))
 
     def test_binned_spiketrain_sparse(self):
         a = neo.SpikeTrain([1.7, 1.8, 4.3] * pq.s, t_stop=10.0 * pq.s)
         b = neo.SpikeTrain([1.7, 1.8, 4.3] * pq.s, t_stop=10.0 * pq.s)
-        binsize = 1 * pq.s
+        bin_size = 1 * pq.s
         nbins = 10
-        x = cv.BinnedSpikeTrain([a, b], num_bins=nbins, binsize=binsize,
+        x = cv.BinnedSpikeTrain([a, b], n_bins=nbins, bin_size=bin_size,
                                 t_start=0 * pq.s)
         x_sparse = [2, 1, 2, 1]
         s = x.to_sparse_array()
         self.assertTrue(np.array_equal(s.data, x_sparse))
-        self.assertTrue(
-            np.array_equal(x.spike_indices, [[1, 1, 4], [1, 1, 4]]))
+        assert_array_equal(x.spike_indices, [[1, 1, 4], [1, 1, 4]])
 
     def test_binned_spiketrain_shape(self):
         a = self.spiketrain_a
-        x = cv.BinnedSpikeTrain(a, num_bins=10,
-                                binsize=self.binsize,
+        x = cv.BinnedSpikeTrain(a, n_bins=10,
+                                bin_size=self.bin_size,
                                 t_start=0 * pq.s)
-        x_bool = cv.BinnedSpikeTrain(a, num_bins=10, binsize=self.binsize,
+        x_bool = cv.BinnedSpikeTrain(a, n_bins=10, bin_size=self.bin_size,
                                      t_start=0 * pq.s)
         self.assertTrue(x.to_array().shape == (1, 10))
         self.assertTrue(x_bool.to_bool_array().shape == (1, 10))
@@ -216,9 +242,9 @@ class TimeHistogramTestCase(unittest.TestCase):
         b = self.spiketrain_b
         c = [a, b]
         nbins = 5
-        x = cv.BinnedSpikeTrain(c, num_bins=nbins, t_start=0 * pq.s,
+        x = cv.BinnedSpikeTrain(c, n_bins=nbins, t_start=0 * pq.s,
                                 t_stop=10.0 * pq.s)
-        x_bool = cv.BinnedSpikeTrain(c, num_bins=nbins, t_start=0 * pq.s,
+        x_bool = cv.BinnedSpikeTrain(c, n_bins=nbins, t_start=0 * pq.s,
                                      t_stop=10.0 * pq.s)
         self.assertTrue(x.to_array().shape == (2, 5))
         self.assertTrue(x_bool.to_bool_array().shape == (2, 5))
@@ -227,14 +253,15 @@ class TimeHistogramTestCase(unittest.TestCase):
         a = neo.SpikeTrain(
             [-6.5, 0.5, 0.7, 1.2, 3.1, 4.3, 5.5, 6.7] * pq.s,
             t_start=-6.5 * pq.s, t_stop=10.0 * pq.s)
-        binsize = self.binsize
+        bin_size = self.bin_size
         nbins = 16
-        x = cv.BinnedSpikeTrain(a, num_bins=nbins, binsize=binsize,
+        x = cv.BinnedSpikeTrain(a, n_bins=nbins, bin_size=bin_size,
                                 t_start=-6.5 * pq.s)
         y = [
             np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0])]
         self.assertTrue(np.array_equal(x.to_bool_array(), y))
 
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
     def test_binned_spiketrain_neg_times_list(self):
         a = neo.SpikeTrain(
             [-6.5, 0.5, 0.7, 1.2, 3.1, 4.3, 5.5, 6.7] * pq.s,
@@ -244,10 +271,11 @@ class TimeHistogramTestCase(unittest.TestCase):
             t_start=-1 * pq.s, t_stop=8 * pq.s)
         c = [a, b]
 
-        binsize = self.binsize
-        x_bool = cv.BinnedSpikeTrain(c, binsize=binsize)
+        bin_size = self.bin_size
+        with self.assertWarns(UserWarning):
+            x_bool = cv.BinnedSpikeTrain(c, bin_size=bin_size)
         y_bool = [[0, 1, 1, 0, 1, 1, 1, 1],
-                     [1, 0, 1, 1, 0, 1, 1, 0]]
+                  [1, 0, 1, 1, 0, 1, 1, 0]]
 
         self.assertTrue(
             np.array_equal(x_bool.to_bool_array(), y_bool))
@@ -255,11 +283,11 @@ class TimeHistogramTestCase(unittest.TestCase):
     # checking spike_indices(f) and matrix(m) for 1 spiketrain
     def test_binned_spiketrain_indices(self):
         a = self.spiketrain_a
-        binsize = self.binsize
+        bin_size = self.bin_size
         nbins = 10
-        x = cv.BinnedSpikeTrain(a, num_bins=nbins, binsize=binsize,
+        x = cv.BinnedSpikeTrain(a, n_bins=nbins, bin_size=bin_size,
                                 t_start=0 * pq.s)
-        x_bool = cv.BinnedSpikeTrain(a, num_bins=nbins, binsize=binsize,
+        x_bool = cv.BinnedSpikeTrain(a, n_bins=nbins, bin_size=bin_size,
                                      t_start=0 * pq.s)
         y_matrix = [
             np.array([2., 1., 0., 1., 1., 1., 1., 0., 0., 0.])]
@@ -274,18 +302,18 @@ class TimeHistogramTestCase(unittest.TestCase):
             np.array_equal(x_bool.to_bool_array(), y_bool_matrix))
         s = x_bool.to_sparse_bool_array()[
             x_bool.to_sparse_bool_array().nonzero()]
-        self.assertTrue(np.array_equal(s, [[True]*6]))
+        self.assertTrue(np.array_equal(s, [[True] * 6]))
 
     def test_binned_spiketrain_list(self):
         a = self.spiketrain_a
         b = self.spiketrain_b
 
-        binsize = self.binsize
+        bin_size = self.bin_size
         nbins = 10
         c = [a, b]
-        x = cv.BinnedSpikeTrain(c, num_bins=nbins, binsize=binsize,
+        x = cv.BinnedSpikeTrain(c, n_bins=nbins, bin_size=bin_size,
                                 t_start=0 * pq.s)
-        x_bool = cv.BinnedSpikeTrain(c, num_bins=nbins, binsize=binsize,
+        x_bool = cv.BinnedSpikeTrain(c, n_bins=nbins, bin_size=bin_size,
                                      t_start=0 * pq.s)
         y_matrix = np.array(
             [[2, 1, 0, 1, 1, 1, 1, 0, 0, 0],
@@ -304,12 +332,12 @@ class TimeHistogramTestCase(unittest.TestCase):
         a = self.spiketrain_a
         b = self.spiketrain_b
         c = [a, b]
-        binsize = self.binsize
+        bin_size = self.bin_size
         nbins = 10
-        x = cv.BinnedSpikeTrain(c, num_bins=nbins, binsize=binsize,
+        x = cv.BinnedSpikeTrain(c, n_bins=nbins, bin_size=bin_size,
                                 t_start=0 * pq.s,
                                 t_stop=None)
-        x_bool = cv.BinnedSpikeTrain(c, num_bins=nbins, binsize=binsize,
+        x_bool = cv.BinnedSpikeTrain(c, n_bins=nbins, bin_size=bin_size,
                                      t_start=0 * pq.s)
         self.assertTrue(x.t_stop == 10 * pq.s)
         self.assertTrue(x_bool.t_stop == 10 * pq.s)
@@ -319,21 +347,21 @@ class TimeHistogramTestCase(unittest.TestCase):
         a = self.spiketrain_a
         b = self.spiketrain_b
         c = [a, b]
-        binsize = 1 * pq.s
-        x = cv.BinnedSpikeTrain(c, binsize=binsize, t_start=0 * pq.s,
+        bin_size = 1 * pq.s
+        x = cv.BinnedSpikeTrain(c, bin_size=bin_size, t_start=0 * pq.s,
                                 t_stop=10. * pq.s)
-        x_bool = cv.BinnedSpikeTrain(c, binsize=binsize, t_start=0 * pq.s,
+        x_bool = cv.BinnedSpikeTrain(c, bin_size=bin_size, t_start=0 * pq.s,
                                      t_stop=10. * pq.s)
-        self.assertTrue(x.num_bins == 10)
-        self.assertTrue(x_bool.num_bins == 10)
+        self.assertTrue(x.n_bins == 10)
+        self.assertTrue(x_bool.n_bins == 10)
 
     def test_binned_spiketrain_matrix(self):
         # Init
         a = self.spiketrain_a
         b = self.spiketrain_b
-        x_bool_a = cv.BinnedSpikeTrain(a, binsize=pq.s, t_start=0 * pq.s,
+        x_bool_a = cv.BinnedSpikeTrain(a, bin_size=pq.s, t_start=0 * pq.s,
                                        t_stop=10. * pq.s)
-        x_bool_b = cv.BinnedSpikeTrain(b, binsize=pq.s, t_start=0 * pq.s,
+        x_bool_b = cv.BinnedSpikeTrain(b, bin_size=pq.s, t_start=0 * pq.s,
                                        t_stop=10. * pq.s)
 
         # Assumed results
@@ -354,9 +382,9 @@ class TimeHistogramTestCase(unittest.TestCase):
         a = self.spiketrain_a
         b = self.spiketrain_b
 
-        x_bool = cv.BinnedSpikeTrain(a, binsize=pq.s, t_start=0 * pq.s,
+        x_bool = cv.BinnedSpikeTrain(a, bin_size=pq.s, t_start=0 * pq.s,
                                      t_stop=10. * pq.s)
-        x = cv.BinnedSpikeTrain(b, binsize=pq.s, t_start=0 * pq.s,
+        x = cv.BinnedSpikeTrain(b, bin_size=pq.s, t_start=0 * pq.s,
                                 t_stop=10. * pq.s)
         # Store Matrix in variable
         matrix_bool = x_bool.to_bool_array()
@@ -375,11 +403,12 @@ class TimeHistogramTestCase(unittest.TestCase):
 
         # Test storing of sparse mat
         sparse_bool = x_bool.to_sparse_bool_array()
-        self.assertTrue(np.array_equal(sparse_bool.toarray(),
-                                       x_bool.to_sparse_bool_array().toarray()))
+        self.assertTrue(np.array_equal(
+            sparse_bool.toarray(),
+            x_bool.to_sparse_bool_array().toarray()))
 
         # New class without calculating the matrix
-        x = cv.BinnedSpikeTrain(b, binsize=pq.s, t_start=0 * pq.s,
+        x = cv.BinnedSpikeTrain(b, bin_size=pq.s, t_start=0 * pq.s,
                                 t_stop=10. * pq.s)
         # No matrix calculated, should be None
         self.assertEqual(x._mat_u, None)
@@ -389,7 +418,7 @@ class TimeHistogramTestCase(unittest.TestCase):
     # Test matrix removal
     def test_binned_spiketrain_remove_matrix(self):
         a = self.spiketrain_a
-        x = cv.BinnedSpikeTrain(a, binsize=1 * pq.s, num_bins=10,
+        x = cv.BinnedSpikeTrain(a, bin_size=1 * pq.s, n_bins=10,
                                 t_stop=10. * pq.s)
         # Store
         x.to_array(store_array=True)
@@ -401,18 +430,18 @@ class TimeHistogramTestCase(unittest.TestCase):
     # Test if t_start is calculated correctly
     def test_binned_spiketrain_parameter_calc_tstart(self):
         a = self.spiketrain_a
-        x = cv.BinnedSpikeTrain(a, binsize=1 * pq.s, num_bins=10,
+        x = cv.BinnedSpikeTrain(a, bin_size=1 * pq.s, n_bins=10,
                                 t_stop=10. * pq.s)
         self.assertEqual(x.t_start, 0. * pq.s)
         self.assertEqual(x.t_stop, 10. * pq.s)
-        self.assertEqual(x.binsize, 1 * pq.s)
-        self.assertEqual(x.num_bins, 10)
+        self.assertEqual(x.bin_size, 1 * pq.s)
+        self.assertEqual(x.n_bins, 10)
 
-    # Test if error raises when type of num_bins is not an integer
+    # Test that an error is raised when n_bins is not an integer
     def test_binned_spiketrain_numbins_type_error(self):
         a = self.spiketrain_a
-        self.assertRaises(TypeError, cv.BinnedSpikeTrain, a, binsize=pq.s,
-                          num_bins=1.4, t_start=0 * pq.s,
+        self.assertRaises(TypeError, cv.BinnedSpikeTrain, a, bin_size=pq.s,
+                          n_bins=1.4, t_start=0 * pq.s,
                           t_stop=10. * pq.s)
 
     # Test if error is raised when providing insufficient number of
@@ -420,17 +449,26 @@ class TimeHistogramTestCase(unittest.TestCase):
     def test_binned_spiketrain_insufficient_arguments(self):
         a = self.spiketrain_a
         self.assertRaises(AttributeError, cv.BinnedSpikeTrain, a)
-        self.assertRaises(ValueError, cv.BinnedSpikeTrain, a, binsize=1 * pq.s,
-                          t_start=0 * pq.s, t_stop=0 * pq.s)
+        self.assertRaises(
+            ValueError,
+            cv.BinnedSpikeTrain,
+            a,
+            bin_size=1 * pq.s,
+            t_start=0 * pq.s,
+            t_stop=0 * pq.s)
 
     def test_calc_attributes_error(self):
-        self.assertRaises(ValueError, cv._calc_num_bins, 1, 1 * pq.s, 0 * pq.s)
-        self.assertRaises(ValueError, cv._calc_binsize, 1, 1 * pq.s, 0 * pq.s)
+        self.assertRaises(ValueError, cv._calc_number_of_bins,
+                          1, 1 * pq.s, 0 * pq.s, self.tolerance)
+        self.assertRaises(ValueError, cv._calc_bin_size,
+                          1, 1 * pq.s, 0 * pq.s)
 
     def test_different_input_types(self):
         a = self.spiketrain_a
         q = [1, 2, 3] * pq.s
-        self.assertRaises(TypeError, cv.BinnedSpikeTrain, [a, q], binsize=pq.s)
+        self.assertRaises(
+            TypeError, cv.BinnedSpikeTrain, [
+                a, q], bin_size=pq.s)
 
     def test_get_start_stop(self):
         a = self.spiketrain_a
@@ -449,24 +487,29 @@ class TimeHistogramTestCase(unittest.TestCase):
         b = neo.SpikeTrain([-2, -1] * pq.s, t_start=-2 * pq.s,
                            t_stop=-1 * pq.s)
         self.assertRaises(ValueError, cv.BinnedSpikeTrain, [a, b], t_start=5,
-                          t_stop=0, binsize=pq.s, num_bins=10)
+                          t_stop=0, bin_size=pq.s, n_bins=10)
 
         b = neo.SpikeTrain([-7, -8, -9] * pq.s, t_start=-9 * pq.s,
                            t_stop=-7 * pq.s)
         self.assertRaises(ValueError, cv.BinnedSpikeTrain, b, t_start=0,
-                          t_stop=10, binsize=pq.s, num_bins=10)
+                          t_stop=10, bin_size=pq.s, n_bins=10)
         self.assertRaises(ValueError, cv.BinnedSpikeTrain, a, t_start=0 * pq.s,
-                          t_stop=10 * pq.s, binsize=3 * pq.s, num_bins=10)
+                          t_stop=10 * pq.s, bin_size=3 * pq.s, n_bins=10)
 
         b = neo.SpikeTrain([-4, -2, 0, 1] * pq.s, t_start=-4 * pq.s,
                            t_stop=1 * pq.s)
-        self.assertRaises(TypeError, cv.BinnedSpikeTrain, b, binsize=-2*pq.s,
-                          t_start=-4 * pq.s, t_stop=0 * pq.s)
+        self.assertRaises(
+            TypeError,
+            cv.BinnedSpikeTrain,
+            b,
+            bin_size=-2 * pq.s,
+            t_start=-4 * pq.s,
+            t_stop=0 * pq.s)
 
     # Test edges
     def test_binned_spiketrain_bin_edges(self):
         a = self.spiketrain_a
-        x = cv.BinnedSpikeTrain(a, binsize=1 * pq.s, num_bins=10,
+        x = cv.BinnedSpikeTrain(a, bin_size=1 * pq.s, n_bins=10,
                                 t_stop=10. * pq.s)
         # Test all edges
         edges = [float(i) for i in range(11)]
@@ -488,9 +531,9 @@ class TimeHistogramTestCase(unittest.TestCase):
     def test_binned_spiketrain_different_units(self):
         a = self.spiketrain_a
         b = a.rescale(pq.ms)
-        binsize = 1 * pq.s
-        xa = cv.BinnedSpikeTrain(a, binsize=binsize)
-        xb = cv.BinnedSpikeTrain(b, binsize=binsize.rescale(pq.ms))
+        bin_size = 1 * pq.s
+        xa = cv.BinnedSpikeTrain(a, bin_size=bin_size)
+        xb = cv.BinnedSpikeTrain(b, bin_size=bin_size.rescale(pq.ms))
         self.assertTrue(
             np.array_equal(xa.to_bool_array(), xb.to_bool_array()))
         self.assertTrue(
@@ -498,14 +541,80 @@ class TimeHistogramTestCase(unittest.TestCase):
                            xb.to_sparse_array().data))
         self.assertTrue(
             np.array_equal(xa.bin_edges[:-1],
-                           xb.bin_edges[:-1].rescale(binsize.units)))
+                           xb.bin_edges[:-1].rescale(bin_size.units)))
+
+    def test_binary_to_binned_matrix(self):
+        a = [[1, 0, 0, 0], [0, 1, 1, 0]]
+        x = cv.BinnedSpikeTrain(a, t_start=0 * pq.s, t_stop=5 * pq.s)
+        # Check for correctness with different init params
+        self.assertTrue(np.array_equal(a, x.to_bool_array()))
+        self.assertTrue(np.array_equal(np.array(a), x.to_bool_array()))
+        self.assertTrue(np.array_equal(a, x.to_bool_array()))
+        self.assertEqual(x.n_bins, 4)
+        self.assertEqual(x.bin_size, 1.25 * pq.s)
+
+        x = cv.BinnedSpikeTrain(a, t_start=1 * pq.s, bin_size=2 * pq.s)
+        self.assertTrue(np.array_equal(a, x.to_bool_array()))
+        self.assertEqual(x.t_stop, 9 * pq.s)
+
+        x = cv.BinnedSpikeTrain(a, t_stop=9 * pq.s, bin_size=2 * pq.s)
+        self.assertEqual(x.t_start, 1 * pq.s)
+
+        # Raise error
+        self.assertRaises(ValueError, cv.BinnedSpikeTrain, a,
+                          t_start=5 * pq.s, t_stop=0 * pq.s, bin_size=pq.s,
+                          n_bins=10)
+        self.assertRaises(ValueError, cv.BinnedSpikeTrain, a, t_start=0 * pq.s,
+                          t_stop=10 * pq.s, bin_size=3 * pq.s, n_bins=10)
+        self.assertRaises(ValueError, cv.BinnedSpikeTrain, a,
+                          bin_size=-2 * pq.s, t_start=-4 * pq.s,
+                          t_stop=0 * pq.s)
+
+        # Check binary property
+        self.assertTrue(x.is_binary)
+
+    def test_binned_to_binned(self):
+        a = self.spiketrain_a
+        x = cv.BinnedSpikeTrain(a, bin_size=1 * pq.s).to_array()
+        y = cv.BinnedSpikeTrain(x, bin_size=1 * pq.s, t_start=0 * pq.s)
+        self.assertTrue(np.array_equal(x, y.to_array()))
+
+        # test with a list
+        x = cv.BinnedSpikeTrain([[0, 1, 2, 3]], bin_size=1 * pq.s,
+                                t_stop=3 * pq.s).to_array()
+        y = cv.BinnedSpikeTrain(x, bin_size=1 * pq.s, t_start=0 * pq.s)
+        self.assertTrue(np.array_equal(x, y.to_array()))
+
+        # test with a numpy array
+        a = np.array([[0, 1, 2, 3], [1, 2, 2.5, 3]])
+        x = cv.BinnedSpikeTrain(a, bin_size=1 * pq.s,
+                                t_stop=3 * pq.s).to_array()
+        y = cv.BinnedSpikeTrain(x, bin_size=1 * pq.s, t_start=0 * pq.s)
+        self.assertTrue(np.array_equal(x, y.to_array()))
+
+        # Check binary property
+        self.assertFalse(y.is_binary)
+
+        # Raise Errors
+        # give a strangely shaped matrix as input (not MxN), which should
+        # produce a TypeError
+        a = np.array([[0, 1, 2, 3], [1, 2, 3]])
+        self.assertRaises(TypeError, cv.BinnedSpikeTrain, a, t_start=0 * pq.s,
+                          bin_size=1 * pq.s)
+        # Give no t_start or t_stop
+        a = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
+        self.assertRaises(AttributeError, cv.BinnedSpikeTrain, a,
+                          bin_size=1 * pq.s)
+        # Input format not supported
+        a = np.array(([0, 1, 2], [0, 1, 2, 3, 4]))
+        self.assertRaises(TypeError, cv.BinnedSpikeTrain, a, bin_size=1 * pq.s)
 
     def test_binnend_spiketrain_rescaling(self):
         train = neo.SpikeTrain(times=np.array([1.001, 1.002, 1.005]) * pq.s,
                                t_start=1 * pq.s, t_stop=1.01 * pq.s)
         bst = cv.BinnedSpikeTrain(train,
                                   t_start=1 * pq.s, t_stop=1.01 * pq.s,
-                                  binsize=1 * pq.ms)
+                                  bin_size=1 * pq.ms)
         target_edges = np.array([1000, 1001, 1002, 1003, 1004, 1005, 1006,
                                  1007, 1008, 1009, 1010], dtype=np.float)
         target_centers = np.array(
@@ -517,12 +626,29 @@ class TimeHistogramTestCase(unittest.TestCase):
         self.assertTrue(bst.bin_edges.units == pq.ms)
         bst = cv.BinnedSpikeTrain(train,
                                   t_start=1 * pq.s, t_stop=1010 * pq.ms,
-                                  binsize=1 * pq.ms)
+                                  bin_size=1 * pq.ms)
         self.assertTrue(np.allclose(bst.bin_edges.magnitude, target_edges))
         self.assertTrue(np.allclose(bst.bin_centers.magnitude, target_centers))
         self.assertTrue(bst.bin_centers.units == pq.ms)
         self.assertTrue(bst.bin_edges.units == pq.ms)
 
+    def test_binned_sparsity(self):
+        train = neo.SpikeTrain(np.arange(10), t_stop=10 * pq.s, units=pq.s)
+        bst = cv.BinnedSpikeTrain(train, n_bins=100)
+        self.assertAlmostEqual(bst.sparsity, 0.1)
+
+    # Test fix for rounding errors
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
+    def test_binned_spiketrain_rounding(self):
+        train = neo.SpikeTrain(times=np.arange(120000) / 30000. * pq.s,
+                               t_start=0 * pq.s, t_stop=4 * pq.s)
+        with self.assertWarns(UserWarning):
+            bst = cv.BinnedSpikeTrain(train,
+                                      t_start=0 * pq.s, t_stop=4 * pq.s,
+                                      bin_size=1. / 30000. * pq.s)
+        assert_array_equal(bst.to_array().nonzero()[1],
+                           np.arange(120000))
+
 
 if __name__ == '__main__':
     unittest.main()

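The bulk of this file tracks the `BinnedSpikeTrain` renames (`binsize` → `bin_size`, the `num_bins` keyword and attribute → `n_bins`) plus newly tested helpers such as `get_num_of_spikes()`, `is_binary`, and `sparsity`. A minimal sketch against the renamed interface; the spike train is made up:

    import neo
    import quantities as pq
    import elephant.conversion as cv

    st = neo.SpikeTrain([0.5, 0.7, 1.2, 3.1] * pq.s, t_stop=10.0 * pq.s)

    # bin_size and n_bins replace the old binsize and num_bins keywords.
    bst = cv.BinnedSpikeTrain(st, bin_size=1 * pq.s, n_bins=10,
                              t_start=0 * pq.s)

    bst.to_bool_array()      # boolean bin occupancy, shape (1, 10)
    bst.get_num_of_spikes()  # 4, all spikes fall inside the 10 bins
    bst.n_bins               # 10 (formerly the num_bins attribute)
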
+ 21 - 13
code/elephant/elephant/test/test_cubic.py

@@ -2,15 +2,20 @@
 """
 Unit tests for the CUBIC analysis.
 
-:copyright: Copyright 2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
+import sys
 import unittest
-import elephant.cubic as cubic
-import quantities as pq
+
 import neo
 import numpy
+import quantities as pq
+
+import elephant.cubic as cubic
+
+python_version_major = sys.version_info.major
 
 
 class CubicTestCase(unittest.TestCase):
@@ -28,13 +33,14 @@ class CubicTestCase(unittest.TestCase):
     ----------
     [1]Staude, Rotter, Gruen, (2009) J. Comp. Neurosci
     '''
+
     def setUp(self):
         n2 = 300
-        n0 = 100000-n2
+        n0 = 100000 - n2
         self.xi = 10
         self.data_signal = neo.AnalogSignal(
             numpy.array([self.xi] * n2 + [0] * n0).reshape(n0 + n2, 1) *
-            pq.dimensionless, sampling_period=1*pq.s)
+            pq.dimensionless, sampling_period=1 * pq.s)
         self.data_array = numpy.array([self.xi] * n2 + [0] * n0)
         self.alpha = 0.05
         self.ximax = 10
@@ -104,10 +110,12 @@ class CubicTestCase(unittest.TestCase):
         # Check the output for test_aborted
         self.assertEqual(test_aborted, False)
 
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
     def test_cubic_ximax(self):
         # Test exceeding ximax
-        xi_ximax, p_vals_ximax, k_ximax, test_aborted = cubic.cubic(
-            self.data_signal, alpha=1, ximax=self.ximax)
+        with self.assertWarns(UserWarning):
+            xi_ximax, p_vals_ximax, k_ximax, test_aborted = cubic.cubic(
+                self.data_signal, alpha=1, max_iterations=self.ximax)
 
         self.assertEqual(test_aborted, True)
         self.assertEqual(xi_ximax - 1, self.ximax)
@@ -119,14 +127,14 @@ class CubicTestCase(unittest.TestCase):
         # Empty signal
         self.assertRaises(
             ValueError, cubic.cubic, neo.AnalogSignal(
-                []*pq.dimensionless, sampling_period=10*pq.ms))
+                [] * pq.dimensionless, sampling_period=10 * pq.ms))
 
+        dummy_data = numpy.tile([1, 2, 3], reps=3)
         # Multidimensional array
         self.assertRaises(ValueError, cubic.cubic, neo.AnalogSignal(
-            [[1, 2, 3], [1, 2, 3]] * pq.dimensionless,
+            dummy_data * pq.dimensionless,
             sampling_period=10 * pq.ms))
-        self.assertRaises(ValueError, cubic.cubic, numpy.array(
-            [[1, 2, 3], [1, 2, 3]]))
+        self.assertRaises(ValueError, cubic.cubic, dummy_data.copy())
 
         # Negative alpha
         self.assertRaises(ValueError, cubic.cubic, self.data_array, alpha=-0.1)
@@ -137,8 +145,8 @@ class CubicTestCase(unittest.TestCase):
         # Checking case in which the second cumulant of the signal is smaller
         # than the first cumulant (analytical constraint of the method)
         self.assertRaises(ValueError, cubic.cubic, neo.AnalogSignal(
-            numpy.array([1]*1000).reshape(1000, 1), units=pq.dimensionless,
-            sampling_period=10*pq.ms), alpha=self.alpha)
+            numpy.array([1] * 1000).reshape(1000, 1), units=pq.dimensionless,
+            sampling_period=10 * pq.ms), alpha=self.alpha)
 
 
 def suite():

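In `elephant.cubic` the iteration cap of `cubic()` was renamed from `ximax` to `max_iterations`, and exceeding it now emits a `UserWarning` (checked via `assertWarns` above, hence the Python 3 skip decorators). A minimal sketch with a synthetic population histogram like the one built in `setUp`; the return tuple follows the unpacking in the tests:

    import warnings
    import numpy
    import elephant.cubic as cubic

    # 300 bins with 10 coincident events each, the rest empty.
    data = numpy.array([10] * 300 + [0] * 99700)

    # Hitting max_iterations (formerly ximax) warns and sets
    # test_aborted to True instead of failing silently.
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', UserWarning)
        xi, p_vals, kappa, test_aborted = cubic.cubic(
            data, alpha=1, max_iterations=10)
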
File diff suppressed because it is too large
+ 514 - 527
code/elephant/elephant/test/test_icsd.py


+ 9 - 5
code/elephant/elephant/test/test_kcsd.py

@@ -31,7 +31,7 @@ class KCSD1D_TestCase(unittest.TestCase):
         for ii in range(len(self.pots)):
             temp_signals.append(self.pots[ii])
         self.an_sigs = neo.AnalogSignal(np.array(temp_signals).T * pq.mV,
-                                       sampling_rate=1000 * pq.Hz)
+                                        sampling_rate=1000 * pq.Hz)
         chidx = neo.ChannelIndex(range(len(self.pots)))
         chidx.analogsignals.append(self.an_sigs)
         chidx.coordinates = self.ele_pos * pq.mm
@@ -71,7 +71,11 @@ class KCSD2D_TestCase(unittest.TestCase):
                                                    ylims=[0.05, 0.95])
         self.ele_pos = np.vstack((xx_ele, yy_ele)).T
         self.csd_profile = utils.large_source_2D
-        pots = CSD.generate_lfp(self.csd_profile, xx_ele, yy_ele, res=100)
+        pots = CSD.generate_lfp(
+            self.csd_profile,
+            xx_ele,
+            yy_ele,
+            resolution=100)
         self.pots = np.reshape(pots, (-1, 1))
         self.test_method = 'KCSD2D'
         self.test_params = {'gdx': 0.25, 'gdy': 0.25, 'R_init': 0.08,
@@ -81,14 +85,13 @@ class KCSD2D_TestCase(unittest.TestCase):
         for ii in range(len(self.pots)):
             temp_signals.append(self.pots[ii])
         self.an_sigs = neo.AnalogSignal(np.array(temp_signals).T * pq.mV,
-                                       sampling_rate=1000 * pq.Hz)
+                                        sampling_rate=1000 * pq.Hz)
         chidx = neo.ChannelIndex(range(len(self.pots)))
         chidx.analogsignals.append(self.an_sigs)
         chidx.coordinates = self.ele_pos * pq.mm
 
         chidx.create_relationship()
 
-
     def test_kcsd2d_estimate(self, cv_params={}):
         self.test_params.update(cv_params)
         result = CSD.estimate_csd(self.an_sigs, method=self.test_method,
@@ -145,7 +148,7 @@ class KCSD3D_TestCase(unittest.TestCase):
         for ii in range(len(self.pots)):
             temp_signals.append(self.pots[ii])
         self.an_sigs = neo.AnalogSignal(np.array(temp_signals).T * pq.mV,
-                                       sampling_rate=1000 * pq.Hz)
+                                        sampling_rate=1000 * pq.Hz)
         chidx = neo.ChannelIndex(range(len(self.pots)))
         chidx.analogsignals.append(self.an_sigs)
         chidx.coordinates = self.ele_pos * pq.mm
@@ -179,5 +182,6 @@ class KCSD3D_TestCase(unittest.TestCase):
         cv_params = {'InvalidCVArg': np.array((0.1, 0.25, 0.5))}
         self.assertRaises(TypeError, self.test_kcsd3d_estimate, cv_params)
 
+
 if __name__ == '__main__':
     unittest.main()

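The KCSD2D setup above reflects one keyword rename in the generator of LFP test data: `CSD.generate_lfp` now takes `resolution` instead of `res`. A minimal sketch, assuming the import aliases used in this test module; the electrode grid below is hypothetical, only the keyword matters:

    import numpy as np
    import elephant.current_source_density as CSD
    from elephant.current_source_density_src import utility_functions as utils

    # Hypothetical 5x5 electrode grid on the unit square.
    xx, yy = np.meshgrid(np.linspace(0.05, 0.95, 5),
                         np.linspace(0.05, 0.95, 5))
    pots = CSD.generate_lfp(utils.large_source_2D, xx.ravel(), yy.ravel(),
                            resolution=100)  # formerly res=100
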
+ 291 - 39
code/elephant/elephant/test/test_kernels.py

@@ -2,27 +2,30 @@
 """
 Unit tests for the kernels module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
+import math
 import unittest
+import warnings
 
 import numpy as np
 import quantities as pq
 import scipy.integrate as spint
+from numpy.testing import assert_array_almost_equal
+
 import elephant.kernels as kernels
 
 
 class kernel_TestCase(unittest.TestCase):
     def setUp(self):
-        self.kernel_types = [obj for obj in kernels.__dict__.values()
-                             if isinstance(obj, type) and
-                             issubclass(obj, kernels.Kernel) and
-                             hasattr(obj, "_evaluate") and
-                             obj is not kernels.Kernel and
-                             obj is not kernels.SymmetricKernel]
-        self.fraction = 0.9999
+        self.kernel_types = tuple(
+            kern_cls for kern_cls in kernels.__dict__.values()
+            if isinstance(kern_cls, type) and
+            issubclass(kern_cls, kernels.Kernel) and
+            kern_cls is not kernels.Kernel and
+            kern_cls is not kernels.SymmetricKernel)
 
     def test_error_kernels(self):
         """
@@ -31,18 +34,18 @@ class kernel_TestCase(unittest.TestCase):
         self.assertRaises(
             TypeError, kernels.RectangularKernel, sigma=2.0)
         self.assertRaises(
-            ValueError, kernels.RectangularKernel, sigma=-0.03*pq.s)
+            ValueError, kernels.RectangularKernel, sigma=-0.03 * pq.s)
         self.assertRaises(
-            ValueError, kernels.RectangularKernel, sigma=2.0*pq.ms,
+            ValueError, kernels.AlphaKernel, sigma=2.0 * pq.ms,
             invert=2)
-        rec_kernel = kernels.RectangularKernel(sigma=0.3*pq.ms)
+        rec_kernel = kernels.RectangularKernel(sigma=0.3 * pq.ms)
         self.assertRaises(
             TypeError, rec_kernel, [1, 2, 3])
         self.assertRaises(
-            TypeError, rec_kernel, [1, 2, 3]*pq.V)
-        kernel = kernels.Kernel(sigma=0.3*pq.ms)
+            TypeError, rec_kernel, [1, 2, 3] * pq.V)
+        kernel = kernels.Kernel(sigma=0.3 * pq.ms)
         self.assertRaises(
-            NotImplementedError, kernel._evaluate, [1, 2, 3]*pq.V)
+            NotImplementedError, kernel._evaluate, [1, 2, 3] * pq.V)
         self.assertRaises(
             NotImplementedError, kernel.boundary_enclosing_area_fraction,
             fraction=0.9)
@@ -50,27 +53,28 @@ class kernel_TestCase(unittest.TestCase):
                           rec_kernel.boundary_enclosing_area_fraction, [1, 2])
         self.assertRaises(ValueError,
                           rec_kernel.boundary_enclosing_area_fraction, -10)
-        self.assertEquals(kernel.is_symmetric(), False)
-        self.assertEquals(rec_kernel.is_symmetric(), True)
+        self.assertEqual(kernel.is_symmetric(), False)
+        self.assertEqual(rec_kernel.is_symmetric(), True)
 
-    @unittest.skip('very time-consuming test')
-    def test_error_alpha_kernel(self):
-        alp_kernel = kernels.AlphaKernel(sigma=0.3*pq.ms)
-        self.assertRaises(ValueError,
-            alp_kernel.boundary_enclosing_area_fraction, 0.9999999)
+    def test_alpha_kernel_extreme(self):
+        alp_kernel = kernels.AlphaKernel(sigma=0.3 * pq.ms)
+        quantile = alp_kernel.boundary_enclosing_area_fraction(0.9999999)
+        self.assertAlmostEqual(quantile.magnitude, 4.055922083048838)
 
     def test_kernels_normalization(self):
         """
         Test that each kernel normalizes to area one.
         """
         sigma = 0.1 * pq.mV
+        fraction = 0.9999
         kernel_resolution = sigma / 100.0
         kernel_list = [kernel_type(sigma, invert=False) for
                        kernel_type in self.kernel_types]
         for kernel in kernel_list:
-            b = kernel.boundary_enclosing_area_fraction(self.fraction).magnitude
-            restric_defdomain = \
-                np.linspace(-b, b, 2*b/kernel_resolution.magnitude) * sigma.units
+            b = kernel.boundary_enclosing_area_fraction(fraction).magnitude
+            n_points = int(2 * b / kernel_resolution.magnitude)
+            restric_defdomain = np.linspace(
+                -b, b, num=n_points) * sigma.units
             kern = kernel(restric_defdomain)
             norm = spint.cumtrapz(y=kern.magnitude,
                                   x=restric_defdomain.magnitude)[-1]
@@ -82,26 +86,28 @@ class kernel_TestCase(unittest.TestCase):
         equals the parameter sigma with which the kernel was constructed.
         """
         sigma = 0.5 * pq.s
+        fraction = 0.9999
         kernel_resolution = sigma / 50.0
         for invert in (False, True):
             kernel_list = [kernel_type(sigma, invert) for
                            kernel_type in self.kernel_types]
             for kernel in kernel_list:
-                b = kernel.boundary_enclosing_area_fraction(self.fraction).magnitude
-                restric_defdomain = \
-                    np.linspace(-b, b, 2*b/kernel_resolution.magnitude) * \
-                    sigma.units
+                b = kernel.boundary_enclosing_area_fraction(
+                    fraction).magnitude
+                n_points = int(2 * b / kernel_resolution.magnitude)
+                restric_defdomain = np.linspace(
+                    -b, b, num=n_points) * sigma.units
                 kern = kernel(restric_defdomain)
                 av_integr = kern * restric_defdomain
-                average = spint.cumtrapz(y=av_integr.magnitude,
-                                         x=restric_defdomain.magnitude)[-1] * \
-                          sigma.units
-                var_integr = (restric_defdomain-average)**2 * kern
-                variance = spint.cumtrapz(y=var_integr.magnitude,
-                                          x=restric_defdomain.magnitude)[-1] * \
-                           sigma.units**2
+                average = spint.cumtrapz(
+                    y=av_integr.magnitude,
+                    x=restric_defdomain.magnitude)[-1] * sigma.units
+                var_integr = (restric_defdomain - average) ** 2 * kern
+                variance = spint.cumtrapz(
+                    y=var_integr.magnitude,
+                    x=restric_defdomain.magnitude)[-1] * sigma.units ** 2
                 stddev = np.sqrt(variance)
-                self.assertAlmostEqual(stddev, sigma, delta=0.01*sigma)
+                self.assertAlmostEqual(stddev, sigma, delta=0.01 * sigma)
 
     def test_kernel_boundary_enclosing(self):
         """
@@ -117,13 +123,259 @@ class kernel_TestCase(unittest.TestCase):
         for fraction in np.arange(0.15, 1.0, 0.4):
             for kernel in kernel_list:
                 b = kernel.boundary_enclosing_area_fraction(fraction).magnitude
-                restric_defdomain = \
-                    np.linspace(-b, b, 2*b/kernel_resolution.magnitude) * \
-                    sigma.units
+                n_points = int(2 * b / kernel_resolution.magnitude)
+                restric_defdomain = np.linspace(
+                    -b, b, num=n_points) * sigma.units
                 kern = kernel(restric_defdomain)
                 frac = spint.cumtrapz(y=kern.magnitude,
                                       x=restric_defdomain.magnitude)[-1]
                 self.assertAlmostEqual(frac, fraction, delta=0.002)
 
+    def test_kernel_output_same_size(self):
+        time_array = np.linspace(0, 10, num=20) * pq.s
+        for kernel_type in self.kernel_types:
+            kernel = kernel_type(sigma=1 * pq.s)
+            kernel_points = kernel(time_array)
+            self.assertEqual(len(kernel_points), len(time_array))
+
+    def test_element_wise_only(self):
+        # Test that kernel operation is applied element-wise without any
+        # recurrent magic (e.g., convolution)
+        np.random.seed(19)
+        t_array = np.linspace(-10, 10, num=100) * pq.s
+        t_shuffled = t_array.copy()
+        np.random.shuffle(t_shuffled)
+        for kern_cls in self.kernel_types:
+            for invert in (False, True):
+                kernel = kern_cls(sigma=1 * pq.s, invert=invert)
+                kernel_shuffled = np.sort(kernel(t_shuffled))
+                kernel_expected = np.sort(kernel(t_array))
+                assert_array_almost_equal(kernel_shuffled, kernel_expected)
+
+    def test_kernel_pdf_range(self):
+        t_array = np.linspace(-10, 10, num=1000) * pq.s
+        for kern_cls in self.kernel_types:
+            for invert in (False, True):
+                kernel = kern_cls(sigma=1 * pq.s, invert=invert)
+                kernel_array = kernel(t_array)
+                in_range = (kernel_array <= 1) & (kernel_array >= 0)
+                self.assertTrue(in_range.all())
+
+    def test_boundary_enclosing_area_fraction(self):
+        # test that test_boundary_enclosing_area_fraction does not depend
+        # on the invert
+        sigma = 1 * pq.s
+        fractions_test = np.linspace(0, 1, num=10, endpoint=False)
+        for kern_cls in self.kernel_types:
+            kernel = kern_cls(sigma=sigma, invert=False)
+            kernel_inverted = kern_cls(sigma=sigma, invert=True)
+            for fraction in fractions_test:
+                self.assertAlmostEqual(
+                    kernel.boundary_enclosing_area_fraction(fraction),
+                    kernel_inverted.boundary_enclosing_area_fraction(fraction)
+                )
+
+    def test_icdf(self):
+        sigma = 1 * pq.s
+        fractions_test = np.linspace(0, 1, num=10, endpoint=False)
+        for kern_cls in self.kernel_types:
+            kernel = kern_cls(sigma=sigma, invert=False)
+            kernel_inverted = kern_cls(sigma=sigma, invert=True)
+            for fraction in fractions_test:
+                # ICDF(0) for several kernels produces -inf,
+                # or fsolve complains about being stuck at a local optimum
+                with warnings.catch_warnings():
+                    warnings.simplefilter('ignore', RuntimeWarning)
+                    icdf = kernel.icdf(fraction)
+                    icdf_inverted = kernel_inverted.icdf(fraction)
+                if kernel.is_symmetric():
+                    self.assertAlmostEqual(icdf, icdf_inverted)
+                else:
+                    # AlphaKernel, ExponentialKernel
+                    self.assertGreaterEqual(icdf, 0 * pq.s)
+                    self.assertLessEqual(icdf_inverted, 0 * pq.s)
+
+    def test_cdf_icdf(self):
+        sigma = 1 * pq.s
+        fractions_test = np.linspace(0, 1, num=10, endpoint=False)
+        for kern_cls in self.kernel_types:
+            for invert in (False, True):
+                kernel = kern_cls(sigma=sigma, invert=invert)
+                for fraction in fractions_test:
+                    # ICDF(0) for several kernels produces -inf,
+                    # or fsolve complains about being stuck at a local optimum
+                    with warnings.catch_warnings():
+                        warnings.simplefilter('ignore', RuntimeWarning)
+                        self.assertAlmostEqual(
+                            kernel.cdf(kernel.icdf(fraction)), fraction)
+
+    def test_icdf_cdf(self):
+        sigma = 1 * pq.s
+        times = np.linspace(-10, 10) * sigma.units
+        for kern_cls in self.kernel_types:
+            for invert in (False, True):
+                kernel = kern_cls(sigma=sigma, invert=invert)
+                for t in times:
+                    cdf = kernel.cdf(t)
+                    self.assertGreaterEqual(cdf, 0.)
+                    self.assertLessEqual(cdf, 1.)
+                    if 0 < cdf < 1:
+                        self.assertAlmostEqual(
+                            kernel.icdf(cdf), t, places=2)
+
+    def test_icdf_at_1(self):
+        sigma = 1 * pq.s
+        for kern_cls in self.kernel_types:
+            for invert in (False, True):
+                kernel = kern_cls(sigma=sigma, invert=invert)
+                if isinstance(kernel, (kernels.RectangularKernel,
+                                       kernels.TriangularKernel)):
+                    icdf = kernel.icdf(1.0)
+                    # check finite
+                    self.assertLess(np.abs(icdf.magnitude), np.inf)
+                else:
+                    self.assertRaises(ValueError, kernel.icdf, 1.0)
+
+    def test_cdf_symmetric(self):
+        sigma = 1 * pq.s
+        cutoff = 1e2 * sigma  # a large value
+        times = np.linspace(-cutoff, cutoff, num=10)
+        kern_symmetric = filter(lambda kern_type: issubclass(
+            kern_type, kernels.SymmetricKernel), self.kernel_types)
+        for kern_cls in kern_symmetric:
+            kernel = kern_cls(sigma=sigma, invert=False)
+            kernel_inverted = kern_cls(sigma=sigma, invert=True)
+            for t in times:
+                self.assertAlmostEqual(kernel.cdf(t), kernel_inverted.cdf(t))
+
+
+class KernelOldImplementation(unittest.TestCase):
+    def setUp(self):
+        self.kernel_types = tuple(
+            kern_cls for kern_cls in kernels.__dict__.values()
+            if isinstance(kern_cls, type) and
+            issubclass(kern_cls, kernels.Kernel) and
+            kern_cls is not kernels.Kernel and
+            kern_cls is not kernels.SymmetricKernel)
+        self.sigma = 1 * pq.s
+        self.time_input = np.linspace(-10, 10, num=100) * self.sigma.units
+
+    def test_triangular(self):
+        def evaluate_old(t):
+            t_units = t.units
+            t_abs = np.abs(t.magnitude)
+            tau = math.sqrt(6) * kernel.sigma.rescale(t_units).magnitude
+            kernel_pdf = (t_abs < tau) * 1 / tau * (1 - t_abs / tau)
+            kernel_pdf = pq.Quantity(kernel_pdf, units=1 / t_units)
+            return kernel_pdf
+
+        for invert in (False, True):
+            kernel = kernels.TriangularKernel(self.sigma, invert=invert)
+            assert_array_almost_equal(kernel(self.time_input),
+                                      evaluate_old(self.time_input))
+
+    def test_gaussian(self):
+        def evaluate_old(t):
+            t_units = t.units
+            t = t.magnitude
+            sigma = kernel.sigma.rescale(t_units).magnitude
+            kernel_pdf = (1.0 / (math.sqrt(2.0 * math.pi) * sigma)) * np.exp(
+                -0.5 * (t / sigma) ** 2)
+            kernel_pdf = pq.Quantity(kernel_pdf, units=1 / t_units)
+            return kernel_pdf
+
+        for invert in (False, True):
+            kernel = kernels.GaussianKernel(self.sigma, invert=invert)
+            assert_array_almost_equal(kernel(self.time_input),
+                                      evaluate_old(self.time_input))
+
+    def test_laplacian(self):
+        def evaluate_old(t):
+            t_units = t.units
+            t = t.magnitude
+            tau = kernel.sigma.rescale(t_units).magnitude / math.sqrt(2)
+            kernel_pdf = 1 / (2 * tau) * np.exp(-np.abs(t / tau))
+            kernel_pdf = pq.Quantity(kernel_pdf, units=1 / t_units)
+            return kernel_pdf
+
+        for invert in (False, True):
+            kernel = kernels.LaplacianKernel(self.sigma, invert=invert)
+            assert_array_almost_equal(kernel(self.time_input),
+                                      evaluate_old(self.time_input))
+
+    def test_exponential(self):
+        def evaluate_old(t):
+            t_units = t.units
+            t = t.magnitude
+            tau = kernel.sigma.rescale(t_units).magnitude
+            if not kernel.invert:
+                kernel_pdf = (t >= 0) * 1 / tau * np.exp(-t / tau)
+            else:
+                kernel_pdf = (t <= 0) * 1 / tau * np.exp(t / tau)
+            kernel_pdf = pq.Quantity(kernel_pdf, units=1 / t_units)
+            return kernel_pdf
+
+        for invert in (False, True):
+            kernel = kernels.ExponentialKernel(self.sigma, invert=invert)
+            assert_array_almost_equal(kernel(self.time_input),
+                                      evaluate_old(self.time_input))
+
+
+class KernelMedianIndex(unittest.TestCase):
+    def setUp(self):
+        kernel_types = tuple(
+            kern_cls for kern_cls in kernels.__dict__.values()
+            if isinstance(kern_cls, type) and
+            issubclass(kern_cls, kernels.Kernel) and
+            kern_cls is not kernels.Kernel and
+            kern_cls is not kernels.SymmetricKernel)
+        self.sigma = 1 * pq.s
+        self.time_input = np.linspace(-10, 10, num=100) * self.sigma.units
+        self.kernels = []
+        for kern_cls in kernel_types:
+            for invert in (False, True):
+                self.kernels.append(kern_cls(self.sigma, invert=invert))
+
+    def test_small_size(self):
+        time_empty = [] * pq.s
+        time_size_2 = [0, 1] * pq.s
+        for kernel in self.kernels:
+            self.assertRaises(ValueError, kernel.median_index, time_empty)
+            median_id = kernel.median_index(time_size_2)
+            self.assertEqual(median_id, 0)
+
+    def test_not_sorted(self):
+        np.random.seed(9)
+        np.random.shuffle(self.time_input)
+        for kernel in self.kernels:
+            self.assertRaises(ValueError, kernel.median_index, self.time_input)
+
+    def test_non_support(self):
+        time_negative = np.linspace(-100, -20) * pq.s
+        for kernel in self.kernels:
+            if isinstance(kernel, (kernels.GaussianKernel,
+                                   kernels.LaplacianKernel)):
+                continue
+            kernel.invert = False
+            median_id = kernel.median_index(time_negative)
+            self.assertEqual(median_id, len(time_negative) // 2)
+            self.assertAlmostEqual(kernel.cdf(time_negative[median_id]), 0.)
+
+    def test_old_implementation(self):
+        def median_index(t):
+            cumsum = kernel(t).cumsum()
+            dt = (t[-1] - t[0]) / (len(t) - 1)
+            quantiles = cumsum * dt
+            return np.nonzero(quantiles >= 0.5)[0].min()
+
+        for kernel in self.kernels:
+            median_id = kernel.median_index(self.time_input)
+            median_id_old = median_index(self.time_input)
+            # the old implementation was off by one index because its
+            # cumsum did not start with 0 (a leading zero should have
+            # been prepended to the cumsum).
+            self.assertLessEqual(abs(median_id - median_id_old), 1)
+
+
 if __name__ == '__main__':
     unittest.main()

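Besides style fixes, the kernel tests above exercise a new `cdf`/`icdf` pair and a `median_index` method. A minimal sketch of the invariants those tests assert, using a Gaussian kernel with a made-up sigma:

    import numpy as np
    import quantities as pq
    import elephant.kernels as kernels

    kernel = kernels.GaussianKernel(sigma=0.5 * pq.s)

    # cdf and icdf are mutual inverses on (0, 1), as test_cdf_icdf checks.
    t = kernel.icdf(0.25)
    assert abs(kernel.cdf(t) - 0.25) < 1e-6

    # median_index expects a sorted time array and returns the index
    # splitting the enclosed kernel mass in half (see KernelMedianIndex).
    times = np.linspace(-5, 5, num=101) * pq.s
    mid = kernel.median_index(times)
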
+ 216 - 176
code/elephant/elephant/test/test_neo_tools.py

@@ -2,7 +2,7 @@
 """
 Unit tests for the neo_tools module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
@@ -26,7 +26,8 @@ ARRAY_ATTRS = ['waveforms',
                'index',
                'channel_names',
                'channel_ids',
-               'coordinates'
+               'coordinates',
+               'array_annotations'
                ]
 
 
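The hunks below track the rename of `nt.extract_neo_attrs` to `nt.extract_neo_attributes`; the keywords (`parents`, `skip_array`, `skip_none`, `child_first`) are unchanged. A minimal sketch of the renamed call; the spike train is made up:

    import neo
    import quantities as pq
    import elephant.neo_tools as nt

    st = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=10 * pq.s, name='unit 1')

    # Same keywords as before, new function name.
    attrs = nt.extract_neo_attributes(st, parents=False, skip_array=True)
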
@@ -316,16 +317,16 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('SpikeTrain', seed=0)
         targ = strip_iter_values(targ)
 
-        res00 = nt.extract_neo_attrs(obj, parents=False, skip_array=True)
-        res10 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True)
-        res20 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False)
-        res01 = nt.extract_neo_attrs(obj, parents=True, skip_array=True)
-        res11 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True)
-        res21 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False)
+        res00 = nt.extract_neo_attributes(obj, parents=False, skip_array=True)
+        res10 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True)
+        res20 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False)
+        res01 = nt.extract_neo_attributes(obj, parents=True, skip_array=True)
+        res11 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True)
+        res21 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False)
 
         self.assertEqual(targ, res00)
         self.assertEqual(targ, res10)
@@ -342,18 +343,18 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
             if value is None:
                 del targ[key]
 
-        res00 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     skip_none=True)
-        res10 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True, skip_none=True)
-        res20 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False, skip_none=True)
-        res01 = nt.extract_neo_attrs(obj, parents=True, skip_array=True,
-                                     skip_none=True)
-        res11 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True, skip_none=True)
-        res21 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False, skip_none=True)
+        res00 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          skip_none=True)
+        res10 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True, skip_none=True)
+        res20 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False, skip_none=True)
+        res01 = nt.extract_neo_attributes(obj, parents=True, skip_array=True,
+                                          skip_none=True)
+        res11 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True, skip_none=True)
+        res21 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False, skip_none=True)
 
         self.assertEqual(targ, res00)
         self.assertEqual(targ, res10)
@@ -367,16 +368,16 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Epoch', seed=0)
         targ = strip_iter_values(targ)
 
-        res00 = nt.extract_neo_attrs(obj, parents=False, skip_array=True)
-        res10 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True)
-        res20 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False)
-        res01 = nt.extract_neo_attrs(obj, parents=True, skip_array=True)
-        res11 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True)
-        res21 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False)
+        res00 = nt.extract_neo_attributes(obj, parents=False, skip_array=True)
+        res10 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True)
+        res20 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False)
+        res01 = nt.extract_neo_attributes(obj, parents=True, skip_array=True)
+        res11 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True)
+        res21 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False)
 
         self.assertEqual(targ, res00)
         self.assertEqual(targ, res10)
@@ -390,16 +391,16 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Event', seed=0)
         targ = strip_iter_values(targ)
 
-        res00 = nt.extract_neo_attrs(obj, parents=False, skip_array=True)
-        res10 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True)
-        res20 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False)
-        res01 = nt.extract_neo_attrs(obj, parents=True, skip_array=True)
-        res11 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=True)
-        res21 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                     child_first=False)
+        res00 = nt.extract_neo_attributes(obj, parents=False, skip_array=True)
+        res10 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True)
+        res20 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False)
+        res01 = nt.extract_neo_attributes(obj, parents=True, skip_array=True)
+        res11 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=True)
+        res21 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                          child_first=False)
 
         self.assertEqual(targ, res00)
         self.assertEqual(targ, res10)
@@ -413,22 +414,26 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('SpikeTrain', seed=0)
         del targ['times']
 
-        res000 = nt.extract_neo_attrs(obj, parents=False)
-        res100 = nt.extract_neo_attrs(obj, parents=False, child_first=True)
-        res200 = nt.extract_neo_attrs(obj, parents=False, child_first=False)
-        res010 = nt.extract_neo_attrs(obj, parents=False, skip_array=False)
-        res110 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                      child_first=True)
-        res210 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                      child_first=False)
-        res001 = nt.extract_neo_attrs(obj, parents=True)
-        res101 = nt.extract_neo_attrs(obj, parents=True, child_first=True)
-        res201 = nt.extract_neo_attrs(obj, parents=True, child_first=False)
-        res011 = nt.extract_neo_attrs(obj, parents=True, skip_array=False)
-        res111 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                      child_first=True)
-        res211 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                      child_first=False)
+        res000 = nt.extract_neo_attributes(obj, parents=False)
+        res100 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=True)
+        res200 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=False)
+        res010 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False)
+        res110 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False, child_first=True)
+        res210 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False, child_first=False)
+        res001 = nt.extract_neo_attributes(obj, parents=True)
+        res101 = nt.extract_neo_attributes(obj, parents=True, child_first=True)
+        res201 = nt.extract_neo_attributes(
+            obj, parents=True, child_first=False)
+        res011 = nt.extract_neo_attributes(obj, parents=True, skip_array=False)
+        res111 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                           child_first=True)
+        res211 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                           child_first=False)
 
         self.assert_dicts_equal(targ, res000)
         self.assert_dicts_equal(targ, res100)
@@ -443,27 +448,42 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         self.assert_dicts_equal(targ, res111)
         self.assert_dicts_equal(targ, res211)
 
+    @staticmethod
+    def _fix_neo_issue_749(obj, targ):
+        # TODO: remove once fixed
+        # https://github.com/NeuralEnsemble/python-neo/issues/749
+        num_times = len(targ['times'])
+        obj = obj[:num_times]
+        del targ['array_annotations']
+        return obj
+
     def test__extract_neo_attrs__epoch_parents_empty_array(self):
         obj = fake_neo('Epoch', seed=0)
         targ = get_fake_values('Epoch', seed=0)
+
+        obj = self._fix_neo_issue_749(obj, targ)
         del targ['times']
 
-        res000 = nt.extract_neo_attrs(obj, parents=False)
-        res100 = nt.extract_neo_attrs(obj, parents=False, child_first=True)
-        res200 = nt.extract_neo_attrs(obj, parents=False, child_first=False)
-        res010 = nt.extract_neo_attrs(obj, parents=False, skip_array=False)
-        res110 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                      child_first=True)
-        res210 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                      child_first=False)
-        res001 = nt.extract_neo_attrs(obj, parents=True)
-        res101 = nt.extract_neo_attrs(obj, parents=True, child_first=True)
-        res201 = nt.extract_neo_attrs(obj, parents=True, child_first=False)
-        res011 = nt.extract_neo_attrs(obj, parents=True, skip_array=False)
-        res111 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                      child_first=True)
-        res211 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                      child_first=False)
+        res000 = nt.extract_neo_attributes(obj, parents=False)
+        res100 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=True)
+        res200 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=False)
+        res010 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False)
+        res110 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False, child_first=True)
+        res210 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False, child_first=False)
+        res001 = nt.extract_neo_attributes(obj, parents=True)
+        res101 = nt.extract_neo_attributes(obj, parents=True, child_first=True)
+        res201 = nt.extract_neo_attributes(
+            obj, parents=True, child_first=False)
+        res011 = nt.extract_neo_attributes(obj, parents=True, skip_array=False)
+        res111 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                           child_first=True)
+        res211 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                           child_first=False)
 
         self.assert_dicts_equal(targ, res000)
         self.assert_dicts_equal(targ, res100)
@@ -483,22 +503,26 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Event', seed=0)
         del targ['times']
 
-        res000 = nt.extract_neo_attrs(obj, parents=False)
-        res100 = nt.extract_neo_attrs(obj, parents=False, child_first=True)
-        res200 = nt.extract_neo_attrs(obj, parents=False, child_first=False)
-        res010 = nt.extract_neo_attrs(obj, parents=False, skip_array=False)
-        res110 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                      child_first=True)
-        res210 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                      child_first=False)
-        res001 = nt.extract_neo_attrs(obj, parents=True)
-        res101 = nt.extract_neo_attrs(obj, parents=True, child_first=True)
-        res201 = nt.extract_neo_attrs(obj, parents=True, child_first=False)
-        res011 = nt.extract_neo_attrs(obj, parents=True, skip_array=False)
-        res111 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                      child_first=True)
-        res211 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                      child_first=False)
+        res000 = nt.extract_neo_attributes(obj, parents=False)
+        res100 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=True)
+        res200 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=False)
+        res010 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False)
+        res110 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False, child_first=True)
+        res210 = nt.extract_neo_attributes(
+            obj, parents=False, skip_array=False, child_first=False)
+        res001 = nt.extract_neo_attributes(obj, parents=True)
+        res101 = nt.extract_neo_attributes(obj, parents=True, child_first=True)
+        res201 = nt.extract_neo_attributes(
+            obj, parents=True, child_first=False)
+        res011 = nt.extract_neo_attributes(obj, parents=True, skip_array=False)
+        res111 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                           child_first=True)
+        res211 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                           child_first=False)
 
         self.assert_dicts_equal(targ, res000)
         self.assert_dicts_equal(targ, res100)
@@ -518,11 +542,11 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('SpikeTrain', seed=obj.annotations['seed'])
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=False, skip_array=True)
-        res1 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                    child_first=True)
-        res2 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                    child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=False, skip_array=True)
+        res1 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                         child_first=True)
+        res2 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                         child_first=False)
 
         del res0['i']
         del res1['i']
@@ -540,11 +564,11 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Epoch', seed=obj.annotations['seed'])
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=False, skip_array=True)
-        res1 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                    child_first=True)
-        res2 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                    child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=False, skip_array=True)
+        res1 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                         child_first=True)
+        res2 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                         child_first=False)
 
         del res0['i']
         del res1['i']
@@ -562,11 +586,11 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Event', seed=obj.annotations['seed'])
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=False, skip_array=True)
-        res1 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                    child_first=True)
-        res2 = nt.extract_neo_attrs(obj, parents=False, skip_array=True,
-                                    child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=False, skip_array=True)
+        res1 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                         child_first=True)
+        res2 = nt.extract_neo_attributes(obj, parents=False, skip_array=True,
+                                         child_first=False)
 
         del res0['i']
         del res1['i']
@@ -584,14 +608,15 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('SpikeTrain', seed=obj.annotations['seed'])
         del targ['times']
 
-        res00 = nt.extract_neo_attrs(obj, parents=False, skip_array=False)
-        res10 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                     child_first=True)
-        res20 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                     child_first=False)
-        res01 = nt.extract_neo_attrs(obj, parents=False)
-        res11 = nt.extract_neo_attrs(obj, parents=False, child_first=True)
-        res21 = nt.extract_neo_attrs(obj, parents=False, child_first=False)
+        res00 = nt.extract_neo_attributes(obj, parents=False, skip_array=False)
+        res10 = nt.extract_neo_attributes(obj, parents=False, skip_array=False,
+                                          child_first=True)
+        res20 = nt.extract_neo_attributes(obj, parents=False, skip_array=False,
+                                          child_first=False)
+        res01 = nt.extract_neo_attributes(obj, parents=False)
+        res11 = nt.extract_neo_attributes(obj, parents=False, child_first=True)
+        res21 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=False)
 
         del res00['i']
         del res10['i']
@@ -616,16 +641,20 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
     def test__extract_neo_attrs__epoch_noparents_array(self):
         obj = self.block.list_children_by_class('Epoch')[0]
         targ = get_fake_values('Epoch', seed=obj.annotations['seed'])
+
+        # 'times' is not in obj._necessary_attrs + obj._recommended_attrs
+        obj = self._fix_neo_issue_749(obj, targ)
         del targ['times']
 
-        res00 = nt.extract_neo_attrs(obj, parents=False, skip_array=False)
-        res10 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                     child_first=True)
-        res20 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                     child_first=False)
-        res01 = nt.extract_neo_attrs(obj, parents=False)
-        res11 = nt.extract_neo_attrs(obj, parents=False, child_first=True)
-        res21 = nt.extract_neo_attrs(obj, parents=False, child_first=False)
+        res00 = nt.extract_neo_attributes(obj, parents=False, skip_array=False)
+        res10 = nt.extract_neo_attributes(obj, parents=False, skip_array=False,
+                                          child_first=True)
+        res20 = nt.extract_neo_attributes(obj, parents=False, skip_array=False,
+                                          child_first=False)
+        res01 = nt.extract_neo_attributes(obj, parents=False)
+        res11 = nt.extract_neo_attributes(obj, parents=False, child_first=True)
+        res21 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=False)
 
         del res00['i']
         del res10['i']
@@ -652,14 +681,15 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Event', seed=obj.annotations['seed'])
         del targ['times']
 
-        res00 = nt.extract_neo_attrs(obj, parents=False, skip_array=False)
-        res10 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                     child_first=True)
-        res20 = nt.extract_neo_attrs(obj, parents=False, skip_array=False,
-                                     child_first=False)
-        res01 = nt.extract_neo_attrs(obj, parents=False)
-        res11 = nt.extract_neo_attrs(obj, parents=False, child_first=True)
-        res21 = nt.extract_neo_attrs(obj, parents=False, child_first=False)
+        res00 = nt.extract_neo_attributes(obj, parents=False, skip_array=False)
+        res10 = nt.extract_neo_attributes(obj, parents=False, skip_array=False,
+                                          child_first=True)
+        res20 = nt.extract_neo_attributes(obj, parents=False, skip_array=False,
+                                          child_first=False)
+        res01 = nt.extract_neo_attributes(obj, parents=False)
+        res11 = nt.extract_neo_attributes(obj, parents=False, child_first=True)
+        res21 = nt.extract_neo_attributes(
+            obj, parents=False, child_first=False)
 
         del res00['i']
         del res10['i']
@@ -697,15 +727,16 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
                                     seed=obj.annotations['seed']))
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=True)
-        res1 = nt.extract_neo_attrs(obj, parents=True, skip_array=True,
-                                    child_first=True)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=True)
+        res1 = nt.extract_neo_attributes(obj, parents=True, skip_array=True,
+                                         child_first=True)
 
         del res0['i']
         del res1['i']
         del res0['j']
         del res1['j']
-        del res0['index']  # name clash between Block.index and ChannelIndex.index
+        # name clash between Block.index and ChannelIndex.index
+        del res0['index']
         del res1['index']
 
         self.assertEqual(targ, res0)
@@ -721,15 +752,16 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ.update(get_fake_values('Epoch', seed=obj.annotations['seed']))
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=True)
-        res1 = nt.extract_neo_attrs(obj, parents=True, skip_array=True,
-                                    child_first=True)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=True)
+        res1 = nt.extract_neo_attributes(obj, parents=True, skip_array=True,
+                                         child_first=True)
 
         del res0['i']
         del res1['i']
         del res0['j']
         del res1['j']
-        del res0['index']  # name clash between Block.index and ChannelIndex.index
+        # name clash between Block.index and ChannelIndex.index
+        del res0['index']
         del res1['index']
 
         self.assertEqual(targ, res0)
@@ -745,15 +777,16 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ.update(get_fake_values('Event', seed=obj.annotations['seed']))
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=True)
-        res1 = nt.extract_neo_attrs(obj, parents=True, skip_array=True,
-                                    child_first=True)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=True)
+        res1 = nt.extract_neo_attributes(obj, parents=True, skip_array=True,
+                                         child_first=True)
 
         del res0['i']
         del res1['i']
         del res0['j']
         del res1['j']
-        del res0['index']  # name clash between Block.index and ChannelIndex.index
+        # name clash between Block.index and ChannelIndex.index
+        del res0['index']
         del res1['index']
 
         self.assertEqual(targ, res0)
@@ -774,12 +807,13 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ.update(get_fake_values('Block', seed=blk.annotations['seed']))
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=True,
-                                    child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=True,
+                                         child_first=False)
 
         del res0['i']
         del res0['j']
-        del res0['index']  # name clash between Block.index and ChannelIndex.index
+        # name clash between Block.index and ChannelIndex.index
+        del res0['index']
 
         self.assertEqual(targ, res0)
 
@@ -793,12 +827,13 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ.update(get_fake_values('Block', seed=blk.annotations['seed']))
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=True,
-                                    child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=True,
+                                         child_first=False)
 
         del res0['i']
         del res0['j']
-        del res0['index']  # name clash between Block.index and ChannelIndex.index
+        # name clash between Block.index and ChannelIndex.index
+        del res0['index']
 
         self.assertEqual(targ, res0)
 
@@ -812,12 +847,13 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ.update(get_fake_values('Block', seed=blk.annotations['seed']))
         targ = strip_iter_values(targ)
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=True,
-                                    child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=True,
+                                         child_first=False)
 
         del res0['i']
         del res0['j']
-        del res0['index']  # name clash between Block.index and ChannelIndex.index
+        # name clash between Block.index and ChannelIndex.index
+        del res0['index']
 
         self.assertEqual(targ, res0)
 
@@ -838,11 +874,11 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
                                     seed=obj.annotations['seed']))
         del targ['times']
 
-        res00 = nt.extract_neo_attrs(obj, parents=True, skip_array=False)
-        res10 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                     child_first=True)
-        res01 = nt.extract_neo_attrs(obj, parents=True)
-        res11 = nt.extract_neo_attrs(obj, parents=True,  child_first=True)
+        res00 = nt.extract_neo_attributes(obj, parents=True, skip_array=False)
+        res10 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                          child_first=True)
+        res01 = nt.extract_neo_attributes(obj, parents=True)
+        res11 = nt.extract_neo_attributes(obj, parents=True, child_first=True)
 
         del res00['i']
         del res10['i']
@@ -866,13 +902,15 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Block', seed=blk.annotations['seed'])
         targ.update(get_fake_values('Segment', seed=seg.annotations['seed']))
         targ.update(get_fake_values('Epoch', seed=obj.annotations['seed']))
+
+        obj = self._fix_neo_issue_749(obj, targ)
         del targ['times']
 
-        res00 = nt.extract_neo_attrs(obj, parents=True, skip_array=False)
-        res10 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                     child_first=True)
-        res01 = nt.extract_neo_attrs(obj, parents=True)
-        res11 = nt.extract_neo_attrs(obj, parents=True,  child_first=True)
+        res00 = nt.extract_neo_attributes(obj, parents=True, skip_array=False)
+        res10 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                          child_first=True)
+        res01 = nt.extract_neo_attributes(obj, parents=True)
+        res11 = nt.extract_neo_attributes(obj, parents=True, child_first=True)
 
         del res00['i']
         del res10['i']
@@ -898,11 +936,11 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ.update(get_fake_values('Event', seed=obj.annotations['seed']))
         del targ['times']
 
-        res00 = nt.extract_neo_attrs(obj, parents=True, skip_array=False)
-        res10 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                     child_first=True)
-        res01 = nt.extract_neo_attrs(obj, parents=True)
-        res11 = nt.extract_neo_attrs(obj, parents=True,  child_first=True)
+        res00 = nt.extract_neo_attributes(obj, parents=True, skip_array=False)
+        res10 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                          child_first=True)
+        res01 = nt.extract_neo_attributes(obj, parents=True)
+        res11 = nt.extract_neo_attributes(obj, parents=True, child_first=True)
 
         del res00['i']
         del res10['i']
@@ -935,9 +973,9 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         del targ['index']
         del targ['channel_names']
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                    child_first=False)
-        res1 = nt.extract_neo_attrs(obj, parents=True, child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                         child_first=False)
+        res1 = nt.extract_neo_attributes(obj, parents=True, child_first=False)
 
         del res0['i']
         del res1['i']
@@ -959,11 +997,13 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ = get_fake_values('Epoch', seed=obj.annotations['seed'])
         targ.update(get_fake_values('Segment', seed=seg.annotations['seed']))
         targ.update(get_fake_values('Block', seed=blk.annotations['seed']))
+
+        obj = self._fix_neo_issue_749(obj, targ)
         del targ['times']
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                    child_first=False)
-        res1 = nt.extract_neo_attrs(obj, parents=True, child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                         child_first=False)
+        res1 = nt.extract_neo_attributes(obj, parents=True, child_first=False)
 
         del res0['i']
         del res1['i']
@@ -983,9 +1023,9 @@ class ExtractNeoAttrsTestCase(unittest.TestCase):
         targ.update(get_fake_values('Block', seed=blk.annotations['seed']))
         del targ['times']
 
-        res0 = nt.extract_neo_attrs(obj, parents=True, skip_array=False,
-                                    child_first=False)
-        res1 = nt.extract_neo_attrs(obj, parents=True, child_first=False)
+        res0 = nt.extract_neo_attributes(obj, parents=True, skip_array=False,
+                                         child_first=False)
+        res1 = nt.extract_neo_attributes(obj, parents=True, child_first=False)
 
         del res0['i']
         del res1['i']

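The rename of `extract_neo_attrs` to `extract_neo_attributes` throughout
this file leaves the keyword arguments untouched; only the function name
changes. A minimal usage sketch (assuming a small `neo.SpikeTrain`):

    import neo
    import quantities as pq
    import elephant.neo_tools as nt

    st = neo.SpikeTrain([1.0, 2.0, 3.0] * pq.s, t_stop=10.0 * pq.s)
    # Scalar attributes of the object itself: no attributes inherited
    # from parent containers, no array-valued attributes.
    attrs = nt.extract_neo_attributes(st, parents=False, skip_array=True)
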
+ 23 - 11
code/elephant/elephant/test/test_pandas_bridge.py

@@ -2,28 +2,32 @@
 """
 Unit tests for the pandas bridge module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
 from __future__ import division, print_function
 
 import unittest
+import warnings
+from distutils.version import StrictVersion
 from itertools import chain
 
-from neo.test.generate_datasets import fake_neo
 import numpy as np
-from numpy.testing import assert_array_equal
 import quantities as pq
+from neo.test.generate_datasets import fake_neo
+from numpy.testing import assert_array_equal
 
 try:
     import pandas as pd
     from pandas.util.testing import assert_frame_equal, assert_index_equal
 except ImportError:
     HAVE_PANDAS = False
+    pandas_version = StrictVersion('0.0.0')
 else:
     import elephant.pandas_bridge as ep
     HAVE_PANDAS = True
+    pandas_version = StrictVersion(pd.__version__)
 
 if HAVE_PANDAS:
     # Currying, otherwise the unittest will break with pandas>=0.16.0
@@ -39,19 +43,19 @@ if HAVE_PANDAS:
             return pd.util.testing.assert_index_equal(left, right)
 
 
-@unittest.skipUnless(HAVE_PANDAS, 'requires pandas')
+@unittest.skipUnless(pandas_version >= '0.24.0', 'requires pandas >= 0.24.0')
 class MultiindexFromDictTestCase(unittest.TestCase):
     def test__multiindex_from_dict(self):
         inds = {'test1': 6.5,
                 'test2': 5,
                 'test3': 'test'}
         targ = pd.MultiIndex(levels=[[6.5], [5], ['test']],
-                             labels=[[0], [0], [0]],
+                             codes=[[0], [0], [0]],
                              names=['test1', 'test2', 'test3'])
         res0 = ep._multiindex_from_dict(inds)
         self.assertEqual(targ.levels, res0.levels)
         self.assertEqual(targ.names, res0.names)
-        self.assertEqual(targ.labels, res0.labels)
+        self.assertEqual(targ.codes, res0.codes)
 
 
 def _convert_levels(levels):
@@ -1640,7 +1644,6 @@ class MultiEventsToDataframeTestCase(unittest.TestCase):
             np.array(targ.values, dtype=np.float),
             np.array(res0.values, dtype=np.float))
 
-
         assert_frame_equal(targ, res0)
 
     def test__multi_events_to_dataframe__block_noparents(self):
@@ -2703,7 +2706,10 @@ class SliceSpiketrainTestCase(unittest.TestCase):
         res1_stop = res1.columns.get_level_values('t_stop').values
 
         targ = self.obj.values
-        targ[targ < targ_start] = np.nan
+        with warnings.catch_warnings():
+            warnings.simplefilter("ignore")
+            # targ already contains NaNs; ignore warnings from comparisons
+            targ[targ < targ_start] = np.nan
 
         self.assertFalse(res0 is targ)
         self.assertFalse(res1 is targ)
@@ -2731,7 +2737,10 @@ class SliceSpiketrainTestCase(unittest.TestCase):
         res1_stop = res1.columns.get_level_values('t_stop').unique().tolist()
 
         targ = self.obj.values
-        targ[targ > targ_stop] = np.nan
+        with warnings.catch_warnings():
+            warnings.simplefilter("ignore")
+            # targ already contains NaNs; ignore warnings from comparisons
+            targ[targ > targ_stop] = np.nan
 
         self.assertFalse(res0 is targ)
         self.assertFalse(res1 is targ)
@@ -2757,8 +2766,11 @@ class SliceSpiketrainTestCase(unittest.TestCase):
         res0_stop = res0.columns.get_level_values('t_stop').unique().tolist()
 
         targ = self.obj.values
-        targ[targ < targ_start] = np.nan
-        targ[targ > targ_stop] = np.nan
+        with warnings.catch_warnings():
+            warnings.simplefilter("ignore")
+            # targ already contains NaNs; ignore warnings from comparisons
+            targ[targ < targ_start] = np.nan
+            targ[targ > targ_stop] = np.nan
 
         self.assertFalse(res0 is targ)
 

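The `labels` to `codes` substitutions above follow the pandas 0.24.0
rename of the corresponding `MultiIndex` keyword and attribute, which is
also why the test case is now skipped for older pandas. Side by side (a
sketch; index values taken from the test above):

    import pandas as pd

    # pandas < 0.24.0 spelled the second argument 'labels':
    #   pd.MultiIndex(levels=..., labels=[[0], [0], [0]], names=...)
    # pandas >= 0.24.0 expects 'codes' instead:
    idx = pd.MultiIndex(levels=[[6.5], [5], ['test']],
                        codes=[[0], [0], [0]],
                        names=['test1', 'test2', 'test3'])
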
+ 21 - 5
code/elephant/elephant/test/test_phase_analysis.py

@@ -2,7 +2,7 @@
 """
 Unit tests for the phase analysis module.
 
-:copyright: Copyright 2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 from __future__ import division, print_function
@@ -141,7 +141,7 @@ class SpikeTriggeredPhaseTestCase(unittest.TestCase):
         # This is a spike clearly outside the bounds
         st = SpikeTrain(
             [-50, 50],
-            units='s', t_start=-100*pq.s, t_stop=100*pq.s)
+            units='s', t_start=-100 * pq.s, t_stop=100 * pq.s)
         phases_noint, _, _ = elephant.phase_analysis.spike_triggered_phase(
             elephant.signal_processing.hilbert(self.anasig0),
             st,
@@ -154,7 +154,7 @@ class SpikeTriggeredPhaseTestCase(unittest.TestCase):
         # spike is to be considered.
         st = SpikeTrain(
             [0, 50],
-            units='s', t_start=-100*pq.s, t_stop=100*pq.s)
+            units='s', t_start=-100 * pq.s, t_stop=100 * pq.s)
         phases_noint, _, _ = elephant.phase_analysis.spike_triggered_phase(
             elephant.signal_processing.hilbert(self.anasig0),
             st,
@@ -165,7 +165,7 @@ class SpikeTriggeredPhaseTestCase(unittest.TestCase):
         # This is a spike clearly outside the bounds
         st = SpikeTrain(
             [1, 250],
-            units='s', t_start=-1*pq.s, t_stop=300*pq.s)
+            units='s', t_start=-1 * pq.s, t_stop=300 * pq.s)
         phases_noint, _, _ = elephant.phase_analysis.spike_triggered_phase(
             elephant.signal_processing.hilbert(self.anasig0),
             st,
@@ -178,13 +178,29 @@ class SpikeTriggeredPhaseTestCase(unittest.TestCase):
         # spike is not to be considered.
         st = SpikeTrain(
             [1, 100],
-            units='s', t_start=-1*pq.s, t_stop=200*pq.s)
+            units='s', t_start=-1 * pq.s, t_stop=200 * pq.s)
         phases_noint, _, _ = elephant.phase_analysis.spike_triggered_phase(
             elephant.signal_processing.hilbert(self.anasig0),
             st,
             interpolate=False)
         self.assertEqual(len(phases_noint[0]), 1)
 
+    # This test verifies the correct handling of input signals that have
+    # different time units, including a CompoundUnit
+    def test_regression_269(self):
+        # This is a spike train sampled at 30 kHz: one spike at 1 s, one
+        # just before the end of the signal
+        cu = pq.CompoundUnit("1/30000.*s")
+        st = SpikeTrain(
+            [30000., (self.anasig0.t_stop - 1 * pq.s).rescale(cu).magnitude],
+            units=cu,
+            t_start=-1 * pq.s, t_stop=300 * pq.s)
+        phases_noint, _, _ = elephant.phase_analysis.spike_triggered_phase(
+            elephant.signal_processing.hilbert(self.anasig0),
+            st,
+            interpolate=False)
+        self.assertEqual(len(phases_noint[0]), 2)
+
 
 if __name__ == '__main__':
     unittest.main()

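For the regression test above, recall that a `quantities.CompoundUnit`
expresses one tick of an arbitrary clock, here 30 kHz, so spike times can
be stored as raw sample indices. A minimal sketch (values chosen
arbitrarily):

    import quantities as pq
    from neo import SpikeTrain

    cu = pq.CompoundUnit("1/30000.*s")  # one tick of a 30 kHz clock
    st = SpikeTrain([30000.0, 45000.0], units=cu,
                    t_start=0 * pq.s, t_stop=10 * pq.s)
    # 30000 ticks at 30 kHz correspond to exactly 1 second.
    print(st.rescale(pq.s))  # [1.0 1.5] s
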
+ 546 - 53
code/elephant/elephant/test/test_signal_processing.py

@@ -2,7 +2,7 @@
 """
 Unit tests for the signal_processing module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 from __future__ import division, print_function
@@ -11,14 +11,141 @@ import unittest
 
 import neo
 import numpy as np
+import quantities as pq
 import scipy.signal as spsig
 import scipy.stats
+from numpy.ma.testutils import assert_array_equal, assert_allclose
 from numpy.testing.utils import assert_array_almost_equal
-import quantities as pq
 
 import elephant.signal_processing
 
-from numpy.ma.testutils import assert_array_equal, assert_allclose
+
+class PairwiseCrossCorrelationTest(unittest.TestCase):
+    # Set parameters
+    sampling_period = 0.02 * pq.s
+    sampling_rate = 1. / sampling_period
+    n_samples = 2018
+    times = np.arange(n_samples) * sampling_period
+    freq = 1. * pq.Hz
+
+    def test_cross_correlation_freqs(self):
+        '''
+        Sine vs. cosine for different frequencies.
+        Note that the accuracy depends on N and min(f); e.g., f=0.1 with
+        N=2018 only reaches an accuracy on the order of decimal=1.
+        '''
+        freq_arr = np.linspace(0.5, 15, 8) * pq.Hz
+        signal = np.zeros((self.n_samples, 3))
+        for freq in freq_arr:
+            signal[:, 0] = np.sin(2. * np.pi * freq * self.times)
+            signal[:, 1] = np.cos(2. * np.pi * freq * self.times)
+            signal[:, 2] = np.cos(2. * np.pi * freq * self.times + 0.2)
+            # Convert signal to neo.AnalogSignal
+            signal_neo = neo.AnalogSignal(signal, units='mV',
+                                          t_start=0. * pq.ms,
+                                          sampling_rate=self.sampling_rate,
+                                          dtype=float)
+            rho = elephant.signal_processing.cross_correlation_function(
+                signal_neo, [[0, 1], [0, 2]])
+            # Cross-correlation of sine and cosine should be sine
+            assert_array_almost_equal(
+                rho.magnitude[:, 0], np.sin(2. * np.pi * freq * rho.times),
+                decimal=2)
+            self.assertEqual(rho.shape, (signal.shape[0], 2))  # 2 pairs
+
+    def test_cross_correlation_nlags(self):
+        '''
+        Sine vs cosine for specific nlags
+        '''
+        nlags = 30
+        signal = np.zeros((self.n_samples, 2))
+        signal[:, 0] = 0.2 * np.sin(2. * np.pi * self.freq * self.times)
+        signal[:, 1] = 5.3 * np.cos(2. * np.pi * self.freq * self.times)
+        # Convert signal to neo.AnalogSignal
+        signal = neo.AnalogSignal(signal, units='mV', t_start=0. * pq.ms,
+                                  sampling_rate=self.sampling_rate,
+                                  dtype=float)
+        rho = elephant.signal_processing.cross_correlation_function(
+            signal, [0, 1], n_lags=nlags)
+        # Test if vector of lags tau has correct length
+        self.assertEqual(len(rho.times), 2 * int(nlags) + 1)
+
+    def test_cross_correlation_phi(self):
+        '''
+        Sine with phase shift phi vs cosine
+        '''
+        phi = np.pi / 6.
+        signal = np.zeros((self.n_samples, 2))
+        signal[:, 0] = 0.2 * np.sin(2. * np.pi * self.freq * self.times + phi)
+        signal[:, 1] = 5.3 * np.cos(2. * np.pi * self.freq * self.times)
+        # Convert signal to neo.AnalogSignal
+        signal = neo.AnalogSignal(signal, units='mV', t_start=0. * pq.ms,
+                                  sampling_rate=self.sampling_rate,
+                                  dtype=float)
+        rho = elephant.signal_processing.cross_correlation_function(
+            signal, [0, 1])
+        # Cross-correlation of sine and cosine should be sine + phi
+        assert_array_almost_equal(rho.magnitude[:, 0], np.sin(
+            2. * np.pi * self.freq * rho.times + phi), decimal=2)
+
+    def test_cross_correlation_envelope(self):
+        '''
+        Envelope of sine vs cosine
+        '''
+        # Sine with phase shift phi vs cosine for different frequencies
+        nlags = 800  # nlags needs to be smaller than N/2 due to border effects
+        signal = np.zeros((self.n_samples, 2))
+        signal[:, 0] = 0.2 * np.sin(2. * np.pi * self.freq * self.times)
+        signal[:, 1] = 5.3 * np.cos(2. * np.pi * self.freq * self.times)
+        # Convert signal to neo.AnalogSignal
+        signal = neo.AnalogSignal(signal, units='mV', t_start=0. * pq.ms,
+                                  sampling_rate=self.sampling_rate,
+                                  dtype=float)
+        envelope = elephant.signal_processing.cross_correlation_function(
+            signal, [0, 1], n_lags=nlags, hilbert_envelope=True)
+        # The envelope should equal one for a sinusoidal function
+        assert_array_almost_equal(envelope, np.ones_like(envelope), decimal=2)
+
+    def test_cross_correlation_biased(self):
+        signal = np.c_[np.sin(2. * np.pi * self.freq * self.times),
+                       np.cos(2. * np.pi * self.freq * self.times)] * pq.mV
+        signal = neo.AnalogSignal(signal, t_start=0. * pq.ms,
+                                  sampling_rate=self.sampling_rate)
+        raw = elephant.signal_processing.cross_correlation_function(
+            signal, [0, 1], scaleopt='none'
+        )
+        biased = elephant.signal_processing.cross_correlation_function(
+            signal, [0, 1], scaleopt='biased'
+        )
+        assert_array_almost_equal(biased, raw / biased.shape[0])
+
+    def test_cross_correlation_coeff(self):
+        signal = np.c_[np.sin(2. * np.pi * self.freq * self.times),
+                       np.cos(2. * np.pi * self.freq * self.times)] * pq.mV
+        signal = neo.AnalogSignal(signal, t_start=0. * pq.ms,
+                                  sampling_rate=self.sampling_rate)
+        normalized = elephant.signal_processing.cross_correlation_function(
+            signal, [0, 1], scaleopt='coeff'
+        )
+        sig1, sig2 = signal.magnitude.T
+        target_numpy = np.correlate(sig1, sig2, mode="same")
+        target_numpy /= np.sqrt((sig1 ** 2).sum() * (sig2 ** 2).sum())
+        target_numpy = np.expand_dims(target_numpy, axis=1)
+        assert_array_almost_equal(normalized.magnitude,
+                                  target_numpy,
+                                  decimal=3)
+
+    def test_cross_correlation_coeff_autocorr(self):
+        # 'coeff' matches the NumPy/MATLAB normalization convention
+        signal = np.sin(2. * np.pi * self.freq * self.times)
+        signal = signal[:, np.newaxis] * pq.mV
+        signal = neo.AnalogSignal(signal, t_start=0. * pq.ms,
+                                  sampling_rate=self.sampling_rate)
+        normalized = elephant.signal_processing.cross_correlation_function(
+            signal, [0, 0], scaleopt='coeff'
+        )
+        # auto-correlation at zero lag should equal 1
+        self.assertAlmostEqual(normalized[normalized.shape[0] // 2], 1)
 
 
 class ZscoreTestCase(unittest.TestCase):
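
The `scaleopt` values exercised above mirror MATLAB's `xcorr` scaling
options: 'none' returns the raw correlation sums, 'biased' divides them
by the number of samples, and 'coeff' normalizes by the signal energies
so that the zero-lag autocorrelation equals one. The 'coeff' reference
used in the tests reduces to plain NumPy (same assumptions as above;
`sig1` and `sig2` are equally long 1-D arrays):

    import numpy as np

    def xcorr_coeff(sig1, sig2):
        # Correlate and normalize by the geometric mean of the signal
        # energies; the autocorrelation then peaks at exactly 1.
        rho = np.correlate(sig1, sig2, mode="same")
        return rho / np.sqrt((sig1 ** 2).sum() * (sig2 ** 2).sum())
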
@@ -112,6 +239,15 @@ class ZscoreTestCase(unittest.TestCase):
         # Assert original signal is untouched
         self.assertEqual(signal[0, 0].magnitude, self.test_seq1[0])
 
+    def test_zscore_array_annotations(self):
+        signal = neo.AnalogSignal(
+            self.test_seq1, units='mV',
+            t_start=0. * pq.ms, sampling_rate=1000. * pq.Hz,
+            array_annotations=dict(valid=True, my_list=[0]))
+        zscored = elephant.signal_processing.zscore(signal, inplace=False)
+        self.assertDictEqual(signal.array_annotations,
+                             zscored.array_annotations)
+
     def test_zscore_single_multidim_inplace(self):
         """
         Test z-score on a single AnalogSignal with multiple dimensions, asking
@@ -123,14 +259,15 @@ class ZscoreTestCase(unittest.TestCase):
 
         m = np.mean(signal.magnitude, axis=0, keepdims=True)
         s = np.std(signal.magnitude, axis=0, keepdims=True)
-        target = (signal.magnitude - m) / s
+        ground_truth = np.divide(signal.magnitude - m, s,
+                                 out=np.zeros_like(signal.magnitude),
+                                 where=s != 0)
+        result = elephant.signal_processing.zscore(signal, inplace=True)
 
-        assert_array_almost_equal(
-            elephant.signal_processing.zscore(
-                signal, inplace=True).magnitude, target, decimal=9)
+        assert_array_almost_equal(result.magnitude, ground_truth, decimal=8)
 
         # Assert original signal is overwritten
-        self.assertEqual(signal[0, 0].magnitude, target[0, 0])
+        self.assertAlmostEqual(signal[0, 0].magnitude, ground_truth[0, 0])
 
     def test_zscore_single_dup_int(self):
         """
@@ -297,13 +434,19 @@ class ButterTestCase(unittest.TestCase):
         self.assertAlmostEqual(psd[0, 256], 0)
 
     def test_butter_filter_function(self):
+        """
+        `elephant.signal_processing.butter` return values test for all
+        available filters (result has to be almost equal):
+            * lfilter
+            * filtfilt
+            * sosfiltfilt
+        """
         # generate white noise AnalogSignal
         noise = neo.AnalogSignal(
             np.random.normal(size=5000),
-            sampling_rate=1000 * pq.Hz, units='mV')
+            sampling_rate=1000 * pq.Hz, units='mV',
+            array_annotations=dict(valid=True, my_list=[0]))
 
-        # test if the filter performance is as well with filftunc=lfilter as
-        # with filtfunc=filtfilt (i.e. default option)
         kwds = {'signal': noise, 'highpass_freq': 250.0 * pq.Hz,
                 'lowpass_freq': None, 'filter_function': 'filtfilt'}
         filtered_noise = elephant.signal_processing.butter(**kwds)
@@ -315,7 +458,17 @@ class ButterTestCase(unittest.TestCase):
         _, psd_lfilter = spsig.welch(
             filtered_noise.T, nperseg=1024, fs=1000.0, detrend=lambda x: x)
 
+        kwds['filter_function'] = 'sosfiltfilt'
+        filtered_noise = elephant.signal_processing.butter(**kwds)
+        _, psd_sosfiltfilt = spsig.welch(
+            filtered_noise.T, nperseg=1024, fs=1000.0, detrend=lambda x: x)
+
         self.assertAlmostEqual(psd_filtfilt[0, 0], psd_lfilter[0, 0])
+        self.assertAlmostEqual(psd_filtfilt[0, 0], psd_sosfiltfilt[0, 0])
+
+        # Test if array_annotations are preserved
+        self.assertDictEqual(noise.array_annotations,
+                             filtered_noise.array_annotations)
 
     def test_butter_invalid_filter_function(self):
         # generate a dummy AnalogSignal
@@ -345,7 +498,7 @@ class ButterTestCase(unittest.TestCase):
 
         # check input as NumPy ndarray
         filtered_noise_np = elephant.signal_processing.butter(
-            noise_np, 400.0, 100.0, fs=1000.0)
+            noise_np, 400.0, 100.0, sampling_frequency=1000.0)
         self.assertTrue(isinstance(filtered_noise_np, np.ndarray))
         self.assertFalse(isinstance(filtered_noise_np, pq.quantity.Quantity))
         self.assertFalse(isinstance(filtered_noise_np, neo.AnalogSignal))
@@ -353,7 +506,7 @@ class ButterTestCase(unittest.TestCase):
 
         # check input as Quantity array
         filtered_noise_pq = elephant.signal_processing.butter(
-            noise_pq, 400.0 * pq.Hz, 100.0 * pq.Hz, fs=1000.0)
+            noise_pq, 400.0 * pq.Hz, 100.0 * pq.Hz, sampling_frequency=1000.0)
         self.assertTrue(isinstance(filtered_noise_pq, pq.quantity.Quantity))
         self.assertFalse(isinstance(filtered_noise_pq, neo.AnalogSignal))
         self.assertEqual(filtered_noise_pq.shape, noise_pq.shape)
@@ -374,9 +527,9 @@ class ButterTestCase(unittest.TestCase):
     def test_butter_axis(self):
         noise = np.random.normal(size=(4, 5000))
         filtered_noise = elephant.signal_processing.butter(
-            noise, 250.0, fs=1000.0)
+            noise, 250.0, sampling_frequency=1000.0)
         filtered_noise_transposed = elephant.signal_processing.butter(
-            noise.T, 250.0, fs=1000.0, axis=0)
+            noise.T, 250.0, sampling_frequency=1000.0, axis=0)
         self.assertTrue(np.all(filtered_noise == filtered_noise_transposed.T))
 
     def test_butter_multidim_input(self):
@@ -386,7 +539,7 @@ class ButterTestCase(unittest.TestCase):
         noise_neo1d = neo.AnalogSignal(
             noise_pq[0], sampling_rate=1000.0 * pq.Hz)
         filtered_noise_pq = elephant.signal_processing.butter(
-            noise_pq, 250.0, fs=1000.0)
+            noise_pq, 250.0, sampling_frequency=1000.0)
         filtered_noise_neo = elephant.signal_processing.butter(
             noise_neo, 250.0)
         filtered_noise_neo1d = elephant.signal_processing.butter(
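
For plain NumPy or Quantity inputs, the sampling rate keyword of `butter`
is now `sampling_frequency` (formerly `fs`), and 'sosfiltfilt' is
accepted as a `filter_function` alongside 'lfilter' and 'filtfilt'. A
usage sketch under those assumptions (argument values are arbitrary):

    import numpy as np
    import elephant.signal_processing

    noise = np.random.normal(size=5000)
    # High-pass at 250 Hz using second-order sections, which tend to
    # be numerically more robust for high filter orders.
    filtered = elephant.signal_processing.butter(
        noise, highpass_freq=250.0, sampling_frequency=1000.0,
        filter_function='sosfiltfilt')
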
@@ -426,11 +579,13 @@ class HilbertTestCase(unittest.TestCase):
             self.amplitude[:, 2] * np.cos(self.phase[:, 2]),
             self.amplitude[:, 3] * np.cos(self.phase[:, 3])])
 
+        array_annotations = dict(my_list=np.arange(sigs.shape[0]))
         self.long_signals = neo.AnalogSignal(
             sigs.T, units='mV',
             t_start=0. * pq.ms,
             sampling_rate=(len(time) / (time[-1] - time[0])).rescale(pq.Hz),
-            dtype=float)
+            dtype=float,
+            array_annotations=array_annotations)
 
         # Generate test data covering a single oscillation cycle in 1s only
         phases = np.arange(0, 2 * np.pi, np.pi / 256)
@@ -461,14 +616,22 @@ class HilbertTestCase(unittest.TestCase):
         """
         true_shape = np.shape(self.long_signals)
         output = elephant.signal_processing.hilbert(
-            self.long_signals, N='nextpow')
-        self.assertEquals(np.shape(output), true_shape)
+            self.long_signals, padding='nextpow')
+        self.assertEqual(np.shape(output), true_shape)
         self.assertEqual(output.units, pq.dimensionless)
         output = elephant.signal_processing.hilbert(
-            self.long_signals, N=16384)
-        self.assertEquals(np.shape(output), true_shape)
+            self.long_signals, padding=16384)
+        self.assertEqual(np.shape(output), true_shape)
         self.assertEqual(output.units, pq.dimensionless)
 
+    def test_hilbert_array_annotations(self):
+        output = elephant.signal_processing.hilbert(self.long_signals,
+                                                    padding='nextpow')
+        # Test if array_annotations are preserved
+        self.assertSetEqual(set(output.array_annotations.keys()), {"my_list"})
+        assert_array_equal(output.array_annotations['my_list'],
+                           self.long_signals.array_annotations['my_list'])
+
     def test_hilbert_theoretical_long_signals(self):
         """
         Tests the output of the hilbert function with regard to amplitude and
@@ -478,7 +641,7 @@ class HilbertTestCase(unittest.TestCase):
         for padding in ['nextpow', 'none', 16384]:
 
             h = elephant.signal_processing.hilbert(
-                self.long_signals, N=padding)
+                self.long_signals, padding=padding)
 
             phase = np.angle(h.magnitude)
             amplitude = np.abs(h.magnitude)
@@ -524,7 +687,7 @@ class HilbertTestCase(unittest.TestCase):
         for padding in ['nextpow', 'none', 512]:
 
             h = elephant.signal_processing.hilbert(
-                self.one_period, N=padding)
+                self.one_period, padding=padding)
 
             amplitude = np.abs(h.magnitude)
             phase = np.angle(h.magnitude)
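
The padding argument of `hilbert` is now called `padding` (formerly `N`)
and, as the loops above show, accepts 'nextpow', 'none', or an explicit
FFT length. A usage sketch (assuming `signal` is a `neo.AnalogSignal`):

    import numpy as np
    import elephant.signal_processing

    # 'signal' is assumed to be a neo.AnalogSignal (see lead-in).
    # Pad the FFT to the next power of two before computing the
    # analytic signal; an integer pads to that exact length instead.
    h = elephant.signal_processing.hilbert(signal, padding='nextpow')
    phase, amplitude = np.angle(h.magnitude), np.abs(h.magnitude)
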
@@ -572,19 +735,31 @@ class WaveletTestCase(unittest.TestCase):
     def setUp(self):
         # generate a 10-sec test data of pure 50 Hz cosine wave
         self.fs = 1000.0
-        self.times = np.arange(0, 10.0, 1/self.fs)
+        self.times = np.arange(0, 10.0, 1 / self.fs)
         self.test_freq1 = 50.0
         self.test_freq2 = 60.0
-        self.test_data1 = np.cos(2*np.pi*self.test_freq1*self.times)
-        self.test_data2 = np.sin(2*np.pi*self.test_freq2*self.times)
+        self.test_data1 = np.cos(2 * np.pi * self.test_freq1 * self.times)
+        self.test_data2 = np.sin(2 * np.pi * self.test_freq2 * self.times)
         self.test_data_arr = np.vstack([self.test_data1, self.test_data2])
         self.test_data = neo.AnalogSignal(
-            self.test_data_arr.T*pq.mV, t_start=self.times[0]*pq.s,
-            t_stop=self.times[-1]*pq.s, sampling_period=(1/self.fs)*pq.s)
+            self.test_data_arr.T * pq.mV, t_start=self.times[0] * pq.s,
+            t_stop=self.times[-1] * pq.s, sampling_period=(1 / self.fs) * pq.s)
         self.true_phase1 = np.angle(
-            self.test_data1 + 1j*np.sin(2*np.pi*self.test_freq1*self.times))
+            self.test_data1 +
+            1j * np.sin(2 * np.pi * self.test_freq1 * self.times))
         self.true_phase2 = np.angle(
-            self.test_data2 - 1j*np.cos(2*np.pi*self.test_freq2*self.times))
+            self.test_data2 -
+            1j * np.cos(2 * np.pi * self.test_freq2 * self.times))
         self.wt_freqs = [10, 20, 30]
 
     def test_wavelet_errors(self):
@@ -592,24 +767,27 @@ class WaveletTestCase(unittest.TestCase):
         Tests if errors are raised as expected.
         """
         # too high center frequency
-        kwds = {'signal': self.test_data, 'freq': self.fs/2}
+        kwds = {'signal': self.test_data, 'freq': self.fs / 2}
         self.assertRaises(
             ValueError, elephant.signal_processing.wavelet_transform, **kwds)
-        kwds = {'signal': self.test_data_arr, 'freq': self.fs/2, 'fs': self.fs}
+        kwds = {
+            'signal': self.test_data_arr,
+            'freq': self.fs / 2,
+            'sampling_frequency': self.fs}
         self.assertRaises(
             ValueError, elephant.signal_processing.wavelet_transform, **kwds)
 
         # too high center frequency in a list
-        kwds = {'signal': self.test_data, 'freq': [self.fs/10, self.fs/2]}
+        kwds = {'signal': self.test_data, 'freq': [self.fs / 10, self.fs / 2]}
         self.assertRaises(
             ValueError, elephant.signal_processing.wavelet_transform, **kwds)
         kwds = {'signal': self.test_data_arr,
-                'freq': [self.fs/10, self.fs/2], 'fs': self.fs}
+                'freq': [self.fs / 10, self.fs / 2], 'fs': self.fs}
         self.assertRaises(
             ValueError, elephant.signal_processing.wavelet_transform, **kwds)
 
         # nco is not positive
-        kwds = {'signal': self.test_data, 'freq': self.fs/10, 'nco': 0}
+        kwds = {'signal': self.test_data, 'freq': self.fs / 10, 'nco': 0}
         self.assertRaises(
             ValueError, elephant.signal_processing.wavelet_transform, **kwds)
 
@@ -622,13 +800,13 @@ class WaveletTestCase(unittest.TestCase):
         # check the shape of the result array
         # --- case of single center frequency
         wt = elephant.signal_processing.wavelet_transform(self.test_data,
-                                                          self.fs/10)
+                                                          self.fs / 10)
         self.assertTrue(wt.ndim == self.test_data.ndim)
         self.assertTrue(wt.shape[0] == self.test_data.shape[0])  # time axis
         self.assertTrue(wt.shape[1] == self.test_data.shape[1])  # channel axis
 
         wt_arr = elephant.signal_processing.wavelet_transform(
-            self.test_data_arr, self.fs/10, fs=self.fs)
+            self.test_data_arr, self.fs / 10, sampling_frequency=self.fs)
         self.assertTrue(wt_arr.ndim == self.test_data.ndim)
         # channel axis
         self.assertTrue(wt_arr.shape[0] == self.test_data_arr.shape[0])
@@ -636,7 +814,7 @@ class WaveletTestCase(unittest.TestCase):
         self.assertTrue(wt_arr.shape[1] == self.test_data_arr.shape[1])
 
         wt_arr1d = elephant.signal_processing.wavelet_transform(
-            self.test_data1, self.fs/10, fs=self.fs)
+            self.test_data1, self.fs / 10, sampling_frequency=self.fs)
         self.assertTrue(wt_arr1d.ndim == self.test_data1.ndim)
         # time axis
         self.assertTrue(wt_arr1d.shape[0] == self.test_data1.shape[0])
@@ -644,14 +822,14 @@ class WaveletTestCase(unittest.TestCase):
         # --- case of multiple center frequencies
         wt = elephant.signal_processing.wavelet_transform(
             self.test_data, self.wt_freqs)
-        self.assertTrue(wt.ndim == self.test_data.ndim+1)
+        self.assertTrue(wt.ndim == self.test_data.ndim + 1)
         self.assertTrue(wt.shape[0] == self.test_data.shape[0])  # time axis
         self.assertTrue(wt.shape[1] == self.test_data.shape[1])  # channel axis
         self.assertTrue(wt.shape[2] == len(self.wt_freqs))  # frequency axis
 
         wt_arr = elephant.signal_processing.wavelet_transform(
-            self.test_data_arr, self.wt_freqs, fs=self.fs)
-        self.assertTrue(wt_arr.ndim == self.test_data_arr.ndim+1)
+            self.test_data_arr, self.wt_freqs, sampling_frequency=self.fs)
+        self.assertTrue(wt_arr.ndim == self.test_data_arr.ndim + 1)
         # channel axis
         self.assertTrue(wt_arr.shape[0] == self.test_data_arr.shape[0])
         # frequency axis
@@ -660,8 +838,8 @@ class WaveletTestCase(unittest.TestCase):
         self.assertTrue(wt_arr.shape[2] == self.test_data_arr.shape[1])
 
         wt_arr1d = elephant.signal_processing.wavelet_transform(
-            self.test_data1, self.wt_freqs, fs=self.fs)
-        self.assertTrue(wt_arr1d.ndim == self.test_data1.ndim+1)
+            self.test_data1, self.wt_freqs, sampling_frequency=self.fs)
+        self.assertTrue(wt_arr1d.ndim == self.test_data1.ndim + 1)
         # frequency axis
         self.assertTrue(wt_arr1d.shape[0] == len(self.wt_freqs))
         # time axis
@@ -692,7 +870,7 @@ class WaveletTestCase(unittest.TestCase):
         wt = elephant.signal_processing.wavelet_transform(self.test_data,
                                                           self.test_freq1)
         # take a middle segment in order to avoid edge effects
-        amp = np.abs(wt[int(len(wt)/3):int(len(wt)//3*2), 0])
+        amp = np.abs(wt[int(len(wt) / 3):int(len(wt) // 3 * 2), 0])
         mean_amp = amp.mean()
         assert_array_almost_equal((amp - mean_amp) / mean_amp,
                                   np.zeros_like(amp), decimal=6)
@@ -700,14 +878,15 @@ class WaveletTestCase(unittest.TestCase):
         # check that the amplitude of WT is (almost) zero when center frequency
         # is considerably different from signal frequency
         wt_low = elephant.signal_processing.wavelet_transform(
-            self.test_data, self.test_freq1/10)
-        amp_low = np.abs(wt_low[int(len(wt)/3):int(len(wt)//3*2), 0])
+            self.test_data, self.test_freq1 / 10)
+        amp_low = np.abs(wt_low[int(len(wt) / 3):int(len(wt) // 3 * 2), 0])
         assert_array_almost_equal(amp_low, np.zeros_like(amp), decimal=6)
 
         # check that zero padding hardly affects the result
         wt_padded = elephant.signal_processing.wavelet_transform(
             self.test_data, self.test_freq1, zero_padding=False)
-        amp_padded = np.abs(wt_padded[int(len(wt)/3):int(len(wt)//3*2), 0])
+        amp_padded = np.abs(
+            wt_padded[int(len(wt) / 3):int(len(wt) // 3 * 2), 0])
         assert_array_almost_equal(amp_padded, amp, decimal=9)
 
     def test_wavelet_phase(self):
@@ -718,17 +897,331 @@ class WaveletTestCase(unittest.TestCase):
         # sinusoid
         wt = elephant.signal_processing.wavelet_transform(self.test_data,
                                                           self.test_freq1)
-        phase = np.angle(wt[int(len(wt)/3):int(len(wt)//3*2), 0])
-        true_phase = self.true_phase1[int(len(wt)/3):int(len(wt)//3*2)]
-        assert_array_almost_equal(np.exp(1j*phase), np.exp(1j*true_phase),
+        phase = np.angle(wt[int(len(wt) / 3):int(len(wt) // 3 * 2), 0])
+        true_phase = self.true_phase1[int(len(wt) / 3):int(len(wt) // 3 * 2)]
+        assert_array_almost_equal(np.exp(1j * phase), np.exp(1j * true_phase),
                                   decimal=6)
 
         # check that zero padding hardly affects the result
         wt_padded = elephant.signal_processing.wavelet_transform(
             self.test_data, self.test_freq1, zero_padding=False)
-        phase_padded = np.angle(wt_padded[int(len(wt)/3):int(len(wt)//3*2), 0])
-        assert_array_almost_equal(np.exp(1j*phase_padded), np.exp(1j*phase),
-                                  decimal=9)
+        phase_padded = np.angle(
+            wt_padded[int(len(wt) / 3):int(len(wt) // 3 * 2), 0])
+        assert_array_almost_equal(np.exp(1j * phase_padded),
+                                  np.exp(1j * phase), decimal=9)
+
+
+class DerivativeTestCase(unittest.TestCase):
+
+    def setUp(self):
+        self.fs = 1000.0
+        self.tmin = 0.0
+        self.tmax = 10.0
+        self.times = np.arange(self.tmin, self.tmax, 1 / self.fs)
+        self.test_data1 = np.cos(2 * np.pi * self.times)
+        self.test_data2 = np.vstack(
+            [np.cos(2 * np.pi * self.times), np.sin(2 * np.pi * self.times)]).T
+        self.test_signal1 = neo.AnalogSignal(
+            self.test_data1 * pq.mV, t_start=self.times[0] * pq.s,
+            t_stop=self.times[-1] * pq.s, sampling_period=(1 / self.fs) * pq.s)
+        self.test_signal2 = neo.AnalogSignal(
+            self.test_data2 * pq.mV, t_start=self.times[0] * pq.s,
+            t_stop=self.times[-1] * pq.s, sampling_period=(1 / self.fs) * pq.s)
+
+    def test_derivative_invalid_signal(self):
+        '''Test derivative on non-AnalogSignal'''
+        kwds = {'signal': np.arange(5)}
+        self.assertRaises(
+            TypeError, elephant.signal_processing.derivative, **kwds)
+
+    def test_derivative_units(self):
+        '''Test derivative returns AnalogSignal with correct units'''
+        derivative = elephant.signal_processing.derivative(
+            self.test_signal1)
+        self.assertTrue(isinstance(derivative, neo.AnalogSignal))
+        self.assertEqual(
+            derivative.units,
+            self.test_signal1.units / self.test_signal1.times.units)
+
+    def test_derivative_times(self):
+        '''Test derivative returns AnalogSignal with correct times'''
+        derivative = elephant.signal_processing.derivative(
+            self.test_signal1)
+        self.assertTrue(isinstance(derivative, neo.AnalogSignal))
+
+        # test that sampling period is correct
+        self.assertEqual(
+            derivative.sampling_period,
+            1 / self.fs * self.test_signal1.times.units)
+
+        # test that all times are correct
+        target_times = self.times[:-1] * self.test_signal1.times.units \
+            + derivative.sampling_period / 2
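+        # the difference quotient is defined between samples, so the
+        # expected times are the midpoints of consecutive input samples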
+        assert_array_almost_equal(derivative.times, target_times)
+
+        # test that t_start and t_stop are correct
+        self.assertEqual(derivative.t_start, target_times[0])
+        assert_array_almost_equal(
+            derivative.t_stop,
+            target_times[-1] + derivative.sampling_period)
+
+    def test_derivative_values(self):
+        '''Test derivative returns AnalogSignal with correct values'''
+        derivative1 = elephant.signal_processing.derivative(
+            self.test_signal1)
+        derivative2 = elephant.signal_processing.derivative(
+            self.test_signal2)
+        self.assertTrue(isinstance(derivative1, neo.AnalogSignal))
+        self.assertTrue(isinstance(derivative2, neo.AnalogSignal))
+
+        # single channel
+        assert_array_almost_equal(
+            derivative1.magnitude,
+            np.vstack([np.diff(self.test_data1)]).T / (1 / self.fs))
+
+        # multi channel
+        assert_array_almost_equal(derivative2.magnitude, np.vstack([
+            np.diff(self.test_data2[:, 0]),
+            np.diff(self.test_data2[:, 1])]).T / (1 / self.fs))
+
+
+class RAUCTestCase(unittest.TestCase):
+
+    def setUp(self):
+        self.fs = 1000.0
+        self.tmin = 0.0
+        self.tmax = 10.0
+        self.times = np.arange(self.tmin, self.tmax, 1 / self.fs)
+        self.test_data1 = np.cos(2 * np.pi * self.times)
+        self.test_data2 = np.vstack(
+            [np.cos(2 * np.pi * self.times), np.sin(2 * np.pi * self.times)]).T
+        self.test_signal1 = neo.AnalogSignal(
+            self.test_data1 * pq.mV, t_start=self.times[0] * pq.s,
+            t_stop=self.times[-1] * pq.s, sampling_period=(1 / self.fs) * pq.s)
+        self.test_signal2 = neo.AnalogSignal(
+            self.test_data2 * pq.mV, t_start=self.times[0] * pq.s,
+            t_stop=self.times[-1] * pq.s, sampling_period=(1 / self.fs) * pq.s)
+
+    def test_rauc_invalid_signal(self):
+        '''Test rauc on non-AnalogSignal'''
+        kwds = {'signal': np.arange(5)}
+        self.assertRaises(
+            ValueError, elephant.signal_processing.rauc, **kwds)
+
+    def test_rauc_invalid_bin_duration(self):
+        '''Test rauc on bad bin duration'''
+        kwds = {'signal': self.test_signal1, 'bin_duration': 'bad'}
+        self.assertRaises(
+            ValueError, elephant.signal_processing.rauc, **kwds)
+
+    def test_rauc_invalid_baseline(self):
+        '''Test rauc on bad baseline'''
+        kwds = {'signal': self.test_signal1, 'baseline': 'bad'}
+        self.assertRaises(
+            ValueError, elephant.signal_processing.rauc, **kwds)
+
+    def test_rauc_units(self):
+        '''Test rauc returns Quantity or AnalogSignal with correct units'''
+
+        # test that single-bin result is Quantity with correct units
+        rauc = elephant.signal_processing.rauc(
+            self.test_signal1)
+        self.assertTrue(isinstance(rauc, pq.Quantity))
+        self.assertEqual(
+            rauc.units,
+            self.test_signal1.units * self.test_signal1.times.units)
+
+        # test that multi-bin result is AnalogSignal with correct units
+        rauc_arr = elephant.signal_processing.rauc(
+            self.test_signal1, bin_duration=1 * pq.s)
+        self.assertTrue(isinstance(rauc_arr, neo.AnalogSignal))
+        self.assertEqual(
+            rauc_arr.units,
+            self.test_signal1.units * self.test_signal1.times.units)
+
+    def test_rauc_times_without_overextending_bin(self):
+        '''Test rauc returns correct times when signal is binned evenly'''
+
+        bin_duration = 1 * pq.s  # results in all bin centers < original t_stop
+        rauc_arr = elephant.signal_processing.rauc(
+            self.test_signal1, bin_duration=bin_duration)
+        self.assertTrue(isinstance(rauc_arr, neo.AnalogSignal))
+
+        # test that sampling period is correct
+        self.assertEqual(rauc_arr.sampling_period, bin_duration)
+
+        # test that all times are correct
+        target_times = np.arange(self.tmin,
+                                 self.tmax,
+                                 bin_duration.magnitude) \
+            * bin_duration.units + bin_duration / 2
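+        # expected bin centers: each left bin edge shifted by half a bin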
+        assert_array_almost_equal(rauc_arr.times, target_times)
+
+        # test that t_start and t_stop are correct
+        self.assertEqual(rauc_arr.t_start, target_times[0])
+        assert_array_almost_equal(
+            rauc_arr.t_stop,
+            target_times[-1] + bin_duration)
+
+    def test_rauc_times_with_overextending_bin(self):
+        '''Test rauc returns correct times when signal is NOT binned evenly'''
+
+        bin_duration = 0.99 * pq.s  # results in one bin center > original t_stop
+        rauc_arr = elephant.signal_processing.rauc(
+            self.test_signal1, bin_duration=bin_duration)
+        self.assertTrue(isinstance(rauc_arr, neo.AnalogSignal))
+
+        # test that sampling period is correct
+        self.assertEqual(rauc_arr.sampling_period, bin_duration)
+
+        # test that all times are correct
+        target_times = np.arange(self.tmin,
+                                 self.tmax,
+                                 bin_duration.magnitude) \
+            * bin_duration.units + bin_duration / 2
+        assert_array_almost_equal(rauc_arr.times, target_times)
+
+        # test that t_start and t_stop are correct
+        self.assertEqual(rauc_arr.t_start, target_times[0])
+        assert_array_almost_equal(
+            rauc_arr.t_stop,
+            target_times[-1] + bin_duration)
+
+    def test_rauc_values_one_bin(self):
+        '''Test rauc returns correct values when there is just one bin'''
+        rauc1 = elephant.signal_processing.rauc(
+            self.test_signal1)
+        rauc2 = elephant.signal_processing.rauc(
+            self.test_signal2)
+        self.assertTrue(isinstance(rauc1, pq.Quantity))
+        self.assertTrue(isinstance(rauc2, pq.Quantity))
+
+        # single channel
+        assert_array_almost_equal(
+            rauc1.magnitude,
+            np.array([6.36517679]))
+
+        # multi channel
+        assert_array_almost_equal(
+            rauc2.magnitude,
+            np.array([6.36517679, 6.36617364]))
+
+    def test_rauc_values_multi_bin(self):
+        '''Test rauc returns correct values when there are multiple bins'''
+        rauc_arr1 = elephant.signal_processing.rauc(
+            self.test_signal1, bin_duration=0.99 * pq.s)
+        rauc_arr2 = elephant.signal_processing.rauc(
+            self.test_signal2, bin_duration=0.99 * pq.s)
+        self.assertTrue(isinstance(rauc_arr1, neo.AnalogSignal))
+        self.assertTrue(isinstance(rauc_arr2, neo.AnalogSignal))
+
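+        # 10 s of signal does not divide evenly into 0.99 s bins, so the
+        # final (11th) bin is only partially covered, hence its small value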
+        # single channel
+        assert_array_almost_equal(rauc_arr1.magnitude, np.array([
+            [0.62562647],
+            [0.62567202],
+            [0.62576076],
+            [0.62589236],
+            [0.62606628],
+            [0.62628184],
+            [0.62653819],
+            [0.62683432],
+            [0.62716907],
+            [0.62754110],
+            [0.09304862]]))
+
+        # multi channel
+        assert_array_almost_equal(rauc_arr2.magnitude, np.array([
+            [0.62562647, 0.63623770],
+            [0.62567202, 0.63554830],
+            [0.62576076, 0.63486313],
+            [0.62589236, 0.63418488],
+            [0.62606628, 0.63351623],
+            [0.62628184, 0.63285983],
+            [0.62653819, 0.63221825],
+            [0.62683432, 0.63159403],
+            [0.62716907, 0.63098964],
+            [0.62754110, 0.63040747],
+            [0.09304862, 0.03039579]]))
+
+    def test_rauc_mean_baseline(self):
+        '''Test rauc returns correct values when baseline='mean' is given'''
+        rauc1 = elephant.signal_processing.rauc(
+            self.test_signal1, baseline='mean')
+        rauc2 = elephant.signal_processing.rauc(
+            self.test_signal2, baseline='mean')
+        self.assertTrue(isinstance(rauc1, pq.Quantity))
+        self.assertTrue(isinstance(rauc2, pq.Quantity))
+
+        # single channel
+        assert_array_almost_equal(
+            rauc1.magnitude,
+            np.array([6.36517679]))
+
+        # multi channel
+        assert_array_almost_equal(
+            rauc2.magnitude,
+            np.array([6.36517679, 6.36617364]))
+
+    def test_rauc_median_baseline(self):
+        '''Test rauc returns correct values when baseline='median' is given'''
+        rauc1 = elephant.signal_processing.rauc(
+            self.test_signal1, baseline='median')
+        rauc2 = elephant.signal_processing.rauc(
+            self.test_signal2, baseline='median')
+        self.assertTrue(isinstance(rauc1, pq.Quantity))
+        self.assertTrue(isinstance(rauc2, pq.Quantity))
+
+        # single channel
+        assert_array_almost_equal(
+            rauc1.magnitude,
+            np.array([6.36517679]))
+
+        # multi channel
+        assert_array_almost_equal(
+            rauc2.magnitude,
+            np.array([6.36517679, 6.36617364]))
+
+    def test_rauc_arbitrary_baseline(self):
+        '''Test rauc returns correct values when arbitrary baseline is given'''
+        rauc1 = elephant.signal_processing.rauc(
+            self.test_signal1, baseline=0.123 * pq.mV)
+        rauc2 = elephant.signal_processing.rauc(
+            self.test_signal2, baseline=0.123 * pq.mV)
+        self.assertTrue(isinstance(rauc1, pq.Quantity))
+        self.assertTrue(isinstance(rauc2, pq.Quantity))
+
+        # single channel
+        assert_array_almost_equal(
+            rauc1.magnitude,
+            np.array([6.41354725]))
+
+        # multi channel
+        assert_array_almost_equal(
+            rauc2.magnitude,
+            np.array([6.41354725, 6.41429810]))
+
+    def test_rauc_time_slice(self):
+        '''Test rauc returns correct values when t_start, t_stop are given'''
+        rauc1 = elephant.signal_processing.rauc(
+            self.test_signal1, t_start=0.123 * pq.s, t_stop=0.456 * pq.s)
+        rauc2 = elephant.signal_processing.rauc(
+            self.test_signal2, t_start=0.123 * pq.s, t_stop=0.456 * pq.s)
+        self.assertTrue(isinstance(rauc1, pq.Quantity))
+        self.assertTrue(isinstance(rauc2, pq.Quantity))
+
+        # single channel
+        assert_array_almost_equal(
+            rauc1.magnitude,
+            np.array([0.16279006]))
+
+        # multi channel
+        assert_array_almost_equal(
+            rauc2.magnitude,
+            np.array([0.16279006, 0.26677944]))
 
 
 if __name__ == '__main__':
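
For orientation, a minimal usage sketch of the three signal_processing APIs
exercised above (wavelet_transform, derivative, rauc). The keyword names
(freq, bin_duration, baseline) are taken from the tests themselves and
assumed to match the installed Elephant; the signal values are illustrative.

    import neo
    import numpy as np
    import quantities as pq

    import elephant.signal_processing as esp

    fs = 1000.0
    times = np.arange(0.0, 10.0, 1 / fs)
    signal = neo.AnalogSignal(
        np.cos(2 * np.pi * 20 * times)[:, np.newaxis] * pq.mV,
        sampling_period=(1 / fs) * pq.s)

    # complex wavelet coefficients at a 20 Hz center frequency; amplitude
    # and phase follow via np.abs() and np.angle()
    wt = esp.wavelet_transform(signal, freq=20)

    # difference quotient, returned as an AnalogSignal in mV/s with times
    # midway between the input samples
    dsig = esp.derivative(signal)

    # rectified area under the curve, one value per 1-second bin
    rauc_per_bin = esp.rauc(signal, baseline='mean', bin_duration=1 * pq.s)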

+ 576 - 89
code/elephant/elephant/test/test_spade.py

@@ -1,31 +1,40 @@
 """
 Unit tests for the spade module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 from __future__ import division
+
+import sys
 import unittest
+import random
 
 import neo
 import numpy as np
-from numpy.testing.utils import assert_array_equal
 import quantities as pq
-import elephant.spade as spade
+from numpy.testing.utils import assert_array_equal
+
 import elephant.conversion as conv
+import elephant.spade as spade
 import elephant.spike_train_generation as stg
 
 try:
-    from elephant.spade_src import fim
-    HAVE_FIM = True
+    import statsmodels
+    HAVE_STATSMODELS = True
 except ImportError:
-    HAVE_FIM = False
+    HAVE_STATSMODELS = False
+
+python_version_major = sys.version_info.major
+
+HAVE_FIM = spade.HAVE_FIM
 
 
 class SpadeTestCase(unittest.TestCase):
     def setUp(self):
+        np.random.seed(0)
         # Spade parameters
-        self.binsize = 1 * pq.ms
+        self.bin_size = 1 * pq.ms
         self.winlen = 10
         self.n_subset = 10
         self.n_surr = 10
@@ -34,45 +43,50 @@ class SpadeTestCase(unittest.TestCase):
         self.psr_param = [0, 0, 0]
         self.min_occ = 4
         self.min_spikes = 4
+        self.max_occ = 4
+        self.max_spikes = 4
         self.min_neu = 4
         # Test data parameters
         # CPP parameters
         self.n_neu = 100
         self.amplitude = [0] * self.n_neu + [1]
-        self.cpp = stg.cpp(rate=3*pq.Hz, A=self.amplitude, t_stop=5*pq.s)
+        self.cpp = stg.cpp(
+            rate=3 * pq.Hz,
+            amplitude_distribution=self.amplitude,
+            t_stop=5 * pq.s)
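+        # amplitude_distribution puts probability 1 on amplitude n_neu, so
+        # every event is synchronous across all n_neu spike trains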
         # Number of patterns' occurrences
         self.n_occ1 = 10
         self.n_occ2 = 12
         self.n_occ3 = 15
         # Patterns lags
         self.lags1 = [2]
-        self.lags2 = [1, 2]
-        self.lags3 = [1, 2, 3, 4, 5]
+        self.lags2 = [1, 3]
+        self.lags3 = [1, 2, 4, 5, 7]
         # Length of the spiketrain
         self.t_stop = 3000
         # Patterns times
         self.patt1_times = neo.SpikeTrain(
             np.arange(
-                0, 1000, 1000//self.n_occ1) *
-            pq.ms, t_stop=self.t_stop*pq.ms)
+                0, 1000, 1000 // self.n_occ1) *
+            pq.ms, t_stop=self.t_stop * pq.ms)
         self.patt2_times = neo.SpikeTrain(
             np.arange(
-                1000, 2000, 1000 // self.n_occ2) *
+                1000, 2000, 1000 // self.n_occ2)[:-1] *
             pq.ms, t_stop=self.t_stop * pq.ms)
         self.patt3_times = neo.SpikeTrain(
             np.arange(
-                2000, 3000, 1000 // self.n_occ3) *
+                2000, 3000, 1000 // self.n_occ3)[:-1] *
             pq.ms, t_stop=self.t_stop * pq.ms)
         # Patterns
         self.patt1 = [self.patt1_times] + [neo.SpikeTrain(
-            self.patt1_times.view(pq.Quantity)+l * pq.ms,
-            t_stop=self.t_stop*pq.ms) for l in self.lags1]
+            self.patt1_times.view(pq.Quantity) + lag * pq.ms,
+            t_stop=self.t_stop * pq.ms) for lag in self.lags1]
         self.patt2 = [self.patt2_times] + [neo.SpikeTrain(
-            self.patt2_times.view(pq.Quantity)+l * pq.ms,
-            t_stop=self.t_stop*pq.ms) for l in self.lags2]
+            self.patt2_times.view(pq.Quantity) + lag * pq.ms,
+            t_stop=self.t_stop * pq.ms) for lag in self.lags2]
         self.patt3 = [self.patt3_times] + [neo.SpikeTrain(
-            self.patt3_times.view(pq.Quantity)+l * pq.ms,
-            t_stop=self.t_stop*pq.ms) for l in self.lags3]
+            self.patt3_times.view(pq.Quantity) + lag * pq.ms,
+            t_stop=self.t_stop * pq.ms) for lag in self.lags3]
         # Data
         self.msip = self.patt1 + self.patt2 + self.patt3
         # Expected results
@@ -83,28 +97,41 @@ class SpadeTestCase(unittest.TestCase):
         self.elements2 = list(range(self.n_spk2))
         self.elements3 = list(range(self.n_spk3))
         self.elements_msip = [
-            self.elements1, list(range(self.n_spk1, self.n_spk1 + self.n_spk2)),
-                list(range(self.n_spk1 + self.n_spk2, self.n_spk1 +
-                      self.n_spk2 + self.n_spk3))]
+            self.elements1,
+            list(range(self.n_spk1, self.n_spk1 + self.n_spk2)),
+            list(range(self.n_spk1 + self.n_spk2,
+                       self.n_spk1 + self.n_spk2 + self.n_spk3))]
         self.occ1 = np.unique(conv.BinnedSpikeTrain(
-            self.patt1_times, self.binsize).spike_indices[0])
+            self.patt1_times, self.bin_size).spike_indices[0])
         self.occ2 = np.unique(conv.BinnedSpikeTrain(
-            self.patt2_times, self.binsize).spike_indices[0])
+            self.patt2_times, self.bin_size).spike_indices[0])
         self.occ3 = np.unique(conv.BinnedSpikeTrain(
-            self.patt3_times, self.binsize).spike_indices[0])
+            self.patt3_times, self.bin_size).spike_indices[0])
         self.occ_msip = [
             list(self.occ1), list(self.occ2), list(self.occ3)]
         self.lags_msip = [self.lags1, self.lags2, self.lags3]
+        self.patt_psr = self.patt3 + [self.patt3[-1][:3]]
 
     # Testing cpp
+    @unittest.skipUnless(HAVE_FIM, "Time consuming with pythonic FIM")
     def test_spade_cpp(self):
-        output_cpp = spade.spade(self.cpp, self.binsize,
-                                  1,
-                                  n_subsets=self.n_subset,
-                                  stability_thresh=self.stability_thresh,
-                                  n_surr=self.n_surr, alpha=self.alpha,
-                                  psr_param=self.psr_param,
-                                  output_format='patterns')['patterns']
+        output_cpp = spade.spade(self.cpp, self.bin_size, 1,
+                                 approx_stab_pars=dict(
+                                     n_subsets=self.n_subset,
+                                     stability_thresh=self.stability_thresh),
+                                 n_surr=self.n_surr, alpha=self.alpha,
+                                 psr_param=self.psr_param,
+                                 stat_corr='no',
+                                 output_format='patterns')['patterns']
         elements_cpp = []
         lags_cpp = []
         # collecting spade output
@@ -114,24 +141,29 @@ class SpadeTestCase(unittest.TestCase):
         # check neurons in the patterns
         assert_array_equal(elements_cpp, [range(self.n_neu)])
         # check the lags
-        assert_array_equal(lags_cpp, [np.array([0]*(self.n_neu - 1))])
+        assert_array_equal(lags_cpp, [np.array([0] * (self.n_neu - 1))])
 
     # Testing spectrum cpp
-    def test_spade_cpp(self):
+    def test_spade_spectrum_cpp(self):
         # Computing Spectrum
-        spectrum_cpp = spade.concepts_mining(self.cpp, self.binsize,
-                                  1,report='#')[0]
+        spectrum_cpp = spade.concepts_mining(self.cpp, self.bin_size,
+                                             1, report='#')[0]
         # Check spectrum
-        assert_array_equal(spectrum_cpp, [(len(self.cpp), len(self.cpp[0]), 1)])
+        assert_array_equal(
+            spectrum_cpp,
+            [(len(self.cpp),
+              np.sum(conv.BinnedSpikeTrain(
+                  self.cpp[0], self.bin_size).to_bool_array()), 1)])
 
     # Testing with multiple patterns input
     def test_spade_msip(self):
-        output_msip = spade.spade(self.msip, self.binsize,
-                                  self.winlen,
-                                  n_subsets=self.n_subset,
-                                  stability_thresh=self.stability_thresh,
+        output_msip = spade.spade(self.msip, self.bin_size, self.winlen,
+                                  approx_stab_pars=dict(
+                                      n_subsets=self.n_subset,
+                                      stability_thresh=self.stability_thresh),
                                   n_surr=self.n_surr, alpha=self.alpha,
                                   psr_param=self.psr_param,
+                                  stat_corr='no',
                                   output_format='patterns')['patterns']
         elements_msip = []
         occ_msip = []
@@ -141,9 +173,9 @@ class SpadeTestCase(unittest.TestCase):
             elements_msip.append(out['neurons'])
             occ_msip.append(list(out['times'].magnitude))
             lags_msip.append(list(out['lags'].magnitude))
-        elements_msip = sorted(elements_msip, key=lambda d: len(d))
-        occ_msip = sorted(occ_msip, key=lambda d: len(d))
-        lags_msip = sorted(lags_msip, key=lambda d: len(d))
+        elements_msip = sorted(elements_msip, key=len)
+        occ_msip = sorted(occ_msip, key=len)
+        lags_msip = sorted(lags_msip, key=len)
         # check neurons in the patterns
         assert_array_equal(elements_msip, self.elements_msip)
         # check the occurrence times of the patterns
@@ -151,56 +183,128 @@ class SpadeTestCase(unittest.TestCase):
         # check the lags
         assert_array_equal(lags_msip, self.lags_msip)
 
-    # test under different configuration of parameters than the default one
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
     def test_parameters(self):
+        """
+        Test with parameter configurations different from the default.
+        """
         # test min_spikes parameter
-        output_msip_min_spikes = spade.spade(self.msip, self.binsize,
-                                        self.winlen,
-                                        n_subsets=self.n_subset,
-                                        n_surr=self.n_surr, alpha=self.alpha,
-                                        min_spikes=self.min_spikes,
-                                        psr_param=self.psr_param,
-                                        output_format='patterns')['patterns']
+        with self.assertWarns(UserWarning):
+            # n_surr=0 with alpha=0.05 raises the expected UserWarning
+            output_msip_min_spikes = spade.spade(
+                self.msip,
+                self.bin_size,
+                self.winlen,
+                min_spikes=self.min_spikes,
+                approx_stab_pars=dict(n_subsets=self.n_subset),
+                n_surr=0,
+                alpha=self.alpha,
+                psr_param=self.psr_param,
+                stat_corr='no',
+                output_format='patterns')['patterns']
         # collecting spade output
-        elements_msip_min_spikes= []
+        elements_msip_min_spikes = []
         for out in output_msip_min_spikes:
             elements_msip_min_spikes.append(out['neurons'])
-        elements_msip_min_spikes = sorted(elements_msip_min_spikes, key=lambda d: len(d))
-        lags_msip_min_spikes= []
+        elements_msip_min_spikes = sorted(
+            elements_msip_min_spikes, key=len)
+        lags_msip_min_spikes = []
         for out in output_msip_min_spikes:
             lags_msip_min_spikes.append(list(out['lags'].magnitude))
-        lags_msip_min_spikes = sorted(lags_msip_min_spikes, key=lambda d: len(d))
+            pvalue = out['pvalue']
+        lags_msip_min_spikes = sorted(
+            lags_msip_min_spikes, key=len)
         # check the lags
         assert_array_equal(lags_msip_min_spikes, [
-            l for l in self.lags_msip if len(l)+1>=self.min_spikes])
+            l for l in self.lags_msip if len(l) + 1 >= self.min_spikes])
         # check the neurons in the patterns
         assert_array_equal(elements_msip_min_spikes, [
-            el for el in self.elements_msip if len(el)>=self.min_neu and len(
-                el)>=self.min_spikes])
+            el for el in self.elements_msip if len(el) >= self.min_neu and len(
+                el) >= self.min_spikes])
+        # check that the p-values assigned are equal to -1 (n_surr=0)
+        assert_array_equal(-1, pvalue)
 
         # test min_occ parameter
-        output_msip_min_occ = spade.spade(self.msip, self.binsize,
-                                        self.winlen,
-                                        n_subsets=self.n_subset,
-                                        n_surr=self.n_surr, alpha=self.alpha,
-                                        min_occ=self.min_occ,
-                                        psr_param=self.psr_param,
-                                        output_format='patterns')['patterns']
+        output_msip_min_occ = spade.spade(
+            self.msip,
+            self.bin_size,
+            self.winlen,
+            min_occ=self.min_occ,
+            approx_stab_pars=dict(
+                n_subsets=self.n_subset),
+            n_surr=self.n_surr,
+            alpha=self.alpha,
+            psr_param=self.psr_param,
+            stat_corr='no',
+            output_format='patterns')['patterns']
         # collect spade output
-        occ_msip_min_occ= []
+        occ_msip_min_occ = []
         for out in output_msip_min_occ:
             occ_msip_min_occ.append(list(out['times'].magnitude))
-        occ_msip_min_occ = sorted(occ_msip_min_occ, key=lambda d: len(d))
+        occ_msip_min_occ = sorted(occ_msip_min_occ, key=len)
         # test occurrence times
         assert_array_equal(occ_msip_min_occ, [
-            occ for occ in self.occ_msip if len(occ)>=self.min_occ])
+            occ for occ in self.occ_msip if len(occ) >= self.min_occ])
+
+        # test max_spikes parameter
+        output_msip_max_spikes = spade.spade(
+            self.msip,
+            self.bin_size,
+            self.winlen,
+            max_spikes=self.max_spikes,
+            approx_stab_pars=dict(
+                n_subsets=self.n_subset),
+            n_surr=self.n_surr,
+            alpha=self.alpha,
+            psr_param=self.psr_param,
+            stat_corr='no',
+            output_format='patterns')['patterns']
+        # collecting spade output
+        elements_msip_max_spikes = []
+        for out in output_msip_max_spikes:
+            elements_msip_max_spikes.append(out['neurons'])
+        elements_msip_max_spikes = sorted(
+            elements_msip_max_spikes, key=len)
+        lags_msip_max_spikes = []
+        for out in output_msip_max_spikes:
+            lags_msip_max_spikes.append(list(out['lags'].magnitude))
+        lags_msip_max_spikes = sorted(
+            lags_msip_max_spikes, key=len)
+        # check the lags
+        assert_array_equal(
+            [len(lags) < self.max_spikes
+             for lags in lags_msip_max_spikes],
+            [True] * len(lags_msip_max_spikes))
+
+        # test max_occ parameter
+        output_msip_max_occ = spade.spade(
+            self.msip,
+            self.bin_size,
+            self.winlen,
+            max_occ=self.max_occ,
+            approx_stab_pars=dict(
+                n_subsets=self.n_subset),
+            n_surr=self.n_surr,
+            alpha=self.alpha,
+            psr_param=self.psr_param,
+            stat_corr='no',
+            output_format='patterns')['patterns']
+        # collect spade output
+        occ_msip_max_occ = []
+        for out in output_msip_max_occ:
+            occ_msip_max_occ.append(list(out['times'].magnitude))
+        occ_msip_max_occ = sorted(occ_msip_max_occ, key=len)
+        # test occurrence times
+        assert_array_equal(occ_msip_max_occ, [
+            occ for occ in self.occ_msip if len(occ) <= self.max_occ])
 
     # test to compare the python and the C implementation of FIM
     # skip this test if C code not available
-    @unittest.skipIf(HAVE_FIM == False, 'Requires fim.so')
+    @unittest.skipIf(not HAVE_FIM, 'Requires fim.so')
     def test_fpgrowth_fca(self):
+        print("fim.so is found.")
         binary_matrix = conv.BinnedSpikeTrain(
-            self.patt1, self.binsize).to_bool_array()
+            self.patt1, self.bin_size).to_sparse_bool_array().tocoo()
         context, transactions, rel_matrix = spade._build_context(
             binary_matrix, self.winlen)
         # mining the data with python fast_fca
@@ -216,23 +320,407 @@ class SpadeTestCase(unittest.TestCase):
         assert_array_equal(sorted(mining_results_ffca[0][1]), sorted(
             mining_results_fpg[0][1]))
 
-    # test the errors raised
+    # Tests 3d spectrum
+    # Testing with multiple patterns input
+    def test_spade_msip_3d(self):
+        output_msip = spade.spade(self.msip, self.bin_size, self.winlen,
+                                  approx_stab_pars=dict(
+                                      n_subsets=self.n_subset,
+                                      stability_thresh=self.stability_thresh),
+                                  n_surr=self.n_surr, spectrum='3d#',
+                                  alpha=self.alpha, psr_param=self.psr_param,
+                                  stat_corr='no',
+                                  output_format='patterns')['patterns']
+        elements_msip = []
+        occ_msip = []
+        lags_msip = []
+        # collecting spade output
+        for out in output_msip:
+            elements_msip.append(out['neurons'])
+            occ_msip.append(list(out['times'].magnitude))
+            lags_msip.append(list(out['lags'].magnitude))
+        elements_msip = sorted(elements_msip, key=len)
+        occ_msip = sorted(occ_msip, key=len)
+        lags_msip = sorted(lags_msip, key=len)
+        # check neurons in the patterns
+        assert_array_equal(elements_msip, self.elements_msip)
+        # check the occurrence times of the patterns
+        assert_array_equal(occ_msip, self.occ_msip)
+        # check the lags
+        assert_array_equal(lags_msip, self.lags_msip)
+
+    # test with parameter configurations different from the default
+    def test_parameters_3d(self):
+        # test min_spikes parameter
+        output_msip_min_spikes = spade.spade(
+            self.msip,
+            self.bin_size,
+            self.winlen,
+            min_spikes=self.min_spikes,
+            approx_stab_pars=dict(
+                n_subsets=self.n_subset),
+            n_surr=self.n_surr,
+            spectrum='3d#',
+            alpha=self.alpha,
+            psr_param=self.psr_param,
+            stat_corr='no',
+            output_format='patterns')['patterns']
+        # collecting spade output
+        elements_msip_min_spikes = []
+        for out in output_msip_min_spikes:
+            elements_msip_min_spikes.append(out['neurons'])
+        elements_msip_min_spikes = sorted(
+            elements_msip_min_spikes, key=len)
+        lags_msip_min_spikes = []
+        for out in output_msip_min_spikes:
+            lags_msip_min_spikes.append(list(out['lags'].magnitude))
+        lags_msip_min_spikes = sorted(
+            lags_msip_min_spikes, key=len)
+        # check the lags
+        assert_array_equal(lags_msip_min_spikes, [
+            l for l in self.lags_msip if len(l) + 1 >= self.min_spikes])
+        # check the neurons in the patterns
+        assert_array_equal(elements_msip_min_spikes, [
+            el for el in self.elements_msip if len(el) >= self.min_neu and len(
+                el) >= self.min_spikes])
+
+        # test min_occ parameter
+        output_msip_min_occ = spade.spade(
+            self.msip,
+            self.bin_size,
+            self.winlen,
+            min_occ=self.min_occ,
+            approx_stab_pars=dict(
+                n_subsets=self.n_subset),
+            n_surr=self.n_surr,
+            spectrum='3d#',
+            alpha=self.alpha,
+            psr_param=self.psr_param,
+            stat_corr='no',
+            output_format='patterns')['patterns']
+        # collect spade output
+        occ_msip_min_occ = []
+        for out in output_msip_min_occ:
+            occ_msip_min_occ.append(list(out['times'].magnitude))
+        occ_msip_min_occ = sorted(occ_msip_min_occ, key=len)
+        # test occurrence times
+        assert_array_equal(occ_msip_min_occ, [
+            occ for occ in self.occ_msip if len(occ) >= self.min_occ])
+
+    # Test computation spectrum
+    def test_spectrum(self):
+        # test 2d spectrum
+        spectrum = spade.concepts_mining(self.patt1, self.bin_size,
+                                         self.winlen, report='#')[0]
+        assert_array_equal(spectrum, [[len(self.lags1) + 1, self.n_occ1, 1]])
+        # test 3d spectrum
+        spectrum_3d = spade.concepts_mining(self.patt1, self.bin_size,
+                                            self.winlen, report='3d#')[0]
+        assert_array_equal(spectrum_3d, [
+            [len(self.lags1) + 1, self.n_occ1, max(self.lags1), 1]])
+
     def test_spade_raise_error(self):
-        self.assertRaises(TypeError, spade.spade, [[1,2,3],[3,4,5]], 1*pq.ms, 4)
-        self.assertRaises(AttributeError, spade.spade, [neo.SpikeTrain(
-            [1,2,3]*pq.s, t_stop=5*pq.s), neo.SpikeTrain(
-            [3,4,5]*pq.s, t_stop=6*pq.s)], 1*pq.ms, 4)
-        self.assertRaises(AttributeError, spade.spade, [neo.SpikeTrain(
-            [1, 2, 3] * pq.s, t_stop=5 * pq.s), neo.SpikeTrain(
-            [3, 4, 5] * pq.s, t_stop=5 * pq.s)], 1 * pq.ms, 4, min_neu=-3)
-        self.assertRaises(AttributeError, spade.pvalue_spectrum, [
-            neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=5 * pq.s), neo.SpikeTrain(
-            [3, 4, 5] * pq.s, t_stop=5 * pq.s)], 1 * pq.ms, 4, 3*pq.ms,
-            n_surr=-3)
-        self.assertRaises(AttributeError, spade.test_signature_significance, (
-            (2, 3, 0.2), (2, 4, 0.1)), 0.01, corr='try')
-        self.assertRaises(AttributeError, spade.approximate_stability, (),
-        np.array([]), n_subsets=-3)
+        # Test list not using neo.SpikeTrain
+        self.assertRaises(TypeError, spade.spade, [
+            [1, 2, 3], [3, 4, 5]], 1 * pq.ms, 4, stat_corr='no')
+        self.assertRaises(TypeError, spade.concepts_mining, [
+            [1, 2, 3], [3, 4, 5]], 1 * pq.ms, 4)
+        # Test neo.SpikeTrain with different t_stop
+        self.assertRaises(
+            ValueError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=5 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            1 * pq.ms, 4, stat_corr='no')
+        # Test bin_size not pq.Quantity
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1., winlen=4, stat_corr='no')
+        # Test winlen not int
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4.1, stat_corr='no')
+        # Test min_spikes not int
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, min_spikes=3.4, stat_corr='no')
+        # Test min_occ not int
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, min_occ=3.4, stat_corr='no')
+        # Test max_spikes not int
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, max_spikes=3.4, stat_corr='no')
+        # Test max_occ not int
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, max_occ=3.4, stat_corr='no')
+        # Test min_neu not int
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, min_neu=3.4, stat_corr='no')
+        # Test wrong stability params
+        self.assertRaises(
+            ValueError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, approx_stab_pars={'wrong key': 0},
+            stat_corr='no')
+        # Test n_surr not int
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, n_surr=3.4, stat_corr='no')
+        # Test dither not pq.Quantity
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, n_surr=100, alpha=0.05,
+            dither=15., stat_corr='no')
+        # Test wrong alpha
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, n_surr=100, alpha='5 %',
+            dither=15.*pq.ms, stat_corr='no')
+        # Test wrong statistical correction
+        self.assertRaises(
+            ValueError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, n_surr=100, alpha=0.05,
+            dither=15.*pq.ms, stat_corr='wrong correction')
+        # Test psr_param with non-integer entries
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, n_surr=100, alpha=0.05,
+            dither=15.*pq.ms, stat_corr='no', psr_param=(2.5, 3.4, 2.1))
+        # Test psr_param that is not a sequence
+        self.assertRaises(
+            TypeError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, n_surr=100, alpha=0.05,
+            dither=15.*pq.ms, stat_corr='no', psr_param=3.1)
+        # Test output format
+        self.assertRaises(
+            ValueError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            bin_size=1.*pq.ms, winlen=4, n_surr=100, alpha=0.05,
+            dither=15.*pq.ms, stat_corr='no', output_format='wrong_output')
+        # Test wrong spectrum parameter
+        self.assertRaises(
+            ValueError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            1 * pq.ms, 4, n_surr=1, stat_corr='no',
+            spectrum='invalid_key')
+        self.assertRaises(
+            ValueError, spade.concepts_mining,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            1 * pq.ms, 4, report='invalid_key')
+        self.assertRaises(
+            ValueError, spade.pvalue_spectrum,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=6 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=6 * pq.s)],
+            1 * pq.ms, 4, dither=10*pq.ms, n_surr=1,
+            spectrum='invalid_key')
+        # Test negative minimum number of neurons
+        self.assertRaises(
+            ValueError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=5 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=5 * pq.s)],
+            1 * pq.ms, 4, min_neu=-3, stat_corr='no')
+        # Test wrong dither method
+        self.assertRaises(
+            ValueError, spade.spade,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=5 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=5 * pq.s)],
+            1 * pq.ms, 4, surr_method='invalid_key', stat_corr='no')
+        # Test wrong surrogate method in pvalue_spectrum
+        self.assertRaises(
+            ValueError, spade.pvalue_spectrum,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=5 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=5 * pq.s)],
+            1 * pq.ms, 4, dither=10*pq.ms, n_surr=100,
+            surr_method='invalid_key')
+        # Test negative number of surrogates
+        self.assertRaises(
+            ValueError, spade.pvalue_spectrum,
+            [neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=5 * pq.s),
+             neo.SpikeTrain([3, 4, 5] * pq.s, t_stop=5 * pq.s)],
+            1 * pq.ms, 4, 3 * pq.ms, n_surr=-3)
+        # Test wrong correction parameter
+        self.assertRaises(ValueError, spade.test_signature_significance,
+                          pv_spec=((2, 3, 0.2), (2, 4, 0.1)),
+                          concepts=([[(2, 3), (1, 2, 3)]]),
+                          alpha=0.01,
+                          winlen=1,
+                          corr='invalid_key')
+        # Test negative number of subset for stability
+        self.assertRaises(ValueError, spade.approximate_stability, (),
+                          np.array([]), n_subsets=-3)
+
+    def test_pattern_set_reduction(self):
+        winlen = 6
+        # intent(concept1) is a superset of intent(concept2)
+        # extent(concept1) is a subset of extent(concept2)
+        # intent(concept2) is a subset of intent(concept3)
+        #     when taking into account the shift due to the window positions
+        # intent(concept1) has a non-empty intersection with intent(concept3)
+        #     when taking into account the shift due to the window positions
+        # intent(concept4) is disjoint from all others
+        concept1 = ((12, 19, 26), (2, 10, 18))
+        concept2 = ((12, 19), (2, 10, 18, 26))
+        concept3 = ((0, 7, 14, 21), (0, 8))
+        concept4 = ((1, 6), (0, 8))
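+        # each concept is an (intent, extent) pair: the intent lists the
+        # pattern's attribute ids, the extent the windows where it occurs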
+
+        # reject concept2 using min_occ
+        # make sure to keep concept1 by setting k_superset_filtering = 1
+        concepts = spade.pattern_set_reduction([concept1, concept2],
+                                               ns_signatures=[],
+                                               winlen=winlen, spectrum='#',
+                                               h_subset_filtering=0, min_occ=2,
+                                               k_superset_filtering=1)
+        self.assertEqual(concepts, [concept1])
+
+        # keep concept2 by increasing h_subset_filtering
+        concepts = spade.pattern_set_reduction([concept1, concept2],
+                                               ns_signatures=[],
+                                               winlen=winlen, spectrum='#',
+                                               h_subset_filtering=2, min_occ=2,
+                                               k_superset_filtering=1)
+        self.assertEqual(concepts, [concept1, concept2])
+
+        # reject concept1 using min_spikes
+        concepts = spade.pattern_set_reduction([concept1, concept2],
+                                               ns_signatures=[],
+                                               winlen=winlen, spectrum='#',
+                                               h_subset_filtering=2,
+                                               min_spikes=2,
+                                               k_superset_filtering=0)
+        self.assertEqual(concepts, [concept2])
+
+        # reject concept2 using ns_signatures
+        concepts = spade.pattern_set_reduction([concept1, concept2],
+                                               ns_signatures=[(2, 2)],
+                                               winlen=winlen, spectrum='#',
+                                               h_subset_filtering=1, min_occ=2,
+                                               k_superset_filtering=1)
+        self.assertEqual(concepts, [concept1])
+
+        # reject concept1 using ns_signatures
+        # make sure to keep concept2 by increasing h_subset_filtering
+        concepts = spade.pattern_set_reduction([concept1, concept2],
+                                               ns_signatures=[(2, 3)],
+                                               winlen=winlen, spectrum='#',
+                                               h_subset_filtering=3,
+                                               min_spikes=2,
+                                               min_occ=2,
+                                               k_superset_filtering=1)
+        self.assertEqual(concepts, [concept2])
+
+        # reject concept2 using the covered spikes criterion
+        concepts = spade.pattern_set_reduction([concept1, concept2],
+                                               ns_signatures=[(2, 2)],
+                                               winlen=winlen, spectrum='#',
+                                               h_subset_filtering=0,
+                                               min_occ=2,
+                                               k_superset_filtering=0,
+                                               l_covered_spikes=0)
+        self.assertEqual(concepts, [concept1])
+
+        # reject concept1 using superset filtering
+        # (case with non-empty intersection but no superset)
+        concepts = spade.pattern_set_reduction([concept1, concept3],
+                                               ns_signatures=[], min_spikes=2,
+                                               winlen=winlen, spectrum='#',
+                                               k_superset_filtering=0)
+        self.assertEqual(concepts, [concept3])
+
+        # keep concept1 by increasing k_superset_filtering
+        concepts = spade.pattern_set_reduction([concept1, concept3],
+                                               ns_signatures=[], min_spikes=2,
+                                               winlen=winlen, spectrum='#',
+                                               k_superset_filtering=1)
+        self.assertEqual(concepts, [concept1, concept3])
+
+        # reject concept3 using ns_signatures
+        concepts = spade.pattern_set_reduction([concept1, concept3],
+                                               ns_signatures=[(3, 2)],
+                                               min_spikes=2,
+                                               winlen=winlen, spectrum='#',
+                                               k_superset_filtering=1)
+        self.assertEqual(concepts, [concept1])
+
+        # reject concept3 using the covered spikes criterion
+        concepts = spade.pattern_set_reduction([concept1, concept3],
+                                               ns_signatures=[(3, 2), (2, 3)],
+                                               min_spikes=2,
+                                               winlen=winlen, spectrum='#',
+                                               k_superset_filtering=1,
+                                               l_covered_spikes=0)
+        self.assertEqual(concepts, [concept1])
+
+        # check that two concepts with disjoint intents are both kept
+        concepts = spade.pattern_set_reduction([concept3, concept4],
+                                               ns_signatures=[],
+                                               winlen=winlen, spectrum='#')
+        self.assertEqual(concepts, [concept3, concept4])
+
+    @unittest.skipUnless(HAVE_STATSMODELS,
+                         "'fdr_bh' stat corr requires statsmodels")
+    def test_signature_significance_fdr_bh_corr(self):
+        """
+        A typical corr='fdr_bh' scenario that requires statsmodels.
+        """
+        sig_spectrum = spade.test_signature_significance(
+            pv_spec=((2, 3, 0.2), (2, 4, 0.05)),
+            concepts=([[(2, 3), (1, 2, 3)],
+                       [(2, 4), (1, 2, 3, 4)]]),
+            alpha=0.15, winlen=1, corr='fdr_bh')
+        self.assertEqual(sig_spectrum, [(2., 3., False), (2., 4., True)])
+
+    def test_different_surrogate_method(self):
+        np.random.seed(0)
+        random.seed(0)
+        spiketrains = [stg.homogeneous_poisson_process(rate=20*pq.Hz)
+                       for _ in range(2)]
+        surr_methods = ('dither_spikes', 'joint_isi_dithering',
+                        'bin_shuffling',
+                        'dither_spikes_with_refractory_period')
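+        # expected spectra: one [size, occurrences, p-value] entry per
+        # signature returned by pvalue_spectrum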
+        pv_specs = {'dither_spikes': [[2, 2, 0.8], [2, 3, 0.2]],
+                    'joint_isi_dithering': [[2, 2, 0.8]],
+                    'bin_shuffling': [[2, 2, 1.0], [2, 3, 0.2]],
+                    'dither_spikes_with_refractory_period':
+                        [[2, 2, 0.8]]}
+        for surr_method in surr_methods:
+            pv_spec = spade.pvalue_spectrum(
+                spiketrains, bin_size=self.bin_size,
+                winlen=self.winlen, dither=15*pq.ms,
+                n_surr=5, surr_method=surr_method)
+            self.assertEqual(pv_spec, pv_specs[surr_method])
 
 
 def suite():
@@ -243,4 +731,3 @@ def suite():
 if __name__ == "__main__":
     runner = unittest.TextTestRunner(verbosity=2)
     runner.run(suite())
-    globals()
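
In the same spirit, a compact sketch of the spade.spade call pattern these
tests exercise. The keyword names (bin_size, winlen, n_surr, stat_corr) and
the result keys ('patterns', 'neurons', 'lags', 'pvalue') are taken from the
tests and assumed current; rates and durations are illustrative.

    import quantities as pq

    import elephant.spade as spade
    import elephant.spike_train_generation as stg

    # two fully synchronous spike trains: amplitude_distribution [0, 0, 1]
    # puts probability 1 on events of amplitude 2
    spiketrains = stg.cpp(rate=5 * pq.Hz,
                          amplitude_distribution=[0, 0, 1],
                          t_stop=10 * pq.s)

    result = spade.spade(spiketrains, bin_size=1 * pq.ms, winlen=10,
                         n_surr=100, alpha=0.05, psr_param=[0, 0, 0],
                         stat_corr='no', output_format='patterns')
    for pattern in result['patterns']:
        # neuron ids, lags between pattern spikes, and the p-value taken
        # from the surrogate-based p-value spectrum
        print(pattern['neurons'], pattern['lags'], pattern['pvalue'])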

+ 143 - 120
code/elephant/elephant/test/test_spectral.py

@@ -2,7 +2,7 @@
 """
 Unit tests for the spectral module.
 
-:copyright: Copyright 2015 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2015 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
@@ -12,6 +12,7 @@ import numpy as np
 import scipy.signal as spsig
 import quantities as pq
 import neo.core as n
+from numpy.testing import assert_array_almost_equal, assert_array_equal
 
 import elephant.spectral
 
@@ -19,7 +20,7 @@ import elephant.spectral
 class WelchPSDTestCase(unittest.TestCase):
     def test_welch_psd_errors(self):
         # generate a dummy data
-        data = n.AnalogSignal(np.zeros(5000), sampling_period=0.001*pq.s,
+        data = n.AnalogSignal(np.zeros(5000), sampling_period=0.001 * pq.s,
                               units='mV')
 
         # check for invalid parameter values
@@ -37,7 +38,7 @@ class WelchPSDTestCase(unittest.TestCase):
         self.assertRaises(ValueError, elephant.spectral.welch_psd, data,
                           freq_res=-1)
         self.assertRaises(ValueError, elephant.spectral.welch_psd, data,
-                          freq_res=data.sampling_rate/(data.shape[0]+1))
+                          freq_res=data.sampling_rate / (data.shape[0] + 1))
         # - overlap
         self.assertRaises(ValueError, elephant.spectral.welch_psd, data,
                           overlap=-1.0)
@@ -50,34 +51,43 @@ class WelchPSDTestCase(unittest.TestCase):
         sampling_period = 0.001
         signal_freq = 100.0
         noise = np.random.normal(size=data_length)
-        signal = [np.sin(2*np.pi*signal_freq*t)
-                  for t in np.arange(0, data_length*sampling_period,
+        signal = [np.sin(2 * np.pi * signal_freq * t)
+                  for t in np.arange(0, data_length * sampling_period,
                                      sampling_period)]
-        data = n.AnalogSignal(np.array(signal+noise),
-                                      sampling_period=sampling_period*pq.s,
-                                      units='mV')
+        data = n.AnalogSignal(np.array(signal + noise),
+                              sampling_period=sampling_period * pq.s,
+                              units='mV')
 
         # consistency between different ways of specifying segment length
-        freqs1, psd1 = elephant.spectral.welch_psd(data, len_seg=data_length//5, overlap=0)
-        freqs2, psd2 = elephant.spectral.welch_psd(data, num_seg=5, overlap=0)
-        self.assertTrue((psd1==psd2).all() and (freqs1==freqs2).all())
+        freqs1, psd1 = elephant.spectral.welch_psd(
+            data, len_segment=data_length // 5, overlap=0)
+        freqs2, psd2 = elephant.spectral.welch_psd(
+            data, n_segments=5, overlap=0)
+        self.assertTrue((psd1 == psd2).all() and (freqs1 == freqs2).all())
 
         # frequency resolution and consistency with data
         freq_res = 1.0 * pq.Hz
-        freqs, psd = elephant.spectral.welch_psd(data, freq_res=freq_res)
-        self.assertAlmostEqual(freq_res, freqs[1]-freqs[0])
+        freqs, psd = elephant.spectral.welch_psd(
+            data, frequency_resolution=freq_res)
+        self.assertAlmostEqual(freq_res, freqs[1] - freqs[0])
         self.assertEqual(freqs[psd.argmax()], signal_freq)
-        freqs_np, psd_np = elephant.spectral.welch_psd(data.magnitude.flatten(), fs=1/sampling_period, freq_res=freq_res)
-        self.assertTrue((freqs==freqs_np).all() and (psd==psd_np).all())
+        freqs_np, psd_np = elephant.spectral.welch_psd(
+            data.magnitude.flatten(), fs=1 / sampling_period,
+            frequency_resolution=freq_res)
+        self.assertTrue((freqs == freqs_np).all() and (psd == psd_np).all())
 
         # check of scipy.signal.welch() parameters
         params = {'window': 'hamming', 'nfft': 1024, 'detrend': 'linear',
                   'return_onesided': False, 'scaling': 'spectrum'}
         for key, val in params.items():
-            freqs, psd = elephant.spectral.welch_psd(data, len_seg=1000, overlap=0, **{key: val})
-            freqs_spsig, psd_spsig = spsig.welch(np.rollaxis(data, 0, len(data.shape)),
-                                                 fs=1/sampling_period, nperseg=1000, noverlap=0, **{key: val})
-            self.assertTrue((freqs==freqs_spsig).all() and (psd==psd_spsig).all())
+            freqs, psd = elephant.spectral.welch_psd(
+                data, len_segment=1000, overlap=0, **{key: val})
+            freqs_spsig, psd_spsig = spsig.welch(np.rollaxis(data, 0, len(
+                data.shape)), fs=1 / sampling_period, nperseg=1000,
+                                                 noverlap=0, **{key: val})
+            self.assertTrue(
+                (freqs == freqs_spsig).all() and (
+                    psd == psd_spsig).all())
 
         # - generate multidimensional data for check of parameter `axis`
         num_channel = 4
@@ -85,15 +95,15 @@ class WelchPSDTestCase(unittest.TestCase):
         data_multidim = np.random.normal(size=(num_channel, data_length))
         freqs, psd = elephant.spectral.welch_psd(data_multidim)
         freqs_T, psd_T = elephant.spectral.welch_psd(data_multidim.T, axis=0)
-        self.assertTrue(np.all(freqs==freqs_T))
-        self.assertTrue(np.all(psd==psd_T.T))
+        self.assertTrue(np.all(freqs == freqs_T))
+        self.assertTrue(np.all(psd == psd_T.T))
 
     def test_welch_psd_input_types(self):
         # generate test data
         sampling_period = 0.001
         data = n.AnalogSignal(np.array(np.random.normal(size=5000)),
-                                   sampling_period=sampling_period*pq.s,
-                                   units='mV')
+                              sampling_period=sampling_period * pq.s,
+                              units='mV')
 
         # outputs from AnalogSignal input are of Quantity type (standard usage)
         freqs_neo, psd_neo = elephant.spectral.welch_psd(data)
@@ -101,18 +111,24 @@ class WelchPSDTestCase(unittest.TestCase):
         self.assertTrue(isinstance(psd_neo, pq.quantity.Quantity))
 
         # outputs from Quantity array input are of Quantity type
-        freqs_pq, psd_pq = elephant.spectral.welch_psd(data.magnitude.flatten()*data.units, fs=1/sampling_period)
+        freqs_pq, psd_pq = elephant.spectral.welch_psd(
+            data.magnitude.flatten() * data.units, fs=1 / sampling_period)
         self.assertTrue(isinstance(freqs_pq, pq.quantity.Quantity))
         self.assertTrue(isinstance(psd_pq, pq.quantity.Quantity))
 
         # outputs from Numpy ndarray input are NOT of Quantity type
-        freqs_np, psd_np = elephant.spectral.welch_psd(data.magnitude.flatten(), fs=1/sampling_period)
+        freqs_np, psd_np = elephant.spectral.welch_psd(
+            data.magnitude.flatten(), fs=1 / sampling_period)
         self.assertFalse(isinstance(freqs_np, pq.quantity.Quantity))
         self.assertFalse(isinstance(psd_np, pq.quantity.Quantity))
 
         # check if the results from different input types are identical
-        self.assertTrue((freqs_neo==freqs_pq).all() and (psd_neo==psd_pq).all())
-        self.assertTrue((freqs_neo==freqs_np).all() and (psd_neo==psd_np).all())
+        self.assertTrue(
+            (freqs_neo == freqs_pq).all() and (
+                psd_neo == psd_pq).all())
+        self.assertTrue(
+            (freqs_neo == freqs_np).all() and (
+                psd_neo == psd_np).all())
 
     def test_welch_psd_multidim_input(self):
         # generate multidimensional data
@@ -125,51 +141,52 @@ class WelchPSDTestCase(unittest.TestCase):
         # conventional one, `data_np` needs to be transposed when it's used to
         # define an AnalogSignal
         data_neo = n.AnalogSignal(data_np.T,
-                                       sampling_period=sampling_period*pq.s,
-                                       units='mV')
+                                  sampling_period=sampling_period * pq.s,
+                                  units='mV')
         data_neo_1dim = n.AnalogSignal(data_np[0],
-                                       sampling_period=sampling_period*pq.s,
+                                       sampling_period=sampling_period * pq.s,
                                        units='mV')
 
         # check if the results from different input types are identical
         freqs_np, psd_np = elephant.spectral.welch_psd(data_np,
-                                                     fs=1/sampling_period)
+                                                       fs=1 / sampling_period)
         freqs_neo, psd_neo = elephant.spectral.welch_psd(data_neo)
-        freqs_neo_1dim, psd_neo_1dim = elephant.spectral.welch_psd(data_neo_1dim)
-        self.assertTrue(np.all(freqs_np==freqs_neo))
-        self.assertTrue(np.all(psd_np==psd_neo))
-        self.assertTrue(np.all(psd_neo_1dim==psd_neo[0]))
+        freqs_neo_1dim, psd_neo_1dim = elephant.spectral.welch_psd(
+            data_neo_1dim)
+        self.assertTrue(np.all(freqs_np == freqs_neo))
+        self.assertTrue(np.all(psd_np == psd_neo))
+        self.assertTrue(np.all(psd_neo_1dim == psd_neo[0]))
 
 
 class WelchCohereTestCase(unittest.TestCase):
     def test_welch_cohere_errors(self):
         # generate dummy data
-        x = n.AnalogSignal(np.zeros(5000), sampling_period=0.001*pq.s,
-            units='mV')
-        y = n.AnalogSignal(np.zeros(5000), sampling_period=0.001*pq.s,
-            units='mV')
+        x = n.AnalogSignal(np.zeros(5000), sampling_period=0.001 * pq.s,
+                           units='mV')
+        y = n.AnalogSignal(np.zeros(5000), sampling_period=0.001 * pq.s,
+                           units='mV')
 
         # check for invalid parameter values
         # - length of segments
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            len_seg=0)
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            len_seg=x.shape[0] * 2)
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          len_seg=0)
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          len_seg=x.shape[0] * 2)
         # - number of segments
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            num_seg=0)
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            num_seg=x.shape[0] * 2)
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          num_seg=0)
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          num_seg=x.shape[0] * 2)
         # - frequency resolution
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            freq_res=-1)
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            freq_res=x.sampling_rate/(x.shape[0]+1))
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          freq_res=-1)
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          freq_res=x.sampling_rate / (x.shape[0] + 1))
         # - overlap
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            overlap=-1.0)
-        self.assertRaises(ValueError, elephant.spectral.welch_cohere, x, y,
-            overlap=1.1)
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          overlap=-1.0)
+        self.assertRaises(ValueError, elephant.spectral.welch_coherence, x, y,
+                          overlap=1.1)
 
     def test_welch_cohere_behavior(self):
         # generate data by adding white noise and a sinusoid
@@ -178,41 +195,43 @@ class WelchCohereTestCase(unittest.TestCase):
         signal_freq = 100.0
         noise1 = np.random.normal(size=data_length) * 0.01
         noise2 = np.random.normal(size=data_length) * 0.01
-        signal1 = [np.cos(2*np.pi*signal_freq*t)
-                  for t in np.arange(0, data_length*sampling_period,
-                sampling_period)]
-        signal2 = [np.sin(2*np.pi*signal_freq*t)
-                   for t in np.arange(0, data_length*sampling_period,
-                sampling_period)]
-        x = n.AnalogSignal(np.array(signal1+noise1), units='mV',
-            sampling_period=sampling_period*pq.s)
-        y = n.AnalogSignal(np.array(signal2+noise2), units='mV',
-            sampling_period=sampling_period*pq.s)
+        signal1 = [np.cos(2 * np.pi * signal_freq * t)
+                   for t in np.arange(0, data_length * sampling_period,
+                                      sampling_period)]
+        signal2 = [np.sin(2 * np.pi * signal_freq * t)
+                   for t in np.arange(0, data_length * sampling_period,
+                                      sampling_period)]
+        x = n.AnalogSignal(np.array(signal1 + noise1), units='mV',
+                           sampling_period=sampling_period * pq.s)
+        y = n.AnalogSignal(np.array(signal2 + noise2), units='mV',
+                           sampling_period=sampling_period * pq.s)
 
         # consistency between different ways of specifying segment length
-        freqs1, coherency1, phase_lag1 = elephant.spectral.welch_cohere(x, y,
-            len_seg=data_length//5, overlap=0)
-        freqs2, coherency2, phase_lag2 = elephant.spectral.welch_cohere(x, y,
-            num_seg=5, overlap=0)
-        self.assertTrue((coherency1==coherency2).all() and
-                        (phase_lag1==phase_lag2).all() and
-                        (freqs1==freqs2).all())
+        freqs1, coherency1, phase_lag1 = elephant.spectral.welch_coherence(
+            x, y, len_segment=data_length // 5, overlap=0)
+        freqs2, coherency2, phase_lag2 = elephant.spectral.welch_coherence(
+            x, y, n_segments=5, overlap=0)
+        self.assertTrue((coherency1 == coherency2).all() and
+                        (phase_lag1 == phase_lag2).all() and
+                        (freqs1 == freqs2).all())
 
         # frequency resolution and consistency with data
         freq_res = 1.0 * pq.Hz
-        freqs, coherency, phase_lag = elephant.spectral.welch_cohere(x, y,
-            freq_res=freq_res)
-        self.assertAlmostEqual(freq_res, freqs[1]-freqs[0])
+        freqs, coherency, phase_lag = elephant.spectral.welch_coherence(
+            x, y, frequency_resolution=freq_res)
+        self.assertAlmostEqual(freq_res, freqs[1] - freqs[0])
         self.assertAlmostEqual(freqs[coherency.argmax()], signal_freq,
-            places=2)
-        self.assertAlmostEqual(phase_lag[coherency.argmax()], np.pi/2,
-            places=2)
+                               places=2)
+        self.assertAlmostEqual(phase_lag[coherency.argmax()], -np.pi / 2,
+                               places=2)
         freqs_np, coherency_np, phase_lag_np =\
-            elephant.spectral.welch_cohere(x.magnitude.flatten(), y.magnitude.flatten(),
-                fs=1/sampling_period, freq_res=freq_res)
-        self.assertTrue((freqs == freqs_np).all() and
-                        (coherency[:, 0] == coherency_np).all() and
-                        (phase_lag[:, 0] == phase_lag_np).all())
+            elephant.spectral.welch_coherence(x.magnitude.flatten(),
+                                              y.magnitude.flatten(),
+                                              fs=1 / sampling_period,
+                                              frequency_resolution=freq_res)
+        assert_array_equal(freqs.simplified.magnitude, freqs_np)
+        assert_array_equal(coherency[:, 0], coherency_np)
+        assert_array_equal(phase_lag[:, 0], phase_lag_np)
 
         # - check the behavior of parameter `axis` using multidimensional data
         num_channel = 4
@@ -220,51 +239,53 @@ class WelchCohereTestCase(unittest.TestCase):
         x_multidim = np.random.normal(size=(num_channel, data_length))
         y_multidim = np.random.normal(size=(num_channel, data_length))
         freqs, coherency, phase_lag =\
-            elephant.spectral.welch_cohere(x_multidim, y_multidim)
-        freqs_T, coherency_T, phase_lag_T =\
-            elephant.spectral.welch_cohere(x_multidim.T, y_multidim.T, axis=0)
-        self.assertTrue(np.all(freqs==freqs_T))
-        self.assertTrue(np.all(coherency==coherency_T.T))
-        self.assertTrue(np.all(phase_lag==phase_lag_T.T))
+            elephant.spectral.welch_coherence(x_multidim, y_multidim)
+        freqs_T, coherency_T, phase_lag_T = elephant.spectral.welch_coherence(
+            x_multidim.T, y_multidim.T, axis=0)
+        assert_array_equal(freqs, freqs_T)
+        assert_array_equal(coherency, coherency_T.T)
+        assert_array_equal(phase_lag, phase_lag_T.T)
 
     def test_welch_cohere_input_types(self):
         # generate test data
         sampling_period = 0.001
         x = n.AnalogSignal(np.array(np.random.normal(size=5000)),
-            sampling_period=sampling_period*pq.s,
-            units='mV')
+                           sampling_period=sampling_period * pq.s,
+                           units='mV')
         y = n.AnalogSignal(np.array(np.random.normal(size=5000)),
-            sampling_period=sampling_period*pq.s,
-            units='mV')
+                           sampling_period=sampling_period * pq.s,
+                           units='mV')
 
         # outputs from AnalogSignal input are of Quantity type
         # (standard usage)
         freqs_neo, coherency_neo, phase_lag_neo =\
-            elephant.spectral.welch_cohere(x, y)
+            elephant.spectral.welch_coherence(x, y)
         self.assertTrue(isinstance(freqs_neo, pq.quantity.Quantity))
         self.assertTrue(isinstance(phase_lag_neo, pq.quantity.Quantity))
 
         # outputs from Quantity array input are of Quantity type
-        freqs_pq, coherency_pq, phase_lag_pq =\
-            elephant.spectral.welch_cohere(x.magnitude.flatten()*x.units,
-                y.magnitude.flatten()*y.units, fs=1/sampling_period)
+        freqs_pq, coherency_pq, phase_lag_pq = elephant.spectral\
+            .welch_coherence(x.magnitude.flatten() * x.units,
+                             y.magnitude.flatten() * y.units,
+                             fs=1 / sampling_period)
         self.assertTrue(isinstance(freqs_pq, pq.quantity.Quantity))
         self.assertTrue(isinstance(phase_lag_pq, pq.quantity.Quantity))
 
         # outputs from Numpy ndarray input are NOT of Quantity type
-        freqs_np, coherency_np, phase_lag_np =\
-            elephant.spectral.welch_cohere(x.magnitude.flatten(), y.magnitude.flatten(),
-                fs=1/sampling_period)
+        freqs_np, coherency_np, phase_lag_np = elephant.spectral\
+            .welch_coherence(x.magnitude.flatten(),
+                             y.magnitude.flatten(),
+                             fs=1 / sampling_period)
         self.assertFalse(isinstance(freqs_np, pq.quantity.Quantity))
         self.assertFalse(isinstance(phase_lag_np, pq.quantity.Quantity))
 
         # check if the results from different input types are identical
-        self.assertTrue((freqs_neo==freqs_pq).all() and
-                        (coherency_neo[:, 0]==coherency_pq).all() and
-                        (phase_lag_neo[:, 0]==phase_lag_pq).all())
-        self.assertTrue((freqs_neo==freqs_np).all() and
-                        (coherency_neo[:, 0]==coherency_np).all() and
-                        (phase_lag_neo[:, 0]==phase_lag_np).all())
+        self.assertTrue((freqs_neo == freqs_pq).all() and
+                        (coherency_neo[:, 0] == coherency_pq).all() and
+                        (phase_lag_neo[:, 0] == phase_lag_pq).all())
+        self.assertTrue((freqs_neo == freqs_np).all() and
+                        (coherency_neo[:, 0] == coherency_np).all() and
+                        (phase_lag_neo[:, 0] == phase_lag_np).all())
 
     def test_welch_cohere_multidim_input(self):
         # generate multidimensional data
@@ -277,26 +298,28 @@ class WelchCohereTestCase(unittest.TestCase):
         # convention in NumPy/SciPy, `data_np` needs to be transposed when it's
         # used to define an AnalogSignal
         x_neo = n.AnalogSignal(x_np.T, units='mV',
-            sampling_period=sampling_period*pq.s)
+                               sampling_period=sampling_period * pq.s)
         y_neo = n.AnalogSignal(y_np.T, units='mV',
-            sampling_period=sampling_period*pq.s)
+                               sampling_period=sampling_period * pq.s)
         x_neo_1dim = n.AnalogSignal(x_np[0], units='mV',
-            sampling_period=sampling_period*pq.s)
+                                    sampling_period=sampling_period * pq.s)
         y_neo_1dim = n.AnalogSignal(y_np[0], units='mV',
-            sampling_period=sampling_period*pq.s)
+                                    sampling_period=sampling_period * pq.s)
 
         # check if the results from different input types are identical
-        freqs_np, coherency_np, phase_lag_np =\
-            elephant.spectral.welch_cohere(x_np, y_np, fs=1/sampling_period)
+        freqs_np, coherency_np, phase_lag_np = elephant.spectral\
+            .welch_coherence(x_np, y_np, fs=1 / sampling_period)
         freqs_neo, coherency_neo, phase_lag_neo =\
-            elephant.spectral.welch_cohere(x_neo, y_neo)
+            elephant.spectral.welch_coherence(x_neo, y_neo)
         freqs_neo_1dim, coherency_neo_1dim, phase_lag_neo_1dim =\
-            elephant.spectral.welch_cohere(x_neo_1dim, y_neo_1dim)
-        self.assertTrue(np.all(freqs_np==freqs_neo))
-        self.assertTrue(np.all(coherency_np.T==coherency_neo))
-        self.assertTrue(np.all(phase_lag_np.T==phase_lag_neo))
-        self.assertTrue(np.all(coherency_neo_1dim[:, 0]==coherency_neo[:, 0]))
-        self.assertTrue(np.all(phase_lag_neo_1dim[:, 0]==phase_lag_neo[:, 0]))
+            elephant.spectral.welch_coherence(x_neo_1dim, y_neo_1dim)
+        self.assertTrue(np.all(freqs_np == freqs_neo))
+        self.assertTrue(np.all(coherency_np.T == coherency_neo))
+        self.assertTrue(np.all(phase_lag_np.T == phase_lag_neo))
+        self.assertTrue(
+            np.all(coherency_neo_1dim[:, 0] == coherency_neo[:, 0]))
+        self.assertTrue(
+            np.all(phase_lag_neo_1dim[:, 0] == phase_lag_neo[:, 0]))
 
 
 def suite():
@@ -306,4 +329,4 @@ def suite():
 
 if __name__ == "__main__":
     runner = unittest.TextTestRunner(verbosity=2)
-    runner.run(suite())
+    runner.run(suite())
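The hunks above track two API changes in `elephant.spectral`: keyword renames in `welch_psd` (`len_seg` → `len_segment`, `num_seg` → `n_segments`, `freq_res` → `frequency_resolution`) and the function rename `welch_cohere` → `welch_coherence`; note also that the expected phase lag at peak coherence is now `-np.pi / 2`. A minimal sketch of the renamed interface on a synthetic signal, with illustrative signal parameters:

    import numpy as np
    import quantities as pq
    import neo.core as n
    import elephant.spectral

    fs = 1000.0                      # sampling rate in Hz
    t = np.arange(0, 5.0, 1.0 / fs)  # 5 s of samples
    x = np.sin(2 * np.pi * 100.0 * t) + np.random.normal(size=t.size)
    signal = n.AnalogSignal(x, units='mV', sampling_rate=fs * pq.Hz)

    # equivalent segmentations: five segments, specified by length or by count
    freqs1, psd1 = elephant.spectral.welch_psd(
        signal, len_segment=len(signal) // 5, overlap=0)
    freqs2, psd2 = elephant.spectral.welch_psd(
        signal, n_segments=5, overlap=0)

    # or request a target frequency resolution directly
    freqs, psd = elephant.spectral.welch_psd(
        signal, frequency_resolution=1.0 * pq.Hz)

    # the renamed coherence function takes two signals
    freqs_c, coherency, phase_lag = elephant.spectral.welch_coherence(
        signal, signal, n_segments=5, overlap=0)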

+ 317 - 87
code/elephant/elephant/test/test_spike_train_correlation.py

@@ -2,21 +2,28 @@
 """
 Unit tests for the spike_train_correlation module.
 
-:copyright: Copyright 2015-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2015-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
+import sys
 import unittest
 
+import neo
 import numpy as np
-from numpy.testing.utils import assert_array_equal, assert_array_almost_equal
 import quantities as pq
-import neo
+from numpy.testing.utils import assert_array_equal, assert_array_almost_equal
+
 import elephant.conversion as conv
 import elephant.spike_train_correlation as sc
+from elephant.spike_train_generation import homogeneous_poisson_process,\
+    homogeneous_gamma_process
+import math
+
+python_version_major = sys.version_info.major
 
 
-class covariance_TestCase(unittest.TestCase):
+class CovarianceTestCase(unittest.TestCase):
 
     def setUp(self):
         # These two arrays must be such that they do not have coincidences
@@ -35,7 +42,7 @@ class covariance_TestCase(unittest.TestCase):
         # And binned counterparts
         self.binned_st = conv.BinnedSpikeTrain(
             [self.st_0, self.st_1], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
 
     def test_covariance_binned(self):
         '''
@@ -44,9 +51,9 @@ class covariance_TestCase(unittest.TestCase):
 
         # Calculate clipped and unclipped
         res_clipped = sc.covariance(
-            self.binned_st, binary=True)
+            self.binned_st, binary=True, fast=False)
         res_unclipped = sc.covariance(
-            self.binned_st, binary=False)
+            self.binned_st, binary=False, fast=False)
 
         # Check dimensions
         self.assertEqual(len(res_clipped), 2)
@@ -90,13 +97,13 @@ class covariance_TestCase(unittest.TestCase):
         # Calculate correlation
         binned_st = conv.BinnedSpikeTrain(
             [self.st_0, self.st_0], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
-        target = sc.covariance(binned_st)
+            bin_size=1 * pq.ms)
+        result = sc.covariance(binned_st, fast=False)
 
         # Check dimensions
-        self.assertEqual(len(target), 2)
+        self.assertEqual(len(result), 2)
         # Check result
-        assert_array_equal(target[0][0], target[1][1])
+        assert_array_equal(result[0][0], result[1][1])
 
     def test_covariance_binned_short_input(self):
         '''
@@ -106,19 +113,29 @@ class covariance_TestCase(unittest.TestCase):
         # Calculate correlation
         binned_st = conv.BinnedSpikeTrain(
             self.st_0, t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
-        target = sc.covariance(binned_st)
+            bin_size=1 * pq.ms)
+        result = sc.covariance(binned_st, binary=True, fast=False)
 
         # Check result unclipped against result calculated by numpy.corrcoef
         mat = binned_st.to_bool_array()
-        target_numpy = np.cov(mat)
+        target = np.cov(mat)
 
         # Check result and dimensionality of result
-        self.assertEqual(target.ndim, target_numpy.ndim)
-        self.assertAlmostEqual(target, target_numpy)
+        self.assertEqual(result.ndim, target.ndim)
+        assert_array_almost_equal(result, target)
+        assert_array_almost_equal(target,
+                                  sc.covariance(binned_st, binary=True,
+                                                fast=True))
 
+    def test_covariance_fast_mode(self):
+        np.random.seed(27)
+        st = homogeneous_poisson_process(rate=10 * pq.Hz, t_stop=10 * pq.s)
+        binned_st = conv.BinnedSpikeTrain(st, n_bins=10)
+        assert_array_almost_equal(sc.covariance(binned_st, fast=False),
+                                  sc.covariance(binned_st, fast=True))
 
-class corrcoeff_TestCase(unittest.TestCase):
+
+class CorrCoefTestCase(unittest.TestCase):
 
     def setUp(self):
         # These two arrays must be such that they do not have coincidences
@@ -127,17 +144,20 @@ class corrcoeff_TestCase(unittest.TestCase):
             1.3, 7.56, 15.87, 28.23, 30.9, 34.2, 38.2, 43.2]
         self.test_array_1d_1 = [
             1.02, 2.71, 18.82, 28.46, 28.79, 43.6]
+        self.test_array_1d_2 = []
 
         # Build spike trains
         self.st_0 = neo.SpikeTrain(
             self.test_array_1d_0, units='ms', t_stop=50.)
         self.st_1 = neo.SpikeTrain(
             self.test_array_1d_1, units='ms', t_stop=50.)
+        self.st_2 = neo.SpikeTrain(
+            self.test_array_1d_2, units='ms', t_stop=50.)
 
         # And binned counterparts
         self.binned_st = conv.BinnedSpikeTrain(
             [self.st_0, self.st_1], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
 
     def test_corrcoef_binned(self):
         '''
@@ -145,9 +165,9 @@ class corrcoeff_TestCase(unittest.TestCase):
         '''
 
         # Calculate clipped and unclipped
-        res_clipped = sc.corrcoef(
+        res_clipped = sc.correlation_coefficient(
             self.binned_st, binary=True)
-        res_unclipped = sc.corrcoef(
+        res_unclipped = sc.correlation_coefficient(
             self.binned_st, binary=False)
 
         # Check dimensions
@@ -198,13 +218,17 @@ class corrcoeff_TestCase(unittest.TestCase):
         # Calculate correlation
         binned_st = conv.BinnedSpikeTrain(
             [self.st_0, self.st_0], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
-        target = sc.corrcoef(binned_st)
+            bin_size=1 * pq.ms)
+        result = sc.correlation_coefficient(binned_st, fast=False)
+        target = np.ones((2, 2))
 
         # Check dimensions
-        self.assertEqual(len(target), 2)
+        self.assertEqual(len(result), 2)
         # Check result
-        assert_array_equal(target, 1.)
+        assert_array_almost_equal(result, target)
+        assert_array_almost_equal(
+            result, sc.correlation_coefficient(
+                binned_st, fast=True))
 
     def test_corrcoef_binned_short_input(self):
         '''
@@ -213,15 +237,46 @@ class corrcoeff_TestCase(unittest.TestCase):
         # Calculate correlation
         binned_st = conv.BinnedSpikeTrain(
             self.st_0, t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
-        target = sc.corrcoef(binned_st)
+            bin_size=1 * pq.ms)
+        result = sc.correlation_coefficient(binned_st, fast=False)
+        target = np.array(1.)
 
         # Check result and dimensionality of result
-        self.assertEqual(target.ndim, 0)
-        self.assertEqual(target, 1.)
+        self.assertEqual(result.ndim, 0)
+        assert_array_almost_equal(result, target)
+        assert_array_almost_equal(
+            result, sc.correlation_coefficient(
+                binned_st, fast=True))
+
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
+    def test_empty_spike_train(self):
+        '''
+        Test whether a warning is issued in the case of an empty spike train.
+        Also check correctness of the output array.
+        '''
+        # st_2 is empty
+        binned_12 = conv.BinnedSpikeTrain([self.st_1, self.st_2],
+                                          bin_size=1 * pq.ms)
+
+        with self.assertWarns(UserWarning):
+            result = sc.correlation_coefficient(binned_12, fast=False)
+
+        # test for NaNs in the output array
+        target = np.zeros((2, 2)) * np.NaN
+        target[0, 0] = 1.0
+        assert_array_almost_equal(result, target)
+
+    def test_corrcoef_fast_mode(self):
+        np.random.seed(27)
+        st = homogeneous_poisson_process(rate=10 * pq.Hz, t_stop=10 * pq.s)
+        binned_st = conv.BinnedSpikeTrain(st, n_bins=10)
+        assert_array_almost_equal(
+            sc.correlation_coefficient(
+                binned_st, fast=False), sc.correlation_coefficient(
+                binned_st, fast=True))
 
 
-class cross_correlation_histogram_TestCase(unittest.TestCase):
+class CrossCorrelationHistogramTest(unittest.TestCase):
 
     def setUp(self):
         # These two arrays must be such that they do not have coincidences
@@ -240,27 +295,27 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
         # And binned counterparts
         self.binned_st1 = conv.BinnedSpikeTrain(
             [self.st_1], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
         self.binned_st2 = conv.BinnedSpikeTrain(
             [self.st_2], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
         self.binned_sts = conv.BinnedSpikeTrain(
             [self.st_1, self.st_2], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
 
         # Binned sts to check error raising
-        self.st_check_binsize = conv.BinnedSpikeTrain(
+        self.st_check_bin_size = conv.BinnedSpikeTrain(
             [self.st_1], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=5 * pq.ms)
+            bin_size=5 * pq.ms)
         self.st_check_t_start = conv.BinnedSpikeTrain(
             [self.st_1], t_start=1 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
         self.st_check_t_stop = conv.BinnedSpikeTrain(
             [self.st_1], t_start=0 * pq.ms, t_stop=40. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
         self.st_check_dimension = conv.BinnedSpikeTrain(
             [self.st_1, self.st_2], t_start=0 * pq.ms, t_stop=50. * pq.ms,
-            binsize=1 * pq.ms)
+            bin_size=1 * pq.ms)
 
     def test_cross_correlation_histogram(self):
         '''
@@ -319,11 +374,11 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
             st2 = neo.SpikeTrain(self.st_2.magnitude, units='ms',
                                  t_start=t0 * pq.ms, t_stop=t1 * pq.ms)
             binned_sts = conv.BinnedSpikeTrain([st1, st2],
-                                               binsize=1 * pq.ms,
+                                               bin_size=1 * pq.ms,
                                                t_start=t0 * pq.ms,
                                                t_stop=t1 * pq.ms)
             # calculate corrcoef
-            corrcoef = sc.corrcoef(binned_sts)[1, 0]
+            corrcoef = sc.correlation_coefficient(binned_sts)[1, 0]
 
             # expand t_stop to have two spike trains with same length as st1,
             # st2
@@ -335,16 +390,16 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
                                  t_stop=self.st_2.t_stop + np.abs(t) * pq.ms)
             binned_st1 = conv.BinnedSpikeTrain(
                 st1, t_start=0 * pq.ms, t_stop=(50 + np.abs(t)) * pq.ms,
-                binsize=1 * pq.ms)
+                bin_size=1 * pq.ms)
             binned_st2 = conv.BinnedSpikeTrain(
                 st2, t_start=0 * pq.ms, t_stop=(50 + np.abs(t)) * pq.ms,
-                binsize=1 * pq.ms)
+                bin_size=1 * pq.ms)
             # calculate CCHcoef and take value at t=tau
             CCHcoef, _ = sc.cch(binned_st1, binned_st2,
-                                cross_corr_coef=True)
-            left_edge = - binned_st1.num_bins + 1
-            tau_bin = int(t / float(binned_st1.binsize.magnitude))
-            assert_array_equal(
+                                cross_correlation_coefficient=True)
+            left_edge = - binned_st1.n_bins + 1
+            tau_bin = int(t / float(binned_st1.bin_size.magnitude))
+            assert_array_almost_equal(
                 corrcoef, CCHcoef[tau_bin - left_edge].magnitude)
 
         # Check correlation using binary spike trains
@@ -356,10 +411,10 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
 
         # Check the time axis and bin IDs of the resulting AnalogSignal
         assert_array_almost_equal(
-            (bin_ids_clipped - 0.5) * self.binned_st1.binsize,
+            (bin_ids_clipped - 0.5) * self.binned_st1.bin_size,
             cch_unclipped.times)
         assert_array_almost_equal(
-            (bin_ids_clipped - 0.5) * self.binned_st1.binsize,
+            (bin_ids_clipped - 0.5) * self.binned_st1.bin_size,
             cch_clipped.times)
 
         # Calculate CCH using Elephant (normal and binary version) with
@@ -413,41 +468,34 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
 
         # Check the time axis and bin IDs of the resulting AnalogSignal
         assert_array_equal(
-            (bin_ids_clipped - 0.5) * self.binned_st1.binsize,
+            (bin_ids_clipped - 0.5) * self.binned_st1.bin_size,
             cch_unclipped.times)
         assert_array_equal(
-            (bin_ids_clipped - 0.5) * self.binned_st1.binsize,
+            (bin_ids_clipped - 0.5) * self.binned_st1.bin_size,
             cch_clipped.times)
 
         # Check for wrong window parameter setting
         self.assertRaises(
-            KeyError, sc.cross_correlation_histogram, self.binned_st1,
+            ValueError, sc.cross_correlation_histogram, self.binned_st1,
             self.binned_st2, window='dsaij')
         self.assertRaises(
-            KeyError, sc.cross_correlation_histogram, self.binned_st1,
+            ValueError, sc.cross_correlation_histogram, self.binned_st1,
             self.binned_st2, window='dsaij', method='memory')
 
     def test_raising_error_wrong_inputs(self):
         '''Check that an exception is thrown if the two spike trains do not
         fulfill the requirements of the function'''
-        # Check the binsizes are the same
+        # Check the bin_sizes are the same
         self.assertRaises(
-            AssertionError,
+            ValueError,
             sc.cross_correlation_histogram, self.binned_st1,
-            self.st_check_binsize)
-        # Check different t_start and t_stop
-        self.assertRaises(
-            AssertionError, sc.cross_correlation_histogram,
-            self.st_check_t_start, self.binned_st2)
-        self.assertRaises(
-            AssertionError, sc.cross_correlation_histogram,
-            self.st_check_t_stop, self.binned_st2)
+            self.st_check_bin_size)
         # Check input are one dimensional
         self.assertRaises(
-            AssertionError, sc.cross_correlation_histogram,
+            ValueError, sc.cross_correlation_histogram,
             self.st_check_dimension, self.binned_st2)
         self.assertRaises(
-            AssertionError, sc.cross_correlation_histogram,
+            ValueError, sc.cross_correlation_histogram,
             self.binned_st2, self.st_check_dimension)
 
     def test_window(self):
@@ -455,15 +503,17 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
         cch_win, bin_ids = sc.cch(
             self.binned_st1, self.binned_st2, window=[-30, 30])
         cch_win_mem, bin_ids_mem = sc.cch(
-            self.binned_st1, self.binned_st2, window=[-30, 30])
+            self.binned_st1, self.binned_st2, window=[-30, 30],
+            method='memory')
 
+        self.assertEqual(len(bin_ids), cch_win.shape[0])
         assert_array_equal(bin_ids, np.arange(-30, 31, 1))
         assert_array_equal(
-            (bin_ids - 0.5) * self.binned_st1.binsize, cch_win.times)
+            (bin_ids - 0.5) * self.binned_st1.bin_size, cch_win.times)
 
         assert_array_equal(bin_ids_mem, np.arange(-30, 31, 1))
         assert_array_equal(
-            (bin_ids_mem - 0.5) * self.binned_st1.binsize, cch_win.times)
+            (bin_ids_mem - 0.5) * self.binned_st1.bin_size, cch_win.times)
 
         assert_array_equal(cch_win, cch_win_mem)
         cch_unclipped, _ = sc.cross_correlation_histogram(
@@ -498,31 +548,52 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
             self.binned_st2, window=[-50, 60])
         # Test for no integer or wrong string in input
         self.assertRaises(
-            KeyError, sc.cross_correlation_histogram, self.binned_st1,
+            ValueError, sc.cross_correlation_histogram, self.binned_st1,
             self.binned_st2, window=[-25.5, 25.5])
         self.assertRaises(
-            KeyError, sc.cross_correlation_histogram, self.binned_st1,
+            ValueError, sc.cross_correlation_histogram, self.binned_st1,
             self.binned_st2, window='test')
 
     def test_border_correction(self):
         '''Test if the border correction for bins at the edges is correctly
         performed'''
-        cch_corrected, _ = sc.cross_correlation_histogram(
+
+        # check that nothing changes for valid lags
+        cch_valid, _ = sc.cross_correlation_histogram(
             self.binned_st1, self.binned_st2, window='full',
             border_correction=True, binary=False, kernel=None)
-        cch_corrected_mem, _ = sc.cross_correlation_histogram(
-            self.binned_st1, self.binned_st2, window='full',
-            border_correction=True, binary=False, kernel=None, method='memory')
-        cch, _ = sc.cross_correlation_histogram(
-            self.binned_st1, self.binned_st2, window='full',
-            border_correction=False, binary=False, kernel=None)
-        cch_mem, _ = sc.cross_correlation_histogram(
+        valid_lags = sc._CrossCorrHist.get_valid_lags(self.binned_st1,
+                                                      self.binned_st2)
+        left_edge, right_edge = valid_lags[(0, -1), ]
+        cch_builder = sc._CrossCorrHist(self.binned_st1, self.binned_st2,
+                                        window=(left_edge, right_edge))
+        cch_valid = cch_builder.correlate_speed(cch_mode='valid')
+        cch_corrected = cch_builder.border_correction(cch_valid)
+
+        np.testing.assert_array_equal(cch_valid, cch_corrected)
+
+        # test the border correction for lags without full overlap
+        cch_full, lags_full = sc.cross_correlation_histogram(
+            self.binned_st1, self.binned_st2, window='full')
+
+        cch_full_corrected, _ = sc.cross_correlation_histogram(
             self.binned_st1, self.binned_st2, window='full',
-            border_correction=False, binary=False, kernel=None,
-            method='memory')
+            border_correction=True)
 
-        self.assertNotEqual(cch.all(), cch_corrected.all())
-        self.assertNotEqual(cch_mem.all(), cch_corrected_mem.all())
+        n_bins_outside_window = np.min(np.abs(
+            np.subtract.outer(lags_full, valid_lags)), axis=1)
+
+        min_n_bins = min(self.binned_st1.n_bins, self.binned_st2.n_bins)
+
+        border_correction = (cch_full_corrected / cch_full).magnitude.flatten()
+
+        # exclude NaNs caused by zeros in the cch
+        mask = np.logical_not(np.isnan(border_correction))
+
+        np.testing.assert_array_almost_equal(
+            border_correction[mask],
+            (float(min_n_bins)
+             / (min_n_bins - n_bins_outside_window))[mask])
 
     def test_kernel(self):
         '''Test if the smoothing kernel is correctly defined, and whether it is
@@ -548,12 +619,6 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
             ValueError, sc.cch, self.binned_st1, self.binned_st2,
             kernel=np.ones(100), method='memory')
 
-        self.assertRaises(
-            ValueError, sc.cch, self.binned_st1, self.binned_st2, kernel='BOX')
-        self.assertRaises(
-            ValueError, sc.cch, self.binned_st1, self.binned_st2, kernel='BOX',
-            method='memory')
-
     def test_exist_alias(self):
         '''
         Test if alias cch still exists.
@@ -561,6 +626,87 @@ class cross_correlation_histogram_TestCase(unittest.TestCase):
         self.assertEqual(sc.cross_correlation_histogram, sc.cch)
 
 
+@unittest.skipUnless(python_version_major == 3, "subTest requires 3.4")
+class CrossCorrelationHistDifferentTStartTStopTest(unittest.TestCase):
+
+    def _run_sub_tests(self, st1, st2, lags_true):
+        for window in ('valid', 'full'):
+            for method in ('speed', 'memory'):
+                with self.subTest(window=window, method=method):
+                    bin_size = 1 * pq.s
+                    st1_binned = conv.BinnedSpikeTrain(st1, bin_size=bin_size)
+                    st2_binned = conv.BinnedSpikeTrain(st2, bin_size=bin_size)
+                    left, right = lags_true[window][(0, -1), ]
+                    cch_window, lags_window = sc.cross_correlation_histogram(
+                        st1_binned, st2_binned, window=(left, right),
+                        method=method,
+                    )
+                    cch, lags = sc.cross_correlation_histogram(
+                        st1_binned, st2_binned, window=window)
+
+                    # target cross correlation
+                    cch_target = np.correlate(st1_binned.to_array()[0],
+                                              st2_binned.to_array()[0],
+                                              mode=window)
+
+                    self.assertEqual(len(lags_window), cch_window.shape[0])
+                    assert_array_almost_equal(cch.magnitude,
+                                              cch_window.magnitude)
+                    # the output is reversed since we cross-correlate
+                    # st2 with st1 rather than st1 with st2 (numpy behavior)
+                    assert_array_almost_equal(np.ravel(cch.magnitude),
+                                              cch_target[::-1])
+                    assert_array_equal(lags, lags_true[window])
+                    assert_array_equal(lags, lags_window)
+
+    def test_cross_correlation_histogram_valid_full_overlap(self):
+        # ex. 1 in the source code
+        st1 = neo.SpikeTrain([3.5, 4.5, 7.5] * pq.s, t_start=3 * pq.s,
+                             t_stop=8 * pq.s)
+        st2 = neo.SpikeTrain([1.5, 2.5, 4.5, 8.5, 9.5, 10.5]
+                             * pq.s, t_start=1 * pq.s, t_stop=13 * pq.s)
+        lags_true = {
+            'valid': np.arange(-2, 6, dtype=np.int32),
+            'full': np.arange(-6, 10, dtype=np.int32)
+        }
+        self._run_sub_tests(st1, st2, lags_true)
+
+    def test_cross_correlation_histogram_valid_partial_overlap(self):
+        # ex. 2 in the source code
+        st1 = neo.SpikeTrain([2.5, 3.5, 4.5, 6.5] * pq.s, t_start=1 * pq.s,
+                             t_stop=7 * pq.s)
+        st2 = neo.SpikeTrain([3.5, 5.5, 6.5, 7.5, 8.5] *
+                             pq.s, t_start=2 * pq.s, t_stop=9 * pq.s)
+        lags_true = {
+            'valid': np.arange(1, 3, dtype=np.int32),
+            'full': np.arange(-4, 8, dtype=np.int32)
+        }
+        self._run_sub_tests(st1, st2, lags_true)
+
+    def test_cross_correlation_histogram_valid_no_overlap(self):
+        st1 = neo.SpikeTrain([2.5, 3.5, 4.5, 6.5] * pq.s, t_start=1 * pq.s,
+                             t_stop=7 * pq.s)
+        st2 = neo.SpikeTrain([3.5, 5.5, 6.5, 7.5, 8.5] * pq.s + 6 * pq.s,
+                             t_start=8 * pq.s, t_stop=15 * pq.s)
+        lags_true = {
+            'valid': np.arange(7, 9, dtype=np.int32),
+            'full': np.arange(2, 14, dtype=np.int32)
+        }
+        self._run_sub_tests(st1, st2, lags_true)
+
+    def test_invalid_time_shift(self):
+        # time shift of 0.4 s is not multiple of bin_size=1 s
+        st1 = neo.SpikeTrain([2.5, 3.5] * pq.s, t_start=1 * pq.s,
+                             t_stop=7 * pq.s)
+        st2 = neo.SpikeTrain([3.5, 5.5] * pq.s, t_start=1.4 * pq.s,
+                             t_stop=7.4 * pq.s)
+        bin_size = 1 * pq.s
+        st1_binned = conv.BinnedSpikeTrain(st1, bin_size=bin_size)
+        st2_binned = conv.BinnedSpikeTrain(st2, bin_size=bin_size)
+        self.assertRaises(ValueError, sc.cross_correlation_histogram,
+                          st1_binned, st2_binned)
+
+
 class SpikeTimeTilingCoefficientTestCase(unittest.TestCase):
 
     def setUp(self):
@@ -579,9 +725,14 @@ class SpikeTimeTilingCoefficientTestCase(unittest.TestCase):
 
     def test_sttc(self):
         # test for result
-        target = 0.8748350567
+        target = 0.495860165593
         self.assertAlmostEqual(target, sc.sttc(self.st_1, self.st_2,
                                                0.005 * pq.s))
+
+        # test for same result with dt given in ms
+        self.assertAlmostEqual(target, sc.sttc(self.st_1, self.st_2,
+                                               5.0 * pq.ms))
+
         # test no spiketrains
         self.assertTrue(np.isnan(sc.sttc([], [])))
 
@@ -597,10 +748,89 @@ class SpikeTimeTilingCoefficientTestCase(unittest.TestCase):
         # test for high value of dt
         self.assertEqual(sc.sttc(self.st_1, self.st_2, dt=5 * pq.s), 1.0)
 
+        # test for TA = PB = 1 but TB /= PA /= 1 and vice versa
+        st3 = neo.SpikeTrain([1, 5, 9], units='ms', t_stop=10.)
+        target2 = 1. / 3.
+        self.assertAlmostEqual(target2, sc.sttc(st3, st2,
+                                                0.003 * pq.s))
+        self.assertAlmostEqual(target2, sc.sttc(st2, st3,
+                                                0.003 * pq.s))
+
     def test_exist_alias(self):
         # Test if the alias sttc still exists.
         self.assertEqual(sc.spike_time_tiling_coefficient, sc.sttc)
 
 
+class SpikeTrainTimescaleTestCase(unittest.TestCase):
+
+    def test_timescale_calculation(self):
+        '''
+        Test the timescale generation using an alpha-shaped ISI distribution,
+        see [1, eq. 1.68]. This is equivalent to a homogeneous gamma process
+        with alpha=2 and beta=2*nu where nu is the rate.
+
+        For this process, the autocorrelation function is given by a sum of a
+        delta peak and a (negative) exponential, see [1, eq. 1.69].
+        The exponential decays with tau_corr = 1 / (4*nu), which fixes the
+        timescale.
+
+        [1] Lindner, B. (2009). A brief introduction to some simple stochastic
+            processes. Stochastic Methods in Neuroscience, 1.
+        '''
+        nu = 25 / pq.s
+        T = 15 * pq.min
+        bin_size = 1 * pq.ms
+        timescale = 1 / (4 * nu)
+        np.random.seed(35)
+
+        timescale_num = []
+        for _ in range(10):
+            spikes = homogeneous_gamma_process(2, 2 * nu, 0 * pq.ms, T)
+            spikes_bin = conv.BinnedSpikeTrain(spikes, bin_size)
+            timescale_i = sc.spike_train_timescale(spikes_bin, 10 * timescale)
+            timescale_i.units = timescale.units
+            timescale_num.append(timescale_i.magnitude)
+        assert_array_almost_equal(timescale.magnitude, timescale_num,
+                                  decimal=3)
+
+    def test_timescale_errors(self):
+        spikes = neo.SpikeTrain([1, 5, 7, 8] * pq.ms, t_stop=10 * pq.ms)
+        binsize = 1 * pq.ms
+        spikes_bin = conv.BinnedSpikeTrain(spikes, binsize)
+
+        # Tau max with no units
+        tau_max = 1
+        self.assertRaises(ValueError,
+                          sc.spike_train_timescale, spikes_bin, tau_max)
+
+        # Tau max that is not a multiple of the binsize
+        tau_max = 1.1 * pq.ms
+        self.assertRaises(ValueError,
+                          sc.spike_train_timescale, spikes_bin, tau_max)
+
+    @unittest.skipUnless(python_version_major == 3,
+                         "assertWarns requires python 3.2")
+    def test_timescale_nan(self):
+        st0 = neo.SpikeTrain([] * pq.ms, t_stop=10 * pq.ms)
+        st1 = neo.SpikeTrain([1] * pq.ms, t_stop=10 * pq.ms)
+        st2 = neo.SpikeTrain([1, 5] * pq.ms, t_stop=10 * pq.ms)
+        st3 = neo.SpikeTrain([1, 5, 6] * pq.ms, t_stop=10 * pq.ms)
+        st4 = neo.SpikeTrain([1, 5, 6, 9] * pq.ms, t_stop=10 * pq.ms)
+
+        binsize = 1 * pq.ms
+        tau_max = 1 * pq.ms
+
+        for st in [st0, st1]:
+            bst = conv.BinnedSpikeTrain(st, binsize)
+            with self.assertWarns(UserWarning):
+                timescale = sc.spike_train_timescale(bst, tau_max)
+            self.assertTrue(math.isnan(timescale))
+
+        for st in [st2, st3, st4]:
+            bst = conv.BinnedSpikeTrain(st, binsize)
+            timescale = sc.spike_train_timescale(bst, tau_max)
+            self.assertFalse(math.isnan(timescale))
+
+
 if __name__ == '__main__':
     unittest.main()
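The changes above follow the renames `binsize` → `bin_size` in `BinnedSpikeTrain`, `corrcoef` → `correlation_coefficient`, and `num_bins` → `n_bins`, and switch invalid-input errors from `AssertionError`/`KeyError` to `ValueError`. A minimal sketch of the renamed interface, with illustrative rates and bin sizes:

    import numpy as np
    import quantities as pq
    import elephant.conversion as conv
    import elephant.spike_train_correlation as sc
    from elephant.spike_train_generation import homogeneous_poisson_process

    np.random.seed(27)
    st1 = homogeneous_poisson_process(rate=10 * pq.Hz, t_stop=10 * pq.s)
    st2 = homogeneous_poisson_process(rate=10 * pq.Hz, t_stop=10 * pq.s)

    # pairwise Pearson correlation matrix of the binned trains
    binned = conv.BinnedSpikeTrain([st1, st2], bin_size=5 * pq.ms)
    cc_matrix = sc.correlation_coefficient(binned, fast=True)

    # cross-correlation histogram between the two trains
    b1 = conv.BinnedSpikeTrain(st1, bin_size=5 * pq.ms)
    b2 = conv.BinnedSpikeTrain(st2, bin_size=5 * pq.ms)
    cch, lags = sc.cross_correlation_histogram(b1, b2, window='full')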

+ 331 - 321
code/elephant/elephant/test/test_spike_train_dissimilarity.py

@@ -2,7 +2,7 @@
 """
 Tests for the spike train dissimilarity measures module.
 
-:copyright: Copyright 2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
@@ -16,6 +16,7 @@ import elephant.kernels as kernels
 import elephant.spike_train_generation as stg
 import elephant.spike_train_dissimilarity as stds
 
+
 class TimeScaleDependSpikeTrainDissimMeasures_TestCase(unittest.TestCase):
     def setUp(self):
         self.st00 = SpikeTrain([], units='ms', t_stop=1000.0)
@@ -36,9 +37,9 @@ class TimeScaleDependSpikeTrainDissimMeasures_TestCase(unittest.TestCase):
         self.st15 = SpikeTrain([0.01, 0.02, 0.03, 0.04, 0.05],
                                units='s', t_stop=1000.0)
         self.st16 = SpikeTrain([12, 16, 28, 30, 42], units='ms', t_stop=1000.0)
-        self.st21 = stg.homogeneous_poisson_process(50*Hz, 0*ms, 1000*ms)
-        self.st22 = stg.homogeneous_poisson_process(40*Hz, 0*ms, 1000*ms)
-        self.st23 = stg.homogeneous_poisson_process(30*Hz, 0*ms, 1000*ms)
+        self.st21 = stg.homogeneous_poisson_process(50 * Hz, 0 * ms, 1000 * ms)
+        self.st22 = stg.homogeneous_poisson_process(40 * Hz, 0 * ms, 1000 * ms)
+        self.st23 = stg.homogeneous_poisson_process(30 * Hz, 0 * ms, 1000 * ms)
         self.rd_st_list = [self.st21, self.st22, self.st23]
         self.st31 = SpikeTrain([12.0], units='ms', t_stop=1000.0)
         self.st32 = SpikeTrain([12.0, 12.0], units='ms', t_stop=1000.0)
@@ -67,136 +68,136 @@ class TimeScaleDependSpikeTrainDissimMeasures_TestCase(unittest.TestCase):
         self.t = np.linspace(0, 200, 20000001) * ms
 
     def test_wrong_input(self):
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.array1, self.array2], self.q3)
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.qarray1, self.qarray2], self.q3)
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.qarray1, self.qarray2], 5.0 * ms)
 
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.array1, self.array2], self.q3,
                           algorithm='intuitive')
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.qarray1, self.qarray2], self.q3,
                           algorithm='intuitive')
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.qarray1, self.qarray2], 5.0 * ms,
                           algorithm='intuitive')
 
-        self.assertRaises(TypeError, stds.van_rossum_dist,
+        self.assertRaises(TypeError, stds.van_rossum_distance,
                           [self.array1, self.array2], self.tau3)
-        self.assertRaises(TypeError, stds.van_rossum_dist,
+        self.assertRaises(TypeError, stds.van_rossum_distance,
                           [self.qarray1, self.qarray2], self.tau3)
-        self.assertRaises(TypeError, stds.van_rossum_dist,
+        self.assertRaises(TypeError, stds.van_rossum_distance,
                           [self.qarray1, self.qarray2], 5.0 * Hz)
 
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.st11, self.st13], self.tau2)
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.st11, self.st13], 5.0)
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.st11, self.st13], self.tau2,
                           algorithm='intuitive')
-        self.assertRaises(TypeError, stds.victor_purpura_dist,
+        self.assertRaises(TypeError, stds.victor_purpura_distance,
                           [self.st11, self.st13], 5.0,
                           algorithm='intuitive')
-        self.assertRaises(TypeError, stds.van_rossum_dist,
+        self.assertRaises(TypeError, stds.van_rossum_distance,
                           [self.st11, self.st13], self.q4)
-        self.assertRaises(TypeError, stds.van_rossum_dist,
+        self.assertRaises(TypeError, stds.van_rossum_distance,
                           [self.st11, self.st13], 5.0)
 
-        self.assertRaises(NotImplementedError, stds.victor_purpura_dist,
+        self.assertRaises(NotImplementedError, stds.victor_purpura_distance,
                           [self.st01, self.st02], self.q3,
                           kernel=kernels.Kernel(2.0 / self.q3))
-        self.assertRaises(NotImplementedError, stds.victor_purpura_dist,
+        self.assertRaises(NotImplementedError, stds.victor_purpura_distance,
                           [self.st01, self.st02], self.q3,
                           kernel=kernels.SymmetricKernel(2.0 / self.q3))
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st02], self.q1,
-                             kernel=kernels.TriangularKernel(
-                                 2.0 / (np.sqrt(6.0) * self.q2)))[0, 1],
-                         stds.victor_purpura_dist(
-                             [self.st01, self.st02], self.q3,
-                             kernel=kernels.TriangularKernel(
-                                 2.0 / (np.sqrt(6.0) * self.q2)))[0, 1])
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st02],
-                             kernel=kernels.TriangularKernel(
-                                 2.0 / (np.sqrt(6.0) * self.q2)))[0, 1], 1.0)
-        self.assertNotEqual(stds.victor_purpura_dist(
-                                [self.st01, self.st02],
-                                kernel=kernels.AlphaKernel(
-                                   2.0 / (np.sqrt(6.0) * self.q2)))[0, 1], 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st02], self.q1,
+            kernel=kernels.TriangularKernel(
+                2.0 / (np.sqrt(6.0) * self.q2)))[0, 1],
+            stds.victor_purpura_distance(
+            [self.st01, self.st02], self.q3,
+            kernel=kernels.TriangularKernel(
+                2.0 / (np.sqrt(6.0) * self.q2)))[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st02],
+            kernel=kernels.TriangularKernel(
+                2.0 / (np.sqrt(6.0) * self.q2)))[0, 1], 1.0)
+        self.assertNotEqual(stds.victor_purpura_distance(
+            [self.st01, self.st02],
+            kernel=kernels.AlphaKernel(
+                2.0 / (np.sqrt(6.0) * self.q2)))[0, 1], 1.0)
 
-        self.assertRaises(NameError, stds.victor_purpura_dist,
+        self.assertRaises(NameError, stds.victor_purpura_distance,
                           [self.st11, self.st13], self.q2, algorithm='slow')
 
     def test_victor_purpura_distance_fast(self):
         # Tests of distances of simplest spike trains:
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st00, self.st00], self.q2)[0, 1], 0.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st00, self.st01], self.q2)[0, 1], 1.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st00], self.q2)[0, 1], 1.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st01], self.q2)[0, 1], 0.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st00, self.st00], self.q2)[0, 1], 0.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st00, self.st01], self.q2)[0, 1], 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st00], self.q2)[0, 1], 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st01], self.q2)[0, 1], 0.0)
         # Tests of distances under elementary spike operations
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st02], self.q2)[0, 1], 1.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st03], self.q2)[0, 1], 1.9)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st04], self.q2)[0, 1], 2.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st05], self.q2)[0, 1], 2.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st00, self.st07], self.q2)[0, 1], 2.0)
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st08], self.q4)[0, 1], 0.4)
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st10], self.q3)[0, 1], 0.6 + 2)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st11, self.st14], self.q2)[0, 1], 1)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st02], self.q2)[0, 1], 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st03], self.q2)[0, 1], 1.9)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st04], self.q2)[0, 1], 2.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st05], self.q2)[0, 1], 2.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st00, self.st07], self.q2)[0, 1], 2.0)
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st07, self.st08], self.q4)[0, 1], 0.4)
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st07, self.st10], self.q3)[0, 1], 0.6 + 2)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st11, self.st14], self.q2)[0, 1], 1)
         # Tests on timescales
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st11, self.st14], self.q1)[0, 1],
-                         stds.victor_purpura_dist(
-                             [self.st11, self.st14], self.q5)[0, 1])
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q0)[0, 1], 6.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q1)[0, 1], 6.0)
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q5)[0, 1], 2.0, 5)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q6)[0, 1], 2.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st11, self.st14], self.q1)[0, 1],
+            stds.victor_purpura_distance(
+            [self.st11, self.st14], self.q5)[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q0)[0, 1], 6.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q1)[0, 1], 6.0)
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q5)[0, 1], 2.0, 5)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q6)[0, 1], 2.0)
         # Tests on unordered spiketrains
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st11, self.st13], self.q4)[0, 1],
-                         stds.victor_purpura_dist(
-                             [self.st12, self.st13], self.q4)[0, 1])
-        self.assertNotEqual(stds.victor_purpura_dist(
-                                [self.st11, self.st13], self.q4,
-                                sort=False)[0, 1],
-                            stds.victor_purpura_dist(
-                                [self.st12, self.st13], self.q4,
-                                sort=False)[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st11, self.st13], self.q4)[0, 1],
+            stds.victor_purpura_distance(
+            [self.st12, self.st13], self.q4)[0, 1])
+        self.assertNotEqual(stds.victor_purpura_distance(
+            [self.st11, self.st13], self.q4,
+            sort=False)[0, 1],
+            stds.victor_purpura_distance(
+            [self.st12, self.st13], self.q4,
+            sort=False)[0, 1])
         # Tests on metric properties with random spiketrains
         # (explicit calculation of the second metric axiom in a particular
         # case, because from dist_matrix it is trivial)
-        dist_matrix = stds.victor_purpura_dist(
-                              [self.st21, self.st22, self.st23], self.q3)
+        dist_matrix = stds.victor_purpura_distance(
+            [self.st21, self.st22, self.st23], self.q3)
         for i in range(3):
             for j in range(3):
                 self.assertGreaterEqual(dist_matrix[i, j], 0)
                 if dist_matrix[i, j] == 0:
                     assert_array_equal(self.rd_st_list[i], self.rd_st_list[j])
-        assert_array_equal(stds.victor_purpura_dist(
-                               [self.st21, self.st22], self.q3), 
-                           stds.victor_purpura_dist(
-                               [self.st22, self.st21], self.q3))
+        assert_array_equal(stds.victor_purpura_distance(
+            [self.st21, self.st22], self.q3),
+            stds.victor_purpura_distance(
+            [self.st22, self.st21], self.q3))
         self.assertLessEqual(dist_matrix[0, 1],
                              dist_matrix[0, 2] + dist_matrix[1, 2])
         self.assertLessEqual(dist_matrix[0, 2],
@@ -205,119 +206,126 @@ class TimeScaleDependSpikeTrainDissimMeasures_TestCase(unittest.TestCase):
                              dist_matrix[0, 1] + dist_matrix[0, 2])
         # Tests on proper unit conversion
         self.assertAlmostEqual(
-              stds.victor_purpura_dist([self.st14, self.st16], self.q3)[0, 1],
-              stds.victor_purpura_dist([self.st15, self.st16], self.q3)[0, 1])
+            stds.victor_purpura_distance([self.st14, self.st16],
+                                         self.q3)[0, 1],
+            stds.victor_purpura_distance([self.st15, self.st16],
+                                         self.q3)[0, 1])
         self.assertAlmostEqual(
-              stds.victor_purpura_dist([self.st16, self.st14], self.q3)[0, 1],
-              stds.victor_purpura_dist([self.st16, self.st15], self.q3)[0, 1])
-        self.assertEqual(
-              stds.victor_purpura_dist([self.st01, self.st05], self.q3)[0, 1],
-              stds.victor_purpura_dist([self.st01, self.st05], self.q7)[0, 1])
+            stds.victor_purpura_distance([self.st16, self.st14],
+                                         self.q3)[0, 1],
+            stds.victor_purpura_distance([self.st16, self.st15],
+                                         self.q3)[0, 1])
+        self.assertAlmostEqual(
+            stds.victor_purpura_distance([self.st01, self.st05],
+                                         self.q3)[0, 1],
+            stds.victor_purpura_distance([self.st01, self.st05],
+                                         self.q7)[0, 1])
         # Tests on algorithmic behaviour for equal spike times
-        self.assertEqual(
-              stds.victor_purpura_dist([self.st31, self.st34], self.q3)[0, 1],
-              0.8 + 1.0)
-        self.assertEqual(
-              stds.victor_purpura_dist([self.st31, self.st34], self.q3)[0, 1],
-              stds.victor_purpura_dist([self.st32, self.st33], self.q3)[0, 1])
-        self.assertEqual(
-              stds.victor_purpura_dist(
-                  [self.st31, self.st33], self.q3)[0, 1] * 2.0,
-              stds.victor_purpura_dist(
-                  [self.st32, self.st34], self.q3)[0, 1])
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st31, self.st34], self.q3)[0, 1], 0.8 + 1.0)
+        self.assertAlmostEqual(
+            stds.victor_purpura_distance([self.st31, self.st34],
+                                         self.q3)[0, 1],
+            stds.victor_purpura_distance([self.st32, self.st33],
+                                         self.q3)[0, 1])
+        self.assertAlmostEqual(
+            stds.victor_purpura_distance(
+                [self.st31, self.st33], self.q3)[0, 1] * 2.0,
+            stds.victor_purpura_distance(
+                [self.st32, self.st34], self.q3)[0, 1])
         # Tests on spike train list lengths smaller than 2
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st21], self.q3)[0, 0], 0)
-        self.assertEqual(len(stds.victor_purpura_dist([], self.q3)), 0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st21], self.q3)[0, 0], 0)
+        self.assertEqual(len(stds.victor_purpura_distance([], self.q3)), 0)
 
     def test_victor_purpura_distance_intuitive(self):
         # Tests of distances of simplest spike trains
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st00, self.st00], self.q2,
-                             algorithm='intuitive')[0, 1], 0.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st00, self.st01], self.q2,
-                             algorithm='intuitive')[0, 1], 1.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st00], self.q2,
-                             algorithm='intuitive')[0, 1], 1.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st01], self.q2,
-                             algorithm='intuitive')[0, 1], 0.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st00, self.st00], self.q2,
+            algorithm='intuitive')[0, 1], 0.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st00, self.st01], self.q2,
+            algorithm='intuitive')[0, 1], 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st00], self.q2,
+            algorithm='intuitive')[0, 1], 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st01], self.q2,
+            algorithm='intuitive')[0, 1], 0.0)
         # Tests of distances under elementary spike operations
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st02], self.q2,
-                             algorithm='intuitive')[0, 1], 1.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st03], self.q2,
-                             algorithm='intuitive')[0, 1], 1.9)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st04], self.q2,
-                             algorithm='intuitive')[0, 1], 2.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st05], self.q2,
-                             algorithm='intuitive')[0, 1], 2.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st00, self.st07], self.q2,
-                             algorithm='intuitive')[0, 1], 2.0)
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st08], self.q4,
-                             algorithm='intuitive')[0, 1], 0.4)
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st10], self.q3,
-                             algorithm='intuitive')[0, 1], 2.6)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st11, self.st14], self.q2,
-                             algorithm='intuitive')[0, 1], 1)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st02], self.q2,
+            algorithm='intuitive')[0, 1], 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st03], self.q2,
+            algorithm='intuitive')[0, 1], 1.9)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st04], self.q2,
+            algorithm='intuitive')[0, 1], 2.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st05], self.q2,
+            algorithm='intuitive')[0, 1], 2.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st00, self.st07], self.q2,
+            algorithm='intuitive')[0, 1], 2.0)
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st07, self.st08], self.q4,
+            algorithm='intuitive')[0, 1], 0.4)
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st07, self.st10], self.q3,
+            algorithm='intuitive')[0, 1], 2.6)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st11, self.st14], self.q2,
+            algorithm='intuitive')[0, 1], 1)
         # Tests on timescales
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st11, self.st14], self.q1,
-                             algorithm='intuitive')[0, 1],
-                         stds.victor_purpura_dist(
-                             [self.st11, self.st14], self.q5,
-                             algorithm='intuitive')[0, 1])
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q0,
-                             algorithm='intuitive')[0, 1], 6.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q1,
-                             algorithm='intuitive')[0, 1], 6.0)
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q5,
-                             algorithm='intuitive')[0, 1], 2.0, 5)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st07, self.st11], self.q6,
-                             algorithm='intuitive')[0, 1], 2.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st11, self.st14], self.q1,
+            algorithm='intuitive')[0, 1],
+            stds.victor_purpura_distance(
+            [self.st11, self.st14], self.q5,
+            algorithm='intuitive')[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q0,
+            algorithm='intuitive')[0, 1], 6.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q1,
+            algorithm='intuitive')[0, 1], 6.0)
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q5,
+            algorithm='intuitive')[0, 1], 2.0, 5)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st07, self.st11], self.q6,
+            algorithm='intuitive')[0, 1], 2.0)
         # Tests on unordered spiketrains
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st11, self.st13], self.q4,
-                             algorithm='intuitive')[0, 1],
-                         stds.victor_purpura_dist(
-                             [self.st12, self.st13], self.q4,
-                             algorithm='intuitive')[0, 1])
-        self.assertNotEqual(stds.victor_purpura_dist(
-                                [self.st11, self.st13], self.q4,
-                                sort=False, algorithm='intuitive')[0, 1],
-                            stds.victor_purpura_dist(
-                                [self.st12, self.st13], self.q4,
-                                sort=False, algorithm='intuitive')[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st11, self.st13], self.q4,
+            algorithm='intuitive')[0, 1],
+            stds.victor_purpura_distance(
+            [self.st12, self.st13], self.q4,
+            algorithm='intuitive')[0, 1])
+        self.assertNotEqual(stds.victor_purpura_distance(
+            [self.st11, self.st13], self.q4,
+            sort=False, algorithm='intuitive')[0, 1],
+            stds.victor_purpura_distance(
+            [self.st12, self.st13], self.q4,
+            sort=False, algorithm='intuitive')[0, 1])
         # Tests on metric properties with random spiketrains
         # (explicit calculation of the second metric axiom in a particular
         # case, because from dist_matrix it is trivial)
-        dist_matrix = stds.victor_purpura_dist(
-                          [self.st21, self.st22, self.st23],
-                          self.q3, algorithm='intuitive')
+        dist_matrix = stds.victor_purpura_distance(
+            [self.st21, self.st22, self.st23],
+            self.q3, algorithm='intuitive')
         for i in range(3):
             for j in range(3):
                 self.assertGreaterEqual(dist_matrix[i, j], 0)
                 if dist_matrix[i, j] == 0:
                     assert_array_equal(self.rd_st_list[i], self.rd_st_list[j])
-        assert_array_equal(stds.victor_purpura_dist(
-                               [self.st21, self.st22], self.q3,
-                               algorithm='intuitive'),
-                           stds.victor_purpura_dist(
-                               [self.st22, self.st21], self.q3,
-                               algorithm='intuitive'))
+        assert_array_equal(stds.victor_purpura_distance(
+            [self.st21, self.st22], self.q3,
+            algorithm='intuitive'),
+            stds.victor_purpura_distance(
+            [self.st22, self.st21], self.q3,
+            algorithm='intuitive'))
         self.assertLessEqual(dist_matrix[0, 1],
                              dist_matrix[0, 2] + dist_matrix[1, 2])
         self.assertLessEqual(dist_matrix[0, 2],
@@ -325,155 +333,155 @@ class TimeScaleDependSpikeTrainDissimMeasures_TestCase(unittest.TestCase):
         self.assertLessEqual(dist_matrix[1, 2],
                              dist_matrix[0, 1] + dist_matrix[0, 2])
         # Tests on proper unit conversion
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                                   [self.st14, self.st16], self.q3,
-                                   algorithm='intuitive')[0, 1],
-                               stds.victor_purpura_dist(
-                                   [self.st15, self.st16], self.q3,
-                                   algorithm='intuitive')[0, 1])
-        self.assertAlmostEqual(stds.victor_purpura_dist(
-                                   [self.st16, self.st14], self.q3,
-                                   algorithm='intuitive')[0, 1],
-                               stds.victor_purpura_dist(
-                                   [self.st16, self.st15], self.q3,
-                                   algorithm='intuitive')[0, 1])
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st01, self.st05], self.q3,
-                             algorithm='intuitive')[0, 1],
-                         stds.victor_purpura_dist(
-                             [self.st01, self.st05], self.q7,
-                             algorithm='intuitive')[0, 1])
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st14, self.st16], self.q3,
+            algorithm='intuitive')[0, 1],
+            stds.victor_purpura_distance(
+            [self.st15, self.st16], self.q3,
+            algorithm='intuitive')[0, 1])
+        self.assertAlmostEqual(stds.victor_purpura_distance(
+            [self.st16, self.st14], self.q3,
+            algorithm='intuitive')[0, 1],
+            stds.victor_purpura_distance(
+            [self.st16, self.st15], self.q3,
+            algorithm='intuitive')[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st01, self.st05], self.q3,
+            algorithm='intuitive')[0, 1],
+            stds.victor_purpura_distance(
+            [self.st01, self.st05], self.q7,
+            algorithm='intuitive')[0, 1])
         # Tests on algorithmic behaviour for equal spike times
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st31, self.st34], self.q3,
-                             algorithm='intuitive')[0, 1],
-                         0.8 + 1.0)
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st31, self.st34], self.q3,
-                             algorithm='intuitive')[0, 1],
-                         stds.victor_purpura_dist(
-                             [self.st32, self.st33], self.q3,
-                             algorithm='intuitive')[0, 1])
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st31, self.st33], self.q3,
-                             algorithm='intuitive')[0, 1] * 2.0,
-                         stds.victor_purpura_dist(
-                             [self.st32, self.st34], self.q3,
-                             algorithm='intuitive')[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st31, self.st34], self.q3,
+            algorithm='intuitive')[0, 1],
+            0.8 + 1.0)
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st31, self.st34], self.q3,
+            algorithm='intuitive')[0, 1],
+            stds.victor_purpura_distance(
+            [self.st32, self.st33], self.q3,
+            algorithm='intuitive')[0, 1])
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st31, self.st33], self.q3,
+            algorithm='intuitive')[0, 1] * 2.0,
+            stds.victor_purpura_distance(
+            [self.st32, self.st34], self.q3,
+            algorithm='intuitive')[0, 1])
         # Tests on spike train list lengths smaller than 2
-        self.assertEqual(stds.victor_purpura_dist(
-                             [self.st21], self.q3,
-                             algorithm='intuitive')[0, 0], 0)
-        self.assertEqual(len(stds.victor_purpura_dist(
+        self.assertEqual(stds.victor_purpura_distance(
+            [self.st21], self.q3,
+            algorithm='intuitive')[0, 0], 0)
+        self.assertEqual(len(stds.victor_purpura_distance(
                              [], self.q3, algorithm='intuitive')), 0)
 
     def test_victor_purpura_algorithm_comparison(self):
         assert_array_almost_equal(
-                    stds.victor_purpura_dist([self.st21, self.st22, self.st23],
-                                             self.q3), 
-                    stds.victor_purpura_dist([self.st21, self.st22, self.st23],
-                                             self.q3, algorithm='intuitive'))
+            stds.victor_purpura_distance([self.st21, self.st22, self.st23],
+                                         self.q3),
+            stds.victor_purpura_distance([self.st21, self.st22, self.st23],
+                                         self.q3, algorithm='intuitive'))
 
     def test_van_rossum_distance(self):
         # Tests of distances of simplest spike trains
-        self.assertEqual(stds.van_rossum_dist(
-                             [self.st00, self.st00], self.tau2)[0, 1], 0.0)
-        self.assertEqual(stds.van_rossum_dist(
-                             [self.st00, self.st01], self.tau2)[0, 1], 1.0)
-        self.assertEqual(stds.van_rossum_dist(
-                             [self.st01, self.st00], self.tau2)[0, 1], 1.0)
-        self.assertEqual(stds.van_rossum_dist(
-                             [self.st01, self.st01], self.tau2)[0, 1], 0.0)
+        self.assertEqual(stds.van_rossum_distance(
+            [self.st00, self.st00], self.tau2)[0, 1], 0.0)
+        self.assertEqual(stds.van_rossum_distance(
+            [self.st00, self.st01], self.tau2)[0, 1], 1.0)
+        self.assertEqual(stds.van_rossum_distance(
+            [self.st01, self.st00], self.tau2)[0, 1], 1.0)
+        self.assertEqual(stds.van_rossum_distance(
+            [self.st01, self.st01], self.tau2)[0, 1], 0.0)
         # Tests of distances under elementary spike operations
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                                   [self.st01, self.st02], self.tau2)[0, 1],
-                               float(np.sqrt(2*(1.0-np.exp(-np.absolute(
-                                         ((self.st01[0]-self.st02[0]) /
-                                             self.tau2).simplified))))))
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                                   [self.st01, self.st05], self.tau2)[0, 1],
-                               float(np.sqrt(2*(1.0-np.exp(-np.absolute(
-                                         ((self.st01[0]-self.st05[0]) /
-                                             self.tau2).simplified))))))
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                                   [self.st01, self.st05], self.tau2)[0, 1],
-                               np.sqrt(2.0), 1)
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                                   [self.st01, self.st06], self.tau2)[0, 1],
-                               np.sqrt(2.0), 20)
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                                   [self.st00, self.st07], self.tau1)[0, 1],
-                               np.sqrt(0 + 2))
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                                   [self.st07, self.st08], self.tau4)[0, 1],
-                               float(np.sqrt(2*(1.0-np.exp(-np.absolute(
-                                         ((self.st07[0]-self.st08[-1]) /
-                                             self.tau4).simplified))))))
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st01, self.st02], self.tau2)[0, 1],
+            float(np.sqrt(2 * (1.0 - np.exp(-np.absolute(
+                ((self.st01[0] - self.st02[0]) /
+                 self.tau2).simplified))))))
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st01, self.st05], self.tau2)[0, 1],
+            float(np.sqrt(2 * (1.0 - np.exp(-np.absolute(
+                ((self.st01[0] - self.st05[0]) /
+                 self.tau2).simplified))))))
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st01, self.st05], self.tau2)[0, 1],
+            np.sqrt(2.0), 1)
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st01, self.st06], self.tau2)[0, 1],
+            np.sqrt(2.0), 20)
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st00, self.st07], self.tau1)[0, 1],
+            np.sqrt(0 + 2))
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st07, self.st08], self.tau4)[0, 1],
+            float(np.sqrt(2 * (1.0 - np.exp(-np.absolute(
+                ((self.st07[0] - self.st08[-1]) /
+                 self.tau4).simplified))))))
         f_minus_g_squared = (
-               (self.t > self.st08[0]) * np.exp(
-                            -((self.t-self.st08[0])/self.tau3).simplified) +
-               (self.t > self.st08[1]) * np.exp(
-                            -((self.t-self.st08[1])/self.tau3).simplified) -
-               (self.t > self.st09[0]) * np.exp(
-                            -((self.t-self.st09[0])/self.tau3).simplified))**2
+            (self.t > self.st08[0]) * np.exp(
+                -((self.t - self.st08[0]) / self.tau3).simplified) +
+            (self.t > self.st08[1]) * np.exp(
+                -((self.t - self.st08[1]) / self.tau3).simplified) -
+            (self.t > self.st09[0]) * np.exp(
+                -((self.t - self.st09[0]) / self.tau3).simplified))**2
         distance = np.sqrt(2.0 * spint.cumtrapz(
                            y=f_minus_g_squared, x=self.t.magnitude)[-1] /
                            self.tau3.rescale(self.t.units).magnitude)
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                       [self.st08, self.st09], self.tau3)[0, 1], distance, 5)
-        self.assertAlmostEqual(stds.van_rossum_dist(
-                             [self.st11, self.st14], self.tau2)[0, 1], 1)
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st08, self.st09], self.tau3)[0, 1], distance, 5)
+        self.assertAlmostEqual(stds.van_rossum_distance(
+            [self.st11, self.st14], self.tau2)[0, 1], 1)
         # Tests on timescales
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st11, self.st14], self.tau1)[0, 1],
-                stds.van_rossum_dist([self.st11, self.st14], self.tau5)[0, 1])
+            stds.van_rossum_distance([self.st11, self.st14], self.tau1)[0, 1],
+            stds.van_rossum_distance([self.st11, self.st14], self.tau5)[0, 1])
 
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st11], self.tau0)[0, 1],
-                np.sqrt(len(self.st07) + len(self.st11)))
+            stds.van_rossum_distance([self.st07, self.st11], self.tau0)[0, 1],
+            np.sqrt(len(self.st07) + len(self.st11)))
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st14], self.tau0)[0, 1],
-                np.sqrt(len(self.st07) + len(self.st14)))
+            stds.van_rossum_distance([self.st07, self.st14], self.tau0)[0, 1],
+            np.sqrt(len(self.st07) + len(self.st14)))
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st11], self.tau1)[0, 1],
-                np.sqrt(len(self.st07) + len(self.st11)))
+            stds.van_rossum_distance([self.st07, self.st11], self.tau1)[0, 1],
+            np.sqrt(len(self.st07) + len(self.st11)))
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st14], self.tau1)[0, 1],
-                np.sqrt(len(self.st07) + len(self.st14)))
+            stds.van_rossum_distance([self.st07, self.st14], self.tau1)[0, 1],
+            np.sqrt(len(self.st07) + len(self.st14)))
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st11], self.tau5)[0, 1],
-                np.absolute(len(self.st07) - len(self.st11)))
+            stds.van_rossum_distance([self.st07, self.st11], self.tau5)[0, 1],
+            np.absolute(len(self.st07) - len(self.st11)))
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st14], self.tau5)[0, 1],
-                np.absolute(len(self.st07) - len(self.st14)))
+            stds.van_rossum_distance([self.st07, self.st14], self.tau5)[0, 1],
+            np.absolute(len(self.st07) - len(self.st14)))
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st11], self.tau6)[0, 1],
-                np.absolute(len(self.st07) - len(self.st11)))
+            stds.van_rossum_distance([self.st07, self.st11], self.tau6)[0, 1],
+            np.absolute(len(self.st07) - len(self.st11)))
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st07, self.st14], self.tau6)[0, 1],
-                np.absolute(len(self.st07) - len(self.st14)))
+            stds.van_rossum_distance([self.st07, self.st14], self.tau6)[0, 1],
+            np.absolute(len(self.st07) - len(self.st14)))
         # Tests on unordered spiketrains
         self.assertEqual(
-                stds.van_rossum_dist([self.st11, self.st13], self.tau4)[0, 1],
-                stds.van_rossum_dist([self.st12, self.st13], self.tau4)[0, 1])
+            stds.van_rossum_distance([self.st11, self.st13], self.tau4)[0, 1],
+            stds.van_rossum_distance([self.st12, self.st13], self.tau4)[0, 1])
         self.assertNotEqual(
-                stds.van_rossum_dist([self.st11, self.st13],
+            stds.van_rossum_distance([self.st11, self.st13],
                                      self.tau4, sort=False)[0, 1],
-                stds.van_rossum_dist([self.st12, self.st13],
+            stds.van_rossum_distance([self.st12, self.st13],
                                      self.tau4, sort=False)[0, 1])
-        # Tests on metric properties with random spiketrains 
+        # Tests on metric properties with random spiketrains
         # (explicit calculation of the second metric axiom in a particular
         # case, because from dist_matrix it is trivial)
-        dist_matrix = stds.van_rossum_dist(
-                          [self.st21, self.st22, self.st23], self.tau3)
+        dist_matrix = stds.van_rossum_distance(
+            [self.st21, self.st22, self.st23], self.tau3)
         for i in range(3):
             for j in range(3):
                 self.assertGreaterEqual(dist_matrix[i, j], 0)
                 if dist_matrix[i, j] == 0:
                     assert_array_equal(self.rd_st_list[i], self.rd_st_list[j])
         assert_array_equal(
-                stds.van_rossum_dist([self.st21, self.st22], self.tau3),
-                stds.van_rossum_dist([self.st22, self.st21], self.tau3))
+            stds.van_rossum_distance([self.st21, self.st22], self.tau3),
+            stds.van_rossum_distance([self.st22, self.st21], self.tau3))
         self.assertLessEqual(dist_matrix[0, 1],
                              dist_matrix[0, 2] + dist_matrix[1, 2])
         self.assertLessEqual(dist_matrix[0, 2],
@@ -482,39 +490,41 @@ class TimeScaleDependSpikeTrainDissimMeasures_TestCase(unittest.TestCase):
                              dist_matrix[0, 1] + dist_matrix[0, 2])
         # Tests on proper unit conversion
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st14, self.st16], self.tau3)[0, 1],
-                stds.van_rossum_dist([self.st15, self.st16], self.tau3)[0, 1])
+            stds.van_rossum_distance([self.st14, self.st16], self.tau3)[0, 1],
+            stds.van_rossum_distance([self.st15, self.st16], self.tau3)[0, 1])
         self.assertAlmostEqual(
-                stds.van_rossum_dist([self.st16, self.st14], self.tau3)[0, 1],
-                stds.van_rossum_dist([self.st16, self.st15], self.tau3)[0, 1])
+            stds.van_rossum_distance([self.st16, self.st14], self.tau3)[0, 1],
+            stds.van_rossum_distance([self.st16, self.st15], self.tau3)[0, 1])
         self.assertEqual(
-                stds.van_rossum_dist([self.st01, self.st05], self.tau3)[0, 1],
-                stds.van_rossum_dist([self.st01, self.st05], self.tau7)[0, 1])
+            stds.van_rossum_distance([self.st01, self.st05], self.tau3)[0, 1],
+            stds.van_rossum_distance([self.st01, self.st05], self.tau7)[0, 1])
         # Tests on algorithmic behaviour for equal spike times
         f_minus_g_squared = (
-               (self.t > self.st31[0]) * np.exp(
-                            -((self.t-self.st31[0])/self.tau3).simplified) -
-               (self.t > self.st34[0]) * np.exp(
-                            -((self.t-self.st34[0])/self.tau3).simplified) -
-               (self.t > self.st34[1]) * np.exp(
-                            -((self.t-self.st34[1])/self.tau3).simplified))**2
+            (self.t > self.st31[0]) * np.exp(
+                -((self.t - self.st31[0]) / self.tau3).simplified) -
+            (self.t > self.st34[0]) * np.exp(
+                -((self.t - self.st34[0]) / self.tau3).simplified) -
+            (self.t > self.st34[1]) * np.exp(
+                -((self.t - self.st34[1]) / self.tau3).simplified))**2
         distance = np.sqrt(2.0 * spint.cumtrapz(
                            y=f_minus_g_squared, x=self.t.magnitude)[-1] /
                            self.tau3.rescale(self.t.units).magnitude)
-        self.assertAlmostEqual(stds.van_rossum_dist([self.st31, self.st34],
-                                                    self.tau3)[0, 1],
+        self.assertAlmostEqual(stds.van_rossum_distance([self.st31, self.st34],
+                                                        self.tau3)[0, 1],
                                distance, 5)
-        self.assertEqual(stds.van_rossum_dist([self.st31, self.st34],
-                                              self.tau3)[0, 1],
-                         stds.van_rossum_dist([self.st32, self.st33],
-                                              self.tau3)[0, 1])
-        self.assertEqual(stds.van_rossum_dist([self.st31, self.st33],
-                                              self.tau3)[0, 1] * 2.0,
-                         stds.van_rossum_dist([self.st32, self.st34],
-                                              self.tau3)[0, 1])
+        self.assertEqual(stds.van_rossum_distance([self.st31, self.st34],
+                                                  self.tau3)[0, 1],
+                         stds.van_rossum_distance([self.st32, self.st33],
+                                                  self.tau3)[0, 1])
+        self.assertEqual(stds.van_rossum_distance([self.st31, self.st33],
+                                                  self.tau3)[0, 1] * 2.0,
+                         stds.van_rossum_distance([self.st32, self.st34],
+                                                  self.tau3)[0, 1])
         # Tests on spike train list lengths smaller than 2
-        self.assertEqual(stds.van_rossum_dist([self.st21], self.tau3)[0, 0], 0)
-        self.assertEqual(len(stds.van_rossum_dist([], self.tau3)), 0)
+        self.assertEqual(stds.van_rossum_distance(
+            [self.st21], self.tau3)[0, 0], 0)
+        self.assertEqual(len(stds.van_rossum_distance([], self.tau3)), 0)
+
 
 if __name__ == '__main__':
     unittest.main()
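
For context on the API rename exercised above: the public names are now
victor_purpura_distance and van_rossum_distance in
elephant.spike_train_dissimilarity, and both return a symmetric pairwise
distance matrix. Kernels from elephant.kernels can replace the default cost
profile, while abstract Kernel/SymmetricKernel instances raise
NotImplementedError, as the tests check. A minimal usage sketch, assuming
the post-rename signatures shown in this diff (spike times and parameter
values are illustrative):

    import neo
    import quantities as pq
    import elephant.spike_train_dissimilarity as stds

    # Illustrative values; signatures assumed from the renamed tests above.
    # Two toy spike trains on a common 0-500 ms observation window.
    st_a = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
    st_b = neo.SpikeTrain([100, 155, 300] * pq.ms, t_stop=500 * pq.ms)

    # Victor-Purpura: the cost parameter q has units of 1/time.
    vp = stds.victor_purpura_distance([st_a, st_b], 1.0 / pq.ms)

    # van Rossum: tau is a time Quantity (passing Hz raises TypeError, as
    # the tests above verify). The metric follows
    # D = sqrt((2 / tau) * integral of (f - g)**2 dt),
    # the same formula the tests recompute via spint.cumtrapz.
    vr = stds.van_rossum_distance([st_a, st_b], 10.0 * pq.ms)

    # Each call returns an n-by-n matrix; the pairwise distance is [0, 1].
    print(vp[0, 1], vr[0, 1])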

File diff not shown because it is too large
+ 693 - 300   code/elephant/elephant/test/test_spike_train_generation.py

+ 552 - 164   code/elephant/elephant/test/test_spike_train_surrogates.py
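
The diff below tracks two keyword renames in elephant.spike_train_surrogates,
n becomes n_surrogates and binsize becomes bin_size, and dither_spikes gains
an optional refractory_period argument. A minimal sketch of the updated
calls, assuming the signatures exercised by these tests (values are
illustrative):

    import neo
    import quantities as pq
    import elephant.spike_train_surrogates as surr

    # Illustrative values; signatures assumed from the tests below.
    st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)

    # Renamed keyword: n -> n_surrogates.
    dithered = surr.dither_spikes(st, dither=10 * pq.ms, n_surrogates=2)

    # refractory_period must be a time Quantity (a bare float raises
    # ValueError, as the tests below verify); it preserves the minimal ISI
    # of the original train while spikes move by at most the dither.
    safe = surr.dither_spikes(st, dither=10 * pq.ms, n_surrogates=2,
                              refractory_period=4 * pq.ms)

    # Renamed keyword: binsize -> bin_size.
    jittered = surr.jitter_spikes(st, bin_size=100 * pq.ms, n_surrogates=2)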

@@ -2,64 +2,77 @@
 """
 unittests for spike_train_surrogates module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
 import unittest
+import random
+
 import elephant.spike_train_surrogates as surr
+import elephant.spike_train_generation as stg
+import elephant.conversion as conv
 import numpy as np
+from numpy.testing import assert_array_almost_equal, assert_array_less
 import quantities as pq
 import neo
 
-np.random.seed(0)
-
 
 class SurrogatesTestCase(unittest.TestCase):
 
-    def test_dither_spikes_output_format(self):
+    def setUp(self):
+        np.random.seed(0)
+        random.seed(0)
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+    def test_dither_spikes_output_format(self):
 
-        nr_surr = 2
+        spiketrain = neo.SpikeTrain([90, 93, 97, 100, 105,
+                                     150, 180, 350] * pq.ms, t_stop=.5 * pq.s)
+        spiketrain.t_stop = .5 * pq.s
+        n_surrogates = 2
         dither = 10 * pq.ms
-        surrs = surr.dither_spikes(st, dither=dither, n=nr_surr)
+        surrogate_trains = surr.dither_spikes(
+            spiketrain, dither=dither, n_surrogates=n_surrogates)
 
-        self.assertIsInstance(surrs, list)
-        self.assertEqual(len(surrs), nr_surr)
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
 
-        for surrog in surrs:
-            self.assertIsInstance(surrs[0], neo.SpikeTrain)
-            self.assertEqual(surrog.units, st.units)
-            self.assertEqual(surrog.t_start, st.t_start)
-            self.assertEqual(surrog.t_stop, st.t_stop)
-            self.assertEqual(len(surrog), len(st))
+        self.assertIsInstance(surrogate_trains[0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains:
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
+            assert_array_less(0., np.diff(surrogate_train))  # check ordering
 
     def test_dither_spikes_empty_train(self):
 
         st = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
 
         dither = 10 * pq.ms
-        surrog = surr.dither_spikes(st, dither=dither, n=1)[0]
-        self.assertEqual(len(surrog), 0)
+        surrogate_train = surr.dither_spikes(
+            st, dither=dither, n_surrogates=1)[0]
+        self.assertEqual(len(surrogate_train), 0)
 
     def test_dither_spikes_output_decimals(self):
 
         st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
+        n_surrogates = 2
         dither = 10 * pq.ms
         np.random.seed(42)
-        surrs = surr.dither_spikes(st, dither=dither, decimals=3, n=nr_surr)
+        surrogate_trains = surr.dither_spikes(
+            st, dither=dither, decimals=3, n_surrogates=n_surrogates)
 
         np.random.seed(42)
-        dither_values = np.random.random_sample((nr_surr, len(st)))
-        expected_non_dithered = np.sum(dither_values==0)
+        dither_values = np.random.random_sample((n_surrogates, len(st)))
+        expected_non_dithered = np.sum(dither_values == 0)
 
         observed_non_dithered = 0
-        for surrog in surrs:
-            for i in range(len(surrog)):
-                if surrog[i] - int(surrog[i]) * pq.ms == surrog[i] - surrog[i]:
+        for surrogate_train in surrogate_trains:
+            for i in range(len(surrogate_train)):
+                if (surrogate_train[i] - int(surrogate_train[i]) * pq.ms ==
+                        surrogate_train[i] - surrogate_train[i]):
                     observed_non_dithered += 1
 
         self.assertEqual(observed_non_dithered, expected_non_dithered)
@@ -68,252 +81,627 @@ class SurrogatesTestCase(unittest.TestCase):
 
         st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
+        n_surrogates = 2
+        dither = 10 * pq.ms
+        surrogate_trains = surr.dither_spikes(
+            st, dither=dither, n_surrogates=n_surrogates, edges=False)
+
+        for surrogate_train in surrogate_trains:
+            for i in range(len(surrogate_train)):
+                self.assertLessEqual(surrogate_train[i], st.t_stop)
+
+    def test_dither_spikes_with_refractory_period_output_format(self):
+
+        spiketrain = neo.SpikeTrain([90, 93, 97, 100, 105,
+                                     150, 180, 350] * pq.ms, t_stop=.5 * pq.s)
+        n_surrogates = 2
         dither = 10 * pq.ms
-        surrs = surr.dither_spikes(st, dither=dither, n=nr_surr, edges=False)
+        surrogate_trains = surr.dither_spikes(
+            spiketrain, dither=dither, n_surrogates=n_surrogates,
+            refractory_period=4 * pq.ms)
+
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
+
+        self.assertIsInstance(surrogate_trains[0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains:
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
+            # Check that refractory period is conserved
+            self.assertLessEqual(np.min(np.diff(spiketrain)),
+                                 np.min(np.diff(surrogate_train)))
+            sigma_displacement = np.std(surrogate_train - spiketrain)
+            # Check that spikes are moved
+            self.assertLessEqual(dither / 10, sigma_displacement)
+            # Spikes are not moved more than dither
+            self.assertLessEqual(sigma_displacement, dither)
+
+        self.assertRaises(ValueError, surr.dither_spikes,
+                          spiketrain, dither=dither, refractory_period=3)
+
+    def test_dither_spikes_with_refractory_period_empty_train(self):
+
+        spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
 
-        for surrog in surrs:
-            for i in range(len(surrog)):
-                self.assertLessEqual(surrog[i], st.t_stop)
+        dither = 10 * pq.ms
+        surrogate_train = surr.dither_spikes(
+            spiketrain, dither=dither, n_surrogates=1,
+            refractory_period=4 * pq.ms)[0]
+        self.assertEqual(len(surrogate_train), 0)
 
     def test_randomise_spikes_output_format(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
-        surrs = surr.randomise_spikes(st, n=nr_surr)
+        n_surrogates = 2
+        surrogate_trains = surr.randomise_spikes(
+            spiketrain, n_surrogates=n_surrogates)
 
-        self.assertIsInstance(surrs, list)
-        self.assertEqual(len(surrs), nr_surr)
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
 
-        for surrog in surrs:
-            self.assertIsInstance(surrs[0], neo.SpikeTrain)
-            self.assertEqual(surrog.units, st.units)
-            self.assertEqual(surrog.t_start, st.t_start)
-            self.assertEqual(surrog.t_stop, st.t_stop)
-            self.assertEqual(len(surrog), len(st))
+        self.assertIsInstance(surrogate_trains[0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains:
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
 
     def test_randomise_spikes_empty_train(self):
 
-        st = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
 
-        surrog = surr.randomise_spikes(st, n=1)[0]
-        self.assertEqual(len(surrog), 0)
+        surrogate_train = surr.randomise_spikes(spiketrain, n_surrogates=1)[0]
+        self.assertEqual(len(surrogate_train), 0)
 
     def test_randomise_spikes_output_decimals(self):
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
-        surrs = surr.randomise_spikes(st, n=nr_surr, decimals=3)
+        n_surrogates = 2
+        surrogate_trains = surr.randomise_spikes(
+            spiketrain, n_surrogates=n_surrogates, decimals=3)
 
-        for surrog in surrs:
-            for i in range(len(surrog)):
-                self.assertNotEqual(surrog[i] - int(surrog[i]) * pq.ms,
-                                    surrog[i] - surrog[i])
+        for surrogate_train in surrogate_trains:
+            for i in range(len(surrogate_train)):
+                self.assertNotEqual(
+                    surrogate_train[i] - int(surrogate_train[i]) * pq.ms,
+                    surrogate_train[i] - surrogate_train[i])
 
     def test_shuffle_isis_output_format(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
-        surrs = surr.shuffle_isis(st, n=nr_surr)
+        n_surrogates = 2
+        surrogate_trains = surr.shuffle_isis(
+            spiketrain, n_surrogates=n_surrogates)
 
-        self.assertIsInstance(surrs, list)
-        self.assertEqual(len(surrs), nr_surr)
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
 
-        for surrog in surrs:
-            self.assertIsInstance(surrs[0], neo.SpikeTrain)
-            self.assertEqual(surrog.units, st.units)
-            self.assertEqual(surrog.t_start, st.t_start)
-            self.assertEqual(surrog.t_stop, st.t_stop)
-            self.assertEqual(len(surrog), len(st))
+        self.assertIsInstance(surrogate_trains[0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains:
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
 
     def test_shuffle_isis_empty_train(self):
 
-        st = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
 
-        surrog = surr.shuffle_isis(st, n=1)[0]
-        self.assertEqual(len(surrog), 0)
+        surrogate_train = surr.shuffle_isis(spiketrain, n_surrogates=1)[0]
+        self.assertEqual(len(surrogate_train), 0)
 
     def test_shuffle_isis_same_isis(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        surrog = surr.shuffle_isis(st, n=1)[0]
+        surrogate_train = surr.shuffle_isis(spiketrain, n_surrogates=1)[0]
 
-        st_pq = st.view(pq.Quantity)
-        surr_pq = surrog.view(pq.Quantity)
+        st_pq = spiketrain.view(pq.Quantity)
+        surr_pq = surrogate_train.view(pq.Quantity)
 
-        isi0_orig = st[0] - st.t_start
+        isi0_orig = spiketrain[0] - spiketrain.t_start
         ISIs_orig = np.sort([isi0_orig] + [isi for isi in np.diff(st_pq)])
 
-        isi0_surr = surrog[0] - surrog.t_start
+        isi0_surr = surrogate_train[0] - surrogate_train.t_start
         ISIs_surr = np.sort([isi0_surr] + [isi for isi in np.diff(surr_pq)])
 
         self.assertTrue(np.all(ISIs_orig == ISIs_surr))
 
     def test_shuffle_isis_output_decimals(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        surrog = surr.shuffle_isis(st, n=1, decimals=95)[0]
+        surrogate_train = surr.shuffle_isis(
+            spiketrain, n_surrogates=1, decimals=95)[0]
 
-        st_pq = st.view(pq.Quantity)
-        surr_pq = surrog.view(pq.Quantity)
+        st_pq = spiketrain.view(pq.Quantity)
+        surr_pq = surrogate_train.view(pq.Quantity)
 
-        isi0_orig = st[0] - st.t_start
+        isi0_orig = spiketrain[0] - spiketrain.t_start
         ISIs_orig = np.sort([isi0_orig] + [isi for isi in np.diff(st_pq)])
 
-        isi0_surr = surrog[0] - surrog.t_start
+        isi0_surr = surrogate_train[0] - surrogate_train.t_start
         ISIs_surr = np.sort([isi0_surr] + [isi for isi in np.diff(surr_pq)])
 
         self.assertTrue(np.all(ISIs_orig == ISIs_surr))
 
+    def test_shuffle_isis_with_wrongly_ordered_spikes(self):
+        surr_method = 'shuffle_isis'
+        n_surr = 30
+        dither = 15 * pq.ms
+        spiketrain = neo.SpikeTrain(
+            [39.65696411, 98.93868274, 120.2417674, 134.70971166,
+             154.20788924, 160.29077989, 179.19884034, 212.86773029,
+             247.59488061, 273.04095041, 297.56437605, 344.99204215,
+             418.55696486, 460.54298334, 482.82299125, 524.236052,
+             566.38966742, 597.87562722, 651.26965293, 692.39802855,
+             740.90285815, 849.45874695, 974.57724848, 8.79247605],
+            t_start=0. * pq.ms, t_stop=1000. * pq.ms, units=pq.ms)
+        surr.surrogates(spiketrain, n_surrogates=n_surr, method=surr_method,
+                        dt=dither)
+
     def test_dither_spike_train_output_format(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
+        n_surrogates = 2
         shift = 10 * pq.ms
-        surrs = surr.dither_spike_train(st, shift=shift, n=nr_surr)
+        surrogate_trains = surr.dither_spike_train(
+            spiketrain, shift=shift, n_surrogates=n_surrogates)
 
-        self.assertIsInstance(surrs, list)
-        self.assertEqual(len(surrs), nr_surr)
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
 
-        for surrog in surrs:
-            self.assertIsInstance(surrs[0], neo.SpikeTrain)
-            self.assertEqual(surrog.units, st.units)
-            self.assertEqual(surrog.t_start, st.t_start)
-            self.assertEqual(surrog.t_stop, st.t_stop)
-            self.assertEqual(len(surrog), len(st))
+        self.assertIsInstance(surrogate_trains[0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains:
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
 
     def test_dither_spike_train_empty_train(self):
 
-        st = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
 
         shift = 10 * pq.ms
-        surrog = surr.dither_spike_train(st, shift=shift, n=1)[0]
-        self.assertEqual(len(surrog), 0)
+        surrogate_train = surr.dither_spike_train(
+            spiketrain, shift=shift, n_surrogates=1)[0]
+        self.assertEqual(len(surrogate_train), 0)
 
     def test_dither_spike_train_output_decimals(self):
         st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
+        n_surrogates = 2
         shift = 10 * pq.ms
-        surrs = surr.dither_spike_train(st, shift=shift, n=nr_surr, decimals=3)
+        surrogate_trains = surr.dither_spike_train(
+            st, shift=shift, n_surrogates=n_surrogates, decimals=3)
 
-        for surrog in surrs:
-            for i in range(len(surrog)):
-                self.assertNotEqual(surrog[i] - int(surrog[i]) * pq.ms,
-                                    surrog[i] - surrog[i])
+        for surrogate_train in surrogate_trains:
+            for i in range(len(surrogate_train)):
+                self.assertNotEqual(
+                    surrogate_train[i] - int(surrogate_train[i]) * pq.ms,
+                    surrogate_train[i] - surrogate_train[i])
 
     def test_dither_spike_train_false_edges(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
+        n_surrogates = 2
         shift = 10 * pq.ms
-        surrs = surr.dither_spike_train(
-            st, shift=shift, n=nr_surr, edges=False)
+        surrogate_trains = surr.dither_spike_train(
+            spiketrain, shift=shift, n_surrogates=n_surrogates, edges=False)
 
-        for surrog in surrs:
-            for i in range(len(surrog)):
-                self.assertLessEqual(surrog[i], st.t_stop)
+        for surrogate_train in surrogate_trains:
+            for i in range(len(surrogate_train)):
+                self.assertLessEqual(surrogate_train[i], spiketrain.t_stop)
 
     def test_jitter_spikes_output_format(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
 
-        nr_surr = 2
-        binsize = 100 * pq.ms
-        surrs = surr.jitter_spikes(st, binsize=binsize, n=nr_surr)
+        n_surrogates = 2
+        bin_size = 100 * pq.ms
+        surrogate_trains = surr.jitter_spikes(
+            spiketrain, bin_size=bin_size, n_surrogates=n_surrogates)
 
-        self.assertIsInstance(surrs, list)
-        self.assertEqual(len(surrs), nr_surr)
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
 
-        for surrog in surrs:
-            self.assertIsInstance(surrs[0], neo.SpikeTrain)
-            self.assertEqual(surrog.units, st.units)
-            self.assertEqual(surrog.t_start, st.t_start)
-            self.assertEqual(surrog.t_stop, st.t_stop)
-            self.assertEqual(len(surrog), len(st))
+        self.assertIsInstance(surrogate_trains[0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains:
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
 
     def test_jitter_spikes_empty_train(self):
 
-        st = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
+        spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
 
-        binsize = 75 * pq.ms
-        surrog = surr.jitter_spikes(st, binsize=binsize, n=1)[0]
-        self.assertEqual(len(surrog), 0)
+        bin_size = 75 * pq.ms
+        surrogate_train = surr.jitter_spikes(
+            spiketrain, bin_size=bin_size, n_surrogates=1)[0]
+        self.assertEqual(len(surrogate_train), 0)
 
     def test_jitter_spikes_same_bins(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
-
-        binsize = 100 * pq.ms
-        surrog = surr.jitter_spikes(st, binsize=binsize, n=1)[0]
-
-        bin_ids_orig = np.array((st.view(pq.Quantity) / binsize).rescale(
-            pq.dimensionless).magnitude, dtype=int)
-        bin_ids_surr = np.array((surrog.view(pq.Quantity) / binsize).rescale(
-            pq.dimensionless).magnitude, dtype=int)
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+
+        bin_size = 100 * pq.ms
+        surrogate_train = surr.jitter_spikes(
+            spiketrain, bin_size=bin_size, n_surrogates=1)[0]
+
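+        # jitter_spikes() moves each spike only within its bin, so bin
+        # indices must be identical in the original and the surrogate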
+        bin_ids_orig = np.array(
+            (spiketrain.view(pq.Quantity) / bin_size).rescale(
+                pq.dimensionless).magnitude, dtype=int)
+        bin_ids_surr = np.array(
+            (surrogate_train.view(pq.Quantity) / bin_size).rescale(
+                pq.dimensionless).magnitude, dtype=int)
         self.assertTrue(np.all(bin_ids_orig == bin_ids_surr))
 
         # Bug encountered when the original and surrogate trains have
         # different number of spikes
-        self.assertEqual(len(st), len(surrog))
+        self.assertEqual(len(spiketrain), len(surrogate_train))
+
+    def test_jitter_spikes_unequal_bin_size(self):
+
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 480] * pq.ms, t_stop=500 * pq.ms)
+
+        bin_size = 75 * pq.ms
+        surrogate_train = surr.jitter_spikes(
+            spiketrain, bin_size=bin_size, n_surrogates=1)[0]
+
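+        # the last bin is truncated (500 ms is not a multiple of 75 ms),
+        # but spikes must still stay within their original bins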
+        bin_ids_orig = np.array(
+            (spiketrain.view(pq.Quantity) / bin_size).rescale(
+                pq.dimensionless).magnitude, dtype=int)
+        bin_ids_surr = np.array(
+            (surrogate_train.view(pq.Quantity) / bin_size).rescale(
+                pq.dimensionless).magnitude, dtype=int)
 
-    def test_jitter_spikes_unequal_binsize(self):
+        self.assertTrue(np.all(bin_ids_orig == bin_ids_surr))
 
-        st = neo.SpikeTrain([90, 150, 180, 480] * pq.ms, t_stop=500 * pq.ms)
+    def test_surr_method(self):
 
-        binsize = 75 * pq.ms
-        surrog = surr.jitter_spikes(st, binsize=binsize, n=1)[0]
+        surr_methods = (
+            'dither_spike_train', 'dither_spikes', 'jitter_spikes',
+            'randomise_spikes', 'shuffle_isis', 'joint_isi_dithering',
+            'dither_spikes_with_refractory_period', 'trial_shifting',
+            'bin_shuffling', 'isi_dithering')
+
+        surr_method_kwargs = {
+            'dither_spike_train': {},
+            'dither_spikes': {},
+            'dither_spikes_with_refractory_period': {
+                'refractory_period': 3 * pq.ms},
+            'jitter_spikes': {},
+            'randomise_spikes': {},
+            'shuffle_isis': {},
+            'joint_isi_dithering': {},
+            'isi_dithering': {},
+            'bin_shuffling': {'bin_size': 3 * pq.ms},
+            'trial_shifting': {'trial_length': 200 * pq.ms,
+                               'trial_separation': 50 * pq.ms}}
+
+        dt = 15 * pq.ms
+        spiketrain = neo.SpikeTrain(
+            [90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
+        n_surrogates = 3
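+        # every method accepted by surrogates() should return a list of
+        # n_surrogates spike trains matching the input's units and range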
+        for method in surr_methods:
+            surrogates = surr.surrogates(
+                spiketrain,
+                dt=dt,
+                n_surrogates=n_surrogates,
+                method=method,
+                **surr_method_kwargs[method]
+            )
+            self.assertEqual(len(surrogates), n_surrogates)
+
+            for surrogate_train in surrogates:
+                self.assertIsInstance(surrogate_train, neo.SpikeTrain)
+                self.assertEqual(surrogate_train.units, spiketrain.units)
+                self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+                self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+                self.assertEqual(len(surrogate_train), len(spiketrain))
+
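+        # unknown method names, a missing dt and non-SpikeTrain input
+        # are rejected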
+        self.assertRaises(ValueError, surr.surrogates, spiketrain,
+                          n_surrogates=1,
+                          method='spike_shifting',
+                          dt=None, decimals=None, edges=True)
 
-        bin_ids_orig = np.array((st.view(pq.Quantity) / binsize).rescale(
-            pq.dimensionless).magnitude, dtype=int)
-        bin_ids_surr = np.array((surrog.view(pq.Quantity) / binsize).rescale(
-            pq.dimensionless).magnitude, dtype=int)
+        self.assertRaises(ValueError, surr.surrogates, spiketrain,
+                          method='dither_spikes', dt=None)
 
-        self.assertTrue(np.all(bin_ids_orig == bin_ids_surr))
+        self.assertRaises(TypeError, surr.surrogates, spiketrain.magnitude,
+                          method='dither_spikes', dt=10*pq.ms)
 
-    def test_surr_method(self):
+    def test_joint_isi_dithering_format(self):
 
-        st = neo.SpikeTrain([90, 150, 180, 350] * pq.ms, t_stop=500 * pq.ms)
-        nr_surr = 2
-        surrs = surr.surrogates(st, dt=3 * pq.ms, n=nr_surr,
-                                surr_method='shuffle_isis', edges=False)
+        rate = 100. * pq.Hz
+        t_stop = 1. * pq.s
+        spiketrain = stg.homogeneous_poisson_process(rate, t_stop=t_stop)
+        n_surrogates = 2
+        dither = 10 * pq.ms
 
-        self.assertRaises(ValueError, surr.surrogates, st, n=1,
-                          surr_method='spike_shifting',
-                          dt=None, decimals=None, edges=True)
-        self.assertTrue(len(surrs) == nr_surr)
+        # Test fast version
+        joint_isi_instance = surr.JointISI(spiketrain, dither=dither,
+                                           method='fast')
+        surrogate_trains = joint_isi_instance.dithering(
+            n_surrogates=n_surrogates)
+
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
+        self.assertEqual(joint_isi_instance.method, 'fast')
+
+        for surrogate_train in surrogate_trains:
+            self.assertIsInstance(surrogate_train, neo.SpikeTrain)
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
+
+        # Test window version
+        joint_isi_instance = surr.JointISI(spiketrain,
+                                           method='window',
+                                           dither=2 * dither,
+                                           n_bins=50)
+        surrogate_trains = joint_isi_instance.dithering(
+            n_surrogates=n_surrogates)
+
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
+        self.assertEqual(joint_isi_instance.method, 'window')
+
+        for surrogate_train in surrogate_trains:
+            self.assertIsInstance(surrogate_train, neo.SpikeTrain)
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
+
+        # Test isi_dithering
+        joint_isi_instance = surr.JointISI(spiketrain,
+                                           method='window',
+                                           dither=2 * dither,
+                                           n_bins=50,
+                                           isi_dithering=True,
+                                           use_sqrt=True,
+                                           cutoff=False)
+        surrogate_trains = joint_isi_instance.dithering(
+            n_surrogates=n_surrogates)
+
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
+        self.assertEqual(joint_isi_instance.method, 'window')
+
+        for surrogate_train in surrogate_trains:
+            self.assertIsInstance(surrogate_train, neo.SpikeTrain)
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
+
+        # Test surrogate methods wrapper
+        surrogate_trains = surr.surrogates(
+            spiketrain,
+            dt=15*pq.ms,
+            n_surrogates=n_surrogates,
+            method='joint_isi_dithering')
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
+
+        for surrogate_train in surrogate_trains:
+            self.assertIsInstance(surrogate_train, neo.SpikeTrain)
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
+
+        with self.assertRaises(ValueError):
+            surr.JointISI(spiketrain, method='wrong method',
+                          dither=2 * dither, n_bins=50)
+
+    def test_joint_isi_dithering_empty_train(self):
+        spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
+        surrogate_train = surr.JointISI(spiketrain).dithering()[0]
+        self.assertEqual(len(surrogate_train), 0)
+
+    def test_joint_isi_dithering_output(self):
+        spiketrain = stg.homogeneous_poisson_process(
+            rate=100. * pq.Hz,
+            refractory_period=3 * pq.ms,
+            t_stop=0.1 * pq.s)
+        surrogate_train = surr.JointISI(spiketrain).dithering()[0]
+        ground_truth = [0.005571, 0.018363, 0.026825, 0.036336, 0.045193,
+                        0.05146, 0.058489, 0.078053]
+        assert_array_almost_equal(surrogate_train.magnitude, ground_truth)
+
+    def test_joint_isi_with_wrongly_ordered_spikes(self):
+        surr_method = 'joint_isi_dithering'
+        n_surr = 30
+        dither = 15 * pq.ms
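+        # as above, the last spike is deliberately out of order; the call
+        # must not raise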
+        spiketrain = neo.SpikeTrain(
+            [39.65696411, 98.93868274, 120.2417674, 134.70971166,
+             154.20788924, 160.29077989, 179.19884034, 212.86773029,
+             247.59488061, 273.04095041, 297.56437605, 344.99204215,
+             418.55696486, 460.54298334, 482.82299125, 524.236052,
+             566.38966742, 597.87562722, 651.26965293, 692.39802855,
+             740.90285815, 849.45874695, 974.57724848, 8.79247605],
+            t_start=0.*pq.ms, t_stop=1000.*pq.ms, units=pq.ms)
+        surr.surrogates(spiketrain, n_surrogates=n_surr, method=surr_method,
+                        dt=dither)
+
+    def test_joint_isi_spikes_at_border(self):
+        surr_method = 'joint_isi_dithering'
+        n_surr = 30
+        dither = 15 * pq.ms
+        spiketrain = neo.SpikeTrain(
+            [4., 28., 45., 51., 83., 87., 96., 111., 126.,
+             131., 138., 150., 209., 232., 253., 275., 279., 303.,
+             320., 371., 396., 401., 429., 447., 479., 511., 535.,
+             549., 581., 585., 605., 607., 626., 630., 644., 714.,
+             832., 835., 853., 858., 878., 905., 909., 932., 950.,
+             961., 999., 1000.],
+            t_start=0.*pq.ms, t_stop=1000.*pq.ms, units=pq.ms)
+        surr.surrogates(
+            spiketrain, n_surrogates=n_surr, method=surr_method, dt=dither)
+
+    def test_bin_shuffling_output_format(self):
+
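+        # both the sliding and the non-sliding variant must preserve the
+        # binning metadata (t_start, t_stop, n_bins, bin_size)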
+        bin_size = 3 * pq.ms
+        max_displacement = 10
+        spiketrain = neo.SpikeTrain([90, 93, 97, 100, 105,
+                                     150, 180, 350] * pq.ms, t_stop=.5 * pq.s)
+        binned_spiketrain = conv.BinnedSpikeTrain(spiketrain, bin_size)
+        n_surrogates = 2
+
+        for sliding in (True, False):
+            surrogate_trains = surr.bin_shuffling(
+                binned_spiketrain, max_displacement=max_displacement,
+                n_surrogates=n_surrogates, sliding=sliding)
+
+            self.assertIsInstance(surrogate_trains, list)
+            self.assertEqual(len(surrogate_trains), n_surrogates)
+
+            self.assertIsInstance(surrogate_trains[0], conv.BinnedSpikeTrain)
+            for surrogate_train in surrogate_trains:
+                self.assertEqual(surrogate_train.t_start,
+                                 binned_spiketrain.t_start)
+                self.assertEqual(surrogate_train.t_stop,
+                                 binned_spiketrain.t_stop)
+                self.assertEqual(surrogate_train.n_bins,
+                                 binned_spiketrain.n_bins)
+                self.assertEqual(surrogate_train.bin_size,
+                                 binned_spiketrain.bin_size)
+
+    def test_bin_shuffling_empty_train(self):
+
+        bin_size = 3 * pq.ms
+        max_displacement = 10
+        empty_spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
+
+        binned_spiketrain = conv.BinnedSpikeTrain(empty_spiketrain,
+                                                  bin_size)
+        surrogate_train = surr.bin_shuffling(
+            binned_spiketrain, max_displacement=max_displacement,
+            n_surrogates=1)[0]
+        self.assertEqual(np.sum(surrogate_train.to_bool_array()), 0)
+
+    def test_trial_shuffling_output_format(self):
+        spiketrains = [
+            neo.SpikeTrain([90, 93, 97, 100, 105, 150, 180, 190] * pq.ms,
+                           t_stop=.2 * pq.s),
+            neo.SpikeTrain([90, 93, 97, 100, 105, 150, 180, 190] * pq.ms,
+                           t_stop=.2 * pq.s)]
+        n_surrogates = 2
+        dither = 10 * pq.ms
+        surrogate_trains = surr.trial_shifting(
+            spiketrains, dither=dither, n_surrogates=n_surrogates)
 
-        nr_surr2 = 4
-        surrs2 = surr.surrogates(st, dt=5 * pq.ms, n=nr_surr2,
-                                 surr_method='dither_spike_train', edges=True)
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
 
-        for surrog in surrs:
-            self.assertTrue(isinstance(surrs[0], neo.SpikeTrain))
-            self.assertEqual(surrog.units, st.units)
-            self.assertEqual(surrog.t_start, st.t_start)
-            self.assertEqual(surrog.t_stop, st.t_stop)
-            self.assertEqual(len(surrog), len(st))
-        self.assertTrue(len(surrs) == nr_surr)
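+        # trial_shifting() applied to a list of trials returns, for each
+        # surrogate, a list with one surrogate spike train per trial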
+        self.assertIsInstance(surrogate_trains[0], list)
+        self.assertIsInstance(surrogate_trains[0][0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains[0]:
+            self.assertEqual(surrogate_train.units, spiketrains[0].units)
+            self.assertEqual(surrogate_train.t_start, spiketrains[0].t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrains[0].t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrains[0]))
+            assert_array_less(0., np.diff(surrogate_train))  # check ordering
 
-        for surrog in surrs2:
-            self.assertTrue(isinstance(surrs2[0], neo.SpikeTrain))
-            self.assertEqual(surrog.units, st.units)
-            self.assertEqual(surrog.t_start, st.t_start)
-            self.assertEqual(surrog.t_stop, st.t_stop)
-            self.assertEqual(len(surrog), len(st))
-        self.assertTrue(len(surrs2) == nr_surr2)
+    def test_trial_shuffling_empty_train(self):
+
+        empty_spiketrains = [neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms),
+                             neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)]
+
+        dither = 10 * pq.ms
+        surrogate_trains = surr.trial_shifting(
+            empty_spiketrains, dither=dither, n_surrogates=1)[0]
+
+        self.assertEqual(len(surrogate_trains), 2)
+        self.assertEqual(len(surrogate_trains[0]), 0)
+
+    def test_trial_shuffling_output_format_concatenated(self):
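+        # the private helper operates on one concatenated spike train,
+        # using trial_length and trial_separation to delimit the trials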
+        spiketrain = neo.SpikeTrain([90, 93, 97, 100, 105,
+                                     150, 180, 350] * pq.ms, t_stop=.5 * pq.s)
+        trial_length = 200 * pq.ms
+        trial_separation = 50 * pq.ms
+        n_surrogates = 2
+        dither = 10 * pq.ms
+        surrogate_trains = surr._trial_shifting_of_concatenated_spiketrain(
+            spiketrain, dither=dither, n_surrogates=n_surrogates,
+            trial_length=trial_length, trial_separation=trial_separation)
+
+        self.assertIsInstance(surrogate_trains, list)
+        self.assertEqual(len(surrogate_trains), n_surrogates)
+
+        self.assertIsInstance(surrogate_trains[0], neo.SpikeTrain)
+        for surrogate_train in surrogate_trains:
+            self.assertEqual(surrogate_train.units, spiketrain.units)
+            self.assertEqual(surrogate_train.t_start, spiketrain.t_start)
+            self.assertEqual(surrogate_train.t_stop, spiketrain.t_stop)
+            self.assertEqual(len(surrogate_train), len(spiketrain))
+            assert_array_less(0., np.diff(surrogate_train))  # check ordering
+
+    def test_trial_shuffling_empty_train_concatenated(self):
+
+        empty_spiketrain = neo.SpikeTrain([] * pq.ms, t_stop=500 * pq.ms)
+        trial_length = 200 * pq.ms
+        trial_separation = 50 * pq.ms
+
+        dither = 10 * pq.ms
+        surrogate_train = surr._trial_shifting_of_concatenated_spiketrain(
+            empty_spiketrain, dither=dither, n_surrogates=1,
+            trial_length=trial_length, trial_separation=trial_separation)[0]
+        self.assertEqual(len(surrogate_train), 0)
 
 
 def suite():
     suite = unittest.makeSuite(SurrogatesTestCase, 'test')
     return suite
 
+
 if __name__ == "__main__":
     runner = unittest.TextTestRunner(verbosity=2)
     runner.run(suite())

+ 130 - 92
code/elephant/elephant/test/test_sta.py

@@ -2,7 +2,7 @@
 """
 Tests for the function sta module
 
-:copyright: Copyright 2015-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2015-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
@@ -20,31 +20,32 @@ from quantities import ms, mV, Hz
 import elephant.sta as sta
 import warnings
 
+
 class sta_TestCase(unittest.TestCase):
 
     def setUp(self):
         self.asiga0 = AnalogSignal(np.array([
-            np.sin(np.arange(0, 20 * math.pi, 0.1))]).T, 
+            np.sin(np.arange(0, 20 * math.pi, 0.1))]).T,
             units='mV', sampling_rate=10 / ms)
         self.asiga1 = AnalogSignal(np.array([
-            np.sin(np.arange(0, 20 * math.pi, 0.1)), 
-            np.cos(np.arange(0, 20 * math.pi, 0.1))]).T, 
+            np.sin(np.arange(0, 20 * math.pi, 0.1)),
+            np.cos(np.arange(0, 20 * math.pi, 0.1))]).T,
             units='mV', sampling_rate=10 / ms)
         self.asiga2 = AnalogSignal(np.array([
-            np.sin(np.arange(0, 20 * math.pi, 0.1)), 
-            np.cos(np.arange(0, 20 * math.pi, 0.1)), 
-            np.tan(np.arange(0, 20 * math.pi, 0.1))]).T, 
+            np.sin(np.arange(0, 20 * math.pi, 0.1)),
+            np.cos(np.arange(0, 20 * math.pi, 0.1)),
+            np.tan(np.arange(0, 20 * math.pi, 0.1))]).T,
             units='mV', sampling_rate=10 / ms)
         self.st0 = SpikeTrain(
-            [9 * math.pi, 10 * math.pi, 11 * math.pi, 12 * math.pi], 
+            [9 * math.pi, 10 * math.pi, 11 * math.pi, 12 * math.pi],
             units='ms', t_stop=self.asiga0.t_stop)
         self.lst = [SpikeTrain(
-            [9 * math.pi, 10 * math.pi, 11 * math.pi, 12 * math.pi], 
-            units='ms', t_stop=self.asiga1.t_stop), 
+            [9 * math.pi, 10 * math.pi, 11 * math.pi, 12 * math.pi],
+            units='ms', t_stop=self.asiga1.t_stop),
             SpikeTrain([30, 35, 40], units='ms', t_stop=self.asiga1.t_stop)]
 
-    #***********************************************************************
-    #************************ Test for typical values **********************
+    # ***********************************************************************
+    # ************************ Test for typical values **********************
 
     def test_spike_triggered_average_with_n_spikes_on_constant_function(self):
         '''Signal should average to the input'''
@@ -58,7 +59,7 @@ class sta_TestCase(unittest.TestCase):
         STA = sta.spike_triggered_average(
             asiga, st, (window_starttime, window_endtime))
         a = int(((window_endtime - window_starttime) *
-                asiga.sampling_rate).simplified)
+                 asiga.sampling_rate).simplified)
         cutout = asiga[0: a]
         cutout.t_start = window_starttime
         assert_array_almost_equal(STA, cutout, 12)
@@ -68,7 +69,7 @@ class sta_TestCase(unittest.TestCase):
         STA = sta.spike_triggered_average(
             self.asiga0, self.st0, (-4 * ms, 4 * ms))
         target = 5e-2 * mV
-        self.assertEqual(np.abs(STA).max().dimensionality.simplified, 
+        self.assertEqual(np.abs(STA).max().dimensionality.simplified,
                          pq.Quantity(1, "V").dimensionality.simplified)
         self.assertLess(np.abs(STA).max(), target)
 
@@ -85,107 +86,124 @@ class sta_TestCase(unittest.TestCase):
         window_endtime = 5 * ms
         STA = sta.spike_triggered_average(
             z, st, (window_starttime, window_endtime))
-        cutout = z[int(((spiketime + window_starttime) * sr).simplified): 
-            int(((spiketime + window_endtime) * sr).simplified)]
+        cutout = z[int(((spiketime + window_starttime) * sr).simplified):
+                   int(((spiketime + window_endtime) * sr).simplified)]
         cutout.t_start = window_starttime
         assert_array_equal(STA, cutout)
 
     def test_usage_of_spikes(self):
-        st = SpikeTrain([16.5 * math.pi, 17.5 * math.pi, 
-            18.5 * math.pi, 19.5 * math.pi], units='ms', t_stop=20 * math.pi)
+        st = SpikeTrain([16.5 * math.pi,
+                         17.5 * math.pi,
+                         18.5 * math.pi,
+                         19.5 * math.pi],
+                        units='ms',
+                        t_stop=20 * math.pi)
         STA = sta.spike_triggered_average(
             self.asiga0, st, (-math.pi * ms, math.pi * ms))
         self.assertEqual(STA.annotations['used_spikes'], 3)
         self.assertEqual(STA.annotations['unused_spikes'], 1)
 
-
-    #***********************************************************************
-    #**** Test for an invalid value, to check that the function raises *****
-    #********* an exception or returns an error code ***********************
+    # ***********************************************************************
+    # **** Test for an invalid value, to check that the function raises *****
+    # ********* an exception or returns an error code ***********************
 
     def test_analog_signal_of_wrong_type(self):
         '''Analog signal given as list, but must be AnalogSignal'''
         asiga = [0, 1, 2, 3, 4]
-        self.assertRaises(TypeError, sta.spike_triggered_average, 
-            asiga, self.st0, (-2 * ms, 2 * ms))
+        self.assertRaises(TypeError, sta.spike_triggered_average,
+                          asiga, self.st0, (-2 * ms, 2 * ms))
 
     def test_spiketrain_of_list_type_in_wrong_sense(self):
         st = [10, 11, 12]
-        self.assertRaises(TypeError, sta.spike_triggered_average, 
-            self.asiga0, st, (1 * ms, 2 * ms))
+        self.assertRaises(TypeError, sta.spike_triggered_average,
+                          self.asiga0, st, (1 * ms, 2 * ms))
 
     def test_spiketrain_of_nonlist_and_nonspiketrain_type(self):
         st = (10, 11, 12)
-        self.assertRaises(TypeError, sta.spike_triggered_average, 
-            self.asiga0, st, (1 * ms, 2 * ms))
+        self.assertRaises(TypeError, sta.spike_triggered_average,
+                          self.asiga0, st, (1 * ms, 2 * ms))
 
     def test_forgotten_AnalogSignal_argument(self):
-        self.assertRaises(TypeError, sta.spike_triggered_average, 
-            self.st0, (-2 * ms, 2 * ms))
+        self.assertRaises(TypeError, sta.spike_triggered_average,
+                          self.st0, (-2 * ms, 2 * ms))
 
     def test_one_smaller_nrspiketrains_smaller_nranalogsignals(self):
         '''Number of spiketrains between 1 and number of analogsignals'''
-        self.assertRaises(ValueError, sta.spike_triggered_average, 
-            self.asiga2, self.lst, (-2 * ms, 2 * ms))
+        self.assertRaises(ValueError, sta.spike_triggered_average,
+                          self.asiga2, self.lst, (-2 * ms, 2 * ms))
 
     def test_more_spiketrains_than_analogsignals_forbidden(self):
-        self.assertRaises(ValueError, sta.spike_triggered_average, 
-            self.asiga0, self.lst, (-2 * ms, 2 * ms))
+        self.assertRaises(ValueError, sta.spike_triggered_average,
+                          self.asiga0, self.lst, (-2 * ms, 2 * ms))
 
     def test_spike_earlier_than_analogsignal(self):
         st = SpikeTrain([-1 * math.pi, 2 * math.pi],
-            units='ms', t_start=-2 * math.pi, t_stop=20 * math.pi)
-        self.assertRaises(ValueError, sta.spike_triggered_average, 
-            self.asiga0, st, (-2 * ms, 2 * ms))
+                        units='ms', t_start=-2 * math.pi, t_stop=20 * math.pi)
+        self.assertRaises(ValueError, sta.spike_triggered_average,
+                          self.asiga0, st, (-2 * ms, 2 * ms))
 
     def test_spike_later_than_analogsignal(self):
         st = SpikeTrain(
             [math.pi, 21 * math.pi], units='ms', t_stop=25 * math.pi)
-        self.assertRaises(ValueError, sta.spike_triggered_average, 
-            self.asiga0, st, (-2 * ms, 2 * ms))
+        self.assertRaises(ValueError, sta.spike_triggered_average,
+                          self.asiga0, st, (-2 * ms, 2 * ms))
 
     def test_impossible_window(self):
-        self.assertRaises(ValueError, sta.spike_triggered_average, 
-            self.asiga0, self.st0, (-2 * ms, -5 * ms))
+        self.assertRaises(ValueError, sta.spike_triggered_average,
+                          self.asiga0, self.st0, (-2 * ms, -5 * ms))
 
     def test_window_larger_than_signal(self):
-        self.assertRaises(ValueError, sta.spike_triggered_average,
-            self.asiga0, self.st0, (-15 * math.pi * ms, 15 * math.pi * ms))
+        self.assertRaises(
+            ValueError,
+            sta.spike_triggered_average,
+            self.asiga0,
+            self.st0,
+            (-15 * math.pi * ms,
+             15 * math.pi * ms))
 
     def test_wrong_window_starttime_unit(self):
-        self.assertRaises(TypeError, sta.spike_triggered_average, 
-            self.asiga0, self.st0, (-2 * mV, 2 * ms))
+        self.assertRaises(TypeError, sta.spike_triggered_average,
+                          self.asiga0, self.st0, (-2 * mV, 2 * ms))
 
     def test_wrong_window_endtime_unit(self):
-        self.assertRaises(TypeError, sta.spike_triggered_average, 
-            self.asiga0, self.st0, (-2 * ms, 2 * Hz))
+        self.assertRaises(TypeError, sta.spike_triggered_average,
+                          self.asiga0, self.st0, (-2 * ms, 2 * Hz))
 
     def test_window_borders_as_complex_numbers(self):
-        self.assertRaises(TypeError, sta.spike_triggered_average, self.asiga0,
-            self.st0, ((-2 * math.pi + 3j) * ms, (2 * math.pi + 3j) * ms))
-
-    #***********************************************************************
-    #**** Test for an empty value (where the argument is a list, array, ****
-    #********* vector or other container datatype). ************************
+        self.assertRaises(
+            TypeError,
+            sta.spike_triggered_average,
+            self.asiga0,
+            self.st0,
+            ((-2 * math.pi + 3j) * ms,
+             (2 * math.pi + 3j) * ms))
+
+    # ***********************************************************************
+    # **** Test for an empty value (where the argument is a list, array, ****
+    # ********* vector or other container datatype). ************************
 
     def test_empty_analogsignal(self):
         asiga = AnalogSignal([], units='mV', sampling_rate=10 / ms)
         st = SpikeTrain([5], units='ms', t_stop=10)
-        self.assertRaises(ValueError, sta.spike_triggered_average, 
-            asiga, st, (-1 * ms, 1 * ms))
+        self.assertRaises(ValueError, sta.spike_triggered_average,
+                          asiga, st, (-1 * ms, 1 * ms))
 
     def test_one_spiketrain_empty(self):
         '''Test for one empty SpikeTrain, but existing spikes in other'''
         st = [SpikeTrain(
-            [9 * math.pi, 10 * math.pi, 11 * math.pi, 12 * math.pi], 
-            units='ms', t_stop=self.asiga1.t_stop), 
+            [9 * math.pi, 10 * math.pi, 11 * math.pi, 12 * math.pi],
+            units='ms', t_stop=self.asiga1.t_stop),
             SpikeTrain([], units='ms', t_stop=self.asiga1.t_stop)]
-        STA = sta.spike_triggered_average(self.asiga1, st, (-1 * ms, 1 * ms))
-        cmp_array = AnalogSignal(np.array([np.zeros(20, dtype=float)]).T,
-            units='mV', sampling_rate=10 / ms)
-        cmp_array = cmp_array / 0.
-        cmp_array.t_start = -1 * ms
-        assert_array_equal(STA.magnitude[:, 1], cmp_array.magnitude[:, 0])
+        with warnings.catch_warnings():
+            # Ignore the "RuntimeWarning: invalid value encountered in
+            # true_divide" raised when averaging over the empty SpikeTrain.
+            warnings.simplefilter("ignore")
+            STA = sta.spike_triggered_average(self.asiga1,
+                                              spiketrains=st,
+                                              window=(-1 * ms, 1 * ms))
+        self.assertTrue(np.isnan(STA.magnitude[:, 1]).all())
 
     def test_all_spiketrains_empty(self):
         st = SpikeTrain([], units='ms', t_stop=self.asiga1.t_stop)
@@ -200,7 +218,7 @@ class sta_TestCase(unittest.TestCase):
             nan_array = np.empty(20)
             nan_array.fill(np.nan)
             cmp_array = AnalogSignal(np.array([nan_array, nan_array]).T,
-                units='mV', sampling_rate=10 / ms)
+                                     units='mV', sampling_rate=10 / ms)
             assert_array_equal(STA.magnitude, cmp_array.magnitude)
 
 
@@ -209,7 +227,7 @@ class sta_TestCase(unittest.TestCase):
 # =========================================================================
 
 @unittest.skipIf(not hasattr(scipy.signal, 'coherence'), "Please update scipy "
-                                                        "to a version >= 0.16")
+                 "to a version >= 0.16")
 class sfc_TestCase_new_scipy(unittest.TestCase):
 
     def setUp(self):
@@ -226,7 +244,7 @@ class sfc_TestCase_new_scipy(unittest.TestCase):
         self.st0 = SpikeTrain(
             np.arange(0, tlen0.rescale(pq.ms).magnitude, 50) * pq.ms,
             t_start=0 * pq.ms, t_stop=tlen0)
-        self.bst0 = BinnedSpikeTrain(self.st0, binsize=fs0)
+        self.bst0 = BinnedSpikeTrain(self.st0, bin_size=fs0)
 
         # shortened analogsignals
         self.anasig1 = self.anasig0.time_slice(1 * pq.s, None)
@@ -239,7 +257,7 @@ class sfc_TestCase_new_scipy(unittest.TestCase):
             units=pq.mV, t_start=0 * pq.ms, sampling_period=fs1)
         self.bst1 = BinnedSpikeTrain(
             self.st0.time_slice(self.anasig3.t_start, self.anasig3.t_stop),
-            binsize=fs1)
+            bin_size=fs1)
 
         # analogsignal containing multiple traces
         self.anasig4 = AnalogSignal(
@@ -255,21 +273,24 @@ class sfc_TestCase_new_scipy(unittest.TestCase):
                 (tlen0.rescale(pq.ms).magnitude * .25),
                 (tlen0.rescale(pq.ms).magnitude * .75), 50) * pq.ms,
             t_start=0 * pq.ms, t_stop=tlen0)
-        self.bst3 = BinnedSpikeTrain(self.st3, binsize=fs0)
+        self.bst3 = BinnedSpikeTrain(self.st3, bin_size=fs0)
 
         self.st4 = SpikeTrain(np.arange(
             (tlen0.rescale(pq.ms).magnitude * .25),
             (tlen0.rescale(pq.ms).magnitude * .75), 50) * pq.ms,
             t_start=5 * fs0, t_stop=tlen0 - 5 * fs0)
-        self.bst4 = BinnedSpikeTrain(self.st4, binsize=fs0)
+        self.bst4 = BinnedSpikeTrain(self.st4, bin_size=fs0)
 
-        # spike train with incompatible binsize
-        self.bst5 = BinnedSpikeTrain(self.st3, binsize=fs0 * 2.)
+        # spike train with incompatible bin_size
+        self.bst5 = BinnedSpikeTrain(self.st3, bin_size=fs0 * 2.)
 
-        # spike train with same binsize as the analog signal, but with
+        # spike train with same bin_size as the analog signal, but with
         # bin edges not aligned to the time axis of the analog signal
         self.bst6 = BinnedSpikeTrain(
-            self.st3, binsize=fs0, t_start=4.5 * fs0, t_stop=tlen0 - 4.5 * fs0)
+            self.st3,
+            bin_size=fs0,
+            t_start=4.5 * fs0,
+            t_stop=tlen0 - 4.5 * fs0)
 
     # =========================================================================
     # Tests for correct input handling
@@ -284,7 +305,7 @@ class sfc_TestCase_new_scipy(unittest.TestCase):
                           self.anasig0, [1, 2, 3])
         self.assertRaises(ValueError,
                           sta.spike_field_coherence,
-                          self.anasig0.duplicate_with_new_array([]), self.bst0)
+                          self.anasig0.duplicate_with_new_data([]), self.bst0)
 
     def test_start_stop_times_out_of_range(self):
         self.assertRaises(ValueError,
@@ -301,8 +322,8 @@ class sfc_TestCase_new_scipy(unittest.TestCase):
                           self.anasig0, self.bst1)
 
     def test_incompatible_spiketrain_analogsignal(self):
-        # These spike trains have incompatible binning (binsize or alignment to
-        # time axis of analog signal)
+        # These spike trains have incompatible binning (bin_size or alignment
+        # to time axis of analog signal)
         self.assertRaises(ValueError,
                           sta.spike_field_coherence,
                           self.anasig0, self.bst5)
@@ -345,8 +366,15 @@ class sfc_TestCase_new_scipy(unittest.TestCase):
 
     def test_spike_field_coherence_perfect_coherence(self):
         # check for detection of 20Hz peak in anasig0/bst0
-        s, f = sta.spike_field_coherence(
-            self.anasig0, self.bst0, window='boxcar')
+        with warnings.catch_warnings():
+            # When the binned spike train contains zeros, ignore the
+            # "RuntimeWarning: invalid value encountered in true_divide"
+            # from Cxy = np.abs(Pxy)**2 / Pxx / Pyy.
+            warnings.simplefilter("ignore")
+            s, f = sta.spike_field_coherence(
+                self.anasig0, self.bst0, window='boxcar')
 
         f_ind = np.where(f >= 19.)[0][0]
         max_ind = np.argmax(s[1:]) + 1
@@ -361,21 +389,30 @@ class sfc_TestCase_new_scipy(unittest.TestCase):
         # check number of frequency samples
         self.assertEqual(len(f), nfft / 2 + 1)
 
+        f_max = self.anasig3.sampling_rate.rescale('Hz').magnitude / 2
+        f_ground_truth = np.linspace(start=0,
+                                     stop=f_max,
+                                     num=nfft // 2 + 1) * pq.Hz
+
         # check values of frequency samples
-        assert_array_almost_equal(
-            f, np.linspace(
-                0, self.anasig3.sampling_rate.rescale('Hz').magnitude / 2,
-                nfft / 2 + 1) * pq.Hz)
+        assert_array_almost_equal(f, f_ground_truth)
 
     def test_short_spiketrain(self):
         # this spike train has the same length as anasig0
-        s1, f1 = sta.spike_field_coherence(
-            self.anasig0, self.bst3, window='boxcar')
-
-        # this spike train has the same spikes as above, but is shorter than
-        # anasig0
-        s2, f2 = sta.spike_field_coherence(
-            self.anasig0, self.bst4, window='boxcar')
+        with warnings.catch_warnings():
+            # When the binned spike train contains zeros, ignore the
+            # "RuntimeWarning: invalid value encountered in true_divide"
+            # from Cxy = np.abs(Pxy)**2 / Pxx / Pyy.
+            warnings.simplefilter("ignore")
+            s1, f1 = sta.spike_field_coherence(
+                self.anasig0, self.bst3, window='boxcar')
+
+            # this spike train has the same spikes as above,
+            # but it's shorter than anasig0
+            s2, f2 = sta.spike_field_coherence(
+                self.anasig0, self.bst4, window='boxcar')
 
         # the results above should be the same, nevertheless
         assert_array_equal(s1.magnitude, s2.magnitude)
@@ -404,11 +441,12 @@ class sfc_TestCase_old_scipy(unittest.TestCase):
         self.st0 = SpikeTrain(
             np.arange(0, tlen0.rescale(pq.ms).magnitude, 50) * pq.ms,
             t_start=0 * pq.ms, t_stop=tlen0)
-        self.bst0 = BinnedSpikeTrain(self.st0, binsize=fs0)
+        self.bst0 = BinnedSpikeTrain(self.st0, bin_size=fs0)
 
-        def test_old_scipy_version(self):
-            self.assertRaises(AttributeError,  sta.spike_field_coherence,
-                    self.anasig0, self.bst0)
+    def test_old_scipy_version(self):
+        self.assertRaises(AttributeError, sta.spike_field_coherence,
+                          self.anasig0, self.bst0)
+
 
 if __name__ == '__main__':
     unittest.main()

+ 431 - 213
code/elephant/elephant/test/test_statistics.py

@@ -2,22 +2,26 @@
 """
 Unit tests for the statistics module.
 
-:copyright: Copyright 2014-2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2014-2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 from __future__ import division
 
+import math
+import sys
 import unittest
 
 import neo
 import numpy as np
-from numpy.testing.utils import assert_array_almost_equal, assert_array_equal
 import quantities as pq
 import scipy.integrate as spint
+from numpy.testing.utils import assert_array_almost_equal, assert_array_equal
 
-import elephant.statistics as es
 import elephant.kernels as kernels
-import warnings
+from elephant import statistics
+from elephant.spike_train_generation import homogeneous_poisson_process
+
+python_version_major = sys.version_info.major
 
 
 class isi_TestCase(unittest.TestCase):
@@ -25,7 +29,7 @@ class isi_TestCase(unittest.TestCase):
         self.test_array_2d = np.array([[0.3, 0.56, 0.87, 1.23],
                                        [0.02, 0.71, 1.82, 8.46],
                                        [0.03, 0.14, 0.15, 0.92]])
-        self.targ_array_2d_0 = np.array([[-0.28,  0.15,  0.95,  7.23],
+        self.targ_array_2d_0 = np.array([[-0.28, 0.15, 0.95, 7.23],
                                          [0.01, -0.57, -1.67, -7.54]])
         self.targ_array_2d_1 = np.array([[0.26, 0.31, 0.36],
                                          [0.69, 1.11, 6.64],
@@ -39,40 +43,40 @@ class isi_TestCase(unittest.TestCase):
         st = neo.SpikeTrain(
             self.test_array_1d, units='ms', t_stop=10.0, t_start=0.29)
         target = pq.Quantity(self.targ_array_1d, 'ms')
-        res = es.isi(st)
+        res = statistics.isi(st)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_isi_with_quantities_1d(self):
         st = pq.Quantity(self.test_array_1d, units='ms')
         target = pq.Quantity(self.targ_array_1d, 'ms')
-        res = es.isi(st)
+        res = statistics.isi(st)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_isi_with_plain_array_1d(self):
         st = self.test_array_1d
         target = self.targ_array_1d
-        res = es.isi(st)
+        res = statistics.isi(st)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_isi_with_plain_array_2d_default(self):
         st = self.test_array_2d
         target = self.targ_array_2d_default
-        res = es.isi(st)
+        res = statistics.isi(st)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_isi_with_plain_array_2d_0(self):
         st = self.test_array_2d
         target = self.targ_array_2d_0
-        res = es.isi(st, axis=0)
+        res = statistics.isi(st, axis=0)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_isi_with_plain_array_2d_1(self):
         st = self.test_array_2d
         target = self.targ_array_2d_1
-        res = es.isi(st, axis=1)
+        res = statistics.isi(st, axis=1)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
@@ -82,15 +86,15 @@ class isi_cv_TestCase(unittest.TestCase):
         self.test_array_regular = np.arange(1, 6)
 
     def test_cv_isi_regular_spiketrain_is_zero(self):
-        st = neo.SpikeTrain(self.test_array_regular,  units='ms', t_stop=10.0)
+        st = neo.SpikeTrain(self.test_array_regular, units='ms', t_stop=10.0)
         targ = 0.0
-        res = es.cv(es.isi(st))
+        res = statistics.cv(statistics.isi(st))
         self.assertEqual(res, targ)
 
     def test_cv_isi_regular_array_is_zero(self):
         st = self.test_array_regular
         targ = 0.0
-        res = es.cv(es.isi(st))
+        res = statistics.cv(statistics.isi(st))
         self.assertEqual(res, targ)
 
 
@@ -115,120 +119,144 @@ class mean_firing_rate_TestCase(unittest.TestCase):
         self.targ_array_1d = self.targ_array_2d_1[0]
         self.max_array_1d = self.max_array_2d_1[0]
 
+    def test_invalid_input_spiketrain(self):
+        # empty spiketrain
+        self.assertRaises(ValueError, statistics.mean_firing_rate, [])
+        for st_invalid in (None, 0.1):
+            self.assertRaises(TypeError, statistics.mean_firing_rate,
+                              st_invalid)
+
     def test_mean_firing_rate_with_spiketrain(self):
         st = neo.SpikeTrain(self.test_array_1d, units='ms', t_stop=10.0)
-        target = pq.Quantity(self.targ_array_1d/10., '1/ms')
-        res = es.mean_firing_rate(st)
+        target = pq.Quantity(self.targ_array_1d / 10., '1/ms')
+        res = statistics.mean_firing_rate(st)
         assert_array_almost_equal(res, target, decimal=9)
 
+    def test_mean_firing_rate_typical_use_case(self):
+        np.random.seed(92)
+        st = homogeneous_poisson_process(rate=100 * pq.Hz, t_stop=100 * pq.s)
+        rate1 = statistics.mean_firing_rate(st)
+        rate2 = statistics.mean_firing_rate(st, t_start=st.t_start,
+                                            t_stop=st.t_stop)
+        self.assertEqual(rate1.units, rate2.units)
+        self.assertAlmostEqual(rate1.item(), rate2.item())
+
     def test_mean_firing_rate_with_spiketrain_set_ends(self):
         st = neo.SpikeTrain(self.test_array_1d, units='ms', t_stop=10.0)
-        target = pq.Quantity(2/0.5, '1/ms')
-        res = es.mean_firing_rate(st, t_start=0.4, t_stop=0.9)
+        target = pq.Quantity(2 / 0.5, '1/ms')
+        res = statistics.mean_firing_rate(st, t_start=0.4 * pq.ms,
+                                          t_stop=0.9 * pq.ms)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_quantities_1d(self):
         st = pq.Quantity(self.test_array_1d, units='ms')
-        target = pq.Quantity(self.targ_array_1d/self.max_array_1d, '1/ms')
-        res = es.mean_firing_rate(st)
+        target = pq.Quantity(self.targ_array_1d / self.max_array_1d, '1/ms')
+        res = statistics.mean_firing_rate(st)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_quantities_1d_set_ends(self):
         st = pq.Quantity(self.test_array_1d, units='ms')
-        target = pq.Quantity(2/0.6, '1/ms')
-        res = es.mean_firing_rate(st, t_start=400*pq.us, t_stop=1.)
-        assert_array_almost_equal(res, target, decimal=9)
+
+        # t_stop is not a Quantity
+        self.assertRaises(TypeError, statistics.mean_firing_rate, st,
+                          t_start=400 * pq.us, t_stop=1.)
+
+        # t_start is not a Quantity
+        self.assertRaises(TypeError, statistics.mean_firing_rate, st,
+                          t_start=0.4, t_stop=1. * pq.ms)
 
     def test_mean_firing_rate_with_plain_array_1d(self):
         st = self.test_array_1d
-        target = self.targ_array_1d/self.max_array_1d
-        res = es.mean_firing_rate(st)
+        target = self.targ_array_1d / self.max_array_1d
+        res = statistics.mean_firing_rate(st)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_1d_set_ends(self):
         st = self.test_array_1d
-        target = self.targ_array_1d/(1.23-0.3)
-        res = es.mean_firing_rate(st, t_start=0.3, t_stop=1.23)
+        target = self.targ_array_1d / (1.23 - 0.3)
+        res = statistics.mean_firing_rate(st, t_start=0.3, t_stop=1.23)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_2d_default(self):
         st = self.test_array_2d
-        target = self.targ_array_2d_default/self.max_array_2d_default
-        res = es.mean_firing_rate(st)
+        target = self.targ_array_2d_default / self.max_array_2d_default
+        res = statistics.mean_firing_rate(st)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_2d_0(self):
         st = self.test_array_2d
-        target = self.targ_array_2d_0/self.max_array_2d_0
-        res = es.mean_firing_rate(st, axis=0)
+        target = self.targ_array_2d_0 / self.max_array_2d_0
+        res = statistics.mean_firing_rate(st, axis=0)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_2d_1(self):
         st = self.test_array_2d
-        target = self.targ_array_2d_1/self.max_array_2d_1
-        res = es.mean_firing_rate(st, axis=1)
+        target = self.targ_array_2d_1 / self.max_array_2d_1
+        res = statistics.mean_firing_rate(st, axis=1)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_3d_None(self):
         st = self.test_array_3d
-        target = np.sum(self.test_array_3d, None)/5.
-        res = es.mean_firing_rate(st, axis=None, t_stop=5.)
+        target = np.sum(self.test_array_3d, None) / 5.
+        res = statistics.mean_firing_rate(st, axis=None, t_stop=5.)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_3d_0(self):
         st = self.test_array_3d
-        target = np.sum(self.test_array_3d, 0)/5.
-        res = es.mean_firing_rate(st, axis=0, t_stop=5.)
+        target = np.sum(self.test_array_3d, 0) / 5.
+        res = statistics.mean_firing_rate(st, axis=0, t_stop=5.)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_3d_1(self):
         st = self.test_array_3d
-        target = np.sum(self.test_array_3d, 1)/5.
-        res = es.mean_firing_rate(st, axis=1, t_stop=5.)
+        target = np.sum(self.test_array_3d, 1) / 5.
+        res = statistics.mean_firing_rate(st, axis=1, t_stop=5.)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_3d_2(self):
         st = self.test_array_3d
-        target = np.sum(self.test_array_3d, 2)/5.
-        res = es.mean_firing_rate(st, axis=2, t_stop=5.)
+        target = np.sum(self.test_array_3d, 2) / 5.
+        res = statistics.mean_firing_rate(st, axis=2, t_stop=5.)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_2d_1_set_ends(self):
         st = self.test_array_2d
-        target = np.array([4, 1, 3])/(1.23-0.14)
-        res = es.mean_firing_rate(st, axis=1, t_start=0.14, t_stop=1.23)
+        target = np.array([4, 1, 3]) / (1.23 - 0.14)
+        res = statistics.mean_firing_rate(st, axis=1, t_start=0.14,
+                                          t_stop=1.23)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
     def test_mean_firing_rate_with_plain_array_2d_None(self):
         st = self.test_array_2d
-        target = self.targ_array_2d_None/self.max_array_2d_None
-        res = es.mean_firing_rate(st, axis=None)
+        target = self.targ_array_2d_None / self.max_array_2d_None
+        res = statistics.mean_firing_rate(st, axis=None)
         assert not isinstance(res, pq.Quantity)
         assert_array_almost_equal(res, target, decimal=9)
 
-    def test_mean_firing_rate_with_plain_array_and_units_start_stop_typeerror(self):
+    def test_mean_firing_rate_with_plain_array_and_units_start_stop_typeerror(
+            self):
         st = self.test_array_2d
-        self.assertRaises(TypeError, es.mean_firing_rate, st,
+        self.assertRaises(TypeError, statistics.mean_firing_rate, st,
                           t_start=pq.Quantity(0, 'ms'))
-        self.assertRaises(TypeError, es.mean_firing_rate, st,
+        self.assertRaises(TypeError, statistics.mean_firing_rate, st,
                           t_stop=pq.Quantity(10, 'ms'))
-        self.assertRaises(TypeError, es.mean_firing_rate, st,
+        self.assertRaises(TypeError, statistics.mean_firing_rate, st,
                           t_start=pq.Quantity(0, 'ms'),
                           t_stop=pq.Quantity(10, 'ms'))
-        self.assertRaises(TypeError, es.mean_firing_rate, st,
+        self.assertRaises(TypeError, statistics.mean_firing_rate, st,
                           t_start=pq.Quantity(0, 'ms'),
                           t_stop=10.)
-        self.assertRaises(TypeError, es.mean_firing_rate, st,
+        self.assertRaises(TypeError, statistics.mean_firing_rate, st,
                           t_start=0.,
                           t_stop=pq.Quantity(10, 'ms'))
 
@@ -258,121 +286,196 @@ class FanoFactorTestCase(unittest.TestCase):
         # Test with list of spiketrains
         self.assertEqual(
             np.var(self.sp_counts) / np.mean(self.sp_counts),
-            es.fanofactor(self.test_spiketrains))
+            statistics.fanofactor(self.test_spiketrains))
 
         # One spiketrain in list
         st = self.test_spiketrains[0]
-        self.assertEqual(es.fanofactor([st]), 0.0)
+        self.assertEqual(statistics.fanofactor([st]), 0.0)
 
     def test_fanofactor_empty(self):
         # Test with empty list
-        self.assertTrue(np.isnan(es.fanofactor([])))
-        self.assertTrue(np.isnan(es.fanofactor([[]])))
+        self.assertTrue(np.isnan(statistics.fanofactor([])))
+        self.assertTrue(np.isnan(statistics.fanofactor([[]])))
 
         # Test with empty quantity
-        self.assertTrue(np.isnan(es.fanofactor([] * pq.ms)))
+        self.assertTrue(np.isnan(statistics.fanofactor([] * pq.ms)))
 
         # Empty spiketrain
         st = neo.core.SpikeTrain([] * pq.ms, t_start=0 * pq.ms,
                                  t_stop=1.5 * pq.ms)
-        self.assertTrue(np.isnan(es.fanofactor(st)))
+        self.assertTrue(np.isnan(statistics.fanofactor(st)))
 
     def test_fanofactor_spiketrains_same(self):
         # Test with same spiketrains in list
         sts = [self.test_spiketrains[0]] * 3
-        self.assertEqual(es.fanofactor(sts), 0.0)
+        self.assertEqual(statistics.fanofactor(sts), 0.0)
 
     def test_fanofactor_array(self):
-        self.assertEqual(es.fanofactor(self.test_array),
+        self.assertEqual(statistics.fanofactor(self.test_array),
                          np.var(self.sp_counts) / np.mean(self.sp_counts))
 
     def test_fanofactor_array_same(self):
         lst = [self.test_array[0]] * 3
-        self.assertEqual(es.fanofactor(lst), 0.0)
+        self.assertEqual(statistics.fanofactor(lst), 0.0)
 
     def test_fanofactor_quantity(self):
-        self.assertEqual(es.fanofactor(self.test_quantity),
+        self.assertEqual(statistics.fanofactor(self.test_quantity),
                          np.var(self.sp_counts) / np.mean(self.sp_counts))
 
     def test_fanofactor_quantity_same(self):
         lst = [self.test_quantity[0]] * 3
-        self.assertEqual(es.fanofactor(lst), 0.0)
+        self.assertEqual(statistics.fanofactor(lst), 0.0)
 
     def test_fanofactor_list(self):
-        self.assertEqual(es.fanofactor(self.test_list),
+        self.assertEqual(statistics.fanofactor(self.test_list),
                          np.var(self.sp_counts) / np.mean(self.sp_counts))
 
     def test_fanofactor_list_same(self):
         lst = [self.test_list[0]] * 3
-        self.assertEqual(es.fanofactor(lst), 0.0)
+        self.assertEqual(statistics.fanofactor(lst), 0.0)
+
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
+    def test_fanofactor_different_durations(self):
+        st1 = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=4 * pq.s)
+        st2 = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=4.5 * pq.s)
+        self.assertWarns(UserWarning, statistics.fanofactor, (st1, st2))
+
+    def test_fanofactor_wrong_type(self):
+        # warn_tolerance is not a quantity
+        st1 = neo.SpikeTrain([1, 2, 3] * pq.s, t_stop=4 * pq.s)
+        self.assertRaises(TypeError, statistics.fanofactor, [st1],
+                          warn_tolerance=1e-4)
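+
+
+def _fanofactor_reference(spiketrains):
+    # Editor's sketch, not the Elephant implementation: the statistic
+    # exercised by FanoFactorTestCase is the variance-to-mean ratio of the
+    # per-train spike counts, which is 0.0 for identical trains and NaN
+    # for empty input (np.mean of an empty array is NaN).
+    counts = np.array([len(st) for st in spiketrains])
+    return np.var(counts) / np.mean(counts)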
 
 
 class LVTestCase(unittest.TestCase):
     def setUp(self):
-        self.test_seq = [1, 28,  4, 47,  5, 16,  2,  5, 21, 12,
-                         4, 12, 59,  2,  4, 18, 33, 25,  2, 34,
-                         4,  1,  1, 14,  8,  1, 10,  1,  8, 20,
-                         5,  1,  6,  5, 12,  2,  8,  8,  2,  8,
-                         2, 10,  2,  1,  1,  2, 15,  3, 20,  6,
-                         11, 6, 18,  2,  5, 17,  4,  3, 13,  6,
-                         1, 18,  1, 16, 12,  2, 52,  2,  5,  7,
-                         6, 25,  6,  5,  3, 15,  4,  3, 16,  3,
-                         6,  5, 24, 21,  3,  3,  4,  8,  4, 11,
-                         5,  7,  5,  6,  8, 11, 33, 10,  7,  4]
+        self.test_seq = [1, 28, 4, 47, 5, 16, 2, 5, 21, 12,
+                         4, 12, 59, 2, 4, 18, 33, 25, 2, 34,
+                         4, 1, 1, 14, 8, 1, 10, 1, 8, 20,
+                         5, 1, 6, 5, 12, 2, 8, 8, 2, 8,
+                         2, 10, 2, 1, 1, 2, 15, 3, 20, 6,
+                         11, 6, 18, 2, 5, 17, 4, 3, 13, 6,
+                         1, 18, 1, 16, 12, 2, 52, 2, 5, 7,
+                         6, 25, 6, 5, 3, 15, 4, 3, 16, 3,
+                         6, 5, 24, 21, 3, 3, 4, 8, 4, 11,
+                         5, 7, 5, 6, 8, 11, 33, 10, 7, 4]
 
         self.target = 0.971826029994
 
     def test_lv_with_quantities(self):
         seq = pq.Quantity(self.test_seq, units='ms')
-        assert_array_almost_equal(es.lv(seq), self.target, decimal=9)
+        assert_array_almost_equal(statistics.lv(seq), self.target, decimal=9)
 
     def test_lv_with_plain_array(self):
         seq = np.array(self.test_seq)
-        assert_array_almost_equal(es.lv(seq), self.target, decimal=9)
+        assert_array_almost_equal(statistics.lv(seq), self.target, decimal=9)
 
     def test_lv_with_list(self):
         seq = self.test_seq
-        assert_array_almost_equal(es.lv(seq), self.target, decimal=9)
+        assert_array_almost_equal(statistics.lv(seq), self.target, decimal=9)
 
     def test_lv_raise_error(self):
         seq = self.test_seq
-        self.assertRaises(AttributeError, es.lv, [])
-        self.assertRaises(AttributeError, es.lv, 1)
-        self.assertRaises(ValueError, es.lv, np.array([seq, seq]))
+        self.assertRaises(ValueError, statistics.lv, [])
+        self.assertRaises(ValueError, statistics.lv, 1)
+        self.assertRaises(ValueError, statistics.lv, np.array([seq, seq]))
+
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
+    def test_2short_spike_train(self):
+        seq = [1]
+        with self.assertWarns(UserWarning):
+            """
+            Catches UserWarning: Input size is too small. Please provide
+            an input with more than 1 entry.
+            """
+            self.assertTrue(math.isnan(statistics.lv(seq, with_nan=True)))
+
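+
+def _lv_reference(intervals):
+    # Editor's sketch of the statistic under test (Shinomoto's local
+    # variation, which statistics.lv implements):
+    #     LV = 3 / (n - 1) * sum(((I_i - I_{i+1}) / (I_i + I_{i+1})) ** 2)
+    # For the test_seq above this evaluates to ~0.971826029994.
+    intervals = np.asarray(intervals, dtype=float)
+    ratio = np.diff(intervals) / (intervals[:-1] + intervals[1:])
+    return 3. * np.mean(ratio ** 2)
+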
+
+class LVRTestCase(unittest.TestCase):
+    def setUp(self):
+        self.test_seq = [1, 28, 4, 47, 5, 16, 2, 5, 21, 12,
+                         4, 12, 59, 2, 4, 18, 33, 25, 2, 34,
+                         4, 1, 1, 14, 8, 1, 10, 1, 8, 20,
+                         5, 1, 6, 5, 12, 2, 8, 8, 2, 8,
+                         2, 10, 2, 1, 1, 2, 15, 3, 20, 6,
+                         11, 6, 18, 2, 5, 17, 4, 3, 13, 6,
+                         1, 18, 1, 16, 12, 2, 52, 2, 5, 7,
+                         6, 25, 6, 5, 3, 15, 4, 3, 16, 3,
+                         6, 5, 24, 21, 3, 3, 4, 8, 4, 11,
+                         5, 7, 5, 6, 8, 11, 33, 10, 7, 4]
+
+        self.target = 2.1845363464753134
+
+    def test_lvr_with_quantities(self):
+        seq = pq.Quantity(self.test_seq, units='ms')
+        assert_array_almost_equal(statistics.lvr(seq), self.target, decimal=9)
+
+    def test_lvr_with_plain_array(self):
+        seq = np.array(self.test_seq)
+        assert_array_almost_equal(statistics.lvr(seq), self.target, decimal=9)
+
+    def test_lvr_with_list(self):
+        seq = self.test_seq
+        assert_array_almost_equal(statistics.lvr(seq), self.target, decimal=9)
+
+    def test_lvr_raise_error(self):
+        seq = self.test_seq
+        self.assertRaises(ValueError, statistics.lvr, [])
+        self.assertRaises(ValueError, statistics.lvr, 1)
+        self.assertRaises(ValueError, statistics.lvr, np.array([seq, seq]))
+        self.assertRaises(ValueError, statistics.lvr, seq, -1)
+
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
+    def test_lvr_refractoriness_kwarg(self):
+        seq = np.array(self.test_seq)
+        with self.assertWarns(UserWarning):
+            assert_array_almost_equal(statistics.lvr(seq, R=5),
+                                      self.target, decimal=9)
+
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
+    def test_2short_spike_train(self):
+        seq = [1]
+        with self.assertWarns(UserWarning):
+            """
+            Catches UserWarning: Input size is too small. Please provide
+            an input with more than 1 entry.
+            """
+            self.assertTrue(math.isnan(statistics.lvr(seq, with_nan=True)))
+
 
 
 class CV2TestCase(unittest.TestCase):
     def setUp(self):
-        self.test_seq = [1, 28,  4, 47,  5, 16,  2,  5, 21, 12,
-                         4, 12, 59,  2,  4, 18, 33, 25,  2, 34,
-                         4,  1,  1, 14,  8,  1, 10,  1,  8, 20,
-                         5,  1,  6,  5, 12,  2,  8,  8,  2,  8,
-                         2, 10,  2,  1,  1,  2, 15,  3, 20,  6,
-                         11, 6, 18,  2,  5, 17,  4,  3, 13,  6,
-                         1, 18,  1, 16, 12,  2, 52,  2,  5,  7,
-                         6, 25,  6,  5,  3, 15,  4,  3, 16,  3,
-                         6,  5, 24, 21,  3,  3,  4,  8,  4, 11,
-                         5,  7,  5,  6,  8, 11, 33, 10,  7,  4]
+        self.test_seq = [1, 28, 4, 47, 5, 16, 2, 5, 21, 12,
+                         4, 12, 59, 2, 4, 18, 33, 25, 2, 34,
+                         4, 1, 1, 14, 8, 1, 10, 1, 8, 20,
+                         5, 1, 6, 5, 12, 2, 8, 8, 2, 8,
+                         2, 10, 2, 1, 1, 2, 15, 3, 20, 6,
+                         11, 6, 18, 2, 5, 17, 4, 3, 13, 6,
+                         1, 18, 1, 16, 12, 2, 52, 2, 5, 7,
+                         6, 25, 6, 5, 3, 15, 4, 3, 16, 3,
+                         6, 5, 24, 21, 3, 3, 4, 8, 4, 11,
+                         5, 7, 5, 6, 8, 11, 33, 10, 7, 4]
 
         self.target = 1.0022235296529176
 
     def test_cv2_with_quantities(self):
         seq = pq.Quantity(self.test_seq, units='ms')
-        assert_array_almost_equal(es.cv2(seq), self.target, decimal=9)
+        assert_array_almost_equal(statistics.cv2(seq), self.target, decimal=9)
 
     def test_cv2_with_plain_array(self):
         seq = np.array(self.test_seq)
-        assert_array_almost_equal(es.cv2(seq), self.target, decimal=9)
+        assert_array_almost_equal(statistics.cv2(seq), self.target, decimal=9)
 
     def test_cv2_with_list(self):
         seq = self.test_seq
-        assert_array_almost_equal(es.cv2(seq), self.target, decimal=9)
+        assert_array_almost_equal(statistics.cv2(seq), self.target, decimal=9)
 
     def test_cv2_raise_error(self):
         seq = self.test_seq
-        self.assertRaises(AttributeError, es.cv2, [])
-        self.assertRaises(AttributeError, es.cv2, 1)
-        self.assertRaises(AttributeError, es.cv2, np.array([seq, seq]))
+        self.assertRaises(ValueError, statistics.cv2, [])
+        self.assertRaises(ValueError, statistics.cv2, 1)
+        self.assertRaises(ValueError, statistics.cv2, np.array([seq, seq]))
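+
+
+def _cv2_reference(intervals):
+    # Editor's sketch of the statistic under test (Holt et al., 1996),
+    # which statistics.cv2 implements: the mean over consecutive interval
+    # pairs of 2 * |I_{i+1} - I_i| / (I_{i+1} + I_i); ~1.0 for a Poisson
+    # process, ~1.0022 for the test_seq above.
+    intervals = np.asarray(intervals, dtype=float)
+    return np.mean(2. * np.abs(np.diff(intervals)) /
+                   (intervals[:-1] + intervals[1:]))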
 
 
 class RateEstimationTestCase(unittest.TestCase):
@@ -384,34 +487,31 @@ class RateEstimationTestCase(unittest.TestCase):
         self.st_margin = 5.0  # seconds
         self.st_rate = 10.0  # Hertz
 
-        st_num_spikes = np.random.poisson(
-            self.st_rate*(self.st_dur-2*self.st_margin))
-        spike_train = np.random.rand(
-            st_num_spikes) * (self.st_dur-2*self.st_margin) + self.st_margin
-        spike_train.sort()
+        np.random.seed(19)
+        duration_effective = self.st_dur - 2 * self.st_margin
+        st_num_spikes = np.random.poisson(self.st_rate * duration_effective)
+        spike_train = sorted(
+            np.random.rand(st_num_spikes) *
+            duration_effective +
+            self.st_margin)
 
         # convert spike train into neo objects
-        self.spike_train = neo.SpikeTrain(spike_train*pq.s,
-                                          t_start=self.st_tr[0]*pq.s,
-                                          t_stop=self.st_tr[1]*pq.s)
+        self.spike_train = neo.SpikeTrain(spike_train * pq.s,
+                                          t_start=self.st_tr[0] * pq.s,
+                                          t_stop=self.st_tr[1] * pq.s)
 
         # generation of a multiply used specific kernel
-        self.kernel = kernels.TriangularKernel(sigma=0.03*pq.s)
+        self.kernel = kernels.TriangularKernel(sigma=0.03 * pq.s)
 
+    @unittest.skipUnless(python_version_major == 3, "assertWarns requires 3.2")
     def test_instantaneous_rate_and_warnings(self):
         st = self.spike_train
-        sampling_period = 0.01*pq.s
-        with warnings.catch_warnings(record=True) as w:
-            inst_rate = es.instantaneous_rate(
+        sampling_period = 0.01 * pq.s
+        with self.assertWarns(UserWarning):
+            # Catches warning: The width of the kernel was adjusted to a
+            # minimally allowed width.
+            inst_rate = statistics.instantaneous_rate(
                 st, sampling_period, self.kernel, cutoff=0)
-            message1 = "The width of the kernel was adjusted to a minimally " \
-                       "allowed width."
-            message2 = "Instantaneous firing rate approximation contains " \
-                       "negative values, possibly caused due to machine " \
-                       "precision errors."
-            warning_message = [str(m.message) for m in w]
-            self.assertTrue(message1 in warning_message)
-            self.assertTrue(message2 in warning_message)
         self.assertIsInstance(inst_rate, neo.core.AnalogSignal)
         self.assertEqual(
             inst_rate.sampling_period.simplified, sampling_period.simplified)
@@ -421,108 +521,191 @@ class RateEstimationTestCase(unittest.TestCase):
 
     def test_error_instantaneous_rate(self):
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=[1, 2, 3]*pq.s,
-            sampling_period=0.01*pq.ms, kernel=self.kernel)
+            TypeError, statistics.instantaneous_rate,
+            spiketrain=[1, 2, 3] * pq.s,
+            sampling_period=0.01 * pq.ms, kernel=self.kernel)
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=[1, 2, 3],
-            sampling_period=0.01*pq.ms, kernel=self.kernel)
+            TypeError, statistics.instantaneous_rate, spiketrain=[1, 2, 3],
+            sampling_period=0.01 * pq.ms, kernel=self.kernel)
         st = self.spike_train
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=st,
+            TypeError, statistics.instantaneous_rate, spiketrain=st,
             sampling_period=0.01, kernel=self.kernel)
         self.assertRaises(
-            ValueError, es.instantaneous_rate, spiketrain=st,
-            sampling_period=-0.01*pq.ms, kernel=self.kernel)
+            ValueError, statistics.instantaneous_rate, spiketrain=st,
+            sampling_period=-0.01 * pq.ms, kernel=self.kernel)
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=st,
-            sampling_period=0.01*pq.ms, kernel='NONE')
-        self.assertRaises(TypeError, es.instantaneous_rate, self.spike_train,
-                          sampling_period=0.01*pq.s, kernel='wrong_string',
-                          t_start=self.st_tr[0]*pq.s, t_stop=self.st_tr[1]*pq.s,
+            TypeError, statistics.instantaneous_rate, spiketrain=st,
+            sampling_period=0.01 * pq.ms, kernel='NONE')
+        self.assertRaises(TypeError, statistics.instantaneous_rate,
+                          self.spike_train,
+                          sampling_period=0.01 * pq.s, kernel='wrong_string',
+                          t_start=self.st_tr[0] * pq.s,
+                          t_stop=self.st_tr[1] * pq.s,
                           trim=False)
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=st,
-            sampling_period=0.01*pq.ms, kernel=self.kernel, cutoff=20*pq.ms)
+            TypeError, statistics.instantaneous_rate, spiketrain=st,
+            sampling_period=0.01 * pq.ms, kernel=self.kernel,
+            cutoff=20 * pq.ms)
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=st,
-            sampling_period=0.01*pq.ms, kernel=self.kernel, t_start=2)
+            TypeError, statistics.instantaneous_rate, spiketrain=st,
+            sampling_period=0.01 * pq.ms, kernel=self.kernel, t_start=2)
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=st,
-            sampling_period=0.01*pq.ms, kernel=self.kernel, t_stop=20*pq.mV)
+            TypeError, statistics.instantaneous_rate, spiketrain=st,
+            sampling_period=0.01 * pq.ms, kernel=self.kernel,
+            t_stop=20 * pq.mV)
         self.assertRaises(
-            TypeError, es.instantaneous_rate, spiketrain=st,
-            sampling_period=0.01*pq.ms, kernel=self.kernel, trim=1)
+            TypeError, statistics.instantaneous_rate, spiketrain=st,
+            sampling_period=0.01 * pq.ms, kernel=self.kernel, trim=1)
 
     def test_rate_estimation_consistency(self):
         """
         Test whether the integral of the rate estimation curve is (almost)
         equal to the number of spikes in the spike train.
         """
-        kernel_types = [obj for obj in kernels.__dict__.values()
-                        if isinstance(obj, type) and
-                        issubclass(obj, kernels.Kernel) and
-                        hasattr(obj, "_evaluate") and
-                        obj is not kernels.Kernel and
-                        obj is not kernels.SymmetricKernel]
-        kernel_list = [kernel_type(sigma=0.5*pq.s, invert=False)
-                       for kernel_type in kernel_types]
-        kernel_resolution = 0.01*pq.s
-        for kernel in kernel_list:
-            rate_estimate_a0 = es.instantaneous_rate(self.spike_train,
-                                                     sampling_period=kernel_resolution,
-                                                     kernel='auto',
-                                                     t_start=self.st_tr[0]*pq.s,
-                                                     t_stop=self.st_tr[1]*pq.s,
-                                                     trim=False)
-
-            rate_estimate0 = es.instantaneous_rate(self.spike_train,
-                                                   sampling_period=kernel_resolution,
-                                                   kernel=kernel)
-
-            rate_estimate1 = es.instantaneous_rate(self.spike_train,
-                                                   sampling_period=kernel_resolution,
-                                                   kernel=kernel,
-                                                   t_start=self.st_tr[0]*pq.s,
-                                                   t_stop=self.st_tr[1]*pq.s,
-                                                   trim=False)
-
-            rate_estimate2 = es.instantaneous_rate(self.spike_train,
-                                                   sampling_period=kernel_resolution,
-                                                   kernel=kernel,
-                                                   t_start=self.st_tr[0]*pq.s,
-                                                   t_stop=self.st_tr[1]*pq.s,
-                                                   trim=True)
-            # test consistency
-            rate_estimate_list = [rate_estimate0, rate_estimate1,
-                                  rate_estimate2, rate_estimate_a0]
-
-            for rate_estimate in rate_estimate_list:
+        kernel_types = tuple(
+            kern_cls for kern_cls in kernels.__dict__.values()
+            if isinstance(kern_cls, type) and
+            issubclass(kern_cls, kernels.Kernel) and
+            kern_cls is not kernels.Kernel and
+            kern_cls is not kernels.SymmetricKernel)
+        kernels_available = [kern_cls(sigma=0.5 * pq.s, invert=False)
+                             for kern_cls in kernel_types]
+        kernels_available.append('auto')
+        kernel_resolution = 0.01 * pq.s
+        for kernel in kernels_available:
+            for center_kernel in (False, True):
+                rate_estimate = statistics.instantaneous_rate(
+                    self.spike_train,
+                    sampling_period=kernel_resolution,
+                    kernel=kernel,
+                    t_start=self.st_tr[0] * pq.s,
+                    t_stop=self.st_tr[1] * pq.s,
+                    trim=False,
+                    center_kernel=center_kernel)
                 num_spikes = len(self.spike_train)
-                auc = spint.cumtrapz(y=rate_estimate.magnitude[:, 0],
-                                     x=rate_estimate.times.rescale('s').magnitude)[-1]
-                self.assertAlmostEqual(num_spikes, auc, delta=0.05*num_spikes)
+                auc = spint.cumtrapz(
+                    y=rate_estimate.magnitude.squeeze(),
+                    x=rate_estimate.times.simplified.magnitude)[-1]
+                self.assertAlmostEqual(num_spikes, auc,
+                                       delta=0.01 * num_spikes)
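+
+    @staticmethod
+    def _auc_reference(rate_estimate):
+        # Editor's sketch (helper name hypothetical, not in Elephant): the
+        # consistency criterion above boils down to the trapezoidal area
+        # under the rate curve matching the number of spikes in the train.
+        return np.trapz(rate_estimate.magnitude.squeeze(),
+                        rate_estimate.times.simplified.magnitude)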
+
+    def test_not_center_kernel(self):
+        # issue 107
+        t_spike = 1 * pq.s
+        st = neo.SpikeTrain([t_spike], t_start=0 * pq.s, t_stop=2 * pq.s,
+                            units=pq.s)
+        kernel = kernels.AlphaKernel(200 * pq.ms)
+        fs = 0.1 * pq.ms
+        rate = statistics.instantaneous_rate(st,
+                                             sampling_period=fs,
+                                             kernel=kernel,
+                                             center_kernel=False)
+        rate_nonzero_index = np.nonzero(rate > 1e-6)[0]
+        # where the mass is concentrated
+        rate_mass = rate.times.rescale(t_spike.units)[rate_nonzero_index]
+        all_after_response_onset = (rate_mass >= t_spike).all()
+        self.assertTrue(all_after_response_onset)
+
+    def test_regression_288(self):
+        np.random.seed(9)
+        sampling_period = 200 * pq.ms
+        spiketrain = homogeneous_poisson_process(10 * pq.Hz,
+                                                 t_start=0 * pq.s,
+                                                 t_stop=10 * pq.s)
+        kernel = kernels.AlphaKernel(sigma=5 * pq.ms, invert=True)
+        rate = statistics.instantaneous_rate(spiketrain,
+                                             sampling_period=sampling_period,
+                                             kernel=kernel)
+        self.assertEqual(
+            len(rate), (spiketrain.t_stop / sampling_period).simplified.item())
+
+        # 3 Hz is not a target value - it only checks that the resulting
+        # rate is non-negative; ideally, for smaller sampling periods, the
+        # integral should match the number of spikes in the spiketrain
+        self.assertGreater(rate.mean(), 3 * pq.Hz)
+
+    def test_spikes_on_edges(self):
+        # this test demonstrates that trimming (convolve 'valid' mode)
+        # removes the edge spikes and thus underestimates the true firing
+        # rate, so the number of spikes in the spiketrain cannot be
+        # reconstructed (see test_rate_estimation_consistency)
+        cutoff = 5
+        sampling_period = 0.01 * pq.s
+        t_spikes = np.array([-cutoff, cutoff]) * pq.s
+        spiketrain = neo.SpikeTrain(t_spikes, t_start=t_spikes[0],
+                                    t_stop=t_spikes[-1])
+        kernel_types = tuple(
+            kern_cls for kern_cls in kernels.__dict__.values()
+            if isinstance(kern_cls, type) and
+            issubclass(kern_cls, kernels.Kernel) and
+            kern_cls is not kernels.Kernel and
+            kern_cls is not kernels.SymmetricKernel)
+        kernels_available = [kern_cls(sigma=1 * pq.s, invert=False)
+                             for kern_cls in kernel_types]
+        for kernel in kernels_available:
+            for center_kernel in (False, True):
+                rate = statistics.instantaneous_rate(
+                    spiketrain,
+                    sampling_period=sampling_period,
+                    kernel=kernel,
+                    cutoff=cutoff, trim=True,
+                    center_kernel=center_kernel)
+                assert_array_almost_equal(rate.magnitude, 0, decimal=3)
+
+    def test_trim_as_convolve_mode(self):
+        cutoff = 5
+        sampling_period = 0.01 * pq.s
+        t_spikes = np.linspace(-cutoff, cutoff, num=(2 * cutoff + 1)) * pq.s
+        spiketrain = neo.SpikeTrain(t_spikes, t_start=t_spikes[0],
+                                    t_stop=t_spikes[-1])
+        kernel = kernels.RectangularKernel(sigma=1 * pq.s)
+        assert cutoff > kernel.min_cutoff, "Choose larger cutoff"
+        kernel_types = tuple(
+            kern_cls for kern_cls in kernels.__dict__.values()
+            if isinstance(kern_cls, type) and
+            issubclass(kern_cls, kernels.SymmetricKernel) and
+            kern_cls is not kernels.SymmetricKernel)
+        kernels_symmetric = [kern_cls(sigma=1 * pq.s, invert=False)
+                             for kern_cls in kernel_types]
+        for kernel in kernels_symmetric:
+            for trim in (False, True):
+                rate_centered = statistics.instantaneous_rate(
+                    spiketrain, sampling_period=sampling_period,
+                    kernel=kernel, cutoff=cutoff, trim=trim)
+
+                rate_convolve = statistics.instantaneous_rate(
+                    spiketrain, sampling_period=sampling_period,
+                    kernel=kernel, cutoff=cutoff, trim=trim,
+                    center_kernel=False)
+                assert_array_almost_equal(rate_centered, rate_convolve)
 
     def test_instantaneous_rate_spiketrainlist(self):
-        st_num_spikes = np.random.poisson(
-            self.st_rate*(self.st_dur-2*self.st_margin))
-        spike_train2 = np.random.rand(
-            st_num_spikes) * (self.st_dur - 2 * self.st_margin) + self.st_margin
-        spike_train2.sort()
+        np.random.seed(19)
+        duration_effective = self.st_dur - 2 * self.st_margin
+        st_num_spikes = np.random.poisson(self.st_rate * duration_effective)
+        spike_train2 = sorted(
+            np.random.rand(st_num_spikes) *
+            duration_effective +
+            self.st_margin)
         spike_train2 = neo.SpikeTrain(spike_train2 * pq.s,
                                       t_start=self.st_tr[0] * pq.s,
                                       t_stop=self.st_tr[1] * pq.s)
-        st_rate_1 = es.instantaneous_rate(self.spike_train,
-                                          sampling_period=0.01*pq.s,
-                                          kernel=self.kernel)
-        st_rate_2 = es.instantaneous_rate(spike_train2,
-                                          sampling_period=0.01*pq.s,
-                                          kernel=self.kernel)
-        combined_rate = es.instantaneous_rate([self.spike_train, spike_train2],
-                                              sampling_period=0.01*pq.s,
-                                              kernel=self.kernel)
+        st_rate_1 = statistics.instantaneous_rate(self.spike_train,
+                                                  sampling_period=0.01 * pq.s,
+                                                  kernel=self.kernel)
+        st_rate_2 = statistics.instantaneous_rate(spike_train2,
+                                                  sampling_period=0.01 * pq.s,
+                                                  kernel=self.kernel)
+        combined_rate = statistics.instantaneous_rate(
+            [self.spike_train, spike_train2],
+            sampling_period=0.01 * pq.s,
+            kernel=self.kernel)
         summed_rate = st_rate_1 + st_rate_2  # equivalent for identical kernels
-        for a, b in zip(combined_rate.magnitude, summed_rate.magnitude):
-            self.assertAlmostEqual(a, b, delta=0.0001)
+        # 'time_vector.dtype' in instantaneous_rate() was changed from
+        # float64 to float32, which results in an absolute difference
+        # of about 3e-6
+        assert_array_almost_equal(combined_rate.magnitude,
+                                  summed_rate.magnitude, decimal=5)
 
     # Regression test for #144
     def test_instantaneous_rate_regression_144(self):
@@ -530,7 +713,37 @@ class RateEstimationTestCase(unittest.TestCase):
         # other, that the optimal kernel cannot be detected. Therefore, the
         # function should react with a ValueError.
         st = neo.SpikeTrain([2.12, 2.13, 2.15] * pq.s, t_stop=10 * pq.s)
-        self.assertRaises(ValueError, es.instantaneous_rate, st, 1 * pq.ms)
+        self.assertRaises(ValueError, statistics.instantaneous_rate, st,
+                          1 * pq.ms)
+
+    # Regression test for #245
+    def test_instantaneous_rate_regression_245(self):
+        # This test makes sure that the correct kernel width is chosen when
+        # selecting 'auto' as kernel
+        spiketrain = neo.SpikeTrain(
+            range(1, 30) * pq.ms, t_start=0 * pq.ms, t_stop=30 * pq.ms)
+
+        # This is the correct procedure to obtain the kernel: first,
+        # optimal_kernel_bandwidth (formerly sskernel) returns the bandwidth
+        # of the optimal Gaussian kernel as its standard deviation sigma;
+        # this value is then used directly to create the Gaussian kernel
+        kernel_width_sigma = statistics.optimal_kernel_bandwidth(
+            spiketrain.magnitude, times=None, bootstrap=False)['optw']
+        kernel = kernels.GaussianKernel(kernel_width_sigma * spiketrain.units)
+        result_target = statistics.instantaneous_rate(
+            spiketrain, 10 * pq.ms, kernel=kernel)
+
+        # Here, we check if the 'auto' argument leads to the same operation. In
+        # the regression, it was incorrectly assumed that the sskernel()
+        # function returns the actual bandwidth of the kernel, which is defined
+        # as approximately bandwidth = sigma * 5.5 = sigma * (2 * 2.75):
+        # the factor 2.0 relates the full kernel width to its half width,
+        # and the factor 2.75 relates the half width of a Gaussian covering
+        # ~99% of its probability mass to its standard deviation.
+        result_automatic = statistics.instantaneous_rate(
+            spiketrain, 10 * pq.ms, kernel='auto')
+
+        assert_array_almost_equal(result_target, result_automatic)
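+
+
+def _sigma_from_full_width(bandwidth):
+    # Editor's sketch of the arithmetic in the comment above (function name
+    # hypothetical): if a value were mistaken for the full ~99%-mass width
+    # of a Gaussian kernel, the corresponding standard deviation would be
+    # width / (2 * 2.75); optimal_kernel_bandwidth already returns sigma.
+    return bandwidth / (2 * 2.75)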
 
 
 class TimeHistogramTestCase(unittest.TestCase):
@@ -549,49 +762,53 @@ class TimeHistogramTestCase(unittest.TestCase):
 
     def test_time_histogram(self):
         targ = np.array([4, 2, 1, 1, 2, 2, 1, 0, 1, 0])
-        histogram = es.time_histogram(self.spiketrains, binsize=pq.s)
+        histogram = statistics.time_histogram(self.spiketrains, bin_size=pq.s)
         assert_array_equal(targ, histogram.magnitude[:, 0])
 
     def test_time_histogram_binary(self):
         targ = np.array([2, 2, 1, 1, 2, 2, 1, 0, 1, 0])
-        histogram = es.time_histogram(self.spiketrains, binsize=pq.s,
-                                      binary=True)
+        histogram = statistics.time_histogram(self.spiketrains, bin_size=pq.s,
+                                              binary=True)
         assert_array_equal(targ, histogram.magnitude[:, 0])
 
     def test_time_histogram_tstart_tstop(self):
         # Start, stop short range
         targ = np.array([2, 1])
-        histogram = es.time_histogram(self.spiketrains, binsize=pq.s,
-                                      t_start=5 * pq.s, t_stop=7 * pq.s)
+        histogram = statistics.time_histogram(self.spiketrains, bin_size=pq.s,
+                                              t_start=5 * pq.s,
+                                              t_stop=7 * pq.s)
         assert_array_equal(targ, histogram.magnitude[:, 0])
 
         # Test without t_stop
         targ = np.array([4, 2, 1, 1, 2, 2, 1, 0, 1, 0])
-        histogram = es.time_histogram(self.spiketrains, binsize=1 * pq.s,
-                                      t_start=0 * pq.s)
+        histogram = statistics.time_histogram(self.spiketrains,
+                                              bin_size=1 * pq.s,
+                                              t_start=0 * pq.s)
         assert_array_equal(targ, histogram.magnitude[:, 0])
 
         # Test without t_start
-        histogram = es.time_histogram(self.spiketrains, binsize=1 * pq.s,
-                                      t_stop=10 * pq.s)
+        histogram = statistics.time_histogram(self.spiketrains,
+                                              bin_size=1 * pq.s,
+                                              t_stop=10 * pq.s)
         assert_array_equal(targ, histogram.magnitude[:, 0])
 
     def test_time_histogram_output(self):
         # Normalization mean
-        histogram = es.time_histogram(self.spiketrains, binsize=pq.s,
-                                      output='mean')
+        histogram = statistics.time_histogram(self.spiketrains, bin_size=pq.s,
+                                              output='mean')
         targ = np.array([4, 2, 1, 1, 2, 2, 1, 0, 1, 0], dtype=float) / 2
         assert_array_equal(targ.reshape(targ.size, 1), histogram.magnitude)
 
         # Normalization rate
-        histogram = es.time_histogram(self.spiketrains, binsize=pq.s,
-                                      output='rate')
+        histogram = statistics.time_histogram(self.spiketrains, bin_size=pq.s,
+                                              output='rate')
         assert_array_equal(histogram.view(pq.Quantity),
                            targ.reshape(targ.size, 1) * 1 / pq.s)
 
         # Normalization unspecified, raises error
-        self.assertRaises(ValueError, es.time_histogram, self.spiketrains,
-                          binsize=pq.s, output=' ')
+        self.assertRaises(ValueError, statistics.time_histogram,
+                          self.spiketrains,
+                          bin_size=pq.s, output=' ')
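+
+
+def _time_histogram_reference(spike_times, t_start, t_stop, bin_size,
+                              output='counts'):
+    # Editor's sketch, not the Elephant implementation (unitless, pooled
+    # across trains): bin the pooled spikes, then normalise according to
+    # `output` as exercised in TimeHistogramTestCase above.
+    edges = np.arange(t_start, t_stop + bin_size, bin_size)
+    counts, _ = np.histogram(np.hstack(spike_times), bins=edges)
+    if output == 'counts':
+        return counts
+    if output == 'mean':
+        # average spike count per train and bin
+        return counts / float(len(spike_times))
+    if output == 'rate':
+        # mean count per bin divided by the bin width
+        return counts / float(len(spike_times)) / bin_size
+    raise ValueError('output should be counts, mean or rate')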
 
 
 class ComplexityPdfTestCase(unittest.TestCase):
@@ -613,12 +830,13 @@ class ComplexityPdfTestCase(unittest.TestCase):
 
     def test_complexity_pdf(self):
         targ = np.array([0.92, 0.01, 0.01, 0.06])
-        complexity = es.complexity_pdf(self.spiketrains, binsize=0.1*pq.s)
+        complexity = statistics.complexity_pdf(self.spiketrains,
+                                               bin_size=0.1 * pq.s)
         assert_array_equal(targ, complexity.magnitude[:, 0])
         self.assertEqual(1, complexity.magnitude[:, 0].sum())
-        self.assertEqual(len(self.spiketrains)+1, len(complexity))
+        self.assertEqual(len(self.spiketrains) + 1, len(complexity))
         self.assertIsInstance(complexity, neo.AnalogSignal)
-        self.assertEqual(complexity.units, 1*pq.dimensionless)
+        self.assertEqual(complexity.units, 1 * pq.dimensionless)
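+
+
+def _complexity_pdf_reference(spike_times, t_start, t_stop, bin_size):
+    # Editor's sketch (unitless; assumed equivalent to what
+    # statistics.complexity_pdf computes): count in every bin how many of
+    # the trains fire at least once, then normalise the histogram of these
+    # "complexities" -- length is len(spike_times) + 1, entries sum to 1.
+    edges = np.arange(t_start, t_stop + bin_size, bin_size)
+    binary = [np.histogram(st, bins=edges)[0] > 0 for st in spike_times]
+    complexity = np.sum(binary, axis=0)
+    pdf = np.bincount(complexity, minlength=len(spike_times) + 1)
+    return pdf / float(complexity.size)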
 
 
 if __name__ == '__main__':

279 278
      code/elephant/elephant/test/test_unitary_event_analysis.py

@@ -1,106 +1,113 @@
 """
 Unit tests for the Unitary Events analysis
 
-:copyright: Copyright 2016 by the Elephant team, see AUTHORS.txt.
+:copyright: Copyright 2016 by the Elephant team, see `doc/authors.rst`.
 :license: Modified BSD, see LICENSE.txt for details.
 """
 
+import os
+import shutil
+import ssl
+import sys
+import types
 import unittest
+
+import neo
 import numpy as np
 import quantities as pq
-import types
-import elephant.unitary_event_analysis as ue
-import neo
-import sys
-import os
+from neo.test.rawiotest.tools import create_local_temp_dir
+from numpy.testing import assert_array_equal
 
-from distutils.version import StrictVersion
+try:
+    from urllib2 import urlopen
+except ImportError:
+    from urllib.request import urlopen
 
 
-def _check_for_incompatibilty():
-    smaller_version = StrictVersion(np.__version__) < '1.10.0'
-    return sys.version_info >= (3, 0) and smaller_version
+import elephant.unitary_event_analysis as ue
+
+python_version_major = sys.version_info.major
 
 
 class UETestCase(unittest.TestCase):
 
     def setUp(self):
-        sts1_with_trial = [[  26.,   48.,   78.,  144.,  178.],
-                           [   4.,   45.,   85.,  123.,  156.,  185.],
-                           [  22.,   53.,   73.,   88.,  120.,  147.,  167.,  193.],
-                           [  23.,   49.,   74.,  116.,  142.,  166.,  189.],
-                           [   5.,   34.,   54.,   80.,  108.,  128.,  150.,  181.],
-                           [  18.,   61.,  107.,  170.],
-                           [  62.,   98.,  131.,  161.],
-                           [  37.,   63.,   86.,  131.,  168.],
-                           [  39.,   76.,  100.,  127.,  153.,  198.],
-                           [   3.,   35.,   60.,   88.,  108.,  141.,  171.,  184.],
-                           [  39.,  170.],
-                           [  25.,   68.,  170.],
-                           [  19.,   57.,   84.,  116.,  157.,  192.],
-                           [  17.,   80.,  131.,  172.],
-                           [  33.,   65.,  124.,  162.,  192.],
-                           [  58.,   87.,  185.],
-                           [  19.,  101.,  174.],
-                           [  84.,  118.,  156.,  198.,  199.],
-                           [   5.,   55.,   67.,   96.,  114.,  148.,  172.,  199.],
-                           [  61.,  105.,  131.,  169.,  195.],
-                           [  26.,   96.,  129.,  157.],
-                           [  41.,   85.,  157.,  199.],
-                           [   6.,   30.,   53.,   76.,  109.,  142.,  167.,  194.],
-                           [ 159.],
-                           [   6.,   51.,   78.,  113.,  154.,  183.],
-                           [ 138.],
-                           [  23.,   59.,  154.,  185.],
-                           [  12.,   14.,   52.,   54.,  109.,  145.,  192.],
-                           [  29.,   61.,   84.,  122.,  145.,  168.],
-                           [ 26.,  99.],
-                           [   3.,   31.,   55.,   85.,  108.,  158.,  191.],
-                           [   5.,   37.,   70.,  119.,  170.],
-                           [  38.,   79.,  117.,  157.,  192.],
-                           [ 174.],
-                           [ 114.],
+        sts1_with_trial = [[26., 48., 78., 144., 178.],
+                           [4., 45., 85., 123., 156., 185.],
+                           [22., 53., 73., 88., 120., 147., 167., 193.],
+                           [23., 49., 74., 116., 142., 166., 189.],
+                           [5., 34., 54., 80., 108., 128., 150., 181.],
+                           [18., 61., 107., 170.],
+                           [62., 98., 131., 161.],
+                           [37., 63., 86., 131., 168.],
+                           [39., 76., 100., 127., 153., 198.],
+                           [3., 35., 60., 88., 108., 141., 171., 184.],
+                           [39., 170.],
+                           [25., 68., 170.],
+                           [19., 57., 84., 116., 157., 192.],
+                           [17., 80., 131., 172.],
+                           [33., 65., 124., 162., 192.],
+                           [58., 87., 185.],
+                           [19., 101., 174.],
+                           [84., 118., 156., 198., 199.],
+                           [5., 55., 67., 96., 114., 148., 172., 199.],
+                           [61., 105., 131., 169., 195.],
+                           [26., 96., 129., 157.],
+                           [41., 85., 157., 199.],
+                           [6., 30., 53., 76., 109., 142., 167., 194.],
+                           [159.],
+                           [6., 51., 78., 113., 154., 183.],
+                           [138.],
+                           [23., 59., 154., 185.],
+                           [12., 14., 52., 54., 109., 145., 192.],
+                           [29., 61., 84., 122., 145., 168.],
+                           [26., 99.],
+                           [3., 31., 55., 85., 108., 158., 191.],
+                           [5., 37., 70., 119., 170.],
+                           [38., 79., 117., 157., 192.],
+                           [174.],
+                           [114.],
                            []]
-        sts2_with_trial = [[   3.,  119.],
-                           [  54.,  155.,  183.],
-                           [  35.,  133.],
-                           [  25.,  100.,  176.],
-                           [  9.,  98.],
-                           [   6.,   97.,  198.],
-                           [   7.,   62.,  148.],
-                           [ 100.,  158.],
-                           [   7.,   62.,  122.,  179.,  191.],
-                           [ 125.,  182.],
-                           [  30.,   55.,  127.,  157.,  196.],
-                           [  27.,   70.,  173.],
-                           [  82.,   84.,  198.],
-                           [  11.,   29.,  137.],
-                           [   5.,   49.,   61.,  101.,  142.,  190.],
-                           [  78.,  162.,  178.],
-                           [  13.,   14.,  130.,  172.],
-                           [ 22.],
-                           [  16.,   55.,  109.,  113.,  175.],
-                           [  17.,   33.,   63.,  102.,  144.,  189.,  190.],
-                           [ 58.],
-                           [  27.,   30.,   99.,  145.,  176.],
-                           [  10.,   58.,  116.,  182.],
-                           [  14.,   68.,  104.,  126.,  162.,  194.],
-                           [  56.,  129.,  196.],
-                           [  50.,   78.,  105.,  152.,  190.,  197.],
-                           [  24.,   66.,  113.,  117.,  161.],
-                           [   9.,   31.,   81.,   95.,  136.,  154.],
-                           [  10.,  115.,  185.,  191.],
-                           [  71.,  140.,  157.],
-                           [  15.,   27.,   88.,  102.,  103.,  151.,  181.,  188.],
-                           [  51.,   75.,   95.,  134.,  195.],
-                           [  18.,   55.,   75.,  131.,  186.],
-                           [  10.,   16.,   41.,   42.,   75.,  127.],
-                           [  62.,   76.,  102.,  145.,  171.,  183.],
-                           [  66.,   71.,   85.,  140.,  154.]]
+        sts2_with_trial = [[3., 119.],
+                           [54., 155., 183.],
+                           [35., 133.],
+                           [25., 100., 176.],
+                           [9., 98.],
+                           [6., 97., 198.],
+                           [7., 62., 148.],
+                           [100., 158.],
+                           [7., 62., 122., 179., 191.],
+                           [125., 182.],
+                           [30., 55., 127., 157., 196.],
+                           [27., 70., 173.],
+                           [82., 84., 198.],
+                           [11., 29., 137.],
+                           [5., 49., 61., 101., 142., 190.],
+                           [78., 162., 178.],
+                           [13., 14., 130., 172.],
+                           [22.],
+                           [16., 55., 109., 113., 175.],
+                           [17., 33., 63., 102., 144., 189., 190.],
+                           [58.],
+                           [27., 30., 99., 145., 176.],
+                           [10., 58., 116., 182.],
+                           [14., 68., 104., 126., 162., 194.],
+                           [56., 129., 196.],
+                           [50., 78., 105., 152., 190., 197.],
+                           [24., 66., 113., 117., 161.],
+                           [9., 31., 81., 95., 136., 154.],
+                           [10., 115., 185., 191.],
+                           [71., 140., 157.],
+                           [15., 27., 88., 102., 103., 151., 181., 188.],
+                           [51., 75., 95., 134., 195.],
+                           [18., 55., 75., 131., 186.],
+                           [10., 16., 41., 42., 75., 127.],
+                           [62., 76., 102., 145., 171., 183.],
+                           [66., 71., 85., 140., 154.]]
         self.sts1_neo = [neo.SpikeTrain(
-            i*pq.ms,t_stop = 200*pq.ms) for i in sts1_with_trial]
+            i * pq.ms, t_stop=200 * pq.ms) for i in sts1_with_trial]
         self.sts2_neo = [neo.SpikeTrain(
-            i*pq.ms,t_stop = 200*pq.ms) for i in sts2_with_trial]
+            i * pq.ms, t_stop=200 * pq.ms) for i in sts2_with_trial]
         self.binary_sts = np.array([[[1, 1, 1, 1, 0],
                                      [0, 1, 1, 1, 0],
                                      [0, 1, 1, 0, 1]],
@@ -109,134 +116,136 @@ class UETestCase(unittest.TestCase):
                                      [1, 1, 0, 1, 0]]])
 
     def test_hash_default(self):
-        m = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,0],
-                      [1,0,1],[0,1,1],[1,1,1]])
-        expected = np.array([77,43,23])
-        h = ue.hash_from_pattern(m, N=8)
+        m = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
+                      [1, 0, 1], [0, 1, 1], [1, 1, 1]])
+        expected = np.array([77, 43, 23])
+        h = ue.hash_from_pattern(m)
         self.assertTrue(np.all(expected == h))
 
     def test_hash_default_longpattern(self):
-        m = np.zeros((100,2))
-        m[0,0] = 1
-        expected = np.array([2**99,0])
-        h = ue.hash_from_pattern(m, N=100)
+        m = np.zeros((100, 2))
+        m[0, 0] = 1
+        expected = np.array([2**99, 0])
+        h = ue.hash_from_pattern(m)
         self.assertTrue(np.all(expected == h))
 
-    def test_hash_ValueError_wrong_orientation(self):
-        m = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,0],
-                      [1,0,1],[0,1,1],[1,1,1]])
-        self.assertRaises(ValueError, ue.hash_from_pattern, m, N=3)
+    def test_hash_inverse_longpattern(self):
+        n_patterns = 100
+        m = np.random.randint(low=0, high=2, size=(n_patterns, 2))
+        h = ue.hash_from_pattern(m)
+        m_inv = ue.inverse_hash_from_pattern(h, N=n_patterns)
+        assert_array_equal(m, m_inv)
 
     def test_hash_ValueError_wrong_entries(self):
-        m = np.array([[0,0,0], [1,0,0], [0,2,0], [0,0,1], [1,1,0],
-                      [1,0,1],[0,1,1],[1,1,1]])
-        self.assertRaises(ValueError, ue.hash_from_pattern, m, N=3)
+        m = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 1], [1, 1, 0],
+                      [1, 0, 1], [0, 1, 1], [1, 1, 1]])
+        self.assertRaises(ValueError, ue.hash_from_pattern, m)
 
     def test_hash_base_not_two(self):
-        m = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,0],
-                      [1,0,1],[0,1,1],[1,1,1]])
+        m = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
+                      [1, 0, 1], [0, 1, 1], [1, 1, 1]])
         m = m.T
         base = 3
-        expected = np.array([0,9,3,1,12,10,4,13])
-        h = ue.hash_from_pattern(m, N=3, base=base)
+        expected = np.array([0, 9, 3, 1, 12, 10, 4, 13])
+        h = ue.hash_from_pattern(m, base=base)
         self.assertTrue(np.all(expected == h))
 
-    ## TODO: write a test for ValueError in inverse_hash_from_pattern
     def test_invhash_ValueError(self):
-        self.assertRaises(ValueError, ue.inverse_hash_from_pattern, [128, 8], 4)
+        """
+        A hash larger than 2 ** N - 1 = sum(2 ** i for i in range(N))
+        must raise a ValueError.
+        """
+        self.assertRaises(
+            ValueError, ue.inverse_hash_from_pattern, [128, 8], 4)
 
     def test_invhash_default_base(self):
         N = 3
         h = np.array([0, 4, 2, 1, 6, 5, 3, 7])
-        expected = np.array([[0, 1, 0, 0, 1, 1, 0, 1],[0, 0, 1, 0, 1, 0, 1, 1],[0, 0, 0, 1, 0, 1, 1, 1]])
+        expected = np.array(
+            [[0, 1, 0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 1, 0, 1, 1],
+             [0, 0, 0, 1, 0, 1, 1, 1]])
         m = ue.inverse_hash_from_pattern(h, N)
         self.assertTrue(np.all(expected == m))
 
     def test_invhash_base_not_two(self):
         N = 3
-        h = np.array([1,4,13])
+        h = np.array([1, 4, 13])
         base = 3
-        expected = np.array([[0,0,1],[0,1,1],[1,1,1]])
+        expected = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]])
         m = ue.inverse_hash_from_pattern(h, N, base)
         self.assertTrue(np.all(expected == m))
 
     def test_invhash_shape_mat(self):
         N = 8
         h = np.array([178, 212, 232])
-        expected = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,0],[1,0,1],[0,1,1],[1,1,1]])
+        expected = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
+                             [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
         m = ue.inverse_hash_from_pattern(h, N)
         self.assertTrue(np.shape(m)[0] == N)
 
     def test_hash_invhash_consistency(self):
-        m = np.array([[0, 0, 0],[1, 0, 0],[0, 1, 0],[0, 0, 1],[1, 1, 0],[1, 0, 1],[0, 1, 1],[1, 1, 1]])
-        inv_h = ue.hash_from_pattern(m, N=8)
-        m1 = ue.inverse_hash_from_pattern(inv_h, N = 8)
+        m = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
+                      [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
+        inv_h = ue.hash_from_pattern(m)
+        m1 = ue.inverse_hash_from_pattern(inv_h, N=8)
         self.assertTrue(np.all(m == m1))
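+
+    @staticmethod
+    def _hash_reference(m, base=2):
+        # Editor's sketch (helper is hypothetical, not part of ue): each
+        # column of the pattern matrix -- one neuron per row, first row
+        # most significant -- is read as a base-`base` number, e.g. the
+        # default test matrix above hashes to [77, 43, 23].
+        m = np.asarray(m)
+        weights = base ** np.arange(m.shape[0] - 1, -1, -1)
+        return weights.dot(m)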
 
     def test_n_emp_mat_default(self):
-        mat = np.array([[0, 0, 0, 1, 1],[0, 0, 0, 0, 1],[1, 0, 1, 1, 1],[1, 0, 1, 1, 1]])
-        N = 4
+        mat = np.array([[0, 0, 0, 1, 1], [0, 0, 0, 0, 1],
+                        [1, 0, 1, 1, 1], [1, 0, 1, 1, 1]])
         pattern_hash = [3, 15]
-        expected1 = np.array([ 2.,  1.])
+        expected1 = np.array([2., 1.])
         expected2 = [[0, 2], [4]]
-        nemp,nemp_indices = ue.n_emp_mat(mat,N,pattern_hash)
+        nemp, nemp_indices = ue.n_emp_mat(mat, pattern_hash)
         self.assertTrue(np.all(nemp == expected1))
-        for item_cnt,item in enumerate(nemp_indices):
-            self.assertTrue(np.allclose(expected2[item_cnt],item))
+        for item_cnt, item in enumerate(nemp_indices):
+            self.assertTrue(np.allclose(expected2[item_cnt], item))
 
     def test_n_emp_mat_sum_trial_default(self):
         mat = self.binary_sts
-        pattern_hash = np.array([4,6])
+        pattern_hash = np.array([4, 6])
         N = 3
-        expected1 = np.array([ 1.,  3.])
-        expected2 = [[[0], [3]],[[],[2,4]]]
-        n_emp, n_emp_idx = ue.n_emp_mat_sum_trial(mat, N,pattern_hash)
+        expected1 = np.array([1., 3.])
+        expected2 = [[[0], [3]], [[], [2, 4]]]
+        n_emp, n_emp_idx = ue.n_emp_mat_sum_trial(mat, pattern_hash)
         self.assertTrue(np.all(n_emp == expected1))
-        for item0_cnt,item0 in enumerate(n_emp_idx):
-            for item1_cnt,item1 in enumerate(item0):
-                self.assertTrue(np.allclose(expected2[item0_cnt][item1_cnt],item1))
-
-    def test_n_emp_mat_sum_trial_ValueError(self):
-        mat = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,0],
-                      [1,0,1],[0,1,1],[1,1,1]])
-        self.assertRaises(ValueError,ue.n_emp_mat_sum_trial,mat,N=2,pattern_hash = [3,6])
+        for item0_cnt, item0 in enumerate(n_emp_idx):
+            for item1_cnt, item1 in enumerate(item0):
+                self.assertTrue(
+                    np.allclose(expected2[item0_cnt][item1_cnt], item1))
 
     def test_n_exp_mat_default(self):
-        mat = np.array([[0, 0, 0, 1, 1],[0, 0, 0, 0, 1],[1, 0, 1, 1, 1],[1, 0, 1, 1, 1]])
-        N = 4
+        mat = np.array([[0, 0, 0, 1, 1], [0, 0, 0, 0, 1],
+                        [1, 0, 1, 1, 1], [1, 0, 1, 1, 1]])
         pattern_hash = [3, 11]
-        expected = np.array([ 1.536,  1.024])
-        nexp = ue.n_exp_mat(mat,N,pattern_hash)
-        self.assertTrue(np.allclose(expected,nexp))
+        expected = np.array([1.536, 1.024])
+        nexp = ue.n_exp_mat(mat, pattern_hash)
+        self.assertTrue(np.allclose(expected, nexp))
 
     def test_n_exp_mat_sum_trial_default(self):
         mat = self.binary_sts
-        pattern_hash = np.array([5,6])
-        N = 3
-        expected = np.array([ 1.56,  2.56])
-        n_exp = ue.n_exp_mat_sum_trial(mat, N,pattern_hash)
-        self.assertTrue(np.allclose(n_exp,expected))
+        pattern_hash = np.array([5, 6])
+        expected = np.array([1.56, 2.56])
+        n_exp = ue.n_exp_mat_sum_trial(mat, pattern_hash)
+        self.assertTrue(np.allclose(n_exp, expected))
 
     def test_n_exp_mat_sum_trial_TrialAverage(self):
         mat = self.binary_sts
-        pattern_hash = np.array([5,6])
-        N = 3
-        expected = np.array([ 1.62,  2.52])
-        n_exp = ue.n_exp_mat_sum_trial(mat, N, pattern_hash, method='analytic_TrialAverage')
-        self.assertTrue(np.allclose(n_exp,expected))
+        pattern_hash = np.array([5, 6])
+        expected = np.array([1.62, 2.52])
+        n_exp = ue.n_exp_mat_sum_trial(
+            mat, pattern_hash, method='analytic_TrialAverage')
+        self.assertTrue(np.allclose(n_exp, expected))
 
     def test_n_exp_mat_sum_trial_surrogate(self):
         mat = self.binary_sts
         pattern_hash = np.array([5])
-        N = 3
-        n_exp_anal = ue.n_exp_mat_sum_trial(mat, N, pattern_hash, method='analytic_TrialAverage')
-        n_exp_surr = ue.n_exp_mat_sum_trial(mat, N, pattern_hash, method='surrogate_TrialByTrial',n_surr = 1000)
-        self.assertLess((np.abs(n_exp_anal[0]-np.mean(n_exp_surr))/n_exp_anal[0]),0.1)
-
-    def test_n_exp_mat_sum_trial_ValueError(self):
-        mat = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1], [1,1,0],
-                      [1,0,1],[0,1,1],[1,1,1]])
-        self.assertRaises(ValueError,ue.n_exp_mat_sum_trial,mat,N=2,pattern_hash = [3,6])
+        n_exp_anal = ue.n_exp_mat_sum_trial(
+            mat, pattern_hash, method='analytic_TrialAverage')
+        n_exp_surr = ue.n_exp_mat_sum_trial(
+            mat, pattern_hash, method='surrogate_TrialByTrial', n_surr=1000)
+        self.assertLess(
+            a=np.abs(n_exp_anal[0] - np.mean(n_exp_surr)) / n_exp_anal[0],
+            b=0.1)
 
     def test_gen_pval_anal_default(self):
         mat = np.array([[[1, 1, 1, 1, 0],
@@ -246,110 +255,116 @@ class UETestCase(unittest.TestCase):
                         [[1, 1, 1, 1, 1],
                          [0, 1, 1, 1, 1],
                          [1, 1, 0, 1, 0]]])
-        pattern_hash = np.array([5,6])
-        N = 3
-        expected = np.array([ 1.56,  2.56])
-        pval_func,n_exp = ue.gen_pval_anal(mat, N,pattern_hash)
-        self.assertTrue(np.allclose(n_exp,expected))
+        pattern_hash = np.array([5, 6])
+        expected = np.array([1.56, 2.56])
+        pval_func, n_exp = ue.gen_pval_anal(mat, pattern_hash)
+        self.assertTrue(np.allclose(n_exp, expected))
         self.assertTrue(isinstance(pval_func, types.FunctionType))
 
     def test_jointJ_default(self):
-        p_val = np.array([0.31271072,  0.01175031])
-        expected = np.array([0.3419968 ,  1.92481736])
-        self.assertTrue(np.allclose(ue.jointJ(p_val),expected))
+        p_val = np.array([0.31271072, 0.01175031])
+        expected = np.array([0.3419968, 1.92481736])
+        self.assertTrue(np.allclose(ue.jointJ(p_val), expected))
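+
+    @staticmethod
+    def _jointJ_reference(p_val):
+        # Editor's sketch (helper is hypothetical): the surprise measure
+        # verified above is the log odds of the p-value,
+        #     J = log10((1 - p) / p),
+        # e.g. p = 0.01175031 -> J ~= 1.92481736.
+        p_val = np.asarray(p_val)
+        return np.log10((1 - p_val) / p_val)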
 
     def test__rate_mat_avg_trial_default(self):
         mat = self.binary_sts
-        expected = [0.9, 0.7,0.6]
-        self.assertTrue(np.allclose(expected,ue._rate_mat_avg_trial(mat)))
+        expected = [0.9, 0.7, 0.6]
+        self.assertTrue(np.allclose(expected, ue._rate_mat_avg_trial(mat)))
 
     def test__bintime(self):
-        t = 13*pq.ms
-        binsize = 3*pq.ms
+        t = 13 * pq.ms
+        bin_size = 3 * pq.ms
         expected = 4
-        self.assertTrue(np.allclose(expected,ue._bintime(t,binsize)))
+        self.assertTrue(np.allclose(expected, ue._bintime(t, bin_size)))
+
     def test__winpos(self):
-        t_start = 10*pq.ms
-        t_stop = 46*pq.ms
-        winsize = 15*pq.ms
-        winstep = 3*pq.ms
-        expected = [ 10., 13., 16., 19., 22., 25., 28., 31.]*pq.ms
+        t_start = 10 * pq.ms
+        t_stop = 46 * pq.ms
+        winsize = 15 * pq.ms
+        winstep = 3 * pq.ms
+        expected = [10., 13., 16., 19., 22., 25., 28., 31.] * pq.ms
         self.assertTrue(
             np.allclose(
-                ue._winpos(
-                    t_start, t_stop, winsize,
-                    winstep).rescale('ms').magnitude,
+                ue._winpos(t_start, t_stop, winsize,
+                           winstep).rescale('ms').magnitude,
                 expected.rescale('ms').magnitude))
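+
+    @staticmethod
+    def _winpos_reference(t_start, t_stop, winsize, winstep):
+        # Editor's sketch (helper is hypothetical, assumes quantity
+        # inputs): window onsets advance by winstep for as long as a full
+        # window fits into [t_start, t_stop]; for the values above this
+        # gives 10, 13, ..., 31 ms (31 + 15 = 46 = t_stop).
+        n_steps = (t_stop - t_start - winsize) / winstep
+        n_windows = int(n_steps.simplified.item()) + 1
+        return t_start + np.arange(n_windows) * winstep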
 
     def test__UE_default(self):
         mat = self.binary_sts
-        pattern_hash = np.array([4,6])
-        N = 3
-        expected_S = np.array([-0.26226523,  0.04959301])
+        pattern_hash = np.array([4, 6])
+        expected_S = np.array([-0.26226523, 0.04959301])
         expected_idx = [[[0], [3]], [[], [2, 4]]]
-        expected_nemp = np.array([ 1.,  3.])
-        expected_nexp = np.array([ 1.04,  2.56])
-        expected_rate = np.array([ 0.9,  0.7,  0.6])
-        S, rate_avg, n_exp, n_emp,indices = ue._UE(mat,N,pattern_hash)
-        self.assertTrue(np.allclose(S ,expected_S))
-        self.assertTrue(np.allclose(n_exp ,expected_nexp))
-        self.assertTrue(np.allclose(n_emp ,expected_nemp))
-        self.assertTrue(np.allclose(expected_rate ,rate_avg))
-        for item0_cnt,item0 in enumerate(indices):
-            for item1_cnt,item1 in enumerate(item0):
-                self.assertTrue(np.allclose(expected_idx[item0_cnt][item1_cnt],item1))
+        expected_nemp = np.array([1., 3.])
+        expected_nexp = np.array([1.04, 2.56])
+        expected_rate = np.array([0.9, 0.7, 0.6])
+        S, rate_avg, n_exp, n_emp, indices = ue._UE(mat, pattern_hash)
+        self.assertTrue(np.allclose(S, expected_S))
+        self.assertTrue(np.allclose(n_exp, expected_nexp))
+        self.assertTrue(np.allclose(n_emp, expected_nemp))
+        self.assertTrue(np.allclose(expected_rate, rate_avg))
+        for item0_cnt, item0 in enumerate(indices):
+            for item1_cnt, item1 in enumerate(item0):
+                self.assertTrue(
+                    np.allclose(expected_idx[item0_cnt][item1_cnt], item1))
 
     def test__UE_surrogate(self):
         mat = self.binary_sts
         pattern_hash = np.array([4])
-        N = 3
-        _, rate_avg_surr, _, n_emp_surr,indices_surr =\
-        ue._UE(mat, N, pattern_hash, method='surrogate_TrialByTrial', n_surr=100)
-        _, rate_avg, _, n_emp,indices =\
-        ue._UE(mat, N, pattern_hash, method='analytic_TrialByTrial')
-        self.assertTrue(np.allclose(n_emp ,n_emp_surr))
-        self.assertTrue(np.allclose(rate_avg ,rate_avg_surr))
-        for item0_cnt,item0 in enumerate(indices):
-            for item1_cnt,item1 in enumerate(item0):
-                self.assertTrue(np.allclose(indices_surr[item0_cnt][item1_cnt],item1))
+        _, rate_avg_surr, _, n_emp_surr, indices_surr =\
+            ue._UE(
+                mat,
+                pattern_hash,
+                method='surrogate_TrialByTrial',
+                n_surr=100)
+        _, rate_avg, _, n_emp, indices =\
+            ue._UE(mat, pattern_hash, method='analytic_TrialByTrial')
+        self.assertTrue(np.allclose(n_emp, n_emp_surr))
+        self.assertTrue(np.allclose(rate_avg, rate_avg_surr))
+        for item0_cnt, item0 in enumerate(indices):
+            for item1_cnt, item1 in enumerate(item0):
+                self.assertTrue(
+                    np.allclose(
+                        indices_surr[item0_cnt][item1_cnt],
+                        item1))
 
     def test_jointJ_window_analysis(self):
         sts1 = self.sts1_neo
         sts2 = self.sts2_neo
-        data = np.vstack((sts1,sts2)).T
-        winsize = 100*pq.ms
-        binsize = 5*pq.ms
-        winstep = 20*pq.ms
+        data = np.vstack((sts1, sts2)).T
+        winsize = 100 * pq.ms
+        bin_size = 5 * pq.ms
+        winstep = 20 * pq.ms
         pattern_hash = [3]
-        UE_dic = ue.jointJ_window_analysis(data, binsize, winsize, winstep, pattern_hash)
+        UE_dic = ue.jointJ_window_analysis(
+            data, bin_size, winsize, winstep, pattern_hash)
         expected_Js = np.array(
-            [ 0.57953708,  0.47348757,  0.1729669 ,  
-              0.01883295, -0.21934742,-0.80608759])
+            [0.57953708, 0.47348757, 0.1729669,
+             0.01883295, -0.21934742, -0.80608759])
         expected_n_emp = np.array(
-            [ 9.,  9.,  7.,  7.,  6.,  6.])
+            [9., 9., 7., 7., 6., 6.])
         expected_n_exp = np.array(
-            [ 6.5 ,  6.85,  6.05,  6.6 ,  6.45,  8.7 ])
+            [6.5, 6.85, 6.05, 6.6, 6.45, 8.7])
         expected_rate = np.array(
-            [[ 0.02166667,  0.01861111],
-             [ 0.02277778,  0.01777778],
-             [ 0.02111111,  0.01777778],
-             [ 0.02277778,  0.01888889],
-             [ 0.02305556,  0.01722222],
-             [ 0.02388889,  0.02055556]])*pq.kHz
-        expected_indecis_tril26 = [ 4.,    4.]
-        expected_indecis_tril4 = [ 1.]
-        self.assertTrue(np.allclose(UE_dic['Js'] ,expected_Js))
-        self.assertTrue(np.allclose(UE_dic['n_emp'] ,expected_n_emp))
-        self.assertTrue(np.allclose(UE_dic['n_exp'] ,expected_n_exp))
+            [[0.02166667, 0.01861111],
+             [0.02277778, 0.01777778],
+             [0.02111111, 0.01777778],
+             [0.02277778, 0.01888889],
+             [0.02305556, 0.01722222],
+             [0.02388889, 0.02055556]]) * pq.kHz
+        expected_indecis_tril26 = [4., 4.]
+        expected_indecis_tril4 = [1.]
+        self.assertTrue(np.allclose(UE_dic['Js'], expected_Js))
+        self.assertTrue(np.allclose(UE_dic['n_emp'], expected_n_emp))
+        self.assertTrue(np.allclose(UE_dic['n_exp'], expected_n_exp))
         self.assertTrue(np.allclose(
-            UE_dic['rate_avg'].rescale('Hz').magnitude ,
+            UE_dic['rate_avg'].rescale('Hz').magnitude,
             expected_rate.rescale('Hz').magnitude))
         self.assertTrue(np.allclose(
-            UE_dic['indices']['trial26'],expected_indecis_tril26))
+            UE_dic['indices']['trial26'], expected_indecis_tril26))
         self.assertTrue(np.allclose(
-            UE_dic['indices']['trial4'],expected_indecis_tril4))
-        
-    @staticmethod    
+            UE_dic['indices']['trial4'], expected_indecis_tril4))
+
+    @staticmethod
     def load_gdf2Neo(fname, trigger, t_pre, t_post):
         """
         load and convert the gdf file to Neo format by
@@ -371,7 +386,7 @@ class UETestCase(unittest.TestCase):
         # 20  : ET 201, 202, 203, 204
         """
         data = np.loadtxt(fname)
-    
+
         if trigger == 'PS_4':
             trigger_code = 114
         if trigger == 'RS_4':
@@ -393,7 +408,7 @@ class UETestCase(unittest.TestCase):
                 stop_tmp = data[i][1] + t_post.magnitude
                 sel_data_tmp = np.array(
                     data[np.where((data[:, 1] <= stop_tmp) &
-                                     (data[:, 1] >= start_tmp))])
+                                  (data[:, 1] >= start_tmp))])
                 sp_units_tmp = sel_data_tmp[:, 1][
                     np.where(sel_data_tmp[:, 0] == id_tmp)[0]]
                 if len(sp_units_tmp) > 0:
@@ -410,92 +425,78 @@ class UETestCase(unittest.TestCase):
         spiketrain = np.vstack([i for i in data_tr]).T
         return spiketrain
 
-    # test if the result of newly implemented Unitary Events in
-    # Elephant is consistent with the result of
-    # Riehle et al 1997 Science
-    # (see Rostami et al (2016) [Re] Science, 3(1):1-17)
-    @unittest.skipIf(_check_for_incompatibilty(),
-                     'Incompatible package versions')
-    def test_Riehle_et_al_97_UE(self):      
-        from neo.rawio.tests.tools import (download_test_file,
-                                           create_local_temp_dir,
-                                           make_all_directories)
-        from neo.test.iotest.tools import (cleanup_test_file)
-        url = [
-            "https://raw.githubusercontent.com/ReScience-Archives/" +
-            "Rostami-Ito-Denker-Gruen-2017/master/data",
-            "https://raw.githubusercontent.com/ReScience-Archives/" +
-            "Rostami-Ito-Denker-Gruen-2017/master/data"]
+    # Test if the result of newly implemented Unitary Events in Elephant is
+    # consistent with the result of Riehle et al 1997 Science
+    # (see Rostami et al (2016) [Re] Science, 3(1):1-17).
+    def test_Riehle_et_al_97_UE(self):
+        url = "http://raw.githubusercontent.com/ReScience-Archives/Rostami-" \
+              "Ito-Denker-Gruen-2017/master/data"
         shortname = "unitary_event_analysis_test_data"
-        local_test_dir = create_local_temp_dir(
-            shortname, os.environ.get("ELEPHANT_TEST_FILE_DIR"))
+        local_test_dir = create_local_temp_dir(shortname)
         files_to_download = ["extracted_data.npy", "winny131_23.gdf"]
-        make_all_directories(files_to_download,
-                             local_test_dir)
-        for f_cnt, f in enumerate(files_to_download):
-            download_test_file(f, local_test_dir, url[f_cnt])
+        context = ssl._create_unverified_context()
+        for filename in files_to_download:
+            url_file = "{url}/{filename}".format(url=url, filename=filename)
+            dist = urlopen(url_file, context=context)
+            localfile = os.path.join(local_test_dir, filename)
+            with open(localfile, 'wb') as f:
+                f.write(dist.read())
 
         # load spike data of figure 2 of Riehle et al 1997
-        sys.path.append(local_test_dir)
-        file_name = '/winny131_23.gdf'
-        trigger = 'RS_4'
-        t_pre = 1799 * pq.ms
-        t_post = 300 * pq.ms
-        spiketrain = self.load_gdf2Neo(local_test_dir + file_name,
-                                       trigger, t_pre, t_post)
+        spiketrain = self.load_gdf2Neo(os.path.join(local_test_dir,
+                                                    "winny131_23.gdf"),
+                                       trigger='RS_4',
+                                       t_pre=1799 * pq.ms,
+                                       t_post=300 * pq.ms)
 
         # calculating UE ...
         winsize = 100 * pq.ms
-        binsize = 5 * pq.ms
+        bin_size = 5 * pq.ms
         winstep = 5 * pq.ms
         pattern_hash = [3]
-        method = 'analytic_TrialAverage'
         t_start = spiketrain[0][0].t_start
         t_stop = spiketrain[0][0].t_stop
         t_winpos = ue._winpos(t_start, t_stop, winsize, winstep)
         significance_level = 0.05
 
         UE = ue.jointJ_window_analysis(
-            spiketrain, binsize, winsize, winstep,
-            pattern_hash, method=method)
+            spiketrain, bin_size, winsize, winstep,
+            pattern_hash, method='analytic_TrialAverage')
         # load extracted data from figure 2 of Riehle et al 1997
-        try:
-            extracted_data = np.load(
-                local_test_dir + '/extracted_data.npy').item()
-        except UnicodeError:
-            extracted_data = np.load(
-                local_test_dir + '/extracted_data.npy', encoding='latin1').item()
+        extracted_data = np.load(
+            os.path.join(local_test_dir, 'extracted_data.npy'),
+            encoding='latin1', allow_pickle=True).item()
         Js_sig = ue.jointJ(significance_level)
         sig_idx_win = np.where(UE['Js'] >= Js_sig)[0]
         diff_UE_rep = []
         y_cnt = 0
-        for tr in range(len(spiketrain)):
-            x_idx = np.sort(
-                np.unique(UE['indices']['trial' + str(tr)],
-                          return_index=True)[1])
-            x = UE['indices']['trial' + str(tr)][x_idx]
-            if len(x) > 0:
+        for trial_id in range(len(spiketrain)):
+            trial_id_str = "trial{}".format(trial_id)
+            indices_unique = np.unique(UE['indices'][trial_id_str])
+            if len(indices_unique) > 0:
                 # choose only the significant coincidences
-                xx = []
+                indices_unique_significant = []
                 for j in sig_idx_win:
-                    xx = np.append(xx, x[np.where(
-                        (x * binsize >= t_winpos[j]) &
-                        (x * binsize < t_winpos[j] + winsize))])
-                x_tmp = np.unique(xx) * binsize.magnitude
+                    significant = indices_unique[np.where(
+                        (indices_unique * bin_size >= t_winpos[j]) &
+                        (indices_unique * bin_size < t_winpos[j] + winsize))]
+                    indices_unique_significant.extend(significant)
+                x_tmp = np.unique(indices_unique_significant) * \
+                    bin_size.magnitude
                 if len(x_tmp) > 0:
                     ue_trial = np.sort(extracted_data['ue'][y_cnt])
                     diff_UE_rep = np.append(
                         diff_UE_rep, x_tmp - ue_trial)
                    y_cnt += 1
+        shutil.rmtree(local_test_dir)
         np.testing.assert_array_less(np.abs(diff_UE_rep), 0.3)
-        cleanup_test_file('dir', local_test_dir)
 
-        
+
 def suite():
     suite = unittest.makeSuite(UETestCase, 'test')
     return suite
 
+
 if __name__ == "__main__":
     runner = unittest.TextTestRunner(verbosity=2)
     runner.run(suite())
-

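For orientation, below is a minimal, self-contained sketch of the windowed UE analysis exercised by test_jointJ_window_analysis above. The spike times are invented toy data; the positional argument order (data, bin_size, winsize, winstep, pattern_hash) and the result keys mirror the tests, and pattern_hash=[3] again encodes the two-neuron coincidence pattern:

    import numpy as np
    import quantities as pq
    import neo
    import elephant.unitary_event_analysis as ue

    # Toy data: two neurons, three trials, one SpikeTrain per neuron and trial.
    sts_neuron1 = [neo.SpikeTrain([12, 36, 78] * pq.ms, t_stop=100 * pq.ms),
                   neo.SpikeTrain([10, 54] * pq.ms, t_stop=100 * pq.ms),
                   neo.SpikeTrain([30, 35, 80, 95] * pq.ms, t_stop=100 * pq.ms)]
    sts_neuron2 = [neo.SpikeTrain([13, 35, 80] * pq.ms, t_stop=100 * pq.ms),
                   neo.SpikeTrain([11, 55, 70] * pq.ms, t_stop=100 * pq.ms),
                   neo.SpikeTrain([32, 81, 95] * pq.ms, t_stop=100 * pq.ms)]

    # Arrange as an object array of shape (n_trials, n_neurons), as in the tests.
    data = np.empty((3, 2), dtype=object)
    for trial in range(3):
        data[trial, 0] = sts_neuron1[trial]
        data[trial, 1] = sts_neuron2[trial]

    UE_dic = ue.jointJ_window_analysis(
        data, 5 * pq.ms, 50 * pq.ms, 10 * pq.ms, [3])
    print(UE_dic['Js'])                       # joint surprise per analysis window
    print(UE_dic['n_emp'], UE_dic['n_exp'])   # empirical vs. expected coincidences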
The diff for this file is not shown because it is too large
+ 550 - 509
code/elephant/elephant/unitary_event_analysis.py

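The suppressed unitary_event_analysis.py diff is where the pattern hashes used throughout the tests are defined. As a reading aid, the constants in the tests (4 and 6 for three neurons, 3 for a two-neuron coincidence) are consistent with a plain binary encoding of the participation pattern, with the first neuron as the most significant bit. The helper below is hypothetical and written only to decode those constants:

    import numpy as np

    def pattern_to_hash(pattern):
        # Hypothetical decoder: binary participation pattern -> integer hash,
        # first neuron = most significant bit.
        pattern = np.asarray(pattern)
        return int(np.dot(pattern, 2 ** np.arange(len(pattern))[::-1]))

    print(pattern_to_hash([1, 0, 0]))  # 4, as in pattern_hash=[4, 6] above
    print(pattern_to_hash([1, 1, 0]))  # 6
    print(pattern_to_hash([1, 1]))     # 3, the two-neuron coincidence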

+ 21 - 1
code/elephant/readthedocs.yml

@@ -1,2 +1,22 @@
+# readthedocs version
+version: 2
+
+build:
+    image: latest
+
+sphinx:
+  builder: html
+  configuration: doc/conf.py
+
 conda:
-  file: doc/environment.yml
+  environment: requirements/environment.yml
+
+python:
+    install:
+        - method: pip
+          path: .
+          extra_requirements:
+              - docs
+              - extras
+              - tests
+              - tutorials

+ 0 - 2
code/elephant/requirements-docs.txt

@@ -1,2 +0,0 @@
-numpydoc>=0.5
-sphinx>=1.2.2

+ 0 - 2
code/elephant/requirements-extras.txt

@@ -1,2 +0,0 @@
-pandas>=0.14.1
-scikit-learn

+ 0 - 1
code/elephant/requirements-tests.txt

@@ -1 +0,0 @@
-nose>=1.3.3

+ 0 - 5
code/elephant/requirements.txt

@@ -1,5 +0,0 @@
-neo>=0.5.0
-numpy>=1.8.2
-quantities>=0.10.1
-scipy>=0.14.0
-six>=1.10.0

+ 49 - 35
code/elephant/setup.py

@@ -1,53 +1,67 @@
 # -*- coding: utf-8 -*-
 
-from setuptools import setup
 import os
+import platform
+import struct
 import sys
-try:
-    from urllib.request import urlretrieve
-except ImportError:
+
+from setuptools import setup
+
+python_version_major = sys.version_info.major
+
+if python_version_major == 2:
     from urllib import urlretrieve
+else:
+    from urllib.request import urlretrieve
+
+with open(os.path.join(os.path.dirname(__file__),
+                       "elephant", "VERSION")) as version_file:
+    version = version_file.read().strip()
 
-long_description = open("README.rst").read()
-with open('requirements.txt', 'r') as fp:
-    install_requires = fp.read()
+with open("README.md") as f:
+    long_description = f.read()
+with open('requirements/requirements.txt') as fp:
+    install_requires = fp.read().splitlines()
 extras_require = {}
-for extra in ['extras', 'docs', 'tests']:
-    with open('requirements-{0}.txt'.format(extra), 'r') as fp:
+for extra in ['extras', 'docs', 'tests', 'tutorials']:
+    with open('requirements/requirements-{0}.txt'.format(extra)) as fp:
         extras_require[extra] = fp.read()
 
-# spade specific
-is_64bit = sys.maxsize > 2 ** 32
-is_python3 = float(sys.version[0:3]) > 2.7
 
-if is_python3:
-    if is_64bit:
-        urlretrieve('http://www.borgelt.net/bin64/py3/fim.so',
-                    'elephant/spade_src/fim.so')
-    else:
-        urlretrieve('http://www.borgelt.net/bin32/py3/fim.so',
-                    'elephant/spade_src/fim.so')
-else:
-    if is_64bit:
-        urlretrieve('http://www.borgelt.net/bin64/py2/fim.so',
-                    'elephant/spade_src/fim.so')
+def download_spade_fim():
+    """
+    Downloads SPADE specific PyFIM binary file.
+    """
+    if platform.system() == "Windows":
+        fim_filename = "fim.pyd"
     else:
-        urlretrieve('http://www.borgelt.net/bin32/py2/fim.so',
-                    'elephant/spade_src/fim.so')
+        # Linux
+        fim_filename = "fim.so"
+    spade_src_dir = os.path.join(os.path.dirname(__file__), "elephant",
+                                 "spade_src")
+    fim_lib_path = os.path.join(spade_src_dir, fim_filename)
+    if os.path.exists(fim_lib_path):
+        return
+
+    arch_bits = struct.calcsize("P") * 8
+    url_fim = "http://www.borgelt.net/bin{arch}/py{py_ver}/{filename}". \
+        format(arch=arch_bits, py_ver=python_version_major,
+               filename=fim_filename)
+    try:
+        urlretrieve(url_fim, filename=fim_lib_path)
+        print("Successfully downloaded fim lib to {}".format(fim_lib_path))
+    except Exception:
+        print("Unable to download {url} module.".format(url=url_fim))
+
+
+if len(sys.argv) > 1 and sys.argv[1].lower() != 'sdist':
+    download_spade_fim()
 
 setup(
     name="elephant",
-    version='0.6.0',
+    version=version,
     packages=['elephant', 'elephant.test'],
-    package_data={'elephant': [
-        os.path.join('current_source_density_src', 'test_data.mat'),
-        os.path.join('current_source_density_src', 'LICENSE'),
-        os.path.join('current_source_density_src', 'README.md'),
-        os.path.join('current_source_density_src', '*.py'),
-        os.path.join('spade_src', '*.py'),
-        os.path.join('spade_src', 'LICENSE'),
-        os.path.join('spade_src', '*.so')
-    ]},
+    include_package_data=True,
 
     install_requires=install_requires,
     extras_require=extras_require,

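A side note on the struct idiom above: struct.calcsize("P") is the size in bytes of a C pointer in the running interpreter, so arch_bits resolves to 32 or 64 without relying on platform strings. A minimal sketch of the URL the downloader assembles (shown for an assumed 64-bit CPython 3 on Linux; nothing is downloaded):

    import struct
    import sys

    arch_bits = struct.calcsize("P") * 8   # pointer size in bits: 32 or 64
    py_ver = sys.version_info.major        # 2 or 3
    fim_filename = "fim.so"                # "fim.pyd" on Windows

    url_fim = "http://www.borgelt.net/bin{arch}/py{py_ver}/{filename}".format(
        arch=arch_bits, py_ver=py_ver, filename=fim_filename)
    print(url_fim)   # e.g. http://www.borgelt.net/bin64/py3/fim.so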
+ 23 - 19
code/example.py

@@ -50,7 +50,8 @@ from neo import Block, Segment
 from elephant.signal_processing import butter
 
 from reachgraspio import reachgraspio
-from neo.utils import add_epoch, cut_segment_by_epoch, get_events
+from neo.utils import cut_segment_by_epoch, add_epoch, get_events
+# from neo.utils import add_epoch, cut_segment_by_epoch, get_events
 
 
 # =============================================================================
@@ -74,12 +75,7 @@ session = reachgraspio.ReachGraspIO(session_name, odml_directory=odml_dir)
 # time shift of the ns2 signal (LFP) induced by the online filter is
 # automatically corrected for by a heuristic factor stored in the metadata
 # (correct_filter_shifts=True).
-data_block = session.read_block(
-    nsx_to_load='all',
-    n_starts=None, n_stops=300 * pq.s,
-    channels=[62], units='all',
-    load_events=True, load_waveforms=True, scaling='voltage',
-    correct_filter_shifts=True)
+data_block = session.read_block(load_waveforms=True, correct_filter_shifts=True, lazy=True)
 
 # Access the single Segment of the data block, reaching up to 300s.
 assert len(data_block.segments) == 1
@@ -98,22 +94,27 @@ data_segment = data_block.segments[0]
 
 filtered_anasig = []
 # Loop through all AnalogSignal objects in the loaded data
+# Create an LFP representation of the data if not already present
 for anasig in data_block.segments[0].analogsignals:
-    if anasig.annotations['nsx'] == 2:
+    if all(anasig.array_annotations['nsx'] == 2):
         # AnalogSignal is LFP from ns2
-        anasig.name = 'LFP (online filter, ns%i)' % anasig.annotations['nsx']
-    elif anasig.annotations['nsx'] in [5, 6]:
+        anasig.name = 'LFP (online filtered, ns2)'
+    elif all(np.isin(anasig.array_annotations['nsx'], [5, 6])):
         # AnalogSignal is raw signal from ns5 or ns6
-        anasig.name = 'raw (ns%i)' % anasig.annotations['nsx']
+        assert len(np.unique(anasig.array_annotations['nsx'])) == 1
+        anasig.name = 'raw (ns%i)' % np.unique(anasig.array_annotations['nsx'])[0]
+
+        # temporarily loading the data into memory
+        anasig_loaded = anasig.load(time_slice=(None, 300*pq.s), channel_indexes=[63])
 
         # Use the Elephant library to filter the analog signal
         f_anasig = butter(
-                anasig,
+                anasig_loaded,
                 highpass_freq=None,
                 lowpass_freq=250 * pq.Hz,
                 order=4)
-        f_anasig.name = 'LFP (offline filtered ns%i)' % \
-            anasig.annotations['nsx']
+        print('filtering done.')
+        f_anasig.name = 'LFP (offline filtered ns%i)' % np.unique(anasig.array_annotations['nsx'])[0]
         filtered_anasig.append(f_anasig)
 # Attach all offline filtered LFPs to the segment of data
 data_block.segments[0].analogsignals.extend(filtered_anasig)
@@ -145,6 +146,8 @@ start_events = get_events(
     name='TrialEvents',
     trial_event_labels='TS-ON',
     performance_in_trial=session.performance_codes['correct_trial'])
+print('got start events.')
+
 
 # Extract single Neo Event object containing all TS-ON triggers
 assert len(start_events) == 1
@@ -164,6 +167,7 @@ epoch = add_epoch(
     attach_result=False,
     name='analysis_epochs',
     array_annotations=start_event.array_annotations)
+print('added epoch.')
 
 # Create new segments of data cut according to the analysis epochs of the
 # 'analysis_epochs' Neo Epoch object. The time axes of all segments are aligned
@@ -201,11 +205,11 @@ nsx_colors = ['b', 'k', 'r']
 # Loop through all analog signals and plot the signal in a color corresponding
 # to its sampling frequency (i.e., originating from the ns2/ns5 or ns2/ns6).
 for i, anasig in enumerate(trial_segment.analogsignals):
-        plt.plot(
-            anasig.times.rescale(time_unit),
-            anasig.squeeze().rescale(amplitude_unit),
-            label=anasig.name,
-            color=nsx_colors[i])
+    plt.plot(
+        anasig.times.rescale(time_unit),
+        anasig.squeeze().rescale(amplitude_unit),
+        label=anasig.name,
+        color=nsx_colors[i])
 
 # Loop through all spike trains and plot the spike time, and overlapping the
 # wave form of the spike used for spike sorting stored separately in the nev

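The key change in example.py is that signals are now read lazily and only a slice is materialized before filtering. Since the Blackrock files are not available here, the sketch below substitutes a synthetic AnalogSignal for the loaded ns5/ns6 slice; the butter call is the same one imported from elephant.signal_processing in the example:

    import numpy as np
    import quantities as pq
    import neo
    from elephant.signal_processing import butter

    # Synthetic stand-in for a loaded raw channel: 1 s of noise sampled at 30 kHz.
    raw = neo.AnalogSignal(np.random.randn(30000, 1) * pq.uV,
                           sampling_rate=30000 * pq.Hz)

    # The same offline low-pass filtering step used to derive the LFP above.
    lfp = butter(raw, highpass_freq=None, lowpass_freq=250 * pq.Hz, order=4)
    print(lfp.shape, lfp.sampling_rate)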
+ 0 - 877
code/neo_utils.py

@@ -1,877 +0,0 @@
-'''
-Convenience functions to extend the functionality of the Neo framework
-version 0.5.
-
-Authors: Julia Sprenger, Lyuba Zehl, Michael Denker
-
-
-Copyright (c) 2017, Institute of Neuroscience and Medicine (INM-6),
-Forschungszentrum Juelich, Germany
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice, this
-list of conditions and the following disclaimer.
-* Redistributions in binary form must reproduce the above copyright notice,
-this list of conditions and the following disclaimer in the documentation
-and/or other materials provided with the distribution.
-* Neither the names of the copyright holders nor the names of the contributors
-may be used to endorse or promote products derived from this software without
-specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
-FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-'''
-
-import copy
-import warnings
-import inspect
-
-import numpy as np
-import quantities as pq
-
-import neo
-
-
-def get_events(container, properties=None):
-    """
-    This function returns a list of Neo Event objects, corresponding to given
-    key-value pairs in the attributes or annotations of the Event.
-
-    Parameter:
-    ---------
-    container: neo.Block or neo.Segment
-        The Neo Block or Segment object to extract data from.
-    properties: dictionary
-        A dictionary that contains the Event keys and values to filter for.
-        Each key of the dictionary is matched to an attribute or an
-        annotation of the Event. The value of each dictionary entry corresponds to
-        a valid entry or a list of valid entries of the attribute or
-        annotation.
-
-        If the value belonging to the key is a list of entries of the same
-        length as the number of events in the Event object, the list entries
-        are matched to the events in the Event object. The resulting Event
-        object contains only those events where the values match up.
-
-        Otherwise, the value is compared to the attribute or annotation of the
-        Event object as such, and depending on the comparison, either the
-        complete Event object is returned or not.
-
-        If None or an empty dictionary is passed, all Event Objects will be
-        returned in a list.
-
-    Returns:
-    --------
-    events: list
-        A list of Event objects matching the given criteria.
-
-    Example:
-    --------
-        >>> event = neo.Event(
-                times = [0.5, 10.0, 25.2] * pq.s)
-        >>> event.annotate(
-                event_type = 'trial start',
-                trial_id = [1, 2, 3])
-        >>> seg = neo.Segment()
-        >>> seg.events = [event]
-
-        # Will return a list with the complete event object
-        >>> get_events(seg, properties={'event_type': 'trial start'})
-
-        # Will return an empty list
-        >>> get_events(seg, properties={'event_type': 'trial stop'})
-
-        # Will return a list with an Event object, but only with trial 2
-        >>> get_events(seg, properties={'trial_id': 2})
-
-        # Will return a list with an Event object, but only with trials 1 and 2
-        >>> get_events(seg, properties={'trial_id': [1, 2]})
-    """
-    if isinstance(container, neo.Segment):
-        return _get_from_list(container.events, prop=properties)
-
-    elif isinstance(container, neo.Block):
-        event_lst = []
-        for seg in container.segments:
-            event_lst += _get_from_list(seg.events, prop=properties)
-        return event_lst
-    else:
-        raise TypeError(
-            'Container needs to be of type neo.Block or neo.Segment, not %s '
-            'in order to extract Events.' % (type(container)))
-
-
-def get_epochs(container, properties=None):
-    """
-    This function returns a list of Neo Epoch objects, corresponding to given
-    key-value pairs in the attributes or annotations of the Epoch.
-
-    Parameters:
-    -----------
-    container: neo.Block or neo.Segment
-        The Neo Block or Segment object to extract data from.
-    properties: dictionary
-        A dictionary that contains the Epoch keys and values to filter for.
-        Each key of the dictionary is matched to an attribute or an
-        annotation of the Epoch. The value of each dictionary entry corresponds
-        to a valid entry or a list of valid entries of the attribute or
-        annotation.
-
-        If the value belonging to the key is a list of entries of the same
-        length as the number of epochs in the Epoch object, the list entries
-        are matched to the epochs in the Epoch object. The resulting Epoch
-        object contains only those epochs where the values match up.
-
-        Otherwise, the value is compared to the attribute or annotation of the
-        Epoch object as such, and depending on the comparison, either the
-        complete Epoch object is returned or not.
-
-        If None or an empty dictionary is passed, all Epoch Objects will
-        be returned in a list.
-
-    Returns:
-    --------
-    epochs: list
-        A list of Epoch objects matching the given criteria.
-
-    Example:
-    --------
-        >>> epoch = neo.Epoch(
-                times = [0.5, 10.0, 25.2] * pq.s,
-                durations = [100, 100, 100] * pq.ms)
-        >>> epoch.annotate(
-                epoch_type = 'complete trial',
-                trial_id = [1, 2, 3])
-        >>> seg = neo.Segment()
-        >>> seg.epochs = [epoch]
-
-        # Will return a list with the complete epoch object
-        >>> get_epochs(seg, properties={'epoch_type': 'complete trial'})
-
-        # Will return an empty list
-        >>> get_epochs(seg, properties={'epoch_type': 'error trial'})
-
-        # Will return a list with an Epoch object, but only with trial 2
-        >>> get_epochs(seg, properties={'trial_id': 2})
-
-        # Will return a list with an Epoch object, but only with trials 1 and 2
-        >>> get_epochs(seg, properties={'trial_id': [1, 2]})
-    """
-    if isinstance(container, neo.Segment):
-        return _get_from_list(container.epochs, prop=properties)
-
-    elif isinstance(container, neo.Block):
-        epoch_list = []
-        for seg in container.segments:
-            epoch_list += _get_from_list(seg.epochs, prop=properties)
-        return epoch_list
-    else:
-        raise TypeError(
-            'Container needs to be of type neo.Block or neo.Segment, not %s '
-            'in order to extract Epochs.' % (type(container)))
-
-
-def add_epoch(
-        segment, event1, event2=None, pre=0 * pq.s, post=0 * pq.s,
-        attach_result=True, **kwargs):
-    """
-    Create epochs around a single event, or between pairs of events. Start
-    and end times of the epoch can be modified using pre and post as offsets
-    before and after the event(s). Additional keywords are forwarded directly
-    to the Epoch initialization.
-
-    Parameters:
-    -----------
-    segment : neo.Segment
-        The segment to which the final Epoch object is added.
-    event1 : neo.Event
-        The Neo Event object containing the start events of the epochs. If no
-        event2 is specified, event1 also specifies the stop events, i.e., the
-        epoch is cut around event1 times.
-    event2: neo.Event
-        The Neo Event object containing the stop events of the epochs. If no
-        event2 is specified, event1 specifies the stop events, i.e., the epoch
-        is cut around event1 times. The number of events in event2 must match
-        that of event1.
-    pre, post: Quantity (time)
-        Time offsets to modify the start (pre) and end (post) of the resulting
-        epoch. Example: pre=-10*ms and post=+25*ms will cut from 10 ms before
-        event1 times to 25 ms after event2 times.
-    attach_result: bool
-        If True, the resulting Neo Epoch object is added to segment.
-
-    Keyword Arguments:
-    ------------------
-    Passed to the Neo Epoch object.
-
-    Returns:
-    --------
-    epoch: neo.Epoch
-        An Epoch object with the calculated epochs (one per entry in event1).
-    """
-    if event2 is None:
-        event2 = event1
-
-    if not isinstance(segment, neo.Segment):
-        raise TypeError(
-            'Segment has to be of type neo.Segment, not %s' % type(segment))
-
-    for event in [event1, event2]:
-        if not isinstance(event, neo.Event):
-            raise TypeError(
-                'Events have to be of type neo.Event, not %s' % type(event))
-
-    if len(event1) != len(event2):
-        raise ValueError(
-            'event1 and event2 have to have the same number of entries in '
-            'order to create epochs between pairs of entries. Match your '
-            'events before generating epochs. Current event lengths '
-            'are %i and %i' % (len(event1), len(event2)))
-
-    times = event1.times + pre
-    durations = event2.times + post - times
-
-    if any(durations < 0):
-        raise ValueError(
-            'Can not create epoch with negative duration. '
-            'Requested durations %s.' % durations)
-    elif any(durations == 0):
-        raise ValueError('Can not create epoch with zero duration.')
-
-    if 'name' not in kwargs:
-        kwargs['name'] = 'epoch'
-    if 'labels' not in kwargs:
-        kwargs['labels'] = [
-            '%s_%i' % (kwargs['name'], i) for i in range(len(times))]
-
-    ep = neo.Epoch(times=times, durations=durations, **kwargs)
-
-    ep.annotate(**event1.annotations)
-
-    if attach_result:
-        segment.epochs.append(ep)
-        segment.create_relationship()
-
-    return ep
-
-
-def match_events(event1, event2):
-    """
-    Finds pairs of Event entries in event1 and event2 with the minimum delay,
-    such that the entry of event1 directly precedes the entry of event2.
-    Returns two filtered Event objects of identical length that contain the
-    matched entries.
-
-    Parameters:
-    -----------
-    event1, event2: neo.Event
-        The two Event objects to match up.
-
-    Returns:
-    --------
-    event1, event2: neo.Event
-        Event objects with identical number of events, containing only those
-        events that could be matched against each other. A warning is issued if
-        not all events in event1 or event2 could be matched.
-    """
-    event1 = event1
-    event2 = event2
-
-    id1, id2 = 0, 0
-    match_ev1, match_ev2 = [], []
-    while id1 < len(event1) and id2 < len(event2):
-        time1 = event1.times[id1]
-        time2 = event2.times[id2]
-
-        # wrong order of events
-        if time1 > time2:
-            id2 += 1
-
-        # shorter epoch possible by later event1 entry
-        elif id1 + 1 < len(event1) and event1.times[id1 + 1] < time2:
-            # there is no event in 2 until the next event in 1
-            id1 += 1
-
-        # found a match
-        else:
-            match_ev1.append(id1)
-            match_ev2.append(id2)
-            id1 += 1
-            id2 += 1
-
-    if id1 < len(event1):
-        warnings.warn(
-            'Could not match all events to generate epochs. Missed '
-            '%s event entries in event1 list' % (len(event1) - id1))
-    if id2 < len(event2):
-        warnings.warn(
-            'Could not match all events to generate epochs. Missed '
-            '%s event entries in event2 list' % (len(event2) - id2))
-
-    event1_matched = _event_epoch_slice_by_valid_ids(
-        obj=event1, valid_ids=match_ev1)
-    event2_matched = _event_epoch_slice_by_valid_ids(
-        obj=event2, valid_ids=match_ev2)
-
-    return event1_matched, event2_matched
-
-
-def cut_block_by_epochs(block, properties=None, reset_time=False):
-    """
-    This function cuts Neo Segments in a Neo Block according to multiple Neo
-    Epoch objects.
-
-    The function alters the Neo Block by adding one Neo Segment per Epoch entry
-    fulfilling a set of conditions on the Epoch attributes and annotations. The
-    original segments are removed from the block.
-
-    A dictionary contains restrictions on which epochs are considered for
-    the cutting procedure. To this end, it is possible to
-    specify accepted (valid) values of specific annotations on the source
-    epochs.
-
-    The resulting cut segments may either retain their original time stamps, or
-    be shifted to a common starting time.
-
-    Parameters
-    ----------
-    block: Neo Block
-        Contains the Segments to cut according to the Epoch criteria provided
-    properties: dictionary
-        A dictionary that contains the Epoch keys and values to filter for.
-        Each key of the dictionary is matched to an attribute or an
-        annotation of the Epoch. The value of each dictionary entry corresponds
-        to a valid entry or a list of valid entries of the attribute or
-        annotation.
-
-        If the value belonging to the key is a list of entries of the same
-        length as the number of epochs in the Epoch object, the list entries
-        are matched to the epochs in the Epoch object. The resulting Epoch
-        object contains only those epochs where the values match up.
-
-        Otherwise, the value is compared to the attribute or annotation of the
-        Epoch object as such, and depending on the comparison, either the
-        complete Epoch object is returned or not.
-
-        If None or an empty dictionary is passed, all Epoch Objects will
-        be considered.
-
-    reset_time: bool
-        If True, the time stamps of all sliced objects are set to fall
-        in the range from 0 to the duration of the epoch.
-        If False, the original time stamps are retained.
-        Default is False.
-
-    Returns:
-    --------
-    None
-    """
-    if not isinstance(block, neo.Block):
-        raise TypeError(
-            'block needs to be a neo Block, not %s' % type(block))
-
-    old_segments = copy.copy(block.segments)
-    for seg in old_segments:
-        epochs = _get_from_list(seg.epochs, prop=properties)
-        if len(epochs) > 1:
-            warnings.warn(
-                'Segment %s contains multiple epochs with '
-                'requested properties (%s). Subsegments can '
-                'have overlapping times' % (seg.name, properties))
-
-        elif len(epochs) == 0:
-            warnings.warn(
-                'No epoch is matching the requested epoch properties %s. '
-                'No cutting of segment performed.' % (properties))
-
-        for epoch in epochs:
-            new_segments = cut_segment_by_epoch(
-                seg, epoch=epoch, reset_time=reset_time)
-            block.segments += new_segments
-
-        block.segments.remove(seg)
-    block.create_relationship()
-
-
-def cut_segment_by_epoch(seg, epoch, reset_time=False):
-    """
-    Cuts a Neo Segment according to a neo Epoch object
-
-    The function returns a list of neo Segments, where each segment corresponds
-    to an epoch in the neo Epoch object and contains the data of the original
-    Segment cut to that particular Epoch.
-
-    The resulting segments may either retain their original time stamps,
-    or can be shifted to a common time axis.
-
-    Parameters
-    ----------
-    seg: Neo Segment
-        The Segment containing the original uncut data.
-    epoch: Neo Epoch
-        For each epoch in this input, one segment is generated according to
-        the epoch time and duration.
-    reset_time: bool
-        If True, the time stamps of all sliced objects are set to fall
-        in the range from 0 to the duration of the epoch.
-        If False, the original time stamps are retained.
-        Default is False.
-
-    Returns:
-    --------
-    segments: list of Neo Segments
-        Per epoch in the input, a neo.Segment with AnalogSignal and/or
-        SpikeTrain Objects will be generated and returned. Each Segment will
-        receive the annotations of the corresponding epoch in the input.
-    """
-    if not isinstance(seg, neo.Segment):
-        raise TypeError(
-            'Seg needs to be of type neo.Segment, not %s' % type(seg))
-
-    if type(seg.parents[0]) != neo.Block:
-        raise ValueError(
-            'Segment has no block as parent. Can not cut segment.')
-
-    if not isinstance(epoch, neo.Epoch):
-        raise TypeError(
-            'Epoch needs to be of type neo.Epoch, not %s' % type(epoch))
-
-    segments = []
-    for ep_id in range(len(epoch)):
-        subseg = seg_time_slice(seg,
-                                epoch.times[ep_id],
-                                epoch.times[ep_id] + epoch.durations[ep_id],
-                                reset_time=reset_time)
-        # Add annotations of Epoch
-        for a in epoch.annotations:
-            if type(epoch.annotations[a]) is list \
-                    and len(epoch.annotations[a]) == len(epoch):
-                subseg.annotations[a] = copy.copy(epoch.annotations[a][ep_id])
-            else:
-                subseg.annotations[a] = copy.copy(epoch.annotations[a])
-        segments.append(subseg)
-
-    return segments
-
-
-def seg_time_slice(seg, t_start=None, t_stop=None, reset_time=False, **kwargs):
-    """
-    Creates a time slice of a neo Segment containing slices of all child
-    objects.
-
-    Parameters:
-    -----------
-    seg: neo Segment
-        The neo Segment object to slice.
-    t_start: Quantity
-        Starting time of the sliced time window.
-    t_stop: Quantity
-        Stop time of the sliced time window.
-    reset_time: bool
-        If True, the time stamps of all sliced objects are set to fall
-        in the range from 0 to the duration of the epoch.
-        If False, the original time stamps are retained.
-        Default is False.
-
-    Keyword Arguments:
-    ------------------
-        Additional keyword arguments used for initialization of the sliced
-        Neo Segment object.
-
-    Returns:
-    --------
-    seg: Neo Segment
-        Temporal slice of the original Neo Segment from t_start to t_stop.
-    """
-    subseg = neo.Segment(**kwargs)
-
-    for attr in [
-            'file_datetime', 'rec_datetime', 'index',
-            'name', 'description', 'file_origin']:
-        setattr(subseg, attr, getattr(seg, attr))
-
-    subseg.annotations = copy.deepcopy(seg.annotations)
-    # This would be the better definition of t_shift after incorporating
-    # PR#215 at NeuralEnsemble/python-neo
-    t_shift = seg.t_start - t_start
-
-    # t_min_id = np.argmin(np.array([a.t_start for a in seg.analogsignals]))
-    # t_shift = seg.analogsignals[t_min_id] - t_start
-
-    # cut analogsignals
-    for ana_id in range(len(seg.analogsignals)):
-        ana_time_slice = seg.analogsignals[ana_id].time_slice(t_start, t_stop)
-        # explicitly copying parents as this is not yet fixed in neo (
-        # NeuralEnsemble/python-neo issue #220)
-        ana_time_slice.segment = subseg
-        ana_time_slice.channel_index = seg.analogsignals[ana_id].channel_index
-        if reset_time:
-            ana_time_slice.t_start = ana_time_slice.t_start + t_shift
-        subseg.analogsignals.append(ana_time_slice)
-
-    # cut spiketrains
-    for st_id in range(len(seg.spiketrains)):
-        st_time_slice = seg.spiketrains[st_id].time_slice(t_start, t_stop)
-        if reset_time:
-            st_time_slice = shift_spiketrain(st_time_slice, t_shift)
-        subseg.spiketrains.append(st_time_slice)
-
-    # cut events
-    for ev_id in range(len(seg.events)):
-        ev_time_slice = event_time_slice(seg.events[ev_id], t_start, t_stop)
-        if reset_time:
-            ev_time_slice = shift_event(ev_time_slice, t_shift)
-        # appending only non-empty events
-        if len(ev_time_slice):
-            subseg.events.append(ev_time_slice)
-
-    # cut epochs
-    for ep_id in range(len(seg.epochs)):
-        ep_time_slice = epoch_time_slice(seg.epochs[ep_id], t_start, t_stop)
-        if reset_time:
-            ep_time_slice = shift_epoch(ep_time_slice, t_shift)
-        # appending only non-empty epochs
-        if len(ep_time_slice):
-            subseg.epochs.append(ep_time_slice)
-
-    # TODO: Improve
-    # seg.create_relationship(force=True)
-    return subseg
-
-
-def shift_spiketrain(spiketrain, t_shift):
-    """
-    Shifts a spike train to start at a new time.
-
-    Parameters:
-    -----------
-    spiketrain: Neo SpikeTrain
-        Spiketrain of which a copy will be generated with shifted spikes and
-        starting and stopping times
-    t_shift: Quantity (time)
-        Amount of time by which to shift the SpikeTrain.
-
-    Returns:
-    --------
-    spiketrain: Neo SpikeTrain
-        New instance of a SpikeTrain object shifted in time by t_shift (the
-        original SpikeTrain is not modified).
-    """
-    new_st = spiketrain.duplicate_with_new_data(
-        signal=spiketrain.times.view(pq.Quantity) + t_shift,
-        t_start=spiketrain.t_start + t_shift,
-        t_stop=spiketrain.t_stop + t_shift)
-    return new_st
-
-
-def shift_event(ev, t_shift):
-    """
-    Shifts an event by an amount of time.
-
-    Parameters:
-    -----------
-    event: Neo Event
-        Event of which a copy will be generated with shifted times
-    t_shift: Quantity (time)
-        Amount of time by which to shift the Event.
-
-    Returns:
-    --------
-    event: Neo Event
-        New instance of an Event object starting at t_shift later than the
-        original Event (the original Event is not modified).
-    """
-    return _shift_time_signal(ev, t_shift)
-
-
-def shift_epoch(epoch, t_shift):
-    """
-    Shifts an epoch by an amount of time.
-
-    Parameters:
-    -----------
-    epoch: Neo Epoch
-        Epoch of which a copy will be generated with shifted times
-    t_shift: Quantity (time)
-        Amount of time by which to shift the Epoch.
-
-    Returns:
-    --------
-    epoch: Neo Epoch
-        New instance of an Epoch object starting at t_shift later than the
-        original Epoch (the original Epoch is not modified).
-    """
-    return _shift_time_signal(epoch, t_shift)
-
-
-def event_time_slice(event, t_start=None, t_stop=None):
-    """
-    Slices an Event object to retain only those events that fall in a certain
-    time window.
-
-    Parameters:
-    -----------
-    event: Neo Event
-        The Event to slice.
-    t_start, t_stop: Quantity (time)
-        Time window in which to retain events. An event at time t is retained
-        if t_start <= t < t_stop.
-
-    Returns:
-    --------
-    event: Neo Event
-        New instance of an Event object containing only the events in the time
-        range.
-    """
-    if t_start is None:
-        t_start = -np.inf
-    if t_stop is None:
-        t_stop = np.inf
-
-    valid_ids = np.where(np.logical_and(
-        event.times >= t_start, event.times < t_stop))[0]
-
-    new_event = _event_epoch_slice_by_valid_ids(event, valid_ids=valid_ids)
-
-    return new_event
-
-
-def epoch_time_slice(epoch, t_start=None, t_stop=None):
-    """
-    Slices an Epoch object to retain only those epochs that fall in a certain
-    time window.
-
-    Parameters:
-    -----------
-    epoch: Neo Epoch
-        The Epoch to slice.
-    t_start, t_stop: Quantity (time)
-        Time window in which to retain epochs. An epoch at time t and
-        duration d is retained if t_start <= t < t_stop - d.
-
-    Returns:
-    --------
-    epoch: Neo Epoch
-        New instance of an Epoch object containing only the epochs in the time
-        range.
-    """
-    if t_start is None:
-        t_start = -np.inf
-    if t_stop is None:
-        t_stop = np.inf
-
-    valid_ids = np.where(np.logical_and(
-        epoch.times >= t_start, epoch.times + epoch.durations < t_stop))[0]
-
-    new_epoch = _event_epoch_slice_by_valid_ids(epoch, valid_ids=valid_ids)
-
-    return new_epoch
-
-
-def _get_from_list(input_list, prop=None):
-    """
-    Internal function
-    """
-    output_list = []
-    # empty or no dictionary
-    if not prop or bool([b for b in prop.values() if b == []]):
-        output_list += [e for e in input_list]
-    # dictionary is given
-    else:
-        for ep in input_list:
-            sparse_ep = ep.copy()
-            for k in prop.keys():
-                sparse_ep = _filter_event_epoch(sparse_ep, k, prop[k])
-                # if there is nothing left, it cannot be filtered
-                if sparse_ep is None:
-                    break
-            if sparse_ep is not None:
-                output_list.append(sparse_ep)
-    return output_list
-
-
-def _filter_event_epoch(obj, annotation_key, annotation_value):
-    """
-    Internal function.
-
-    This function returns a copy of a neo Event or Epoch object that only
-    contains the entries whose attributes or annotations match the requested
-    key-value pairs.
-
-    Parameters:
-    -----------
-    obj : neo.Event or neo.Epoch
-        The neo Event or Epoch object to modify.
-    annotation_key : string, int or float
-        The name of the annotation used to filter.
-    annotation_value : string, int, float, list or np.ndarray
-        The accepted value or list of accepted values of the attributes or
-        annotations specified by annotation_key. For each entry in obj the
-        respective annotation defined by annotation_key is compared to the
-        annotation value. The entry of obj is kept if the attribute or
-        annotation is equal or contained in annotation_value.
-
-    Returns:
-    --------
-    obj : neo.Event or neo.Epoch
-        The Event or Epoch object with every event or epoch removed that does
-        not match the filter criteria (i.e., where none of the entries in
-        annotation_value match the attribute or annotation annotation_key).
-    """
-    valid_ids = _get_valid_ids(obj, annotation_key, annotation_value)
-
-    if len(valid_ids) == 0:
-        return None
-
-    return _event_epoch_slice_by_valid_ids(obj, valid_ids)
-
-
-def _event_epoch_slice_by_valid_ids(obj, valid_ids):
-    """
-    Internal function
-    """
-    # modify annotations
-    sparse_annotations = _get_valid_annotations(obj, valid_ids)
-
-    # modify labels
-    sparse_labels = _get_valid_labels(obj, valid_ids)
-
-    if type(obj) is neo.Event:
-        sparse_obj = neo.Event(
-            times=copy.deepcopy(obj.times[valid_ids]),
-            labels=sparse_labels,
-            units=copy.deepcopy(obj.units),
-            name=copy.deepcopy(obj.name),
-            description=copy.deepcopy(obj.description),
-            file_origin=copy.deepcopy(obj.file_origin),
-            **sparse_annotations)
-    elif type(obj) is neo.Epoch:
-        sparse_obj = neo.Epoch(
-            times=copy.deepcopy(obj.times[valid_ids]),
-            durations=copy.deepcopy(obj.durations[valid_ids]),
-            labels=sparse_labels,
-            units=copy.deepcopy(obj.units),
-            name=copy.deepcopy(obj.name),
-            description=copy.deepcopy(obj.description),
-            file_origin=copy.deepcopy(obj.file_origin),
-            **sparse_annotations)
-    else:
-        raise TypeError('Can only slice Event and Epoch objects by valid IDs.')
-
-    return sparse_obj
-
-
-def _get_valid_ids(obj, annotation_key, annotation_value):
-    """
-    Internal function
-    """
-    # wrap annotation value to be list
-    if not type(annotation_value) in [list, np.ndarray]:
-        annotation_value = [annotation_value]
-
-    # get all real attributes of object
-    attributes = inspect.getmembers(obj)
-    attributes_names = [t[0] for t in attributes if not(
-        t[0].startswith('__') and t[0].endswith('__'))]
-    attributes_ids = [i for i, t in enumerate(attributes) if not(
-        t[0].startswith('__') and t[0].endswith('__'))]
-
-    # check if annotation is present
-    value_avail = False
-    if annotation_key in obj.annotations:
-        check_value = obj.annotations[annotation_key]
-        value_avail = True
-    elif annotation_key in attributes_names:
-        check_value = attributes[attributes_ids[
-            attributes_names.index(annotation_key)]][1]
-        value_avail = True
-
-    if value_avail:
-        # check if annotation is list and fits to length of object list
-        if not _is_annotation_list(check_value, len(obj)):
-            # check if annotation is single value and fits to requested value
-            if (check_value in annotation_value):
-                valid_mask = np.ones(obj.shape)
-            else:
-                valid_mask = np.zeros(obj.shape)
-                if type(check_value) != str:
-                    warnings.warn(
-                        'Length of annotation "%s" (%s) does not fit '
-                        'to length of object list (%s)' % (
-                            annotation_key, len(check_value), len(obj)))
-
-        # extract object entries, which match requested annotation
-        else:
-            valid_mask = np.zeros(obj.shape)
-            for obj_id in range(len(obj)):
-                if check_value[obj_id] in annotation_value:
-                    valid_mask[obj_id] = True
-    else:
-        valid_mask = np.zeros(obj.shape)
-
-    valid_ids = np.where(valid_mask)[0]
-
-    return valid_ids
-
-
-def _get_valid_annotations(obj, valid_ids):
-    """
-    Internal function
-    """
-    sparse_annotations = copy.deepcopy(obj.annotations)
-    for key in sparse_annotations:
-        if _is_annotation_list(sparse_annotations[key], len(obj)):
-            sparse_annotations[key] = list(np.array(sparse_annotations[key])[
-                valid_ids])
-    return sparse_annotations
-
-
-def _get_valid_labels(obj, valid_ids):
-    """
-    Internal function
-    """
-    labels = obj.labels
-    selected_labels = []
-    if len(labels) > 0:
-        if _is_annotation_list(labels, len(obj)):
-            for vid in valid_ids:
-                selected_labels.append(labels[vid])
-            # sparse_labels = sparse_labels[valid_ids]
-        else:
-            warnings.warn('Can not filter object labels. Shape (%s) does not '
-                          'fit object shape (%s)'
-                          '' % (labels.shape, obj.shape))
-    return np.array(selected_labels)
-
-
-def _is_annotation_list(value, exp_length):
-    """
-    Internal function
-    """
-    return (
-        (isinstance(value, list) or (
-            isinstance(value, np.ndarray) and value.ndim > 0)) and
-        (len(value) == exp_length))
-
-
-def _shift_time_signal(sig, t_shift):
-    """
-    Internal function.
-    """
-    if not hasattr(sig, 'times'):
-        raise AttributeError(
-            'Can only shift signals, which have an attribute'
-            ' "times", not %s' % type(sig))
-    new_sig = sig.duplicate_with_new_data(signal=sig.times + t_shift)
-    return new_sig

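neo_utils.py can be dropped because Neo now ships equivalent helpers, which example.py above already imports from neo.utils. A minimal sketch of the same epoch-cutting workflow against neo.utils, on invented toy data (assuming the neo.utils signatures match the removed helpers):

    import quantities as pq
    import neo
    from neo.utils import add_epoch, cut_segment_by_epoch

    block = neo.Block()
    seg = neo.Segment()
    block.segments.append(seg)
    seg.spiketrains.append(neo.SpikeTrain([0.1, 0.5, 1.2, 2.0] * pq.s,
                                          t_stop=3 * pq.s))
    trigger = neo.Event([0.4, 1.9] * pq.s, name='TrialEvents')
    seg.events.append(trigger)
    block.create_relationship()   # link children to parents before cutting

    # One epoch per trigger event, from 100 ms before to 300 ms after it.
    epoch = add_epoch(seg, event1=trigger, pre=-100 * pq.ms, post=300 * pq.ms,
                      attach_result=False, name='analysis_epochs')

    # One new segment per epoch; reset_time aligns all trials to a common start.
    trial_segments = cut_segment_by_epoch(seg, epoch, reset_time=True)
    print(len(trial_segments))    # 2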
+ 10 - 65
code/python-neo/.circleci/config.yml

@@ -7,12 +7,11 @@ workflows:
   version: 2
   test:
     jobs:
-      - test-3.6
-      - test-2.7
+      - test-3.7
 jobs:
-  test-3.6:
+  test-3.7:
     docker:
-      - image: circleci/python:3.6.1
+      - image: circleci/python:3.7.7
 
     environment:
       - NEO_TEST_FILE_DIR: "/home/circleci/repo/files_for_testing_neo"
@@ -23,11 +22,11 @@ jobs:
       - checkout
 
       # Download and cache dependencies
-      - restore_cache:
-          keys:
-          - v1-dependencies-{{ checksum "requirements.txt" }}
-          # fallback to using the latest cache if no exact match is found
-          - v1-dependencies-
+      #- restore_cache:
+      #    keys:
+      #    - v1-dependencies-{{ checksum "requirements.txt" }}
+      #    # fallback to using the latest cache if no exact match is found
+      #    #- v1-dependencies-
 
       - restore-cache:
           keys:
@@ -40,7 +39,8 @@ jobs:
             . venv/bin/activate
             pip install -r requirements.txt
             pip install -r .circleci/requirements_testing.txt
-            pip install . 
+            pip install .
+            pip freeze
 
       - save_cache:
           paths:
@@ -68,58 +68,3 @@ jobs:
       - store_artifacts:
           path: test-reports
           destination: test-reports
-
-  test-2.7:
-    docker:
-      - image: circleci/python:2.7-stretch
-      
-    environment:
-      - NEO_TEST_FILE_DIR: "/home/circleci/repo/files_for_testing_neo"
-
-    working_directory: ~/repo
-
-    steps:
-      - checkout
-
-      # Download and cache dependencies
-      - restore_cache:
-          keys:
-          - v1-py2-dependencies-{{ checksum "requirements.txt" }}
-          # fallback to using the latest cache if no exact match is found
-          - v1-py2-dependencies-
-
-      - restore-cache:
-          keys:
-            - test-files-f7905c85d1
-
-      - run:
-          name: install dependencies
-          command: |
-            virtualenv venv2
-            . venv2/bin/activate
-            pip install -r requirements.txt
-            pip install -r .circleci/requirements_testing.txt
-            pip install mock  # only needed for Python 2
-            pip install . 
-
-      - save_cache:
-          paths:
-            - ./venv2
-          key: v1-py2-dependencies-{{ checksum "requirements.txt" }}
-
-      - save_cache:
-          paths:
-            - ./files_for_testing_neo
-          key: test-files-f7905c85d1
-
-      # run tests!
-      - run:
-          name: run tests
-          command: |
-            . venv2/bin/activate
-            nosetests -v --with-coverage --cover-package=neo
-
-      - store_artifacts:
-          path: test-reports
-          destination: test-reports
-          

+ 1 - 1
code/python-neo/.circleci/requirements_testing.txt

@@ -4,9 +4,9 @@ igor
 klusta
 tqdm
 nixio>=1.5.0b2
-axographio>=0.3.1
 matplotlib
 ipython
 https://github.com/nsdf/nsdf/archive/0.1.tar.gz
 coverage
 coveralls
+pillow

+ 28 - 5
code/python-neo/.travis.yml

@@ -1,15 +1,38 @@
 language: python
-python:
-  - "2.7"
-  - "3.4"
-  - "3.5"
-  - "3.6"
+dist: xenial
+sudo: false
+
+matrix:
+  include:
+    - python: "3.5"
+      env: NUMPY_VERSION="1.11.3"
+    - python: "3.5"
+      env: NUMPY_VERSION="1.18.3"
+    - python: "3.6"
+      env: NUMPY_VERSION="1.12.1"
+    - python: "3.6"
+      env: NUMPY_VERSION="1.18.3"
+    - python: "3.7"
+      env: NUMPY_VERSION="1.14.5"
+    - python: "3.7"
+      env: NUMPY_VERSION="1.15.4"
+    - python: "3.7"
+      env: NUMPY_VERSION="1.16.6"
+    - python: "3.7"
+      env: NUMPY_VERSION="1.17.5"
+    - python: "3.7"
+      env: NUMPY_VERSION="1.18.3"
+    - python: "3.8"
+      env: NUMPY_VERSION="1.18.3"
 
 # command to install dependencies
+before_install:
+  - pip install "numpy==$NUMPY_VERSION"
 install:
   - pip install -r requirements.txt
   - pip install coveralls
   - pip install .
+  - pip install pillow
 # command to run tests, e.g. python setup.py test
 script:
   nosetests --with-coverage --cover-package=neo

+ 1 - 1
code/python-neo/CITATION.txt

@@ -7,7 +7,7 @@ To cite Neo in publications, please use:
 
 A BibTeX entry for LaTeX users is::
 
-    @article{neo09,
+    @article{neo14,
         author = {Garcia S. and Guarino D. and Jaillet F. and Jennings T.R. and Pröpper R. and
                   Rautenberg P.L. and Rodgers C. and Sobolev A. and Wachtler T. and Yger P.
                   and Davison A.P.},

+ 1 - 1
code/python-neo/LICENSE.txt

@@ -1,4 +1,4 @@
-Copyright (c) 2010-2018, Neo authors and contributors
+Copyright (c) 2010-2019, Neo authors and contributors
 All rights reserved.
 
 Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

+ 1 - 1
code/python-neo/README.rst

@@ -56,7 +56,7 @@ For installation instructions, see doc/source/install.rst
 
 To cite Neo in publications, see CITATION.txt
 
-:copyright: Copyright 2010-2018 by the Neo team, see doc/source/authors.rst.
+:copyright: Copyright 2010-2019 by the Neo team, see doc/source/authors.rst.
 :license: 3-Clause Revised BSD License, see LICENSE.txt for details.
 
 

+ 1 - 1
code/python-neo/doc/source/api_reference.rst

@@ -6,4 +6,4 @@ API Reference
 .. testsetup:: *
 
     from neo import SpikeTrain
-    import quantities as pq
+    import quantities as pq

+ 28 - 5
code/python-neo/doc/source/authors.rst

@@ -7,7 +7,7 @@ of Neo. The institutional affiliations are those at the time of the contribution
 and may not be the current affiliation of a contributor.
 
 * Samuel Garcia [1]
-* Andrew Davison [2]
+* Andrew Davison [2, 21]
 * Chris Rodgers [3]
 * Pierre Yger [2]
 * Yann Mahnoun [4]
@@ -43,8 +43,15 @@ and may not be the current affiliation of a contributor.
 * Jeffrey Gill [18]
 * Lucas (lkoelman@github)
 * Mark Histed
-* Mike Sintsov
-* Scott W Harden [19]
+* Mike Sintsov [19]
+* Scott W Harden [20]
+* Chek Yin Choi (hkchekc@github)
+* Corentin Fragnaud [21]
+* Alexander Kleinjohann
+* Christian Kothe
+* rishidhingra@github
+* Hugo van Kemenade
+* Aitor Morales-Gregorio [13]
 
 1. Centre de Recherche en Neuroscience de Lyon, CNRS UMR5292 - INSERM U1028 - Universite Claude Bernard Lyon 1
 2. Unité de Neuroscience, Information et Complexité, CNRS UPR 3293, Gif-sur-Yvette, France
@@ -52,7 +59,7 @@ and may not be the current affiliation of a contributor.
 4. Laboratoire de Neurosciences Intégratives et Adaptatives, CNRS UMR 6149 - Université de Provence, Marseille, France
 5. G-Node, Ludwig-Maximilians-Universität, Munich, Germany
 6. Institut de Neurosciences de la Timone, CNRS UMR 7289 - Université d'Aix-Marseille, Marseille, France
-7. Centre de Neurosciences Integratives et Cignitives, UMR 5228 - CNRS - Université Bordeaux I - Université Bordeaux II
+7. Centre de Neurosciences Integratives et Cognitives, UMR 5228 - CNRS - Université Bordeaux I - Université Bordeaux II
 8. Neural Information Processing Group, TU Berlin, Germany
 9. Department of Neurobiology & Anatomy, Drexel University College of Medicine, Philadelphia, PA, USA
 10. University of Konstanz, Konstanz, Germany
@@ -64,6 +71,22 @@ and may not be the current affiliation of a contributor.
 16. Ottawa Hospital Research Institute, Canada
 17. Swinburne University of Technology, Australia
 18. Case Western Reserve University (CWRU) · Department of Biology
-19. Harden Technologies, LLC
+19. IAL Developmental Neurobiology, Kazan Federal University, Kazan, Russia
+20. Harden Technologies, LLC
+21. Institut des Neurosciences Paris-Saclay, CNRS UMR 9197 - Université Paris-Sud, Gif-sur-Yvette, France
 
 If we've somehow missed you off the list we're very sorry - please let us know.
+
+
+Acknowledgements
+----------------
+
+.. image:: https://www.braincouncil.eu/wp-content/uploads/2018/11/wsi-imageoptim-EU-Logo.jpg
+   :alt: "EU Logo"
+   :height: 104px
+   :width: 156px
+   :align: right
+
+Neo was developed in part in the Human Brain Project,
+funded from the European Union's Horizon 2020 Framework Programme for Research and Innovation
+under Specific Grant Agreements No. 720270 and No. 785907 (Human Brain Project SGA1 and SGA2).

+ 6 - 7
code/python-neo/doc/source/conf.py

@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
 #
 # neo documentation build configuration file, created by
 # sphinx-quickstart on Fri Feb 25 14:18:12 2011.
@@ -25,7 +24,7 @@ with open("../../neo/version.py") as fp:
 neo_version = '.'.join(str(e) for e in LooseVersion(neo_release).version[:2])
 
 
-AUTHORS = u'Neo authors and contributors <neuralensemble@googlegroups.com>'
+AUTHORS = 'Neo authors and contributors <neuralensemble@googlegroups.com>'
 
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
@@ -51,8 +50,8 @@ source_suffix = '.rst'
 master_doc = 'index'
 
 # General information about the project.
-project = u'Neo'
-copyright = u'2010-2018, ' + AUTHORS
+project = 'Neo'
+copyright = '2010-2019, ' + AUTHORS
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
@@ -141,7 +140,7 @@ html_favicon = None
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+html_static_path = ['images']
 
 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
 # using the given strftime format.
@@ -193,7 +192,7 @@ htmlhelp_basename = 'neodoc'
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title, author,
 #  documentclass [howto/manual]).
-latex_documents = [('index', 'neo.tex', u'Neo Documentation',
+latex_documents = [('index', 'neo.tex', 'Neo Documentation',
                     AUTHORS, 'manual')]
 
 # The name of an image file (relative to this directory) to place at the
@@ -216,5 +215,5 @@ latex_documents = [('index', 'neo.tex', u'Neo Documentation',
 todo_include_todos = True  # set to False before releasing documentation
 
 rst_epilog = """
-.. |neo_github_url| replace:: https://github.com/NeuralEnsemble/python-neo/archive/neo-{0}.zip
+.. |neo_github_url| replace:: https://github.com/NeuralEnsemble/python-neo/archive/neo-{}.zip
 """.format(neo_release)

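The ``rst_epilog`` change from ``{0}`` to ``{}`` switches to automatic field numbering, which is equivalent here because only one value is substituted. A quick standalone illustration, using a stand-in release string::

    # Both placeholder styles produce the same URL for a single argument.
    release = "0.9.0"  # stand-in for neo_release
    url = "https://github.com/NeuralEnsemble/python-neo/archive/neo-{}.zip"
    assert url.format(release) == url.replace("{}", "{0}").format(release)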
+ 42 - 36
code/python-neo/doc/source/core.rst

@@ -4,7 +4,7 @@ Neo core
 
 .. currentmodule:: neo.core
 
-This figure shows the main data types in Neo:
+This figure shows the main data types in Neo, with the exception of the newly added ImageSequence and RegionOfInterest classes:
 
 .. image:: images/base_schematic.png
    :height: 500 px
@@ -24,7 +24,7 @@ associated metadata (units, sampling frequency, etc.).
   * :py:class:`SpikeTrain`: A set of action potentials (spikes) emitted by the same unit in a period of time (with optional waveforms).
   * :py:class:`Event`: An array of time points representing one or more events in the data.
   * :py:class:`Epoch`: An array of time intervals representing one or more periods of time in the data.
-
+  * :py:class:`ImageSequence`: A three-dimensional array representing a sequence of images.
 
 Container objects
 -----------------
@@ -38,29 +38,31 @@ There is a simple hierarchy of containers:
     May contain any of the data objects.
   * :py:class:`Block`: The top-level container gathering all of the data, discrete and continuous,
     for a given recording session.
-    Contains :class:`Segment`, :class:`Unit` and :class:`ChannelIndex` objects.
+    Contains :class:`Segment` and :class:`Group` objects.
 
 
-Grouping objects
-----------------
+Grouping/linking objects
+------------------------
 
 These objects express the relationships between data items, such as which signals
 were recorded on which electrodes, which spike trains were obtained from which
 membrane potential signals, etc. They contain references to data objects that
 cut across the simple container hierarchy.
 
-  * :py:class:`ChannelIndex`: A set of indices into :py:class:`AnalogSignal` objects,
-    representing logical and/or physical recording channels. This has two uses:
+  * :py:class:`ChannelView`: A set of indices into :py:class:`AnalogSignal` objects,
+    representing logical and/or physical recording channels.
+    For spike sorting of extracellular signals, where spikes may be recorded on more than one
+    recording channel, the :py:class:`ChannelView` can be used to reference the group of recording channels
+    from which the spikes were obtained.
 
-      1. for linking :py:class:`AnalogSignal` objects recorded from the same (multi)electrode
-         across several :py:class:`Segment`\s.
-      2. for spike sorting of extracellular signals, where spikes may be recorded on more than one
-         recording channel, and the :py:class:`ChannelIndex` can be used to associate each
-         :py:class:`Unit` with the group of recording channels from which it was obtained.
+  * :py:class:`Group`: Can contain any of the data objects, views, or other groups,
+    outside the hierarchy of the segment and block containers.
+    A common use is to link the :class:`SpikeTrain` objects within a :class:`Block`,
+    possibly across multiple Segments, that were emitted by the same neuron.
 
-  * :py:class:`Unit`: links the :class:`SpikeTrain` objects within a :class:`Block`,
-    possibly across multiple Segments, that were emitted by the same cell.
-    A :class:`Unit` is linked to the :class:`ChannelIndex` object from which the spikes were detected.
+  * :py:class:`CircularRegionOfInterest`, :py:class:`RectangularRegionOfInterest` and :py:class:`PolygonRegionOfInterest`
+    are three subclasses of :py:class:`RegionOfInterest` that link :class:`ImageSequence` objects
+    to signals (:class:`AnalogSignal` objects) extracted from them.
 
 
 NumPy compatibility
@@ -101,20 +103,20 @@ In general, an object can access its children with an attribute *childname+s* in
     * :attr:`Block.segments`
     * :attr:`Segment.analogsignals`
     * :attr:`Segment.spiketrains`
-    * :attr:`Block.channel_indexes`
+    * :attr:`Block.groups`
 
 These relationships are bi-directional, i.e. a child object can access its parent:
 
     * :attr:`Segment.block`
     * :attr:`AnalogSignal.segment`
     * :attr:`SpikeTrain.segment`
-    * :attr:`ChannelIndex.block`
+    * :attr:`Group.block`
 
 Here is an example showing these relationships in use::
 
     from neo.io import AxonIO
     import urllib.request
-    url = "https://portal.g-node.org/neo/axon/File_axon_3.abf"
+    url = "https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data/raw/master/axon/File_axon_3.abf"
     filename = './test.abf'
     urllib.request.urlretrieve(url, filename)
 
@@ -129,38 +131,39 @@ Here is an example showing these relationships in use::
 
 In some cases, a one-to-many relationship is sufficient. Here is a simple example with tetrodes, in which each tetrode has its own group.::
 
-    from neo import Block, ChannelIndex
+    from neo import Block, Group
     bl = Block()
 
     # the four tetrodes
     for i in range(4):
-        chx = ChannelIndex(name='Tetrode %d' % i,
-                           index=[0, 1, 2, 3])
-        bl.channelindexes.append(chx)
+        group = Group(name='Tetrode %d' % i)
+        bl.groups.append(group)
 
     # now we load the data and associate it with the created channels
     # ...
 
-Now consider a more complex example: a 1x4 silicon probe, with a neuron on channels 0,1,2 and another neuron on channels 1,2,3. We create a group for each neuron to hold the :class:`Unit` object associated with this spike sorting group. Each group also contains the channels on which that neuron spiked. The relationship is many-to-many because channels 1 and 2 occur in multiple groups.::
+Now consider a more complex example: a 1x4 silicon probe, with a neuron on channels 0,1,2 and another neuron on channels 1,2,3.
+We create a group for each neuron, holding the spiketrains of that spike-sorting group together with
+a view of the channels on which that neuron spiked::
 
     bl = Block(name='probe data')
 
     # one group for each neuron
-    chx0 = ChannelIndex(name='Group 0',
-                        index=[0, 1, 2])
-    bl.channelindexes.append(chx0)
+    view0 = ChannelView(recorded_signals, index=[0, 1, 2])
+    unit0 = Group(view0, name='Group 0')
+    bl.groups.append(unit0)
 
-    chx1 = ChannelIndex(name='Group 1',
-                        index=[1, 2, 3])
-    bl.channelindexes.append(chx1)
+    view1 = ChannelView(recorded_signals, index=[1, 2, 3])
+    unit1 = Group(view1, name='Group 1')
+    bl.groups.append(unit1)
 
-    # now we add the spiketrain from Unit 0 to chx0
-    # and add the spiketrain from Unit 1 to chx1
+    # now we add the spiketrains from Unit 0 to unit0
+    # and add the spiketrains from Unit 1 to unit1
     # ...
 
-Note that because neurons are sorted from groups of channels in this situation, it is natural that the :py:class:`ChannelIndex` contains a reference to the :py:class:`Unit` object.
-That unit then contains references to its spiketrains. Also note that recording channels can be
-identified by names/labels as well as, or instead of, integer indices.
+
+Now each putative neuron is represented by a :class:`Group` containing the spiketrains of that neuron
+and a view of the signal selecting only those channels from which the spikes were obtained.
 
 
 See :doc:`usecases` for more examples of how the different objects may be used.
@@ -185,6 +188,8 @@ Relationship:
 
 :download:`Click here for a better quality SVG diagram <./images/simple_generated_diagram.svg>`
 
+.. note:: This figure has not yet been updated to include :class:`ImageSequence` and :class:`RegionOfInterest`.
+
 For more details, see the :doc:`api_reference`.
 
 Initialization
@@ -194,7 +199,7 @@ Neo objects are initialized with "required", "recommended", and "additional" arg
 
     - Required arguments MUST be provided at the time of initialization. They are used in the construction of the object.
     - Recommended arguments may be provided at the time of initialization. They are accessible as Python attributes. They can also be set or modified after initialization.
-    - Additional arguments are defined by the user and are not part of the Neo object model. A primary goal of the Neo project is extensibility. These additional arguments are entries in an attribute of the object: a Python dict called :py:attr:`annotations`. 
+    - Additional arguments are defined by the user and are not part of the Neo object model. A primary goal of the Neo project is extensibility. These additional arguments are entries in an attribute of the object: a Python dict called :py:attr:`annotations`.
       Note : Neo annotations are not the same as the *__annotations__* attribute introduced in Python 3.6.
 
 Example: SpikeTrain
@@ -270,4 +275,5 @@ using the :meth:`array_annotate` method provided by all Neo data objects, e.g.::
 Since Array Annotations may be written to a file or database, there are some
 limitations on the data types of arrays: they must be 1-dimensional (i.e. not nested)
 and contain the same types as annotations:
- ``integer``, ``float``, ``complex``, ``Quantity``, ``string``, ``date``, ``time`` and ``datetime``.
+
+    ``integer``, ``float``, ``complex``, ``Quantity``, ``string``, ``date``, ``time`` and ``datetime``.

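The revised ``core.rst`` introduces ``Group``, ``ChannelView``, ``ImageSequence`` and the ``RegionOfInterest`` classes. The following self-contained sketch ties them together; it assumes the 0.9-era signatures described above (``ChannelView(obj, index=...)``, ``Group(objects, name=...)``, ``CircularRegionOfInterest(x, y, radius)``, ``ImageSequence.signal_from_region()``) and the import paths shown, and uses arbitrary random data::

    import numpy as np
    import quantities as pq
    from neo import AnalogSignal, Block, Group, ImageSequence, Segment, SpikeTrain
    from neo.core import ChannelView, CircularRegionOfInterest

    # A Block with one Segment holding a 4-channel signal and a spike train.
    block = Block(name='probe data')
    segment = Segment()
    block.segments.append(segment)

    signal = AnalogSignal(np.random.rand(1000, 4), units='mV',
                          sampling_rate=10 * pq.kHz)
    # Extra keyword arguments ('source' here) become entries in .annotations.
    train = SpikeTrain([0.01, 0.035, 0.07] * pq.s, t_stop=0.1 * pq.s,
                       source='demo')
    segment.analogsignals.append(signal)
    segment.spiketrains.append(train)

    # Array annotations: one value per channel, attached to the signal itself.
    signal.array_annotate(channel_names=np.array(['ch0', 'ch1', 'ch2', 'ch3']))

    # Group the neuron's spike train with a view of the channels (0-2)
    # from which it was sorted.
    view = ChannelView(signal, index=[0, 1, 2], name='sorting channels')
    neuron = Group([view, train], name='neuron 0')
    block.groups.append(neuron)

    # An ImageSequence (frames x height x width) with a circular region of
    # interest from which signals are extracted.
    frames = ImageSequence(np.random.rand(20, 64, 64), units='dimensionless',
                           sampling_rate=30 * pq.Hz, spatial_scale=1 * pq.um)
    roi = CircularRegionOfInterest(x=32, y=32, radius=5)
    extracted = frames.signal_from_region(roi)  # returns a list of AnalogSignal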
+ 31 - 53
code/python-neo/doc/source/developers_guide.rst

@@ -3,7 +3,7 @@ Developers' guide
 =================
 
 These instructions are for developing on a Unix-like platform, e.g. Linux or
-Mac OS X, with the bash shell. If you develop on Windows, please get in touch.
+macOS, with the bash shell. If you develop on Windows, please get in touch.
 
 
 Mailing lists
@@ -33,23 +33,23 @@ patch (see below) and attach it to the ticket.
 
 To keep track of changes to the code and to tickets, you can register for
 a GitHub account and then set to watch the repository at `GitHub Repository`_
-(see https://help.github.com/articles/watching-repositories/).
+(see https://help.github.com/en/articles/watching-and-unwatching-repositories).
 
 Requirements
 ------------
 
-    * Python_ 2.7, 3.4 or later
-    * numpy_ >= 1.7.1
-    * quantities_ >= 0.9.0
-    * nose_ >= 0.11.1 (for running tests)
-    * Sphinx_ >= 0.6.4 (for building documentation)
-    * (optional) tox_ >= 0.9 (makes it easier to test with multiple Python versions)
+    * Python_ 3.5 or later
+    * numpy_ >= 1.11.0
+    * quantities_ >= 0.12.1
+    * nose_ >= 1.1.2 (for running tests)
+    * Sphinx_ (for building documentation)
     * (optional) coverage_ >= 2.85 (for measuring test coverage)
     * (optional) scipy >= 0.12 (for MatlabIO)
     * (optional) h5py >= 2.5 (for KwikIO, NeoHdf5IO)
+    * (optional) nixio (for NixIO)
+    * (optional) pillow (for TiffIO)
 
 We strongly recommend you develop within a virtual environment (from virtualenv, venv or conda).
-It is best to have at least one virtual environment with Python 2.7 and one with Python 3.x.
 
 Getting the source code
 -----------------------
@@ -57,18 +57,17 @@ Getting the source code
 We use the Git version control system. The best way to contribute is through
 GitHub_. You will first need a GitHub account, and you should then fork the
 repository at `GitHub Repository`_
-(see http://help.github.com/fork-a-repo/).
+(see http://help.github.com/en/articles/fork-a-repo).
 
 To get a local copy of the repository::
 
     $ cd /some/directory
     $ git clone git@github.com:<username>/python-neo.git
-    
+
 Now you need to make sure that the ``neo`` package is on your PYTHONPATH.
 You can do this either by installing Neo::
 
     $ cd python-neo
-    $ python setup.py install
     $ python3 setup.py install
 
 (if you do this, you will have to re-run ``setup.py install`` any time you make
@@ -99,10 +98,6 @@ Before you make any changes, run the test suite to make sure all the tests pass
 on your system::
 
     $ cd neo/test
-
-With Python 2.7 or 3.x::
-
-    $ python -m unittest discover
     $ python3 -m unittest discover
 
 If you have nose installed::
@@ -115,7 +110,6 @@ otherwise it will report on tests that failed or produced errors.
 
 To run tests from an individual file::
 
-    $ python test_analogsignal.py
     $ python3 test_analogsignal.py
 
 
@@ -193,36 +187,21 @@ You can then push your changes to your online repository on GitHub::
 
 Once you think your changes are ready to be included in the main Neo repository,
 open a pull request on GitHub
-(see https://help.github.com/articles/using-pull-requests).
+(see https://help.github.com/en/articles/about-pull-requests).
 
 
 Python version
 --------------
 
-Neo core should work with both Python 2.7 and Python 3 (version 3.4 or newer).
-Neo IO modules should ideally work with both Python 2 and 3, but certain
-modules may only work with one or the other (see :doc:`install`).
-
-So far, we have managed to write code that works with both Python 2 and 3.
-Mainly this involves avoiding the ``print`` statement (use ``logging.info``
-instead), and putting ``from __future__ import division`` at the beginning of
-any file that uses division.
-
-If in doubt, `Porting to Python 3`_ by Lennart Regebro is an excellent resource.
-
-The most important thing to remember is to run tests with at least one version
-of Python 2 and at least one version of Python 3. There is generally no problem
-in having multiple versions of Python installed on your computer at once: e.g.,
-on Ubuntu Python 2 is available as `python` and Python 3 as `python3`, while
-on Arch Linux Python 2 is `python2` and Python 3 `python`. See `PEP394`_ for
-more on this. Using virtual environments makes this very straightforward.
+Neo should work with Python 3.5 or newer. If you need support for Python 2.7,
+use Neo v0.8.0 or earlier.
 
 
 Coding standards and style
 --------------------------
 
 All code should conform as much as possible to `PEP 8`_, and should run with
-Python 2.7, and 3.4 or newer.
+Python 3.5 or newer.
 
 You can use the `pep8`_ program to check the code for PEP 8 conformity.
 You can also use `flake8`_, which combines pep8 and pyflakes.
@@ -263,7 +242,7 @@ Michael Denker and Julia Sprenger have the necessary permissions to do this)::
 
 .. talk about readthedocs
 
-    
+
 
 .. make a release branch
 
@@ -276,25 +255,24 @@ See :ref:`io_dev_guide` for implementation of a new IO.
 
 
 
-.. _Python: http://www.python.org
-.. _nose: http://somethingaboutorange.com/mrl/projects/nose/
-.. _unittest2: http://pypi.python.org/pypi/unittest2
+.. _Python: https://www.python.org
+.. _nose: https://nose.readthedocs.io/
 .. _Setuptools: https://pypi.python.org/pypi/setuptools/
 .. _tox: http://codespeak.net/tox/
-.. _coverage: http://nedbatchelder.com/code/coverage/
-.. _`PEP 8`: http://www.python.org/dev/peps/pep-0008/
+.. _coverage: https://coverage.readthedocs.io/
+.. _`PEP 8`: https://www.python.org/dev/peps/pep-0008/
 .. _`issue tracker`: https://github.com/NeuralEnsemble/python-neo/issues
 .. _`Porting to Python 3`: http://python3porting.com/
-.. _`NeuralEnsemble Google group`: http://groups.google.com/group/neuralensemble
+.. _`NeuralEnsemble Google group`: https://groups.google.com/forum/#!forum/neuralensemble
 .. _reStructuredText: http://docutils.sourceforge.net/rst.html
-.. _Sphinx: http://sphinx.pocoo.org/
-.. _numpy: http://numpy.scipy.org/
-.. _quantities: http://pypi.python.org/pypi/quantities
-.. _PEP257: http://www.python.org/dev/peps/pep-0257/
-.. _PEP394: http://www.python.org/dev/peps/pep-0394/
-.. _PyPI: http://pypi.python.org
-.. _GitHub: http://github.com
+.. _Sphinx: http://www.sphinx-doc.org/
+.. _numpy: https://numpy.org/
+.. _quantities: https://pypi.org/project/quantities/
+.. _PEP257: https://www.python.org/dev/peps/pep-0257/
+.. _PEP394: https://www.python.org/dev/peps/pep-0394/
+.. _PyPI: https://pypi.org
+.. _GitHub: https://github.com
 .. _`GitHub Repository`: https://github.com/NeuralEnsemble/python-neo/
-.. _pep8: https://pypi.python.org/pypi/pep8
-.. _flake8: https://pypi.python.org/pypi/flake8/
-.. _pyflakes: https://pypi.python.org/pypi/pyflakes/
+.. _pep8: https://pypi.org/project/pep8/
+.. _flake8: https://pypi.org/project/flake8/
+.. _pyflakes: https://pypi.org/project/pyflakes/

+ 3 - 1
code/python-neo/doc/source/examples.rst

@@ -11,7 +11,9 @@ A set of examples in :file:`neo/examples/` illustrates the use of Neo classes.
 
 
 
-.. literalinclude:: ../../examples/read_files.py
+.. literalinclude:: ../../examples/read_files_neo_io.py
+
+.. literalinclude:: ../../examples/read_files_neo_rawio.py
 
 .. literalinclude:: ../../examples/simple_plot_with_matplotlib.py
 

+ 0 - 0
code/python-neo/doc/source/images/base_schematic.png


Some files were not shown because too many files have changed in this diff