draft of repo

Manuel Schottdorf 3 years ago
parent
commit
3750fc532f

+ 21 - 39
LICENSE

@@ -1,39 +1,21 @@
-Creative Commons CC0 1.0 Universal
-CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER.
-Statement of Purpose
-
-The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an "owner") of an original work of authorship and/or a database (each, a "Work").
-
-Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works ("Commons") that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.
-
-For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the "Affirmer"), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.
-
-1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights ("Copyright and Related Rights"). Copyright and Related Rights include, but are not limited to, the following:
-
-i. the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
-
-ii. moral rights retained by the original author(s) and/or performer(s);
-
-iii. publicity and privacy rights pertaining to a person's image or likeness depicted in a Work;
-
-iv. rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
-
-v. rights protecting the extraction, dissemination, use and reuse of data in a Work;
-
-vi. database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
-
-vii. other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.
-
-2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer's heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer's express Statement of Purpose.
-
-3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer's express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer's Copyright and Related Rights in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "License"). The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not (i) exercise any of his or her remaining Copyright and Related Rights in the Work or (ii) assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer's express Statement of Purpose.
-
-4. Limitations and Disclaimers.
-
-a. No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
-
-b. Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
-
-c. Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work.
-
-d. Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.
+MIT License
+
+Copyright (c) 2020 Manuel Schottdorf
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

+ 56 - 1
README.md

@@ -1,2 +1,57 @@
-# Macaque-ganglion-cells
+# Overview
+This repository contains all raw data (spike times) and modelling code for the manuscript *"Macaque ganglion cells responses to natural scenes: time, not space is what they really care about"*.
 
+For further inquiries, don't hesitate to get in touch with us:
+
+- Barry Lee <blee@sunyopt.edu>
+- Manuel Schottdorf <mschottdorf@princeton.edu>
+
+Funding for the project was provided by HHS, NIH, and the National Eye Institute (grant EY13110).
+
+# Organization of the repository
+The repository is organized into five folders:
+
+- `data/` contains the raw data for individual cells, organized into plain-text tables of spike times.
+
+- `stimuli/` contains the visual stimuli and calibration details, such as the gamma correction and gun spectra.
+
+- `retinatools/` contains the Python code of the model. It is used as a Python library by the following two folders:
+
+- `run_model/` contains Jupyter notebooks to run the model and reproduce the model figures from the paper. This can be run on any laptop without problems. It also contains a subset of the data, obtained by averaging across the six 1 min repeats, for measuring model performance.
+
+- `run_model_HPC/` contains the code to run the RGC model for whole arrays of cells; it was used to compute the supplemental movie.
+
+All code was tested on macOS 11 with Python 3, NumPy, SciPy, and OpenCV (cv2).
+
+# Details of the data files
+
+## Organization of spiking data
+The time resolution of these data is 0.1 msec. The ASCII files are records of spike occurrences.
+
+**10 min files.** The total spike count is given, followed by the video start time (an internal control; it varies from cell to cell). The spikes/5 sec column is a check on firing rates through the video. Bin 0 contains the total spikes before the video starts (this time period is variable). The following 120 bins are spike counts in subsequent 5 sec epochs. Finally, bin 121 is the spike count after the live video terminates, before recording was switched off. The spike time list that follows is referenced to the beginning of the live video (and has been corrected for a slight difference in clock rate between the video and data acquisition computers).
+
+**6x1min files.** The format is similar. The first 1 min of the 10 min video is repeated 6 times. There is a variable delay (blank frames) before the beginning of the first live video repeat. Each live repeat is preceded by 5 sec of blank frames, and there are ca. 5 sec of blank frames after termination before the next repeat starts. After completion of the 6 repeats there is a further period of maintained activity (column 7).
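As a sketch of the 5 sec binning described above (assuming the spike times have already been loaded into a 1-D NumPy array in milliseconds; the exact per-file column layout varies, so this is illustrative only):

```python
import numpy as np

def spike_counts_5s(spike_times_ms, total_dur_ms=600_000, bin_ms=5_000):
    """Count spikes in consecutive 5 sec epochs over a 10 min recording.

    spike_times_ms: 1-D array of spike times in milliseconds,
    referenced to the beginning of the live video (0.1 ms resolution).
    """
    edges = np.arange(0, total_dur_ms + bin_ms, bin_ms)  # 120 bins for 10 min
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts

# Synthetic example: one spike per second for the first 10 s
spikes = np.arange(0, 10_000, 1_000, dtype=float)
counts = spike_counts_5s(spikes)
# first two 5 sec bins each contain 5 spikes; the rest are empty
```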
+
+## Cell List
+
+| File name     |  Cell Type      |  Record Duration | Cell Key     |
+|---------------|-----------------|-----------|--------|
+| Lss01071.txt	|     -M+L off    |   10 min  |	67#4   |
+| Lss01078.txt	|       MC off    |   6x1min  |	67#6   |
+| Lss01181.txt	|       MC off    |   6x1min  |	68#3   |
+| Lss01221.txt	|       MC on	  |   10 min  |	68#10  |
+
+# Details of the stimulus videos
+
+1) 1x10_256.mpg. This is the 10 minute video. It begins with 751 blank frames (approximately equal-energy white), followed by the 10 min live video (starting at frame 752; 90000 frames), followed by 750 blank frames. These frames were played at 150 frames/sec, 3 times the acquisition rate. Timing pulses are provided on the audio channel beginning at the first frame of the video. They have a complex pattern, repeating every 5 sec. They were provided to ensure syncing with the data acquisition system that recorded spike trains.
+
+2) 6x1_256.mpg. This is the 1 min video. The structure is as in the 10 min video, except the live segment is only 9000 frames.
+
+3) Gamma correction. The r, g, b values (0-255) from the mpeg decompression can be converted to intensity values by
+        red = 0.01451 + 0.9855*pow(1.0*I/256, 2.3122);
+        green = 0.005123 + 0.9949*pow(1.0*I/256, 2.2752);
+        blue = 0.02612 + 0.9739*pow(1.0*I/256, 2.2818);
+
+where I is the bit value. 
+
+4) Gun spectra. The spectra measured for the 3 display guns are tabulated in GunSpectra.txt. Based on the gun spectra and luminance estimates, the gun values were calibrated to deliver the same luminance. However, based on reverse-correlation analysis of M cells, it was estimated that, relative to the red gun, the green gun luminance was overestimated by ca. 5% and the blue gun underestimated by ca. 20%.
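The gamma correction in item 3 can be written as a small Python helper; this is a direct transcription of the formulas above (the same coefficients also appear in `image_to_cone` in `retinatools/library.py`):

```python
def gamma_correct(I_red, I_green, I_blue):
    # Direct transcription of the calibration formulas in item 3;
    # the I values are the 8-bit gun values from the mpeg decompression.
    red = 0.01451 + 0.9855 * (I_red / 256.0) ** 2.3122
    green = 0.005123 + 0.9949 * (I_green / 256.0) ** 2.2752
    blue = 0.02612 + 0.9739 * (I_blue / 256.0) ** 2.2818
    return red, green, blue

# At zero drive the intensities equal the small additive offsets;
# near full drive each channel approaches ~1.
r0, g0, b0 = gamma_correct(0, 0, 0)
r1, g1, b1 = gamma_correct(256, 256, 256)
```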
+
+# Model details
+
+A model for primate retinal MC/PC/KC responses to natural scenes.
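The model's core temporal filtering is a cascade of first-order stages. As a self-contained miniature (the same recursion as `SimpleLowpass` in `retinatools/library.py`, shown here only to illustrate its step response):

```python
import numpy as np

def simple_lowpass(signal, dt, RC):
    # First-order exponential smoothing with time constant RC,
    # the same recursion used by SimpleLowpass in retinatools/library.py.
    s = dt / (RC + dt)
    out = np.zeros(len(signal))
    out[0] = s * signal[0]
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + s * (signal[i] - out[i - 1])
    return out

# A step input rises monotonically and settles toward 1 with time constant RC
step = np.ones(1000)
y = simple_lowpass(step, dt=1.0, RC=10.0)
```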

File diff suppressed because it is too large
+ 21589 - 0
data/lSS01071.txt


File diff suppressed because it is too large
+ 2638 - 0
data/lSS01078.txt


File diff suppressed because it is too large
+ 4196 - 0
data/lSS01181.txt


File diff suppressed because it is too large
+ 21917 - 0
data/lSS01221.txt


BIN
retinatools/__init__.py


BIN
retinatools/__pycache__/__init__.cpython-37.pyc


BIN
retinatools/__pycache__/library.cpython-37.pyc


+ 332 - 0
retinatools/library.py

@@ -0,0 +1,332 @@
+import numpy as np
+import cv2
+from scipy import interpolate
+
+def SimpleLowpass(signal, dt, RC):
+    # First-order low-pass: exponential smoothing with time constant RC
+    Nsamples = signal.size
+    output = np.zeros(Nsamples)
+    s = dt / (RC + dt)
+    output[0] = s*signal[0]
+    for i in range(1,Nsamples):
+        output[i] = output[i-1] + s * (signal[i] - output[i-1])
+    return output
+
+def SimpleHighpass(signal, dt, RC):
+    # First-order high-pass: the input minus its low-passed version
+    Nsamples = signal.size
+    output = np.zeros(Nsamples)
+    s = dt / (RC + dt)
+    output[0] = s*signal[0]
+    for i in range(1,Nsamples):
+        output[i] = output[i-1] + s * (signal[i] - output[i-1])
+    return signal - output
+
+def LuminanceGainControl(signal, dt, RC):
+    Nsamples = signal.size
+    output = np.zeros(Nsamples)   # Initialization of output
+    helper = np.zeros(Nsamples)   # Internal helper variable, i.e. LP filtered output
+    
+    s = dt / (RC + dt)
+    output[0] = 0
+    helper[0] = 1
+
+    for i in range(1,Nsamples):
+        helper[i] = helper[i-1] + s * (output[i-1] - helper[i-1])   # this LP filters the last output to get new helper
+        output[i] = signal[i] / helper[i]
+    return output
+
+
+def rec(x):
+    # Half-wave rectification: max(x, 0)
+    return x*(x>0)
+
+def nlplus(x, c1):
+    return (1 + rec(x)/c1)**2
+
+def nlc(x, q0, q1, c2):
+    return q0 + q1*x/(c2 + x)
+
+def nltan(x, k):
+    return (2/np.pi)*np.arctan(k*x)
+
+
+def ContrastGainControlLoop(signal, dt, RCplus, RCA, c1):
+    Nsamples = signal.size
+    output = np.zeros(Nsamples)
+    cA = np.zeros(Nsamples)
+    d = np.ones(Nsamples)
+    
+    output[0] = 0
+    d[0] = 1
+    
+    for i in range(1,Nsamples):
+        
+        # First, calculate the HP filtered input with TC RC
+        s = dt / (RCA + dt)
+        cA[i] = cA[i-1] + s * (signal[i] - cA[i-1])
+
+        # In Contrast gain, divisive normalization for output
+        s = dt / (RCplus + dt)
+        d[i] = d[i-1] + s * (output[i-1] - d[i-1])
+        
+        output[i] = (signal[i] - cA[i]) / nlplus(d[i], c1)
+        
+    return output
+
+def delay_signal(signal, dt, delay):
+    """
+    This delays a signal by "delay"
+    Internally, the signal is 
+    (1) resampled at rate RESAMPLED: int (default 100) with cubic spline
+    (2) shifted by the time DELAY
+    (3) resampled at original sampling
+    """
+    RESAMPLED = 100
+    
+    times = np.arange(0, len(signal)*dt, dt)
+    interpolationfunction = interpolate.interp1d(times, signal, kind='cubic')
+    
+    times_fine = np.arange(0, np.max(times), dt/RESAMPLED)
+    signal_fine = interpolationfunction(times_fine)
+    
+    new_idx = int( delay / (dt/RESAMPLED) )
+    
+    delayed_signal = np.roll(signal_fine, new_idx)
+    delayed_signal[0:new_idx] = 0
+    
+    interpolationfunction2 = interpolate.interp1d(times_fine, delayed_signal, kind='cubic', fill_value="extrapolate")
+
+    new_signal = interpolationfunction2(times)
+    return new_signal
+
+def receptiveField(x, y, xoff, yoff, rcenter):
+    center = np.exp( -((x-xoff)**2 + (y-yoff)**2) / (2*rcenter*rcenter)) /(2*np.pi*rcenter*rcenter)
+    return center
+
+def get_RF(xoff, yoff, center):
+    X = np.linspace(-128, 128, 256)
+    Y = np.linspace(-128, 128, 256)
+    x, y = np.meshgrid(X,Y)
+    RF = receptiveField(x, y, xoff, yoff, center)
+
+    return RF
+
+def filter_video_PC(filename, maxframe, fitparameters):
+
+    xoff = fitparameters['xoff'] 
+    yoff = fitparameters['yoff']
+    rcenterM = fitparameters['rcenterM']
+    rcenterL = fitparameters['rcenterL']
+
+    RFM = get_RF(xoff, yoff, rcenterM)
+    RFL = get_RF(xoff, yoff, rcenterL)
+
+    vidcap = cv2.VideoCapture(filename)
+
+    counter = 0
+    success = True
+    ratesM = []
+    ratesL = []
+
+    while success and (counter < maxframe):
+        success, image = vidcap.read()
+        if not success:   # stop cleanly at end of video instead of passing None on
+            break
+        scone, mcone, lcone = image_to_cone(image)
+        
+        rateM = np.sum(mcone*RFM)
+        rateL = np.sum(lcone*RFL)
+
+        ratesM.append( rateM )
+        ratesL.append( rateL )
+
+        counter += 1
+        
+    return np.array(ratesM), np.array(ratesL)
+
+
+
+def filter_video_MC(filename, maxframe, fitparameters):
+
+    xoff = fitparameters['xoff'] 
+    yoff = fitparameters['yoff']
+    rcenter = fitparameters['rcenter']
+    rsurr = fitparameters['rsurr']
+
+    RFcenter = get_RF(xoff, yoff, rcenter)
+    RFsurround = get_RF(xoff, yoff, rsurr)
+
+    vidcap = cv2.VideoCapture(filename)
+
+    counter = 0
+    success = True
+
+    rates_center = []
+    rates_surround = []
+
+    while success and (counter < maxframe):
+        success, image = vidcap.read()
+        if not success:   # stop cleanly at end of video instead of passing None on
+            break
+        scone, mcone, lcone = image_to_cone(image)
+        
+        luminance_frame = mcone + lcone
+        
+        rate_center = np.sum(luminance_frame*RFcenter)
+        rates_center.append(rate_center)
+
+        rate_surround = np.sum(luminance_frame*RFsurround)
+        rates_surround.append(rate_surround)
+        
+        counter += 1
+
+    return np.array(rates_center), np.array(rates_surround)
+
+
+def filter_video_BO(filename, maxframe, fitparameters):
+
+    xoff = fitparameters['xoff'] 
+    yoff = fitparameters['yoff']
+    rcenterY = fitparameters['rcenterY']
+    rcenterB = fitparameters['rcenterB']
+
+    RFY = get_RF(xoff, yoff, rcenterY)
+    RFB = get_RF(xoff, yoff, rcenterB)
+
+    vidcap = cv2.VideoCapture(filename)
+
+    counter = 0
+    success = True
+    ratesL = []
+    ratesM = []
+    ratesB = []
+
+    while success and (counter < maxframe):
+        success, image = vidcap.read()
+        if not success:   # stop cleanly at end of video instead of passing None on
+            break
+        scone, mcone, lcone = image_to_cone(image)
+        
+        rateL = np.sum(lcone*RFY)
+        rateM = np.sum(mcone*RFY)
+        rateB = np.sum(scone*RFB)
+
+        ratesL.append( rateL )
+        ratesM.append( rateM )
+        ratesB.append( rateB )
+
+        counter += 1
+        
+    return np.array(ratesL), np.array(ratesM), np.array(ratesB)
+
+def image_to_cone(image):
+    """
+    Spectral fits and parameters for this display, measured by Barry and Hans
+    Image has to be BGR!
+    """
+    #Raw RGB values
+    im_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+
+    red  = np.double(im_rgb[:,:,0])
+    green  = np.double(im_rgb[:,:,1])
+    blue  = np.double(im_rgb[:,:,2])
+    
+    #Hans' gamma correction
+    rgamma = 0.01451  + 0.9855*((1.0*red/256)**2.3122)
+    ggamma = 0.005123 + 0.9949*((1.0*green/256)**2.2752)
+    bgamma = 0.02612  + 0.9739*((1.0*blue/256)**2.2818)
+    
+    lcone = 1.0*(2.74*rgamma + 3.4*ggamma + 1.34*bgamma)
+    mcone = 1.21*(1.06*rgamma + 3.58*ggamma + 2.07*bgamma)
+    scone = 0.212*rgamma + 8.28*ggamma + 285*bgamma
+
+    return scone, mcone, lcone
+
+
+def get_MC_response(rates_center, rates_surround, fitparameters, type):
+    rC = SimpleLowpass(fitparameters['lum_factor']*rates_center, fitparameters['dt'], fitparameters['tau1'])
+    rC = SimpleLowpass(rC, fitparameters['dt'], fitparameters['tau1'])
+    rC = SimpleLowpass(rC, fitparameters['dt'], fitparameters['tau1'])
+    rC = LuminanceGainControl(rC, fitparameters['dt'],  fitparameters['tau2'])
+
+    rS = SimpleLowpass(fitparameters['lum_factor']*rates_surround, fitparameters['dt'], fitparameters['tau1'])
+    rS = SimpleLowpass(rS, fitparameters['dt'], fitparameters['tau1'])
+    rS = SimpleLowpass(rS, fitparameters['dt'], fitparameters['tau1'])
+    rS = LuminanceGainControl(rS, fitparameters['dt'],  fitparameters['tau2'])
+
+    #Delay surround
+    delayed_surround = delay_signal(rS, fitparameters['dt'],  fitparameters['surround_delay'])
+
+    #Calculate input signal from center - surround
+    lum_signal = rec( rC -  fitparameters['ksurr'] * delayed_surround )
+
+    if type == "on-cell":
+        b = nltan(lum_signal, fitparameters['k1'])
+    elif type == "off-cell":
+        b = -1*nltan(lum_signal, fitparameters['k1'])
+    else:
+        raise ValueError(type + " ...not defined.")
+
+    filtered_signal = ContrastGainControlLoop(b, fitparameters['dt'], fitparameters['tauplus'], fitparameters['tauA'], fitparameters['c1'])
+    Rpc = rec( nltan(filtered_signal, fitparameters['k2']) )
+    
+    return Rpc
+
+
+def get_PC_response(lum_signalM, lum_signalL, fitparameters, type):
+    signalM = SimpleLowpass(fitparameters['lum_factor']*lum_signalM, fitparameters['dt'], fitparameters['tau1'])
+    signalM = SimpleLowpass(signalM, fitparameters['dt'], fitparameters['tau1'])
+    signalM = SimpleLowpass(signalM, fitparameters['dt'], fitparameters['tau1'])
+    signalM = LuminanceGainControl(signalM, fitparameters['dt'], fitparameters['tau2'])
+
+    signalL = SimpleLowpass(fitparameters['lum_factor']*lum_signalL, fitparameters['dt'], fitparameters['tau1']) 
+    signalL = SimpleLowpass(signalL, fitparameters['dt'], fitparameters['tau1'])
+    signalL = SimpleLowpass(signalL, fitparameters['dt'], fitparameters['tau1'])
+    signalL = LuminanceGainControl(signalL, fitparameters['dt'], fitparameters['tau2'])
+
+    bM = nltan(signalM, fitparameters['k1'])
+    bL = nltan(signalL, fitparameters['k1'])
+
+    if type == "Ron":
+        out = rec(bL - fitparameters['alpha']*bM)
+    elif type == "Gon":
+        out = rec(bM - fitparameters['alpha']*bL)
+    else:
+        raise ValueError(type + " ...not defined.")
+        
+    out = SimpleHighpass(out, fitparameters['dt'], fitparameters['tau3'])
+    Rpc = rec( fitparameters['k2']*out + fitparameters['o1'])
+    return Rpc
+
+def get_BO_response(lum_signalB, lum_signalM, lum_signalL, fitparameters):
+
+    signalM = SimpleLowpass(lum_signalM, fitparameters['dt'], fitparameters['tau1'])
+    signalM = SimpleLowpass(signalM, fitparameters['dt'], fitparameters['tau1'])
+    signalM = SimpleLowpass(signalM, fitparameters['dt'], fitparameters['tau1'])
+    signalM = LuminanceGainControl(signalM, fitparameters['dt'], fitparameters['tau2'])
+    bM = nltan(signalM, fitparameters['kM'])
+
+    signalL = SimpleLowpass(lum_signalL, fitparameters['dt'], fitparameters['tau1'])
+    signalL = SimpleLowpass(signalL, fitparameters['dt'], fitparameters['tau1'])
+    signalL = SimpleLowpass(signalL, fitparameters['dt'], fitparameters['tau1'])
+    signalL = LuminanceGainControl(signalL, fitparameters['dt'], fitparameters['tau2'])
+    bL = nltan(signalL, fitparameters['kL'])
+
+    bY = fitparameters['alpha']*bM + fitparameters['beta']*bL
+    bY = delay_signal(bY, fitparameters['dt'], fitparameters['delay'])
+
+    signalB = SimpleLowpass(lum_signalB, fitparameters['dt'], fitparameters['tau1']) 
+    signalB = SimpleLowpass(signalB, fitparameters['dt'], fitparameters['tau1'])
+    signalB = SimpleLowpass(signalB, fitparameters['dt'], fitparameters['tau1'])
+    signalB = LuminanceGainControl(signalB, fitparameters['dt'], fitparameters['tau2'])
+    bB = nltan(signalB, fitparameters['kB'])
+
+    out = rec(bB - bY)
+    Rpc = SimpleHighpass(out, fitparameters['dt'], fitparameters['tau3'])
+    Rpc = rec(Rpc - fitparameters['offset'])
+
+    return Rpc
+
+def compare_to_file(filename, Rpc, dt, column, shift):
+    cell_activity = np.loadtxt(filename)
+    
+    time_cell = np.arange(0, len(cell_activity))*dt + 5000 - shift*6.66
+    cell = cell_activity[(time_cell>5500) * (time_cell<64980), column]
+ 
+    time = np.arange(0, len(Rpc)*dt, dt) 
+    model = Rpc[(time>=5500) * (time<64980)]
+    
+    r = np.corrcoef(cell, model)
+    return r[0,1]

BIN
run_model/078Moff.pdf


BIN
run_model/110Ron.pdf


BIN
run_model/130Gon.pdf


BIN
run_model/178Bon.pdf


BIN
run_model/299Mon.pdf


File diff suppressed because it is too large
+ 315 - 0
run_model/BlueOn.ipynb


File diff suppressed because it is too large
+ 299 - 0
run_model/MC_off.ipynb


File diff suppressed because it is too large
+ 292 - 0
run_model/MC_on.ipynb


File diff suppressed because it is too large
+ 303 - 0
run_model/PC_Gon.ipynb


File diff suppressed because it is too large
+ 292 - 0
run_model/PC_Ron.ipynb


File diff suppressed because it is too large
+ 423 - 0
run_model/PC_off.ipynb


+ 748 - 0
run_model/Piecharts.ipynb

@@ -0,0 +1,748 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "import pylab as pl\n",
+    "from retinatools.library import filter_video_PC, get_PC_response, filter_video_MC, get_MC_response, filter_video_BO, get_BO_response, compare_to_file"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Note: all reliabilities rounded to 2 significant digits, and rounded up"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "pl.rcParams['figure.figsize'] = [15, 4]\n",
+    "def make_plots(cell_name, RpcF, RpfS, filename, dt, shift, column, retinal_reliability): \n",
+    "    \n",
+    "    # Calculate the R^2 values / variaces explained\n",
+    "    rF = compare_to_file(filename, RpcF, dt, column, shift)    \n",
+    "    rS = compare_to_file(filename, RpcS, dt, column, shift)\n",
+    "    spatial_factors = rS**2\n",
+    "    temporal_factors = rF**2 - rS**2\n",
+    "    missing_in_model = retinal_reliability - rF**2\n",
+    "    \n",
+    "    #Get the data for the plot\n",
+    "    cell_activity = np.loadtxt(filename)\n",
+    "    time_cell = np.arange(0, len(cell_activity))*dt + 5000 - shift*6.66 \n",
+    "    time_model = np.arange(0, len(RpcS)*dt, dt)\n",
+    "\n",
+    "    #Make the plot\n",
+    "    pl.figure()\n",
+    "    pl.subplot(1,2,1)\n",
+    "    pl.plot(time_model, RpcS/np.std(RpcS), label=\"Model without surround\")\n",
+    "    pl.plot(time_model, RpcF/np.std(RpcF), label = \"Full model\")\n",
+    "    pl.plot(time_cell, cell_activity[:,column]/np.std(cell_activity[:,column]), label=\"Cell data\")\n",
+    "    pl.xlim([10000,11500])\n",
+    "    pl.ylim([0,8])\n",
+    "    pl.xlabel(\"Time [ms]\")\n",
+    "    pl.ylabel(\"Activity [Z-scored]\")\n",
+    "    pl.legend();\n",
+    "    pl.title(\"r_model=\" + str(rF))\n",
+    "    pl.subplot(1,2,2)\n",
+    "    labels = 'Center-contribution', '\"Surround\"-Contribution', 'Missing in Model', 'Individual variations'\n",
+    "    sizes = [spatial_factors, temporal_factors, missing_in_model, 1-retinal_reliability]\n",
+    "    explode = (0.1, 0.1, 0.1, 0.1)\n",
+    "    pl.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=-60)\n",
+    "    pl.axis('equal')\n",
+    "    pl.title(\"Cell: \" +  cell_name);\n",
+    "\n",
+    "    #Save figure + data\n",
+    "    pl.savefig(\"./\"+ cell_name + \".pdf\")\n",
+    "    \n",
+    "    # Select data for cell and model, and save it\n",
+    "    t0 = 5000\n",
+    "    time_cell = np.arange(0, len(cell_activity))*dt + 5000 - shift*6.66\n",
+    "    cell = cell_activity[(time_cell>t0) * (time_cell<64980), column]\n",
+    "    time = np.arange(0, len(RpcF)*dt, dt) \n",
+    "    model = RpcF[(time>=t0) * (time<64980)]\n",
+    "    time_file = time[(time>=t0) * (time<64980)]\n",
+    "    model_data_save = np.transpose([time_file, cell/np.std(cell), model/np.std(model)])\n",
+    "    np.savetxt(\"./\"+ cell_name + '_time_cell_fit.csv', model_data_save, fmt='%1.5f', delimiter=', ', newline='\\n', header='time [ms], cell, model', footer='', comments = '', encoding=None)\n",
+    "\n",
+    "    pl.close()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# PC cells"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "for cell in [\"110Ron\", \"130Gon\"]:\n",
+    "    \n",
+    "    if cell == \"110Ron\":\n",
+    "        fitparameters = {'xoff': 2.9, 'yoff': -1.8, 'rcenterM': 20, 'rcenterL': 5.0, \\\n",
+    "                         'lum_factor': 1200, 'tau1': 6.9, 'tau2': 60, 'k1': 0.017, 'tau3': 400,\\\n",
+    "                         'alpha': 0.68, 'k2': 4, 'o1': 0.07, 'dt': 1000/150.}\n",
+    "        filename = './data/File110Ron.txt'\n",
+    "        celltype = \"Ron\"\n",
+    "        fitparameters_s = fitparameters.copy()\n",
+    "        fitparameters_s['rcenterM'] = fitparameters_s['rcenterL']\n",
+    "        retinal_reliability = 0.91**2 # From Barry for 110Ron\n",
+    "\n",
+    "    elif cell == \"130Gon\":\n",
+    "        fitparameters = {'xoff': 2.9, 'yoff': -2.2, 'rcenterM': 4.9, 'rcenterL': 32, \\\n",
+    "                         'lum_factor': 290, 'tau1': 7.8, 'tau2': 100, 'k1': 0.018, 'tau3': 2000,\\\n",
+    "                         'alpha': 0.71, 'k2': 3.0, 'o1': 0.06, 'dt': 1000/150.}\n",
+    "        filename = './data/File130Gn.txt'\n",
+    "        celltype = \"Gon\"\n",
+    "        fitparameters_s = fitparameters.copy()\n",
+    "        fitparameters_s['rcenterL'] = fitparameters_s['rcenterM']\n",
+    "        retinal_reliability = 0.80**2 # From Barry for 130Gon\n",
+    "\n",
+    "\n",
+    "    # full model\n",
+    "    lum_signalM, lum_signalL = filter_video_PC('../1x10_256.mpg', 9750, fitparameters)\n",
+    "    RpcF = get_PC_response(lum_signalM, lum_signalL, fitparameters, celltype)\n",
+    "\n",
+    "    # no-surround model\n",
+    "    lum_signalM, lum_signalL = filter_video_PC('../1x10_256.mpg', 9750, fitparameters_s)\n",
+    "    RpcS = get_PC_response(lum_signalM, lum_signalL, fitparameters_s, celltype)\n",
+    "    \n",
+    "    make_plots(cell, RpcF, RpcS, filename, dt = fitparameters['dt'], shift = 2, column = 0, retinal_reliability = retinal_reliability)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# MC cells"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "for cell in [\"299Mon\", \"078Moff\"]:\n",
+    "    \n",
+    "    if cell == \"299Mon\":\n",
+    "        fitparameters = {'xoff': -2.5, 'yoff': -5.5, 'rcenter': 4.0, 'rsurr': 13.0, 'ksurr': 0.45, 'surround_delay': 3, \\\n",
+    "                         'lum_factor': 60, 'tau1': 7.3, 'tau2': 38, 'k1': 0.019, 'c1': 0.016, 'tauplus': 110, \\\n",
+    "                         'tauA': 20, 'k2': 50, 'dt': 1000/150.}\n",
+    "        filename = './data/File299Mon.txt'\n",
+    "        celltype = 'on-cell'\n",
+    "        fitparameters_s = fitparameters.copy()\n",
+    "        fitparameters_s['rsurr'] = fitparameters_s['rcenter']\n",
+    "        retinal_reliability = 0.92**2 # From 299 MC on cell\n",
+    "\n",
+    "    elif cell == \"078Moff\":\n",
+    "        fitparameters = {'xoff': -8.4, 'yoff': 2.5, 'rcenter': 4.5, 'rsurr': 20, 'ksurr': 0.34, 'surround_delay': 8, \\\n",
+    "                         'lum_factor': 1000, 'tau1': 5.0, 'tau2': 16, 'k1': 0.00048, 'c1': 0.0024, 'tauplus': 22, \\\n",
+    "                         'tauA': 33, 'k2': 560, 'dt': 1000/150.}\n",
+    "        filename = './data/File078Moff.txt'\n",
+    "        celltype = 'off-cell'\n",
+    "        fitparameters_s = fitparameters.copy()\n",
+    "        fitparameters_s['rsurr'] = fitparameters_s['rcenter']\n",
+    "        retinal_reliability = 0.89**2 # From 78 MC off cell\n",
+    "\n",
+    "\n",
+    "    # full model\n",
+    "    rates_center, rates_surround = filter_video_MC('../1x10_256.mpg', 9750, fitparameters)\n",
+    "    RpcF = get_MC_response(rates_center, rates_surround, fitparameters, type = celltype)\n",
+    "\n",
+    "    # no-surround model\n",
+    "    rates_center, rates_surround = filter_video_MC('../1x10_256.mpg', 9750, fitparameters_s)\n",
+    "    RpcS = get_MC_response(rates_center, rates_surround, fitparameters_s, type = celltype)\n",
+    "    \n",
+    "    make_plots(cell, RpcF, RpcS, filename, dt = fitparameters['dt'], shift = 3, column = 0, retinal_reliability = retinal_reliability)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Blue ON "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "cell = \"178Bon\"\n",
+    "\n",
+    "fitparameters = {'xoff': -7.6, 'yoff': 9.5, 'rcenterB': 10.0, 'rcenterY': 14.0,\\\n",
+    "                 'tau1': 10.2, 'tau2': 86, 'delay': 29, 'beta': 0.0, 'alpha': 4.9,  \\\n",
+    "                 'tau3': 280, 'kL': 0.01, 'kM': 0.051, 'kB': 0.12, 'offset': 0.025, 'dt': 1000/150.}\n",
+    "\n",
+    "filename = './data/File178Bon.txt'\n",
+    "column = 0\n",
+    "shift = 3\n",
+    "fitparameters_s = fitparameters.copy()\n",
+    "fitparameters_s['rcenterY'] = fitparameters_s['rcenterB']\n",
+    "retinal_reliability = 0.91**2 # From barry 178 Bon cell\n",
+    "\n",
+    "\n",
+    "# full model\n",
+    "lum_signalL, lum_signalM, lum_signalB = filter_video_BO('../1x10_256.mpg', 9750, fitparameters)\n",
+    "RpcF = get_BO_response(lum_signalB, lum_signalM, lum_signalL, fitparameters)\n",
+    "\n",
+    "# no-surround model\n",
+    "lum_signalL, lum_signalM, lum_signalB = filter_video_BO('../1x10_256.mpg', 9750, fitparameters_s)\n",
+    "RpcS = get_BO_response(lum_signalB, lum_signalM, lum_signalL, fitparameters_s)\n",
+    "\n",
+    "make_plots(cell, RpcF, RpcS, filename, dt = fitparameters['dt'], shift = 3, column =0, retinal_reliability = retinal_reliability)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 47,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "from scipy import stats"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 48,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Summary Statistics"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 49,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "bon = [0.7802624476726236, 0.8153849801352536, 0.7320091829732691]\n",
+    "MCoff = [0.8049267745650888, 0.8036790439827152, 0.7935184997426455]\n",
+    "MCon = [0.8019790948348724, 0.8262426182591949, 0.8027547139233095]\n",
+    "Gon = [0.77803242901496, 0.7773717115038241, 0.7643298886394182]\n",
+    "Ron = [0.8356837630544626, 0.8421023207571601, 0.8631643469665483]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 50,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.8055167908846377"
+      ]
+     },
+     "execution_count": 50,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.mean([MCoff, MCon])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 51,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.009978215489268237"
+      ]
+     },
+     "execution_count": 51,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std([MCoff, MCon])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 52,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.004073589415373797"
+      ]
+     },
+     "execution_count": 52,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std([MCoff, MCon])/np.sqrt(6)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 53,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[None, None, None]"
+      ]
+     },
+     "execution_count": 53,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "a = []\n",
+    "[a.append(s) for s in MCon]\n",
+    "[a.append(s) for s in MCoff]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 56,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[0.8019790948348724, 0.8262426182591949, 0.8027547139233095, 0.8049267745650888, 0.8036790439827152, 0.7935184997426455]\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "Ttest_1sampResult(statistic=180.5122672859438, pvalue=9.899823797921202e-11)"
+      ]
+     },
+     "execution_count": 56,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "print(a)\n",
+    "stats.ttest_1samp(a,0.0)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 57,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.8101140766560623"
+      ]
+     },
+     "execution_count": 57,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.mean([Gon, Ron])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 58,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.038054509734225"
+      ]
+     },
+     "execution_count": 58,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std([Gon, Ron])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 59,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.015535688543437792"
+      ]
+     },
+     "execution_count": 59,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std([Gon, Ron])/np.sqrt(6)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 60,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[None, None, None]"
+      ]
+     },
+     "execution_count": 60,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "a = []\n",
+    "[a.append(s) for s in Gon]\n",
+    "[a.append(s) for s in Ron]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 61,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[0.77803242901496, 0.7773717115038241, 0.7643298886394182, 0.8356837630544626, 0.8421023207571601, 0.8631643469665483]\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "Ttest_1sampResult(statistic=47.60198351217366, pvalue=7.729070905370263e-08)"
+      ]
+     },
+     "execution_count": 61,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "print(a)\n",
+    "stats.ttest_1samp(a,0.0)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.8014294544016898"
+      ]
+     },
+     "execution_count": 13,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.mean([bon, MCoff, MCon, Gon, Ron])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.03193849279442697"
+      ]
+     },
+     "execution_count": 14,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std([bon, MCoff, MCon, Gon, Ron])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.008246483379718748"
+      ]
+     },
+     "execution_count": 5,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std([bon, MCoff, MCon, Gon, Ron])/np.sqrt(15)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 63,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[None, None, None]"
+      ]
+     },
+     "execution_count": 63,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "a = []\n",
+    "[a.append(s) for s in bon]\n",
+    "[a.append(s) for s in MCoff]\n",
+    "[a.append(s) for s in MCon]\n",
+    "[a.append(s) for s in Gon]\n",
+    "[a.append(s) for s in Ron]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 65,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "Ttest_1sampResult(statistic=93.88904032952797, pvalue=5.283140568700127e-21)"
+      ]
+     },
+     "execution_count": 65,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "stats.ttest_1samp(a,0.0)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.7758855369270488"
+      ]
+     },
+     "execution_count": 15,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.mean(bon)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.034178442512351186"
+      ]
+     },
+     "execution_count": 16,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std(bon)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "0.019732932984988107"
+      ]
+     },
+     "execution_count": 13,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "np.std(bon)/np.sqrt(3)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 62,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "Ttest_1sampResult(statistic=32.104091600278146, pvalue=0.00096883034904349)"
+      ]
+     },
+     "execution_count": 62,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "stats.ttest_1samp(bon,0.0)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.8"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

+ 1 - 0
run_model_HPC/Bon-cell.mp4

@@ -0,0 +1 @@
+/annex/objects/MD5-s26709934--7f060efabe1ea96a6cb7b20214121a25

+ 1 - 0
run_model_HPC/Gon-cell.mp4

@@ -0,0 +1 @@
+/annex/objects/MD5-s32136648--1c28e170658702c4f51570150b3d3e28

+ 1 - 0
run_model_HPC/Ron-cell.mp4

@@ -0,0 +1 @@
+/annex/objects/MD5-s33709877--aafe496cd49711753cbe05e2159fb1c3

File diff suppressed because it is too large
+ 4772 - 0
run_model_HPC/data/19796315.out


File diff suppressed because it is too large
+ 4772 - 0
run_model_HPC/data/19796884.out


File diff suppressed because it is too large
+ 4772 - 0
run_model_HPC/data/19829503.out


+ 126 - 0
run_model_HPC/make_movie_BO.py

@@ -0,0 +1,126 @@
+import argparse
+import ipyparallel as ipp
+import numpy as np
+import os
+import cv2 as cv2
+import time
+from retinatools.library import image_to_cone
+
+#Helper function
+def RF_FilterFrame(idx, luminance_frame, r_RF):
+    """
+    Helper function to do the parallelized filtering with RFs.
+    """
+    from retinatools.library import get_RF
+
+    X = np.linspace(-128, 128, 256)
+    Y = np.linspace(-128, 128, 256)
+    idx_x = idx%256
+    idx_y = idx // 256    # integer division (np.int is deprecated)
+    xoff = X[idx_x]
+    yoff = Y[idx_y]
+    RF = get_RF(xoff, yoff, r_RF)
+    product = np.sum(luminance_frame * RF)
+    return product
+
+def parallel_prediction(idx_x, RFstrip, fitparameters):    
+    ''' 
+    Helper function to do the parallelized processing of the filtered signal; 
+    This transform center/surround signal to activity with standard parameters.
+    '''
+    from retinatools.library import get_BO_response
+
+    lum_signalS = RFstrip[0, idx_x, :]
+    lum_signalM = RFstrip[1, idx_x, :]
+    lum_signalL = RFstrip[2, idx_x, :]
+    R = get_BO_response(lum_signalS, lum_signalM, lum_signalL, fitparameters)
+    return R
+
+
+
+fitparameters = {'rcenterB': 10.0, 'rcenterY': 14.0,\
+                 'tau1': 10.2, 'tau2': 86, 'delay': 29, 'beta': 0.0, 'alpha': 4.9,  \
+                 'tau3': 280, 'kL': 0.01, 'kM': 0.051, 'kB': 0.12, 'offset': 0.025, 'dt': 1000/150.}
+cell = "178Bon"
+
+savepath = "/jukebox/tank/schottdorf/tmp/"
+
+N_MAXFRAME = 4500  # process only the first 2.5 min of the mpg (2.5 min * 60 s * 30 fps = 4500 frames) -> 30 s retinal video
+
+def main(profile):
+    rc = ipp.Client(profile=profile)                 # ipyparallel status
+    print(rc.ids)
+
+    dview = rc[:]                                    # Set up the engines
+    dview.use_cloudpickle()                          # Make sure to fix the closure over luminance_frame
+    with dview.sync_imports():
+        import numpy as np
+
+    ################################
+    #####   Process the movie  #####
+    ################################
+    vidcap = cv2.VideoCapture('/usr/people/ms81/cells/1x10_256.mpg')
+    data = np.ones((3, 256, 256, N_MAXFRAME))            # for s,m,l  X 256px  X 256px  X N_frames
+    counter = 0
+    success = True
+    t0 = time.time()
+    while success and (counter < N_MAXFRAME):   
+        success, image = vidcap.read()
+        scone, mcone, lcone = image_to_cone(image)        
+        for tt in ["S", "M", "L"]:
+            if tt == "S":
+                r_RF = fitparameters['rcenterB']
+                luminance_frame = scone
+                idx = 0
+            elif tt == "M":
+                r_RF = fitparameters['rcenterY']
+                luminance_frame = mcone
+                idx = 1
+            elif tt == "L":
+                r_RF = fitparameters['rcenterY']
+                luminance_frame = lcone
+                idx = 2
+
+            rc[:].push(dict( luminance_frame = luminance_frame, RF_FilterFrame = RF_FilterFrame, r_RF = r_RF ))         # Seed namespace for the ipython engines. 
+            tmp = rc[:].map_sync( lambda ix: RF_FilterFrame(ix, luminance_frame, r_RF), range(0,256*256) )              # Execute parallel computation on all ipython engines
+            data[idx,:,:,counter] = np.reshape(tmp, (256,256))                                                          # Reorganize data back to frame
+        counter += 1
+        print(counter/N_MAXFRAME)
+    t1 = time.time()
+
+    # filename = "rgc-{0}-raw.npy".format(os.environ["SLURM_JOB_ID"])
+    # np.save(filename, data)
+    print("Part 1 DONE; Time needed [in h]:")
+    print( (t1-t0)/3600 )
+
+
+    ################################
+    ####  Apply temp. process  #####
+    ################################
+    t1 = time.time()
+    output_model = np.zeros((256, 256, N_MAXFRAME))
+    rc[:].push(dict( parallel_prediction = parallel_prediction))
+    for idx_x in range(0,256):
+        RFstrip = data[:,idx_x,:,:]
+        rc[:].push(dict( RFstrip = RFstrip, fitparameters = fitparameters, parallel_prediction = parallel_prediction ))
+        tmp = rc[:].map_sync(lambda idx_y: parallel_prediction(idx_y, RFstrip, fitparameters), range(0,256))
+        output_model[idx_x,:,:] = np.reshape(tmp, (256, N_MAXFRAME))
+        print(idx_x/256)
+    t2 = time.time()
+
+    filename = "-{0}-frames.npy".format(os.environ["SLURM_JOB_ID"])
+    pp = savepath + cell + filename
+    np.save(pp, output_model)
+    print("Part 2 DONE; Time needed [in h]:")
+    print("Saved: " + pp)
+    print((t2-t1)/3600)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+    parser.add_argument("-p", "--profile", required=True, help="Name of IPython profile to use")
+
+    args = parser.parse_args()
+    main(args.profile)
+    print("Done and done.")
+

+ 124 - 0
run_model_HPC/make_movie_MC.py

@@ -0,0 +1,124 @@
+import argparse
+import ipyparallel as ipp
+import numpy as np
+import os
+import cv2 as cv2
+import time
+from retinatools.library import image_to_cone
+
+#Helper function
+def RF_FilterFrame(idx, luminance_frame, r_RF):
+    """
+    Helper function to do the parallelized filtering with RFs.
+    """
+    from retinatools.library import get_RF
+
+    X = np.linspace(-128, 128, 256)
+    Y = np.linspace(-128, 128, 256)
+    idx_x = idx%256
+    idx_y = idx // 256    # integer division (np.int is deprecated)
+    xoff = X[idx_x]
+    yoff = Y[idx_y]
+    RF = get_RF(xoff, yoff, r_RF)
+    product = np.sum(luminance_frame * RF)
+    return product
+
+def parallel_prediction(idx_x, RFstrip, fitparameters, celltype):    
+    ''' 
+    Helper function to do the parallelized processing of the filtered signal; 
+    This transforms the center/surround signal into activity with standard parameters.
+    '''
+    from retinatools.library import get_MC_response
+
+    signal_center   = RFstrip[0, idx_x, :]
+    signal_surround = RFstrip[1, idx_x, :]
+    R = get_MC_response(signal_center, signal_surround, fitparameters, celltype)
+    return R
+
+
+fitparameters = {'rcenter': 4.0, 'rsurr': 13.0, 'ksurr': 0.45, 'surround_delay': 3, \
+                 'lum_factor': 60, 'tau1': 7.3, 'tau2': 38, 'k1': 0.019, 'c1': 0.016, 'tauplus': 110, \
+                 'tauA': 20, 'k2': 50, 'dt': 1000/150.}  # for 299  Mon
+celltype = "on-cell"
+
+# fitparameters = {'rcenter': 4.5, 'rsurr': 20, 'ksurr': 0.34, 'surround_delay': 8, \
+#                     'lum_factor': 1000, 'tau1': 5.0, 'tau2': 16, 'k1': 0.00048, 'c1': 0.0024, 'tauplus': 22, \
+#                     'tauA': 33, 'k2': 560, 'dt': 1000/150.}  # For 078Moff.txt
+# celltype = "off-cell"
+savepath = "/jukebox/tank/schottdorf/tmp/"
+
+N_MAXFRAME = 4500  # process only the first 2.5 min of the mpg (2.5 min * 60 s * 30 fps = 4500 frames) -> 30 s retinal video
+
+def main(profile):
+    rc = ipp.Client(profile=profile)                 # ipyparallel status
+    print(rc.ids)
+
+    dview = rc[:]                                    # Set up the engines
+    dview.use_cloudpickle()                          # Make sure to fix the closure over luminance_frame
+    with dview.sync_imports():
+        import numpy as np
+
+    ################################
+    #####   Process the movie  #####
+    ################################
+    vidcap = cv2.VideoCapture('/usr/people/ms81/cells/1x10_256.mpg')
+    data = np.ones((2, 256, 256, N_MAXFRAME))            # for center & surround   X 256px  X 256px  X N_frames
+    counter = 0
+    success = True
+    t0 = time.time()
+    while success and (counter < N_MAXFRAME):   
+        success, image = vidcap.read()
+        scone, mcone, lcone = image_to_cone(image)        
+        luminance_frame = mcone + lcone
+        for tt in ["center", "surr"]:
+            if tt == "center":
+                r_RF = fitparameters['rcenter']
+                idx = 0
+            else:
+                r_RF = fitparameters['rsurr']
+                idx = 1
+            rc[:].push(dict( luminance_frame = luminance_frame, RF_FilterFrame = RF_FilterFrame, r_RF = r_RF ))         # Seed namespace for the ipython engines. 
+            tmp = rc[:].map_sync( lambda ix: RF_FilterFrame(ix, luminance_frame, r_RF), range(0,256*256) )              # Execute parallel computation on all ipython engines
+            data[idx,:,:,counter] = np.reshape(tmp, (256,256))                                                          # Reorganize data back to frame
+        counter += 1
+        print(counter/N_MAXFRAME)
+    t1 = time.time()
+
+    # filename = "rgc-{0}-raw.npy".format(os.environ["SLURM_JOB_ID"])
+    # np.save(filename, data)
+    print("Part 1 DONE; Time needed [in h]:")
+    print( (t1-t0)/3600 )
+
+
+    ################################
+    ####  Apply temp. process  #####
+    ################################
+    t1 = time.time()
+    output_model = np.zeros((256, 256, N_MAXFRAME))
+    rc[:].push(dict( parallel_prediction = parallel_prediction))
+    for idx_x in range(0,256):
+        RFstrip = data[:,idx_x,:,:]
+        rc[:].push(dict( RFstrip = RFstrip, celltype = celltype, fitparameters = fitparameters, parallel_prediction = parallel_prediction ))
+        tmp = rc[:].map_sync(lambda idx_y: parallel_prediction(idx_y, RFstrip, fitparameters, celltype), range(0,256))
+        output_model[idx_x,:,:] = np.reshape(tmp, (256, N_MAXFRAME))
+        print(idx_x/256)
+    t2 = time.time()
+
+    filename = "-{0}-frames.npy".format(os.environ["SLURM_JOB_ID"])
+    pp = savepath + celltype + filename
+    np.save(pp, output_model)
+    print("Part 2 DONE; Time needed [in h]:")
+    print("Saved: " + pp)
+    print((t2-t1)/3600)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+    parser.add_argument("-p", "--profile", required=True, help="Name of IPython profile to use")
+
+    args = parser.parse_args()
+    main(args.profile)
+    print("Done and done.")
+

+ 127 - 0
run_model_HPC/make_movie_PC.py

@@ -0,0 +1,127 @@
+import argparse
+import ipyparallel as ipp
+import numpy as np
+import os
+import cv2 as cv2
+import time
+from retinatools.library import image_to_cone
+
+#Helper function
+def RF_FilterFrame(idx, luminance_frame, r_RF):
+    """
+    Helper function to do the parallelized filtering with RFs.
+    """
+    from retinatools.library import get_RF
+
+    X = np.linspace(-128, 128, 256)
+    Y = np.linspace(-128, 128, 256)
+    idx_x = idx%256
+    idx_y = idx // 256    # integer division (np.int is deprecated)
+    xoff = X[idx_x]
+    yoff = Y[idx_y]
+    RF = get_RF(xoff, yoff, r_RF)
+    product = np.sum(luminance_frame * RF)
+    return product
+
+def parallel_prediction(idx_x, RFstrip, fitparameters, celltype):    
+    ''' 
+    Helper function to do the parallelized processing of the filtered signal; 
+    This transforms the center/surround signal into activity with standard parameters.
+    '''
+    from retinatools.library import get_PC_response
+
+    lum_signalM = RFstrip[0, idx_x, :]
+    lum_signalL = RFstrip[1, idx_x, :]
+    R = get_PC_response(lum_signalM, lum_signalL, fitparameters, celltype)
+    return R
+
+
+
+# fitparameters = {'rcenterM': 20, 'rcenterL': 5.0, \
+#                     'lum_factor': 1200, 'tau1': 6.9, 'tau2': 60, 'k1': 0.017, 'tau3': 400,\
+#                     'alpha': 0.68, 'k2': 4, 'o1': 0.07, 'dt': 1000/150.}  #110Ron
+# celltype = "Ron"
+
+fitparameters = {'rcenterM': 4.9, 'rcenterL': 32, \
+                 'lum_factor': 290, 'tau1': 7.8, 'tau2': 100, 'k1': 0.018, 'tau3': 2000,\
+                 'alpha': 0.71, 'k2': 3.0, 'o1': 0.06, 'dt': 1000/150.}  #130Gon
+celltype = "Gon"
+
+
+
+savepath = "/jukebox/tank/schottdorf/tmp/"
+
+N_MAXFRAME = 4500  # process only the first 2.5 min of the mpg (2.5 min * 60 s * 30 fps = 4500 frames) -> 30 s retinal video
+
+def main(profile):
+    rc = ipp.Client(profile=profile)                 # ipyparallel status
+    print(rc.ids)
+
+    dview = rc[:]                                    # Set up the engines
+    dview.use_cloudpickle()                          # Make sure to fix the closure over luminance_frame
+    with dview.sync_imports():
+        import numpy as np
+
+    ################################
+    #####   Process the movie  #####
+    ################################
+    vidcap = cv2.VideoCapture('/usr/people/ms81/cells/1x10_256.mpg')
+    data = np.ones((2, 256, 256, N_MAXFRAME))            # for center & surround   X 256px  X 256px  X N_frames
+    counter = 0
+    success = True
+    t0 = time.time()
+    while success and (counter < N_MAXFRAME):   
+        success, image = vidcap.read()
+        scone, mcone, lcone = image_to_cone(image)        
+        for tt in ["M", "L"]:
+            if tt == "M":
+                r_RF = fitparameters['rcenterM']
+                luminance_frame = mcone
+                idx = 0
+            else:
+                r_RF = fitparameters['rcenterL']
+                luminance_frame = lcone
+                idx = 1
+            rc[:].push(dict( luminance_frame = luminance_frame, RF_FilterFrame = RF_FilterFrame, r_RF = r_RF ))         # Seed namespace for the ipython engines. 
+            tmp = rc[:].map_sync( lambda ix: RF_FilterFrame(ix, luminance_frame, r_RF), range(0,256*256) )              # Execute parallel computation on all ipython engines
+            data[idx,:,:,counter] = np.reshape(tmp, (256,256))                                                          # Reorganize data back to frame
+        counter += 1
+        print(counter/N_MAXFRAME)
+    t1 = time.time()
+
+    # filename = "rgc-{0}-raw.npy".format(os.environ["SLURM_JOB_ID"])
+    # np.save(filename, data)
+    print("Part 1 DONE; Time needed [in h]:")
+    print( (t1-t0)/3600 )
+
+
+    ################################
+    ####  Apply temp. process  #####
+    ################################
+    t1 = time.time()
+    output_model = np.zeros((256, 256, N_MAXFRAME))
+    rc[:].push(dict( parallel_prediction = parallel_prediction))
+    for idx_x in range(0,256):
+        RFstrip = data[:,idx_x,:,:]
+        rc[:].push(dict( RFstrip = RFstrip, celltype = celltype, fitparameters = fitparameters, parallel_prediction = parallel_prediction ))
+        tmp = rc[:].map_sync(lambda idx_y: parallel_prediction(idx_y, RFstrip, fitparameters, celltype), range(0,256))
+        output_model[idx_x,:,:] = np.reshape(tmp, (256, N_MAXFRAME))
+        print(idx_x/256)
+    t2 = time.time()
+
+    filename = "-{0}-frames.npy".format(os.environ["SLURM_JOB_ID"])
+    pp = savepath + celltype + filename
+    np.save(pp, output_model)
+    print("Part 2 DONE; Time needed [in h]:")
+    print("Saved: " + pp)
+    print((t2-t1)/3600)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+    parser.add_argument("-p", "--profile", required=True, help="Name of IPython profile to use")
+
+    args = parser.parse_args()
+    main(args.profile)
+    print("Done and done.")
+

+ 193 - 0
run_model_HPC/make_movie_from_files.ipynb

@@ -0,0 +1,193 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Misc notes"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Barry's data is mostly from 3 deg to 8 deg of eccentricity; most cells are around 5 deg eccentricity. There is a nonlinear relationship between retinal eccentricity (in $mm$) and visual angle (in $deg$) in macaque, see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC50193/. A polynomial fit is given by\n",
+    "\\begin{eqnarray}\n",
+    "A = a + b E + c E^2\n",
+    "\\end{eqnarray}\n",
+    "with $a = 0.1$, $b = 4.21$, and $c = 0.038$. Here $A$ is eccentricity in $deg$ and $E$ is eccentricity in $mm$. Using this equation, the cells are situated between $0.7$mm and $1.8$mm, mostly around $1.2$mm.\n",
+    "\n",
+    "Taking the derivative of the equation, one gets the magnification factor\n",
+    "\\begin{eqnarray}\n",
+    "\\frac{dA}{dE} = b + 2cE \\qquad\\text{ in deg/mm}\n",
+    "\\end{eqnarray}\n",
+    "Evaluating this at $E=1.2\\,[0.7,\\,1.8]$mm, the range of magnification factors is $4.30\\,[4.26,\\,4.35]$deg/mm. Inverting yields\n",
+    "\\begin{eqnarray}\n",
+    "    s = 0.232 [0.235,0.230] \\text{mm/deg}\n",
+    "\\end{eqnarray}\n",
+    "\n",
+    "The field of view is 256 px and 4.6degree across. Therefore\n",
+    "* 256px are 4.6deg $\\to$ 56px per 1deg\n",
+    "* 256px are $s*4.6\\text{deg}=1.07[1.06, 1.08]$mm $\\to$ 48px are 0.2mm"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "import cv2 as cv2\n",
+    "import pylab as pl\n",
+    "from matplotlib import cm"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Import the movie to show in comparison"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "frame_raw = np.ones((3,256,256,4500))\n",
+    "\n",
+    "vidcap = cv2.VideoCapture('../../1x10_256.mpg')\n",
+    "success, image = vidcap.read()\n",
+    "counter = 0\n",
+    "success = True\n",
+    "while success and (counter < 4500):   # first 2.5 min of the mpg (2.5 min * 60 s * 30 fps = 4500 frames) -> 30 s retinal video\n",
+    "    success, image = vidcap.read()\n",
+    "    \n",
+    "    #Raw RGB values\n",
+    "    im_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
+    "\n",
+    "    red   = np.double(im_rgb[:,:,0])\n",
+    "    green = np.double(im_rgb[:,:,1])\n",
+    "    blue  = np.double(im_rgb[:,:,2])\n",
+    "    \n",
+    "    #Save luminance frame\n",
+    "    frame_raw[:,:,:,counter] = np.array([red/256., green/256., blue/256.])\n",
+    "    counter += 1"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Parse the npy file to show model prediction"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "filtered_frames = np.load(\"./Gon-19829503-frames.npy\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "startframe = 600 #1sec before video begins at 150fps\n",
+    "\n",
+    "for idx in range(startframe,4500):\n",
+    "    \n",
+    "\n",
+    "    cell_frame = filtered_frames[:,:,idx]\n",
+    "    cell_frame[230:240,180:(180+48)] = 10  # 0.2 mm\n",
+    "\n",
+    "    fig, axes = pl.subplots(nrows=1, ncols=2, figsize=(7, 4))\n",
+    "    \n",
+    "    r = np.reshape(frame_raw[0,:,:,idx],(256, 256))\n",
+    "    g = np.reshape(frame_raw[1,:,:,idx],(256, 256))\n",
+    "    b = np.reshape(frame_raw[2,:,:,idx],(256, 256))\n",
+    "\n",
+    "    img = np.zeros((256,256,3))\n",
+    "    img[:,:,0] = np.array(r, dtype = float)   # np.float is deprecated; use the builtin\n",
+    "    img[:,:,1] = np.array(g, dtype = float)\n",
+    "    img[:,:,2] = np.array(b, dtype = float)\n",
+    "\n",
+    "    img[230:240,180:(180+56),0] = 1 # 1 deg scale bar\n",
+    "    img[230:240,180:(180+56),1] = 1\n",
+    "    img[230:240,180:(180+56),2] = 1\n",
+    "    \n",
+    "    axes[0].imshow(img)\n",
+    "    axes[0].set_title(\"Video\")\n",
+    "    axes[0].axes.xaxis.set_ticklabels([])\n",
+    "    axes[0].axes.yaxis.set_ticklabels([])\n",
+    "    axes[0].text(x=182,y=220, s = \"1 deg\", fontsize=15, color = 'w')\n",
+    "\n",
+    "    axes[1].imshow(cell_frame, vmin=0, vmax=1, cmap=cm.Greys_r)\n",
+    "    axes[1].set_title(\"PC-Gon-cell\")\n",
+    "    axes[1].axes.xaxis.set_ticklabels([])\n",
+    "    axes[1].axes.yaxis.set_ticklabels([])\n",
+    "    axes[1].text(x=20,y=30, s = \"time = \"+str(np.floor((idx-startframe)/150*1000)/1000)+\"s\", fontsize=15,color = 'w')\n",
+    "    axes[1].text(x=170,y=220, s = \"0.2 mm\", fontsize=15, color = 'w')\n",
+    "\n",
+    "    fig.tight_layout()\n",
+    "\n",
+    "    pl.savefig(\"./video_frames/file%04d.png\" % (idx-startframe))\n",
+    "    pl.close()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Make the frames into a movie with ffmpeg:\n",
+    "\n",
+    "ffmpeg -r 60 -f image2 -s 504x288 -i ./video_frames/file%04d.png -vcodec libx264 -crf 5  -pix_fmt yuv420p simulation.mp4"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.8"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

+ 1 - 0
run_model_HPC/off-cell.mp4

@@ -0,0 +1 @@
+/annex/objects/MD5-s34319025--601b0c46b9590730e3b45d50bc294c03

+ 1 - 0
run_model_HPC/on-cell.mp4

@@ -0,0 +1 @@
+/annex/objects/MD5-s33302631--aea6f9eec945f1839354580ceea94b77

+ 36 - 0
run_model_HPC/run_python.sh

@@ -0,0 +1,36 @@
+#!/usr/bin/env bash
+
+# Output to a per-job slurm file, use partition Brody, run for 50 h on 24 CPUs
+#SBATCH -o %j.out
+#SBATCH -p Brody
+#SBATCH -J alltest
+#SBATCH --ntasks=24
+#SBATCH --ntasks-per-core=1
+#SBATCH --time=50:00:00
+
+module load anacondapy/5.3.1
+. activate testenvironment
+
+profile=ms81_${SLURM_JOB_ID}_$(hostname)
+
+echo "Creating profile ${profile}"
+ipython profile create ${profile}
+ 
+echo "Launching controller"
+ipcontroller --ip="*" --profile=${profile} --log-to-file &
+sleep 10
+
+echo "Launching engines"
+srun ipengine --profile=${profile} --location=$(hostname) --log-to-file &
+sleep 45
+
+echo "Launching job"
+python -u make_movie_PC.py --profile ${profile}
+
+if [ $? -eq 0 ]
+then
+    echo "Job ${SLURM_JOB_ID} completed successfully!"
+else
+    echo "FAILURE: Job ${SLURM_JOB_ID}"
+fi
+

+ 1 - 0
stimuli/1x10_256.mpg

@@ -0,0 +1 @@
+/annex/objects/MD5-s265338036--d64bdae05eb07895a8f30cda287c5a74

File diff suppressed because it is too large
+ 1 - 0
stimuli/GunSpectra.txt