{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# One target island (openCV 4.0.1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import modules\n", "\n", "Make sure that Python 3 and the following modules (recommended version ID) are installed on your computer before running this cell:\n", "\n", "numpy (1.18.1),\n", "sounddevice (0.3.14),\n", "openCV (4.0.1),\n", "tkinter (8.6.8),\n", "scipy (1.3.2),\n", "pyfirmata (1.1.0)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4.5.2\n" ] } ], "source": [ "import numpy as np # Import numpy module\n", " # conda install -c conda-forge python-sounddevice\n", "import sounddevice as sd # Import sounddevice module for \"real-time\" sound playback\n", "import cv2 # Import opencv module for image processing\n", "print(cv2.__version__) # Make sure you have the correct version of openCV \n", "import tkinter as tk # Import Tkinter module for GUIs\n", "\n", "from scipy.io import wavfile # WAV-file import filter\n", "from pyfirmata import Arduino # Arduino support\n", "from math import e # Euler's number\n", "from collections import OrderedDict # keep order of session settings\n", "\n", "import math # Import math module\n", "import time # Import time module for time measurements and pausing\n", "import random # Import random module for random number generation\n", "import json # JSON to read / write session settings\n", "import datetime # session date/time management\n", "import os # file/folder path handling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load default experiment settings\n", "\n", "For every experimental cofiguration you can copy the 'defaults.json' file as a specific experimental preset and load it here instead of 'defaults.json'." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"trial_number\": 50,\n", " \"session_duration\": 3600,\n", " \"trial_duration\": 60,\n", " \"start_radius\": 67,\n", " \"start_x\": 195,\n", " \"start_y\": 195,\n", " \"target_radius\": 80,\n", " \"target_duration\": 5,\n", " \"subject\": \"003901\",\n", " \"experiment_type\": \"aSIT\",\n", " \"background_color\": \"T\",\n", " \"init_duration\": 0.2,\n", " \"arena_x\": 650,\n", " \"arena_y\": 500,\n", " \"arena_radius\": 430,\n", " \"distractor_island\": 0,\n", " \"experiment_date\": \"2021-06-28_17-21-54\"\n", "}\n" ] } ], "source": [ "with open('defaults.json') as json_file:\n", " cfg = OrderedDict(json.load(json_file))\n", "cfg['experiment_date'] = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')\n", "\n", "print(json.dumps(cfg, indent=4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configure experiment in a starting screen\n", "\n", "The next cell will open a window that can be used to configure the experiment by entering the desired number of trials per session (usually limited by the number of rewards the feeder provides without refil), the session and trial durations, the size (here: 134 px in diameter = 17.87 cm) and the XY-position of the starting platform, the size of the target area, and the duration the animal has to spend in the target to recieve a reward. In addition, you can give your experiment a purposeful ID (e.g. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Configure experiment in a starting screen\n", "\n", "The next cell will open a window that can be used to configure the experiment by entering the desired number of trials per session (usually limited by the number of rewards the feeder provides without refill), the session and trial durations, the size (here: 134 px in diameter = 17.87 cm) and the XY-position of the starting platform, the size of the target area, and the duration the animal has to spend in the target to receive a reward. In addition, you can give your experiment a meaningful ID (e.g. subject ID, experiment type and date), provide information about the contrast between arena and subject, and define the duration the animal has to spend in the starting area to initialize a new trial.\n", "\n", "When choosing the target size, always make sure that it is small enough to fit into the arena without overlapping the starting/initialization area.\n", "\n", "The experiment ID you enter in the popup window will automatically be added to the file names of the protocols that will be generated for each session.\n", "\n", "**To save your configuration, hit the apply button**. To close the popup window and proceed, hit the continue button." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "EXPERIMENT ID: 003901_aSIT_2021-06-28_17-21-54\n", "Trials per session: 50\n", "Session duration [s]: 3600\n", "Trial duration [s]: 60\n", "Radius of the starting platform [pixels]: 67\n", "X-position of the starting platform [pixels]: 650\n", "Y-position of the starting platform [pixels]: 500\n", "Radius of the target platform [pixels]: 80\n", "Target duration [s]: 5\n", "Subject: 003901\n", "Experiment type: aSIT\n", "Subject is darker than background [T = True; F = False]: T\n", "Initialisation Duration [s]: 0.2\n", "Arena X coordinate [pixels]: 650\n", "Arena Y coordinate [pixels]: 500\n", "Arena radius [pixels]: 430\n", "Enable distractor island [0/1]: 0\n", "Experiment date: 2021-06-28_17-21-54\n" ] } ], "source": [ "def show_entry_fields():\n", "    print(\"EXPERIMENT ID: %s\" % experiment_id)\n", "    for key, field in entry_fields.items():\n", "        print(\"%s: %s\" % (labels[key].cget('text'), field.get()))\n", "\n", "def update_config():\n", "    global experiment_id  # update the module-level ID used for the protocol file names\n", "    for key, entry_field in entry_fields.items():\n", "        tp = type(cfg[key])\n", "        cfg[key] = tp(entry_field.get())\n", "    experiment_id = \"%s_%s_%s\" % (cfg['subject'], cfg['experiment_type'], cfg['experiment_date'])\n", "\n", "def show_and_update_config():\n", "    update_config()\n", "    show_entry_fields()\n", "\n", "# Starting screen\n", "master = tk.Tk()\n", "master.title('Experimental parameters')\n", "\n", "# Labels\n", "tk.Label(master, text=\"Instructions: \\n 1. Enter parameters (only integers are allowed as numbers) \\n 2. Press 'Apply' \\n 3. Press 'Continue'\").grid(row=0, padx=10, pady=10)\n",
Press 'Continue'\").grid(row=0, padx=10, pady=10)\n", "labels = OrderedDict()\n", "labels['trial_number'] = tk.Label(master, text=\"Trials per session\")\n", "labels['session_duration'] = tk.Label(master, text=\"Session duration [s]\")\n", "labels['trial_duration'] = tk.Label(master, text=\"Trial duration [s]\")\n", "labels['start_radius'] = tk.Label(master, text=\"Radius of the starting platform [pixels]\")\n", "labels['start_x'] = tk.Label(master, text=\"X-position of the starting platform [pixels]\")\n", "labels['start_y'] = tk.Label(master, text=\"Y-position of the starting platform [pixels]\")\n", "labels['target_radius'] = tk.Label(master, text=\"Radius of the target platform [pixels]\")\n", "labels['target_duration'] = tk.Label(master, text=\"Target duration [s]\")\n", "labels['subject'] = tk.Label(master, text=\"Subject\")\n", "labels['experiment_type'] = tk.Label(master, text=\"Experiment type\")\n", "labels['background_color'] = tk.Label(master, text=\"Subject is darker than background [T = True; F = False]\")\n", "labels['init_duration'] = tk.Label(master, text=\"Initialisation Duration [s]\")\n", "labels['arena_x'] = tk.Label(master, text=\"Arena X coordinate [pixels]\")\n", "labels['arena_y'] = tk.Label(master, text=\"Arena Y coordinate [pixels]\")\n", "labels['arena_radius'] = tk.Label(master, text=\"Arena radius [pixels]\")\n", "labels['distractor_island'] = tk.Label(master, text=\"Enable distractor island [0/1]\")\n", "labels['experiment_date'] = tk.Label(master, text=\"Experiment date\")\n", "\n", "for i, (key, label) in enumerate(labels.items()):\n", " label.grid(row=i + 4, padx=5, pady=5)\n", "\n", "# Entry fields\n", "entry_fields = OrderedDict()\n", "for i, (key, value) in enumerate(cfg.items()):\n", " entry_fields[key] = tk.Entry(master)\n", " entry_fields[key].insert('end', value)\n", " entry_fields[key].grid(row=i + 4, column=1)\n", "\n", "experiment_id = \"%s_%s_%s\" % (cfg['subject'], cfg['experiment_type'], cfg['experiment_date'])\n", "\n", "tk.Button(master, text='Apply', command=show_and_update_config).grid(row=len(entry_fields) + 4, column=0, sticky='s', pady=4)\n", "tk.Button(master, text='Continue', command=master.destroy).grid(row=len(entry_fields) + 4, column=1, sticky='w', pady=4)\n", "\n", "tk.mainloop()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Protocol 1\n", "\n", "Run the upcoming cell, to create a session folder and to save the chosen experimetal parameters to a JSON-file (\"experiment_id_parameters.json\"). The session folder will be created here where this notebook is located." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# This session's protocols will be saved to this folder\n", "save_to = os.path.join('sessions', experiment_id)\n", " \n", "if not os.path.exists(save_to):\n", " os.makedirs(save_to)\n", "\n", "# Saves all parameters to a JSON file with the user-defined \"Experiment ID\" as filename\n", "with open(os.path.join(save_to, experiment_id + '_parameters.json'), 'w') as f:\n", " json.dump(cfg, f, indent=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initialize the microcontroller\n", "\n", "The next cell Initializes a microcontroller for subsequent hardware control. This is, where you will probably have to get creative yourself, depending on what you would like to do. Here, we use an Arduino Nano. 
"With the channel definitions below, we can later provide differently colored illumination during the experiment (for example to stimulate with colors rather than sound) and trigger two different feeders.\n", "\n", "For the example setup, two automatic fish feeders with 27 feeding slots each were \"hacked\", so that they can be controlled *via* two additional Arduinos with motor shields. These additional Arduinos drive the feeder motors each time they get a trigger signal from the main Arduino. The two feeders allow the provision of 54 rewards per session. They were installed at different positions above the arena and are activated alternately, to lower the predictability of where in the arena the reward will drop. The starting feeder is chosen randomly for each new session.\n", "\n", "If you DON'T have a real Arduino connected, you can still run this experiment with the Fake Feeder: comment out the RealFeeder line in the next cell and uncomment the FakeFeeder line instead. The Fake Feeder will just print a text message in this notebook when feeding." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# Define color and feeder channels for Arduino output\n", "arduinoBlue = 9      # Blue diodes\n", "arduinoYellow = 10   # Yellow diodes\n", "arduinoRed = 11      # Red diodes\n", "arduinoFeeder1 = 12  # Trigger pulse for feeder1\n", "arduinoFeeder2 = 4   # Trigger pulse for feeder2\n", "\n", "# Feeder changes every trial, start feeder randomized\n", "feederID = random.choice([1, 2])\n", "\n", "\n", "class RealFeeder(Arduino):\n", "    def __init__(self, *args, **kwargs):\n", "        super(RealFeeder, self).__init__(*args, **kwargs)\n", "    def feed(self, feed_id):\n", "        self.digital[feed_id].write(1)\n", "        time.sleep(.068)\n", "        self.digital[feed_id].write(0)\n", "\n", "\n", "class FakeFeeder:\n", "    def feed(self, feed_id):\n", "        print(\"Fake Arduino - feeding %s...\" % feed_id)\n", "    def exit(self):\n", "        print(\"Fake Arduino - exiting...\")\n", "\n", "\n", "# Initialize REAL Arduino feeder if connected\n", "board = RealFeeder('COM10')  # May be another COM port - in Windows, just check the Device Manager\n", "\n", "# OR initialize FAKE Arduino feeder for testing purposes. It will just print a message here when feeding\n", "#board = FakeFeeder()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prepare the audio stream\n", "\n", "The following cell initiates the audio stream, to which we will later feed our stimuli. The default sample rate is set to 44.1 kHz. The cell also loads sound files with the stimuli. Here, we use short pure tones as stimuli and a silent sound object, which is fed to the audio stream between stimuli. In our setup, we found this to be necessary to reduce undesired clicking sounds at stimulus on- and offset, even though the sounds are ramped. Whether this will be necessary for you will strongly depend on your audio hardware.\n", "\n", "The audio stimulation provided by this notebook differs from the MATLAB version in two important aspects: Firstly, the MATLAB version generates the stimuli on the fly, while this notebook uses sound files as input. Feel free to change the code if you prefer the other solution. Secondly, the MATLAB version stimulates at fixed time intervals and the sample rate of the video tracking is locked to the stimulation interval, i.e. high temporal precision in the sound stimulation comes at the cost of lower temporal resolution of the animal tracking. Here, we chose the opposite approach, with the video feed defining the cycle frequency (approx. 14 Hz with the given camera and a resolution of 800x600 px) and the audio stimulation being locked to the frame rate of the camera. Thus, higher temporal resolution of the animal tracking comes at the cost that inter-stimulus intervals cannot be chosen freely, but only as integer multiples (3 or higher) of the mean video frame duration. In the example setup and the code below, we decided to play the stimulus every three cycles (approx. every 215 ms).\n", "\n",
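"If you prefer generating stimuli on the fly, a ramped pure tone can be synthesized directly with numpy (a minimal sketch; the 4 kHz frequency, 50 ms duration and 5 ms cosine ramps are example values, and the commented write assumes the float32 stream defined below):\n", "\n", "```python\n", "import numpy as np\n", "\n", "fs = 44100                           # must match the sample rate of the stream\n", "dur, ramp, freq = 0.05, 0.005, 4000  # 50 ms tone, 5 ms ramps, 4 kHz\n", "t = np.arange(int(dur * fs)) / fs\n", "tone = np.sin(2 * np.pi * freq * t).astype('float32')\n", "n = int(ramp * fs)                   # cosine on-/offset ramps against clicks\n", "env = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))\n", "tone[:n] *= env\n", "tone[-n:] *= env[::-1]\n", "# stream.write(tone)                 # played like the WAV-based stimuli\n", "```\n", "\n",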
"The duration of the audio files should not exceed the cycle length." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ ":12: WavFileWarning: Chunk (non-data) not understood, skipping it.\n", "  distractorSoundTrial = wavfile.read(os.path.join('assets', '10kHz-short-68.wav'))[1]\n", ":13: WavFileWarning: Chunk (non-data) not understood, skipping it.\n", "  attractorSoundTarget1 = wavfile.read(os.path.join('assets', '4000Hz-short-68.wav'))[1]\n", ":14: WavFileWarning: Chunk (non-data) not understood, skipping it.\n", "  distractorSoundOdd1 = wavfile.read(os.path.join('assets', '6000Hz-short-68.wav'))[1]\n", ":15: WavFileWarning: Chunk (non-data) not understood, skipping it.\n", "  silenceSound = wavfile.read(os.path.join('assets', 'silence-short-68.wav'))[1]\n" ] } ], "source": [ "# Set sample rate for audio output\n", "sd.default.samplerate = 44100\n", "fs = 44100\n", "\n", "# Audio stream\n", "stream = sd.OutputStream(samplerate=fs, channels=1, dtype='float32')\n", "\n", "# Cycle counter: sound is played every \"delayLength\" cycles (video frames)\n", "delayLength = 3\n", "\n", "# Open sound files\n", "distractorSoundTrial = wavfile.read(os.path.join('assets', '10kHz-short-68.wav'))[1]\n", "attractorSoundTarget1 = wavfile.read(os.path.join('assets', '4000Hz-short-68.wav'))[1]\n", "distractorSoundOdd1 = wavfile.read(os.path.join('assets', '6000Hz-short-68.wav'))[1]\n", "silenceSound = wavfile.read(os.path.join('assets', 'silence-short-68.wav'))[1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Protocol 2\n", "\n", "The following cell generates a video object to which the video feed will later be saved. The colours defined here will later be used for labelling. The labelled video file (\"ExperimentID_video.avi\") will be saved to the session folder for documentation purposes." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Define BGR colors\n", "BGR_COLOR = {\n", "    'red': (0,0,255),\n", "    'green': (127,255,0),\n", "    'blue': (255,127,0),\n", "    'yellow': (0,127,255),\n", "    'black': (0,0,0),\n", "    'white': (255,255,255)\n", "}\n", "\n", "# Define the codec and create VideoWriter object\n", "videoName = os.path.join(save_to, experiment_id + '_video.avi')\n", "fourcc = cv2.VideoWriter_fourcc(*'XVID')\n", "# Make sure that the frame rate of your output approximately matches\n", "# the number of cycles per second, to avoid time-lapsed output videos\n", "out = cv2.VideoWriter(videoName, fourcc, 15.0, (800,600))" ] },
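{ "cell_type": "markdown", "metadata": {}, "source": [ "To check how well the writer's frame rate matches your setup, you can estimate the camera's effective cycle rate beforehand (a minimal sketch, assuming the same capture settings as in the cells below; the rate during the experiment will be somewhat lower because tracking adds processing time per frame):\n", "\n", "```python\n", "import time\n", "import cv2\n", "\n", "cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)\n", "cap.set(3, 1200)  # width\n", "cap.set(4, 880)   # height\n", "n, t0 = 100, time.time()\n", "for _ in range(n):\n", "    cap.read()\n", "print('approx. %.1f frames per second' % (n / (time.time() - t0)))\n", "cap.release()\n", "```" ] },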
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Capture a background image\n", "\n", "The tracking algorithm used in this notebook compares the frames of the video feed during the experiment with an image of the empty arena to later track the position of the largest object in the arena (which usually is your animal). If you are confident in the stability of your video quality, it should suffice to capture the picture once and to skip this cell in subsequent experiments. However, since this step only takes a few seconds, we recommend taking a new picture of the arena for each new experiment. In the preview of the video feed that will pop up when you run the next cell, the space outside the arena is masked, so that the camera preview can also be used to check if the camera/arena are still positioned correctly.\n", "\n", "Before taking the picture, make sure that the conditions in your lab (especially the illumination) are exactly the same as they will be during the experiments. Once you are happy with the preview of your background image, press \"c\" to capture the image. It will be saved as \"background.png\" to the session folder inside the folder containing this notebook.\n", "\n", "This notebook will use the main camera of your system as an input device. If you have more than one camera installed (e.g. on a laptop with a built-in webcam), make sure to deactivate all cameras other than the camera of your setup prior to running the notebook. Also make sure that the video dimensions defined here match your arena dimensions defined above and the video dimensions of the video feeds that will be defined in the subsequent cells." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "# Define video capture device (0 = webcam1) to capture background frame\n", "cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)\n", "# Set picture dimensions\n", "cap.set(3,1200)  # Width\n", "cap.set(4,880)   # Height\n", "\n", "# Capture Background frame (c = capture)\n", "while(True):\n", "    # Capture frame-by-frame\n", "    ret, img = cap.read()\n", "    img2 = img.copy()  # keep a clean copy - cv2.circle() below draws into img in place\n", "\n", "    # Display the resulting frame\n", "    imgArena = cv2.circle(img, (cfg['arena_x'], cfg['arena_y']), cfg['arena_radius'], (0,0,255), 2)\n", "    #imgArenaStart = cv2.circle(imgArena, (cfg['start_x'], cfg['start_y']), cfg['start_radius'], (255,0,255), 2)\n", "\n", "    # Mask the space outside the arena\n", "    mask = np.zeros(shape=img.shape, dtype=\"uint8\")\n", "    cv2.circle(mask, (cfg['arena_x'], cfg['arena_y']), cfg['arena_radius'], (255,255,255), -1)\n", "\n", "    maskedImg2 = cv2.bitwise_and(src1=img2, src2=mask)\n", "    #imgArenaStart = cv2.bitwise_and(src1=imgArenaStart, src2=mask)\n", "    imgArena = cv2.bitwise_and(src1=imgArena, src2=mask)\n", "\n", "    cv2.imshow('Press (c)-to capture the background image', imgArena)\n", "    if cv2.waitKey(1) & 0xFF == ord('c'):\n", "        cv2.imwrite(os.path.join(save_to, 'background.png'), maskedImg2)\n", "        break\n", "\n", "# When the background image is captured, release the capture\n", "cap.release()\n", "cv2.destroyAllWindows()\n", "\n", "# Load the captured background image for later use\n", "background = cv2.imread(os.path.join(save_to, 'background.png'), 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prepare the experiment\n", "\n", "The following cell will provide another preview of the video feed from the arena. It will allow you to double-check if everything is prepared for the experiment. If so, you can bring your animal and put it into the arena.\n", "\n", "Once you have left the room with your setup and are happy with what you see in the live feed, hit \"c\" to close the preview.\n"
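, "\n", "Before fetching the animal, you can also verify that the background image was written correctly (a quick sanity check, reusing the names defined above):\n", "\n", "```python\n", "bg = cv2.imread(os.path.join(save_to, 'background.png'))\n", "assert bg is not None, 'background.png is missing - capture it first'\n", "print(bg.shape)  # should match the frame size used for tracking\n", "```"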
] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "# Define video capture device for live-stream (0 = webcam1)\n", "cap2 = cv2.VideoCapture(0, cv2.CAP_DSHOW)\n", "# Set picture dimensions\n", "cap2.set(3,1200)\n", "cap2.set(4,880)\n", "\n", "# Show video to see animal leaving the box\n", "while(True):\n", " # Capture frame-by-frame\n", " ret, img3 = cap2.read()\n", " \n", " cv2.imshow('Press (c)-to continue', img3)\n", " if cv2.waitKey(1) & 0xFF == ord('c'):\n", " break\n", "\n", "cap2.release()\n", "cv2.destroyAllWindows()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Protocol 3\n", "\n", "The following cell generates a CSV-file to which the essential data (i.e. animal position, positions of the target areas, etc.) from each cycle (video frame) of the experiment will be saved. The CSV-file (\"ExperimentID_protocol.csv\") will be saved to the session folder inside the folder containing this notebook. " ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def log_frame_data(args):\n", " with open(os.path.join(save_to, experiment_id + '_protocol.csv'), 'a') as f:\n", " f.write(\",\".join([str(x) for x in args]) + \"\\n\")\n", "\n", "headers = [\n", " 'FrameID', # Frame ID\n", " 'Time [s]', # Time stamp\n", " 'Phase', # Phase of experiment\n", " 'Animal_x', # X-Coordinate of the subject\n", " 'Animal_y', # Y-Coordinate of the subject\n", " 'Start_x', # X-Coordinate of the starting platform\n", " 'Start_y', # Y-Coordinate of the starting platform\n", " 'Start_rad', # Radius of the starting platform\n", " 'Target_x', # X-Coordinate of the target \n", " 'Target_y', # Y-Coordinate of the target \n", " 'Target_rad', # Radius of the target platform\n", " 'TrialID', # Trial ID\n", " 'Rewarded Trials [%]',# Percentage of trials rewarded\n", " 'Sound Played' # sound played\n", "]\n", "\n", "log_frame_data(headers) # saves headers to the log file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Open a start button\n", "\n", "This cell provides a start button. If you run this notebook cell-by-cell, this button is obsolete. However, if you run all cells at once, this is the point of no return. Once you have started the experiment, it cannot be paused until the session criteria are met or it is interrupted manually." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "root = tk.Tk()\n", "\n", "frame = tk.Frame(root)\n", "frame.pack()\n", "\n", "button = tk.Button(frame, text=\"Start Experiment!\", fg=\"black\", command=root.destroy)\n", "button.pack(side=tk.LEFT)\n", "\n", "root.mainloop()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initialize the camera\n", "\n", "This cell initializes the camera for the actual tracking and defines some counters and initial variables needed during the experiment." 
] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "# Define video capture device for live-stream (0 = webcam1) and tracking\n", "cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)\n", "# Set picture dimensions\n", "cap2.set(3,1200)\n", "cap2.set(4,880)\n", "\n", "# Mask the space outside the arena\n", "mask = np.zeros(shape = img.shape, dtype = \"uint8\")\n", "cv2.circle(mask, (cfg['arena_x'], cfg['arena_y']), cfg['arena_radius'], (255,255,255), -1)\n", "\n", "# Experiment starts in phase 0 with 0 trials\n", "expPhase = 0\n", "trialCounter = 0\n", "rewardCounter = 0\n", "frameCounter = 0\n", "trialCountdown = 0\n", "targetCountdown = 0\n", "\n", "# Initial values for target area\n", "targetX, targetY = 0, 0\n", "distractorX, distractorY = 0, 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start the experiment\n", "\n", "The final cell contains all the code for animal tracking and hardware control in response to the animals's behavior. We hope that the comments provided in the code suffice to understand the individual steps and to adjust them to your own setup and needs, if necessary.\n", "\n", "The experiment will stop automatically, if either one of the following conditions is met:\n", "\n", "(1) The pre-defined session duration is reached;
\n", "(2) The pre-definde number of trials is reached;
\n", "(3) The experiment is voluntarily stopped prematurely by hitting \"q\". \n", "\n", "If you should decide to stop the experiment manually, always use the \"q\"-button on your keyboard. Just quitting Jupyter/Python will lead to data loss!" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'cv2' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m/tmp/ipykernel_608708/373650743.py\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0;31m# Define video capture device for live-stream (0 = webcam1) and tracking\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 8\u001b[0;31m \u001b[0mcap\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcv2\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mVideoCapture\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcv2\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mCAP_DSHOW\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 9\u001b[0m \u001b[0;31m# Set picture dimensions\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 10\u001b[0m \u001b[0mcap\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mset\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m1200\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mNameError\u001b[0m: name 'cv2' is not defined" ] } ], "source": [ "def get_random_x_y():\n", " alpha = 2 * math.pi * random.random() # random angle\n", " r = (cfg['arena_radius'] - 20 - cfg['target_radius']) * math.sqrt(random.random()) # random radius\n", " return int(r * math.cos(alpha) + cfg['arena_x']), int(r * math.sin(alpha) + cfg['arena_y'])\n", "\n", "\n", "# Define video capture device for live-stream (0 = webcam1) and tracking\n", "cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)\n", "# Set picture dimensions\n", "cap.set(3,1200)\n", "cap.set(4,880)\n", "\n", "# Experiment starts in phase 0 with 0 trials\n", "expPhase = 0\n", "trialCounter = 0\n", "rewardCounter = 0\n", "frameCounter = 0\n", "trialCountdown = 0\n", "targetCountdown = 0\n", "\n", "# Initial values for target area\n", "targetX, targetY = 0, 0\n", "distractorX, distractorY = 0, 0\n", "\n", "\n", "# Define and start the experiment timer\n", "expTime = time.time()\n", "\n", "# Start the audio stream\n", "stream.start()\n", "\n", "# Conditions to be met for the experiment to start and continue\n", "while(cap.isOpened() and trialCounter < cfg['trial_number'] and (time.time()-expTime)<=cfg['session_duration']):\n", " \n", " # ---------- render a frame and compute animal position ---------------\n", " \n", " # Here you can choose different modes of amplitude modulation by commenting/uncommenting \n", " ampMod = (random.randrange(2396,2962,1)/100)**e/10000 # Unbiased Voltage Ratio -5dB\n", " ### ampMod = random.randrange(5623,10001,1)/10000 # Voltage Ratio -5dB\n", " ### ampMod = random.randrange(3162,10001,1)/10000 # Power Ratio -5dB\n", " ### ampMod = 1 # No modulation\n", " \n", " ret, frame = cap.read()\n", " if not ret == True:\n", " break\n", "\n", " # Mask the space outside the arena\n", " mask = np.zeros(shape = frame.shape, dtype = 
\"uint8\")\n", " cv2.circle(mask, (cfg['arena_x'], cfg['arena_y']), cfg['arena_radius'], (255,255,255), -1) \n", " \n", " maskedFrame = cv2.bitwise_and(src1=frame, src2=mask)\n", "\n", " ## Animal tracking\n", " # Substracts background from current frame\n", " subject = cv2.subtract(background, maskedFrame) if cfg['background_color'] == 'T' else cv2.subtract(maskedFrame, background)\n", "\n", " # Converts subject to grey scale\n", " subjectGray = cv2.cvtColor(subject, cv2.COLOR_BGR2GRAY)\n", "\n", " # Applies blur and thresholding to the subject\n", " kernelSize = (25,25)\n", " frameBlur = cv2.GaussianBlur(subjectGray, kernelSize, 0)\n", " _, thresh = cv2.threshold(frameBlur, 40, 255, cv2.THRESH_BINARY)\n", "\n", " # Finds contours and selects the contour with the largest area\n", " if int(cv2.__version__.split('.')[0]) < 4:\n", " _, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)\n", " else:\n", " contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)\n", "\n", " # If there is no subject, the sreen is blackened, indicating that there is a problem\n", " # with the tracking or that your animal has escaped.\n", " # This code block helps when building and testing the setup. During a real experiment,\n", " # the condition hopefully is never met. \n", " if (len(contours) == 0):\n", " x = 20\n", " y = 40\n", " #subjectHullCentroid = np.zeros(frame.shape,np.uint8)\n", " #subjectHullCentroid = cv2.circle(subjectHullCentroid, (x,y), 3, BGR_COLOR['yellow'], -1)\n", "\n", " # If there is a subject, it is tracked\n", " else:\n", " contour = contours[np.argmax(list(map(cv2.contourArea, contours)))]\n", " M = cv2.moments(contour)\n", " if ((M['m00']) == 0):\n", " x = 20 if expPhase < 3 else 780\n", " y = 40 if expPhase < 3 else 580\n", " subjectHullCentroid = np.zeros(frame.shape,np.uint8)\n", " subjectHullCentroid = cv2.circle(subjectHullCentroid, (x,y), 3, BGR_COLOR['yellow'], -1)\n", " else:\n", " x = int(M['m10'] / M['m00'])\n", " y = int(M['m01'] / M['m00'])\n", " hull = cv2.convexHull(contour)\n", " subjectHullCentroid = maskedFrame\n", "\n", " # Draws contour and centroid of the subject\n", " cv2.drawContours(subjectHullCentroid, [contour], 0, BGR_COLOR['green'], 1, cv2.LINE_AA)\n", " subjectHullCentroid = cv2.circle(subjectHullCentroid, (x,y), 3, BGR_COLOR['yellow'], -1)\n", "\n", " # Draws the arena contour, the attractor target, the distractor target, and a red / green dot, \n", " # signalling that the subject is outside / inside the attractor target area\n", " dot_colors = {0: 'red', 1: 'green', 2: 'blue', 3: 'green'}\n", " subjectHullCentroidArena = cv2.circle(subjectHullCentroid, (cfg['arena_x'],cfg['arena_y']), cfg['arena_radius'], (0,0,255), 2)\n", " subjectHullCentroidArenaStartIn = cv2.circle(subjectHullCentroidArena,(20,20), 10, BGR_COLOR[dot_colors[expPhase]], -6)\n", " if expPhase > 1: \n", " subjectHullCentroidArenaStart = cv2.circle(subjectHullCentroidArena,(targetX,targetY), cfg['target_radius'], (0, 255, 0), 2)\n", " if cfg['distractor_island']:\n", " subjectHullCentroidArenaStart = cv2.circle(subjectHullCentroidArenaStart,(distractorX,distractorY), cfg['target_radius'], (255,0,0), 2)\n", " else:\n", " subjectHullCentroidArenaStart = cv2.circle(subjectHullCentroidArena,(cfg['start_x'], cfg['start_x']), cfg['start_radius'], (255,0,255), 2)\n", " \n", " # Adds a stopwatch for the experiment duration to the video\n", " subjectHullCentroidArenaStartInText=cv2.putText(subjectHullCentroidArenaStartIn,\n", 
" '' + str('Time: %.2f' % ((time.time()-expTime))),\n", " (10,590), cv2.FONT_HERSHEY_DUPLEX, .5, BGR_COLOR['white'])\n", "\n", " # Adds a trial duration countdown to the video\n", " if expPhase > 0:\n", " subjectHullCentroidArenaStartInText=cv2.putText(subjectHullCentroidArenaStartInText,\n", " '' + str('Trial: %.2f' % ((cfg['trial_duration']-trialCountdown))),\n", " (670,590), cv2.FONT_HERSHEY_DUPLEX, .5, BGR_COLOR['red'] if expPhase == 2 else BGR_COLOR['green'])\n", "\n", " # Adds a target duration countdown to the video\n", " if expPhase == 3:\n", " subjectHullCentroidArenaStartInText=cv2.putText(subjectHullCentroidArenaStartInText,\n", " '' + str('Target: %.2f' % ((cfg['target_duration']-targetCountdown))),\n", " (670,570), cv2.FONT_HERSHEY_DUPLEX, .5, BGR_COLOR['green'])\n", "\n", " # Adds the current trial number to the video\n", " subjectHullCentroidArenaStartInText=cv2.putText(subjectHullCentroidArenaStartInText,\n", " '' + str('Trial#: %.0f' % (trialCounter)),\n", " (670,30), cv2.FONT_HERSHEY_DUPLEX, .5, BGR_COLOR['blue'])\n", "\n", " # Adds the current number of collected rewards to the video\n", " subjectHullCentroidArenaStartInText=cv2.putText(subjectHullCentroidArenaStartInText,\n", " '' + str('Reward#: %.0f' % (rewardCounter)),\n", " (670,50), cv2.FONT_HERSHEY_DUPLEX, .5, BGR_COLOR['blue'])\n", "\n", " # Writes the modified frame to the video protocol and shows it in a popup window \n", " out.write(subjectHullCentroidArenaStartInText)\n", " cv2.imshow('Press (q)-to end the experiment',subjectHullCentroidArenaStartInText)\n", "\n", " # Frame ID\n", " frameCounter += 1\n", "\n", " # Calculates the percentage of successful/rewarded trials\n", " percentCorrect = 0 if rewardCounter==0 else 100/trialCounter*rewardCounter\n", "\n", " # ---------- experiment logic ---------------\n", " \n", " # Phase 0 = Animal just entered the arena or finished a trial\n", " if expPhase == 0:\n", " stream.write(silenceSound) # In phase 0, there is no acoustic stimulation\n", " soundPlayed = 'false'\n", " \n", " # Checks, if the subject is in the starting/initialization area\n", " # If so, the protocol proceeds to phase 1 and a timer is started\n", " if (x-cfg['start_x'])**2 + (y-cfg['start_y'])**2 <= cfg['start_radius']**2:\n", " expPhase = 1\n", " startInZone = time.time()\n", " \n", " # Phase 1 = Animal is in the starting area\n", " elif expPhase == 1:\n", " stream.write(silenceSound) # In phase 1, there is no acoustic stimulation too\n", " soundPlayed = 'false'\n", " \n", " # Checks, if the subject is still in the starting/initialization area \n", " if (x-cfg['start_x'])**2 + (y-cfg['start_y'])**2 <= cfg['start_radius']**2:\n", " stopInZone = time.time()\n", "\n", " # Checks, if the time spent in the starting/initialization area exceeds the initiation duration\n", " # If so, the protocol proceeds to phase 2, the trial timer is started, the designated distractor (trial)\n", " # sound is played every \"delayLength\" cycles, and the target areas for the current trial are generated\n", " if (stopInZone - startInZone) >= cfg['init_duration']:\n", " expPhase = 2\n", " startTrial = time.time()\n", "\n", " targetX, targetY = 9000, 9000\n", " while ( (targetX-cfg['arena_x'])**2 + (targetY-cfg['arena_y'])**2 >= cfg['arena_radius']**2 or \n", " math.sqrt((cfg['start_x']-targetX)**2 + (cfg['start_y']-targetY)**2) <= cfg['start_radius'] + cfg['target_radius'] ):\n", " targetX, targetY = get_random_x_y()\n", "\n", " if cfg['distractor_island']:\n", " # Generates the second target (distractor), which cannot 
"                    distractorX, distractorY = 9000, 9000\n", "                    while ((((distractorX-cfg['arena_x'])**2)+((distractorY-cfg['arena_y'])**2)) >= (cfg['arena_radius']**2) or\n", "                           math.sqrt(((cfg['start_x']-distractorX)**2)+((cfg['start_y']-distractorY)**2)) <= (cfg['start_radius'] + cfg['target_radius']) or\n", "                           math.sqrt(((targetX-distractorX)**2)+((targetY-distractorY)**2)) <= (cfg['target_radius']+cfg['target_radius']+5)):\n", "                        distractorX, distractorY = get_random_x_y()\n", "\n", "        # If the animal leaves the starting area before the initialization duration is reached,\n", "        # the protocol goes back to phase 0\n", "        else:\n", "            expPhase = 0\n", "\n", "    # Phase 2 = Animal initiated the trial\n", "    elif expPhase == 2:\n", "        stopTrial = time.time()\n", "\n", "        # If the maximum trial duration is reached, the trial is terminated and the protocol goes back to phase 0\n", "        if (stopTrial-startTrial) >= cfg['trial_duration']:\n", "            expPhase = 0\n", "            trialCounter += 1\n", "            trialCountdown = 0\n", "            continue\n", "\n", "        # Time elapsed since trial start (drives the trial countdown display)\n", "        trialCountdown = (stopTrial - startTrial)\n", "\n", "        # Checks if the animal is in the attractor target area\n", "        # If so, acoustic stimulation switches to the designated attractor stimulus and the protocol\n", "        # proceeds to phase 3\n", "        if (x-targetX)**2 + (y-targetY)**2 <= cfg['target_radius']**2:\n", "            startInTarget = time.time()\n", "            expPhase = 3\n", "\n", "        # If the animal is in the distractor target area, instead, acoustic stimulation switches to\n", "        # the designated target distractor stimulus and the protocol remains in phase 2\n", "        elif cfg['distractor_island'] and (((x-distractorX)**2)+((y-distractorY)**2)) <= (cfg['target_radius']**2):\n", "            if frameCounter % delayLength == 0:\n", "                stream.write((distractorSoundOdd1*ampMod))\n", "                soundPlayed = 'true-DistractorOdd1'\n", "            else:\n", "                stream.write(silenceSound)\n", "                soundPlayed = 'false'\n", "\n", "        # If the animal is not in the target areas, the protocol keeps playing back the designated trial\n", "        # distractor stimulus and remains in phase 2\n", "        else:\n", "            if frameCounter % delayLength == 0:\n", "                stream.write((distractorSoundTrial*ampMod))\n", "                soundPlayed = 'true-DistractorTrial'\n", "            else:\n", "                stream.write(silenceSound)\n", "                soundPlayed = 'false'\n", "\n", "    # Phase 3 = Animal entered the target area\n", "    elif expPhase == 3:\n", "\n", "        # Checks if the animal is still in the attractor target area\n", "        # If so, acoustic stimulation continues with the designated attractor stimulus and the protocol\n", "        # remains in phase 3\n", "        if (x-targetX)**2 + (y-targetY)**2 <= cfg['target_radius']**2:\n", "            stopInTarget = time.time()\n", "\n", "            if frameCounter % delayLength == 0:\n", "                stream.write((attractorSoundTarget1*ampMod))\n", "                soundPlayed = 'true-AttractorTarget1'\n", "            else:\n", "                stream.write(silenceSound)\n", "                soundPlayed = 'false'\n", "\n", "            # Checks if the desired target duration is reached\n", "            # If so, the subject is rewarded, the trial and reward counters are increased by 1,\n", "            # the target countdown stops, and the protocol goes back to phase 0\n", "            if (stopInTarget - startInTarget) >= cfg['target_duration']:\n", "                trialCounter += 1\n", "                rewardCounter += 1\n", "                targetCountdown = 0\n", "\n", "                # Activates the current feeder and switches to the other feeder for the next reward\n", "                board.feed(arduinoFeeder1 if feederID == 1 else arduinoFeeder2)\n", "                feederID = 2 if feederID == 1 else 1\n",
"                expPhase = 0\n", "                startTrial = time.time()\n", "\n", "            # If the desired target duration is not reached, the protocol remains in phase 3 and the\n", "            # countdown continues\n", "            else:\n", "                expPhase = 3\n", "                targetCountdown = (stopInTarget - startInTarget)\n", "\n", "        # If the animal has left the attractor target area, the protocol switches to the designated trial\n", "        # distractor stimulus and goes back to phase 2\n", "        else:\n", "            expPhase = 2\n", "\n", "    # Writes a new row to the protocol\n", "    log_frame_data([\n", "        frameCounter, (time.time()-expTime), expPhase,\n", "        x, y, cfg['start_x'], cfg['start_y'], cfg['start_radius'], targetX, targetY,\n", "        cfg['target_radius'], trialCounter, percentCorrect, soundPlayed\n", "    ])\n", "\n", "    if cv2.waitKey(1) & 0xFF == ord('q'):\n", "        break\n", "\n", "# If the session is over or interrupted, all capture and output devices are released, streams are stopped,\n", "# windows are destroyed, protocol files are saved, and the communication with the Arduino is terminated\n", "cap.release()\n", "out.release()\n", "stream.stop()\n", "cv2.destroyAllWindows()\n", "board.exit()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.8" } }, "nbformat": 4, "nbformat_minor": 2 }