
add minutes of last meeting

julien colomb, 4 years ago
commit e55091b2e2
1 changed file with 43 additions and 1 deletion

minutes/2019-08-08-matthewmeeting.md (+43, -1)

@@ -1,3 +1,39 @@
+# Meeting outputs
+
+## Main decisions
+
+- Force19: OK
+- Continue the discussion with the other SFB; the info dissemination project is ours to lead
+- Get a meeting with Thomas. We do not care who develops it, but we need something working so we can start testing.
+- Continue working on the mouse metadata (some files will be sent by Matthew soonish)
+
+## Strategy
+
+The main objective is information dissemination; the development of the tool should keep this in mind.
+
+- Get an MVP with commenting
+- Analyse the comments to find gaps
+- Modify the tool to push people to fill the gaps (empty boxes, badges, ...)
+
+
+## Software
+
+- Need a simplified workflow; only one tab?
+
+## Visualisation
+
+- Need a search/filter function
+- Interaction with the author should be straightforward
+- Maybe link to something they already know, as a reminder of what we are talking about.
+
+## My role
+
+- Stop developing ASAP, but foster tool use
+- Social component
+
+
+
+
 # Preparation
 
 ## 1. Collaborations
@@ -34,15 +70,21 @@ Visualisation and data upload in a 2 tab browsable software
 - metadata: minimal standards:
     - animal (species, genotype, date_of_birth, light cycle, diet) - similar to/extended from pyrat?
     - experiment (date, time, temperature, light cycle)
-- https://www.nwb.org/ standard for data (python, matlab export)     
+- https://www.nwb.org/ standard for data (Python and MATLAB export; HDF5-based); see the sketch below
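+
+A minimal sketch (not a decision from the meeting) of how such animal and experiment metadata could be written to an NWB/HDF5 file with pynwb; the subject ID, session identifier, file name, and the choice to put light cycle, diet, and temperature into the free-text fields are assumptions, since NWB has no dedicated fields for them:
+
+```python
+from datetime import datetime
+
+from dateutil.tz import tzlocal
+from pynwb import NWBFile, NWBHDF5IO
+from pynwb.file import Subject
+
+# Animal metadata: species, genotype, date_of_birth; light cycle and diet
+# go into the free-text description (hypothetical values throughout).
+subject = Subject(
+    subject_id="mouse_001",
+    species="Mus musculus",
+    genotype="C57BL/6J, wild type",
+    date_of_birth=datetime(2019, 5, 2, tzinfo=tzlocal()),
+    description="light cycle: 12/12; diet: standard chow",
+)
+
+# Experiment metadata: date and time via session_start_time; temperature and
+# light condition noted in the session description (or a lab-specific extension).
+nwbfile = NWBFile(
+    session_description="pilot session, 22 C, lights on",
+    identifier="2019-08-08_mouse_001_session_01",
+    session_start_time=datetime(2019, 8, 8, 14, 0, tzinfo=tzlocal()),
+    subject=subject,
+)
+
+# NWB files are HDF5 under the hood and can be read from Python or MATLAB.
+with NWBHDF5IO("mouse_001_session_01.nwb", "w") as io:
+    io.write(nwbfile)
+```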
      
 
 ## 5. RDM work
 
 - talk with Robert, work with Keisuke:
     - data of the paper to be organised and pushed to GIN
+    - the structure worked; file naming comes next.
     - time to perform the work will be reported: a paper will follow once 3-4 case studies are there.
     - next steps will be to share the analysis code.
 - next: will talk with Christian about imaging data, using the same approach.
 
 
+## 6. Survey and server
+
+- 14 labs responded
+- Estimates range from 5 to 200 TB per lab (whole-lab data), i.e. somewhere between roughly 70 TB and 2.8 PB in total across the 14 labs
+- Need to talk to the ITB people about it.