Oct 5-9, 2020

First edition

Virtual event on

Good Scientific Practices in EEG and MEG research

Sessions

Good Research Practices: what makes a reliable M/EEG study?

Human factors

Know your own and others’ biases while planning and conducting research – Manuel Mercier, Madison Elliott, Walter Sinnott-Armstrong

Pre-registration

Current and upcoming pre-registration practice in the lab and in the editorial system – Sophie Herbst, Roni Tibon, Pia Rotshtein

Data Collection

Anticipate to ensure the quality of your data and meet the standards – John Mosher, Katia Lehongre, Giovanni Mento, Emily Kappenman

Signal processing

Know the strengths and limits of your methods – Karim Jerbi, Jean-Marc Lina, Alex Gramfort

A new Reporting Framework for EEG

A draft proposal and roundtable discussion (Guest session) – Anđela Šoškić, Suzy Styles, et al.

Reliability

On the reproducibility of M/EEG research – Bertille Somon, Yuri Pavlov, Aina Puce

Collaboration tools

Learn about new tools for open collaboration – Guiomar Niso, Martina G. Vilas, Antonio Schettino

Software (mis)uses

At your own risk: great tools don’t make good practices – Alexandre Gramfort, Arnaud Delorme, François Tadel, Robert Oostenveld

Statistics

Power, reliability, and robustness – Maximilien Chaumon, Aaron Caldwell, Steven Luck, Guillaume Rousselet

Coded tools

Great software tools for improved research practices – Marijn Van Vliet, Laurens Krol, Andrea Brovelli

Beyond electrophysiology

Taking a step back and thinking about practices in the long run… – Daniele Schon, Yseult Héjja Brichard, David Poeppel

Program

Monday 5 October

Time (UTC)

Speaker’s name

Talk Link

Human Factors (Chair: Maximilien Chaumon)

13:00

Manuel Mercier

The inner practice of science and for science: the influence of cognitive biases on researchers and scientific production

On Wikipedia, a cognitive bias is defined as “a systematic pattern of deviation from rationality in judgment”. Like any human being, researchers are prone to cognitive biases, and this is a critical matter because these biases can lead to an “unsustainable” science. In this talk, after a brief introduction to cognitive biases, we will see how they can influence our research, for instance through perceptual distortion, illogical choices, or misinterpretation. Next, we will consider some remedies to counteract our innate tendency toward irrationality. Finally, together with the M/EEG community, we will discuss what can be done at a larger scale to reduce the impact of cognitive biases on scientific practices.

13:20

Madison Elliott

Title TBA

TBA

13:40

Break

13:50

Walter Sinnott-Armstrong

Some common fallacies

All humans make mistakes, and neuroscientists are no exception. We need to understand and watch out for common fallacies in reasoning in order to avoid them in neuroscience research, just as in everyday life. My talk will focus on a few fallacies that are common in some kinds of neuroscience research, including EEG, and illustrate these fallacies with real examples.

14:15

Round Table

14:45

Break

Pre-Registration (Chair: Antonio Schettino)

15:00

Sophie Herbst

Preregistered reports in M/EEG research – a road map through the garden of forking paths?

M/EEG analysis pipelines combine a multitude of processing steps, such as filtering, artifact removal, and time-frequency transforms, each to be chosen by the experimenter. Given that it is impossible to test the independent contribution of each step to the results, novice and even expert neuroscientists are often left with the frustration of not knowing how strongly a given effect (or the absence thereof) depends on the choices made in their pipeline. Preregistration provides a potential remedy to this problem, in that the pre- and post-processing steps are fixed before the study is conducted and, importantly, expert feedback can be obtained early.

Based on a recently obtained ‘in-principle acceptance’ for an EEG replication study assessing the seminal finding of enhanced delta phase coherence under temporal predictions (Stefanics et al., 2010), I would like to discuss to what extent preregistration can foster replicable and robust EEG/MEG research and help the community devise less user-dependent pipelines.

15:20

Roni Tibon

Prereg posters: Presenting planned research in academic events

We recently proposed ‘prereg posters’—conference posters that present planned scientific projects—as a new form of preregistration. The presentation of planned research, before data are collected, allows presenters to receive feedback on their hypotheses, design and analyses from their colleagues, which is likely to improve the study. In turn, this can improve more formal preregistration, reducing the chances of subsequent deviation, and/or facilitate submission of the work as a Registered Report. In my talk, I will review data collected at the BNA2019 Festival of Neuroscience, where prereg posters were recently implemented. I will show preliminary evidence for the value of prereg posters in receiving constructive feedback, promoting open science and supporting early-career researchers. I will then discuss the outlook of prereg posters, particularly in the context of the current shift towards online academic events.

15:40

Break

15:50

Pia Rotshtein

Title TBA

TBA

16:15

Round Table

16:45

Live posters & virtual socials

Tuesday 6 October – Track 1

Two parallel tracks – you can jump from one to the other

Time

Speaker’s name

Talk Link

Collaborative tools (Chair: Karim Jerbi)

14:00 (Paris) / 8:00 (NYC)

Guiomar Niso

Title TBA

TBA

14:20 (Paris) / 8:20 (NYC)

Martina G. Vilas

The Turing Way: A guide to reproducible, ethical and collaborative research practices

Reproducible research is necessary to ensure that scientific output can be trusted and built upon in future work. But conducting reproducible research requires skills in data management, software development, version control, and continuous integration techniques that are usually not taught or expected of academic researchers.
The Turing Way is an open-source, community-led handbook that makes this knowledge accessible and comprehensible for everyone. Its moonshot goal is to make reproducible research “too easy not to do”. In addition to discussing different approaches to reproducibility, it also provides material on ethical practices in data science, inclusive collaborative work, and effective communication and management of research projects. The handbook has so far been built collaboratively with the help of more than 175 people from different disciplines and career stages within data research, who have contributed to the project’s online repository (https://github.com/alan-turing-institute/the-turing-way).
This talk will give an overview of The Turing Way book, project, and community, and will show how you can get involved in its development.

14:40 (Paris) / 8:40 (NYC)

Break

14:50 (Paris) / 8:50 (NYC)

Antonio Schettino

Open Science Framework: One Service to Rule Them All

To improve the trustworthiness of research output in many disciplines, an increasing number of journals and funding agencies encourage or require sharing of data, materials, and analysis protocols associated with each publication. Consequently, researchers are turning to comprehensive services that facilitate collaborative workflow with colleagues and evaluators. One of the most popular is the Open Science Framework (OSF), a free online platform developed by the non-profit organization Center for Open Science. The OSF allows researchers to manage, document, and share all the products of their workflow, from the preregistration of the initial idea to the preprint of the final report. In this talk, I will show how the OSF helped me open up my research workflow, guide the audience through one of my public OSF projects, and discuss challenges and lessons learned during the process.

15:15 (Paris) / 9:15 (NYC)

Round Table

15:45 (Paris) / 9:45 (NYC)

Break

Reliability (Chair: TBA)

16:00 (Paris) / 10:00 (NYC)

Bertille Somon

Open science for better AI

In the last ten years, publications on artificial intelligence (AI) have nearly doubled, opening promising avenues for an increased use of AI in our daily life. One key feature and challenge of AI is the generalizability of algorithms, notably for (inverse) reinforcement learning and transfer learning (Arora and Doshi, 2019). Like many areas of AI research, these fields have common roots in neuroscience and psychology (Hassabis et al., 2020). Although working on freely available databases is becoming common practice in computer science (e.g., ImageNet), this is not yet the case for neuroscience and psychology (except in the brain-computer interface community; e.g., the BNCI Horizon 2020 project). Indeed, a recent brain imaging study (Botvinik-Nezer et al., 2020) revealed the limits of reproducibility: the same fMRI datasets were shared with 70 research teams, whose analyses yielded consistent results for only 4 of the 9 hypotheses tested. In the case of electroencephalography (EEG), generalizing data recording, processing, and analysis is hindered by: i) a lack of experimental design sharing; ii) non-standardized recording formats and parameters (each EEG system has its own data format, compatibility constraints, and amplifier parameters); iii) variable and unshared processing pipelines; and iv) difficulties in storing and sharing large datasets (Martinez-Cancino et al., 2020). We propose that following a general pipeline for setting up experiments, collecting data, processing them, and sharing files can increase the number of standardized datasets, thus facilitating the development of generalizable AI algorithms. This pipeline should be based on open-access tools (LSL, BIDS, MNE, etc.). Sharing data allows AI researchers to improve the quality of models and machine-learning algorithms, standardizes the comparison of algorithm performance, enables laboratories with fewer means to perform quality research, and makes it easier to get feedback from the whole community.
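
As a concrete illustration of such an open-tool pipeline, here is a minimal sketch, assuming MNE-Python and MNE-BIDS are installed, of converting a raw EEG recording into a shareable BIDS dataset; the file name, task label, and paths are hypothetical.

```python
# Minimal sketch: converting a raw EEG recording into a BIDS dataset
# using MNE-Python and MNE-BIDS. File names and paths are hypothetical.
import mne
from mne_bids import BIDSPath, write_raw_bids

# Load a raw recording (here a BrainVision file; any MNE-supported format works)
raw = mne.io.read_raw_brainvision("sub01_task.vhdr", preload=False)
raw.info["line_freq"] = 50  # power-line frequency is required BIDS metadata

# Describe where the data should live in the BIDS hierarchy
bids_path = BIDSPath(subject="01", task="oddball", datatype="eeg", root="./bids_dataset")

# Write the data plus BIDS sidecar files (channels.tsv, eeg.json, ...)
write_raw_bids(raw, bids_path=bids_path, overwrite=True)
```

From there, the BIDS folder can be validated and deposited on a public archive such as OpenNeuro.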

16:20 (Paris) / 10:20 (NYC)

Yuri Pavlov

Title TBA

TBA

16:40 (Paris) / 10:40 (NYC)

Break

16:50 (Paris) / 10:50 (NYC)

Aina Puce

Title TBA

TBA

17:15 (Paris) / 11:15 (NYC)

Round Table

17:45 (Paris) / 11:45 (NYC)

Live posters & virtual socials

Tuesday 6 October – Track 2

Two parallel tracks – you can jump from one to the other

Time

Speaker’s name

Talk Link

Data Collection (Chair: Anne-Sophie Dubarry)

14:00 (Paris) / 8:00 (NYC)

John Mosher

Title TBA

TBA

14:25 (Paris) / 8:25 (NYC)

Katia Lehongre

Collection of continuous, long-term multiscale iEEG recordings

Patients with pharmaco-resistant focal epilepsy who are candidates for surgical treatment may need a presurgical evaluation with intracerebral electrodes to define the seizure-onset zone. During this intracranial exploration, it is now possible to add microelectrodes without impacting the clinical investigation. These microelectrode recordings give rare access to single-unit activity in humans. At the Pitié-Salpêtrière Hospital in Paris, we have been using microelectrodes since 2010, and have been collecting and storing continuous, long-term, simultaneous recordings from macro- and microelectrodes since 2012. However, the precious data acquired also give rise to a data-storage challenge, because of the high sampling rate of the microelectrodes (32 kHz) and the large number of channels (up to 160). During this presentation, I will describe our recording environment as well as the data management we have set up, with its limits and possible improvements.
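
For a rough sense of the storage challenge, here is a back-of-the-envelope estimate of the raw data rate implied by these figures; the 2-bytes-per-sample assumption is illustrative, as actual sizes depend on the acquisition system and any compression.

```python
# Back-of-the-envelope estimate of the raw data rate for the recording
# setup described above. Bytes-per-sample is an assumption (16-bit here);
# actual file sizes depend on the acquisition system and compression.
SAMPLING_RATE_HZ = 32_000   # microelectrode sampling rate
N_CHANNELS = 160            # maximum channel count
BYTES_PER_SAMPLE = 2        # assumed 16-bit integer samples

bytes_per_second = SAMPLING_RATE_HZ * N_CHANNELS * BYTES_PER_SAMPLE
terabytes_per_day = bytes_per_second * 86_400 / 1e12

print(f"{bytes_per_second / 1e6:.1f} MB/s")   # ~10.2 MB/s
print(f"{terabytes_per_day:.2f} TB/day")      # ~0.88 TB/day
```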

14:50 (Paris) / 8:50 (NYC)

Break

15:00 (Paris) / 9:00 (NYC)

Giovanni Mento

Title TBA

TBA

15:25 (Paris) / 9:25 (NYC)

Emily Kappenman

Title TBA

TBA

15:50 (Paris) / 9:50 (NYC)

Round Table

16:20 (Paris) / 10:20 (NYC)

Break

Coded Tools (Chair: TBA)

16:35 (Paris) / 10:35 (NYC)

Marijn Van Vliet

Designing analysis code that scales

Small projects grow up to be large projects. That simple analysis script you started out with was so innocent, so uncomplicated, so pure. Where have those days gone? Staring at your code folder now is like staring into the Abyss… and it stares back!

16:50 (Paris) / 10:50 (NYC)

Laurens Krol

Title TBA

TBA

17:05 (Paris) / 11:05 (NYC)

Break

17:15 (Paris) / 11:15 (NYC)

Andrea Brovelli

Cognitive brain network discovery using hierarchical functional connectivity analysis of high-gamma activity

We present analysis pipelines that aim to characterise the dynamics of cortico-cortical interactions underlying cognitive brain networks. Our hierarchical Functional Connectivity (FC) approach first isolates regions whose linear correlation and mutual information (i.e., the total interdependence between neural signals) increase transiently or statically in relation to task conditions, and then parses the relative direction of this influence using covariance-based Granger causality methods. Covariance-based Granger causality measures are particularly well suited because they are computationally simple to estimate, provide an intuitive link between directed and undirected FC measures, and can be used on short neural signals (e.g., single trials) and large networks (e.g., hundreds of brain areas). The workflows are applied to human source-level high-gamma activity (HGA) estimated from MEG data (Brovelli et al., J Neurosci 2015; J Neurosci 2017). The workflows exploit an anatomical atlas named MarsAtlas (Auzias et al., Hum Brain Mapp 2016) and provide single-trial, ROI-based estimates of HGA time courses. Statistical analysis searching for task-related modulations in FC, and group-level inference, is performed using Gaussian-copula mutual information approaches combined with cluster-based statistics and permutation tests. The end results are time-resolved, task-related information-routing patterns characterising the dynamics of cognitive brain networks.
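
For readers unfamiliar with the Gaussian-copula mutual information estimator (Ince et al., 2017), the core idea can be sketched in a few lines: rank-transform each variable onto a standard normal, then apply the closed-form Gaussian MI formula. This is a minimal bivariate sketch with illustrative data, not the full bias-corrected estimator used in the workflows.

```python
# Minimal sketch of Gaussian-copula mutual information (GCMI) between two
# 1-D variables: rank-transform each variable to a standard normal, then
# use the closed-form Gaussian MI. Data and names are illustrative.
import numpy as np
from scipy.stats import norm, rankdata

def copula_normalize(x):
    """Map samples onto a standard normal via their empirical ranks."""
    ranks = rankdata(x)                 # 1 .. n
    uniform = ranks / (len(x) + 1)      # strictly inside (0, 1)
    return norm.ppf(uniform)

def gcmi(x, y):
    """Gaussian-copula MI (in bits) between two 1-D arrays."""
    cx, cy = copula_normalize(x), copula_normalize(y)
    r = np.corrcoef(cx, cy)[0, 1]       # correlation of the copulas
    return -0.5 * np.log2(1 - r ** 2)   # Gaussian MI formula

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = x + 0.5 * rng.standard_normal(1000)  # dependent on x
print(gcmi(x, y))                        # clearly > 0
```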

17:30 (Paris) / 11:30 (NYC)

Round Table

17:45 (Paris) / 11:45 (NYC)

Live posters & virtual socials

Wednesday 7 October

Time

Speaker’s name

Talk Link

Software Use (Chair: Deirdre Bolger)

14:00 (Paris) / 8:00 (NYC)

Alex Gramfort

Title TBA

TBA

14:15 (Paris) / 8:15 (NYC)

Arnaud Delorme

Title TBA

TBA

14:30 (Paris) / 8:30 (NYC)

Break

14:40 (Paris) / 8:40 (NYC)

Sylvain Baillet

How good software enables good scientific practices

Good scientific practices are glorified, for obvious good reasons, but can be simply impractical for us mortals. Scientific software is key to enabling the adoption of, and adherence to, righteous practices in practice. I will show features available in Brainstorm that aim to facilitate everyone’s virtuous data-management and data-analytics life: from data organization with BIDS to building pipelines that are shareable and reproducible.

14:55 (Paris) / 8:55 (NYC)

Robert Oostenveld

Title TBA

TBA

15:10 (Paris) / 9:10 (NYC)

Round Table

15:40 (Paris) / 9:40 (NYC)

Break

Signal processing (Chair: Christian Bénar)

15:55 (Paris) / 9:55 (NYC)

Karim Jerbi

Title TBA

TBA

16:20 (Paris) / 10:20 (NYC)

Jean-Marc Lina

Title TBA

TBA

16:45 (Paris) / 10:45 (NYC)

Break

16:55 (Paris) / 10:55 (NYC)

Alex Gramfort

Title TBA

TBA

17:20 (Paris) / 11:20 (NYC)

Round Table

17:45 (Paris) / 11:45 (NYC)

Live posters & virtual socials

Thursday 8 October

Time

Speaker’s name

Talk Link

Statistics (Chair: TBA)

14:00 (Paris) / 8:00 (NYC)

Maximilien Chaumon

Title TBA

TBA

14:20 (Paris) / 8:20 (NYC)

Aaron Caldwell

Title TBA

TBA

14:40 (Paris) / 8:40 (NYC)

Break

14:50 (Paris) / 8:50 (NYC)

Steven Luck

Standardized measurement error: A universal measure of data quality for averaged event-related potentials

Event-related potentials (ERPs) can be very noisy, and yet there is no widely accepted metric of ERP data quality. Here we present a universal measure of data quality for averaged ERPs: the standardized measurement error (SME). Whereas some potential measures of data quality provide a generic quantification of the noise level, the SME quantifies the expected error in the specific amplitude or latency value being measured in a given study (e.g., the peak latency of the P3 wave). It can be applied to virtually any value that is derived from averaged ERP waveforms, making it a universal measure of data quality. In addition, the SME quantifies the data quality for each individual participant, making it possible to identify participants with low-quality data and “bad” channels. When appropriately aggregated across individuals, SME values can be used to quantify the impact of the single-trial EEG variability and the number of trials being averaged together on the effect size and statistical power in a given experiment. If SME values were regularly included in published papers, researchers could identify the recording and analysis procedures that produce the highest data quality, which could ultimately lead to increased effect sizes and greater replicability across the field. The SME can be easily calculated using the latest version of ERPLAB Toolbox (v8.0).
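
To make the idea concrete: for scores defined as the mean amplitude in a time window, the SME reduces to the standard error of the single-trial window means. Below is a minimal sketch of that special case with simulated data (ERPLAB implements the general version, which uses bootstrapping for peak and latency scores); array shapes and values are illustrative.

```python
# Minimal sketch of the SME for mean-amplitude scores: for a score defined
# as the mean voltage in a time window, the SME is the standard error of
# the single-trial window means. Peaks/latencies require bootstrapping,
# as implemented in ERPLAB.
import numpy as np

def sme_mean_amplitude(epochs, times, tmin, tmax):
    """epochs: (n_trials, n_times) single-channel data; times in seconds."""
    window = (times >= tmin) & (times <= tmax)
    trial_scores = epochs[:, window].mean(axis=1)  # one score per trial
    return trial_scores.std(ddof=1) / np.sqrt(len(trial_scores))

# Illustrative fake data: 40 trials, 1 s epoch sampled at 250 Hz
rng = np.random.default_rng(42)
times = np.linspace(0, 1, 250)
epochs = rng.normal(0, 10, size=(40, 250))  # microvolts
print(sme_mean_amplitude(epochs, times, 0.3, 0.5))
```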

15:10 (Paris) / 9:10 (NYC)

Guillaume Rousselet

Title TBA

TBA

15:30 (Paris) / 9:30 (NYC)

Round Table

15:55 (Paris) / 9:55 (NYC)

Break

Guest session: Agreeing on a reporting framework (Chair: Anđela Šoškić)

16:10 (Paris) / 10:10 (NYC)

Anđela Šoškić

Towards an Agreed Reporting Framework for ERP Methodology: A draft proposal and roundtable discussion

For decades, researchers in the fields of EEG and ERP have called for greater transparency in the reporting of EEG/ERP data (e.g., Donchin et al., 1977; Picton et al., 2000; Keil et al., 2014; Duncan et al., 2009; Kappenman & Luck, 2016; Taylor & Baldeweg, 2002; Luck, 2014; Luck & Gaspelin, 2017; Gelman & Loken, 2013). Despite the availability of guidelines for transparent and accurate reporting, the state of the published literature suggests that authors are either unable or unwilling to follow current reporting guidelines. We propose that, by leveraging the collective expertise of stakeholders in the ERP community, we can create an agreed reporting framework that is both easier to use and more transparent than current reporting models.

Background. A recent systematic review (Šoškić, Jovanović, Styles, Kappenman & Ković, 2019; preprint: https://psyarxiv.com/jp6wy/) demonstrated that published journal articles rarely contain all of the information that would be required to replicate an N400 study, and that reporting is inconsistent and often ambiguous, making it hard to critically assess, meta-analyse, and replicate work. This project and its sister projects (Šoškić, Kappenman, Styles & Ković, preregistration, 2019: https://osf.io/6qbjs/; Ke, Kovic, Šoškić & Styles, preregistration, 2020: osf.io/5evc8) show that improvements to reporting standards in the ERP field will be necessary to increase replicability and reduce questionable research practices.

Current Presentation. On the basis of our meta-analytic work, we are drafting a metadata template (i.e., a digital form to be filled in) containing all of the metadata critical to an ERP recording and the subsequent processing and analysis chain. In contrast to the valuable work of Keil et al. (2014) and Pernet et al. (2018), who developed checklists for authors to indicate whether they have reported the appropriate methodological details in the body of their article, the metadata template requires precise numerical/categorical data to be filled in, thereby ensuring searchability and simplicity in future metascience. To facilitate transparency in this process, we want to engage the community of stakeholders in EEG/ERP research in a consultative process to refine the template and move towards an agreed reporting framework. We envisage a consultative process that involves frontline researchers, software developers, data archivists, journal editors, and researchers with experience in open, replicable science.
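
As a purely hypothetical illustration of the difference between a checklist and a metadata template, a machine-readable template might look like the sketch below; all field names and values are invented for this example and do not reflect the actual draft.

```python
# Purely hypothetical sketch of a machine-readable ERP metadata template;
# field names and values are invented for illustration, not the actual draft.
erp_methods_metadata = {
    "recording": {
        "system": "BioSemi ActiveTwo",   # example value
        "n_channels": 64,
        "sampling_rate_hz": 512,
        "reference": "CMS/DRL",
    },
    "preprocessing": {
        "highpass_hz": 0.1,
        "lowpass_hz": 30.0,
        "artifact_rejection": "ICA",     # categorical, not free text
    },
    "measurement": {
        "component": "N400",
        "window_ms": [300, 500],
        "score": "mean_amplitude",
    },
}
```

Unlike a narrative checklist, every entry here is a precise numerical or categorical value, which is what makes such records searchable and directly usable in future metascience.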

16:30 (Paris) / 10:30 (NYC)

Suzy Styles

Title TBA

TBA

16:45 (Paris) / 10:45 (NYC)

Break

16:55 (Paris) / 10:55 (NYC)

Round Table

17:45 (Paris) / 11:45 (NYC)

Live posters & virtual socials

Friday 9 October

Time

Speaker’s name

Talk Link

Beyond The Signal (Chair: Manuel Mercier)

14:00 (Paris) / 8:00 (NYC)

Daniele Schon

Title TBA

TBA

14:25 (Paris) / 8:25 (NYC)

Yseult Héjja Brichard

Rethinking our Narratives: The Challenge of “Slow Science”

In response to the replication crisis of the last decade, the Open Science movement has promoted new tools and research practices. New incentives and platforms were designed to improve the replicability of studies and to address various biases, such as the positive-result bias. For many, this is the new golden path to follow if we want our scientific production to be trustworthy. However, by offering concrete solutions to problems already identified by the scientific community, the Open Science movement may only address the superficial layer of a deeper malfunction of our research practices and structures. In this talk, I propose to review the main concepts of the slow science philosophy and to discuss why and how we need to rethink our research frameworks, both inside and outside academia. More importantly, the slow science philosophy also questions what it means to do science and to be a researcher, and what the position of a researcher in our societies is or should be.

14:50 (Paris) / 8:50 (NYC)

Break

15:00 (Paris) / 9:00 (NYC)

David Poeppel

Title TBA

TBA

15:30 (Paris) / 9:30 (NYC)

Round Table

15:50 (Paris) / 9:50 (NYC)

Closing Remarks

How To

Attending the event

LiveMEEG is an online conference hosted on the Crowdcast platform. Registration is free but required to participate fully. Registration instructions will be provided shortly; stay tuned.

Poster submission

Submit your poster here. The deadline for submission is Sept. 23rd. Poster acceptance will be communicated shortly before the conference.

Social events

The posters will be displayed during social events open to registered participants.

Special Issue Warm-up

We have an in-principle agreement with the NeuroImage editors to create a special issue on “Advances in Scientific Practices”. The whys and wherefores of this special issue, as well as potential contributions, will be discussed during the conference.

Organizing team

Maximilien Chaumon

ICM / CENIR

Adrien Schramm

Independent Event Organizer

Anne-Sophie Dubarry

LPL, CNRS, ILCB, AMU (France)

Clément François

LPL, CNRS, ILCB, AMU (France)

Manuel Mercier

INS, Inserm, AMU (France)

Our Partners