Toolboxes Bouquet
Instructions
The Toolbox Bouquet is a half-day online session running on Oct 30th, in the afternoon (CET). Several exciting practical courses are offered on selected flowers (toolboxes) for cutting-edge MEEG data analysis.
This event will take place online. More information is coming soon; in the meantime, check the previous PracticalMEEG 2022 bouquet to get an impression of what this entails.
— Registration opens soon —
Legend
Flowers (courses) combine different teaching approaches, indicated by the following attributes:
• Lecture
• Hands-On
• Demo
AnyWave and Epitools
Bruno Colombet
Aix Marseille Université, France
More information
In the field of cognitive and clinical neuroscience, handling and analyzing EEG and MEG data from various acquisition systems is often a technical challenge due to heterogeneous formats and processing requirements. To address this, we developed a software tool designed to provide intuitive visualization, flexible pre- and post-processing, and interoperability with current neuroimaging standards. The software is fully compatible with the Brain Imaging Data Structure (BIDS) for EEG/MEG, enabling structured data organization and seamless integration with standard analysis pipelines. By integrating with FreeSurfer and GARDEL (presented in the second part), the software allows visualization of brain activity on subject-specific cortical surfaces. We will present various processing operations:
• Bandpass and notch filters
• Independent component analysis (ICA)
• Power Spectral Density (PSD) estimation
• Time-frequency analysis (wavelets, STFT)
• BIDS interaction with SEEG activity mapping (raw signal or ICA topographies) and interactive visualization
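The filtering and PSD operations listed above can be sketched with plain NumPy/SciPy. This is a toy illustration of the underlying concepts on a synthetic channel, not AnyWave's own implementation:

```python
import numpy as np
from scipy import signal

fs = 512                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
# Synthetic channel: a 10 Hz oscillation plus 50 Hz power-line interference
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# Notch filter at the line frequency
b_notch, a_notch = signal.iirnotch(w0=50, Q=30, fs=fs)
x = signal.filtfilt(b_notch, a_notch, x)

# Zero-phase Butterworth bandpass, 1-40 Hz
sos = signal.butter(4, [1, 40], btype="bandpass", fs=fs, output="sos")
x = signal.sosfiltfilt(sos, x)

# Power spectral density via Welch's method
freqs, psd = signal.welch(x, fs=fs, nperseg=2 * fs)
peak_freq = freqs[np.argmax(psd)]
print(f"spectral peak at {peak_freq:.1f} Hz")
```

Both filters are applied forward and backward (`filtfilt`/`sosfiltfilt`), so no phase distortion is introduced; the 10 Hz oscillation survives while the line noise is removed.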
In a second part we will present GARDEL, a companion tool for co-registration of CT and MRI scans and semi-automatic detection of intracranial electrodes. GARDEL facilitates accurate anatomical localization of implanted electrodes by segmenting and mapping them onto brain structures.
We will also present the ability to create and execute custom analysis modules written in MATLAB or Python directly from within the software. This offers flexibility for users who rely on existing scripts or research pipelines.
A quick demonstration/tutorial will show how to create a plugin in MATLAB/Python.
We will finish with a demo of available signal-processing plugins, such as the Delphos module, which detects spikes and fast oscillations in EEG signals, an essential capability for clinical and research applications in epilepsy.

Improving ERP Method Reporting with ARTEM-IS: A Hands-On Introduction
Katarina Stekić, Nastassja Lopes Fischer, Dejan Pajić
University of Belgrade, Serbia
More information
Transparent and detailed reporting of ERP methods is essential but often insufficiently addressed, impacting clarity and reproducibility of research. Existing guidelines and checklists have not fully resolved these issues.
This workshop presents ARTEM-IS (Agreed Reporting Template for EEG Methodology – International Standard), a community-driven, web-based tool designed to help researchers systematically document ERP methodologies using a standardized metadata template.
We will begin by sharing the story behind ARTEM-IS, its origins, challenges, and the collaborative effort shaping it, emphasizing why better ERP method documentation represents both a technical need and a cultural shift toward scientific transparency.
Next, we’ll provide a guided walkthrough of the ARTEM-IS tool, demonstrating how to input detailed study information from design to visualization and generate both human- and machine-readable reports. We’ll also discuss current features and planned extensions, including support for complex designs and open science integration.
The core of the workshop is a practical challenge. Participants will use ARTEM-IS to document one of their own ERP studies in real time, with guidance throughout. Attendees should prepare a relevant paper to efficiently extract methodological details.
By the end, participants will have hands-on experience, a completed or nearly completed documentation template for their study, and insights on integrating ARTEM-IS into future publications.
Prerequisite: Everyone should prepare one ERP research paper in advance for populating the ARTEM-IS template.

Braindecode: harnessing deep learning and foundation models for brain signals decoding
Pierre Guetschel
Donders Institute for Brain, Cognition and Behaviour, Radboud University, The Netherlands
More information

DISCOVER-EEG: an open, fully automated EEG pipeline for biomarker discovery in clinical neuroscience
Cristina Gil Avila
Universidad Complutense de Madrid, Spain
More information
Biomarker discovery in neurological and psychiatric disorders critically depends on reproducible and transparent methods applied to large-scale datasets. Electroencephalography (EEG) is a promising tool for identifying biomarkers. However, recording, preprocessing, and analysis of EEG data are time-consuming and researcher-dependent. Therefore, we developed DISCOVER-EEG, an open and fully automated pipeline that enables easy and fast preprocessing, analysis, and visualization of resting state EEG data. Data in the Brain Imaging Data Structure (BIDS) standard are automatically preprocessed, and physiologically meaningful features of brain function (including oscillatory power, connectivity, and network characteristics) are extracted and visualized using two open-source and widely used MATLAB toolboxes (EEGLAB and FieldTrip). We tested the pipeline in two large, openly available datasets containing EEG recordings of healthy participants and patients with a psychiatric condition. Additionally, we performed an exploratory analysis that could inspire the development of biomarkers for healthy aging. Thus, the DISCOVER-EEG pipeline facilitates the aggregation, reuse, and analysis of large EEG datasets, promoting open and reproducible research on brain function.
This session will demonstrate the use of DISCOVER-EEG in a small EEG dataset and invite users to test it on their own.

Introduction to the EP Toolkit
Joseph Dien
University of Maryland, College Park, USA
More information
This three-session workshop will demonstrate how to use my free, open-source MATLAB EEG analysis suite (Dien, 2010) to analyze ERP data, with an emphasis on its strengths for performing cutting-edge artifact correction (Dien, 2024), robust ANOVA (Dien, 2017), and two-step PCA (Dien, 2012). Each session will consist of a brief presentation of the core concepts, followed by a demonstration of how to perform them using the EP Toolkit, and ending with a short hands-on period allowing for questions and answers.
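To give a flavor of the two-step PCA idea (a temporal PCA whose factor scores are then submitted to a spatial PCA), here is a minimal NumPy sketch on toy data. This is a conceptual illustration only; the EP Toolkit's MATLAB implementation includes further steps (e.g., factor rotation) not shown here, and all sizes and the embedded component are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ERP dataset: 20 observations (subjects x conditions), 32 channels, 100 samples
n_obs, n_chan, n_time = 20, 32, 100
data = rng.normal(size=(n_obs, n_chan, n_time))

# Embed one latent ERP component: a late Gaussian deflection on frontal channels,
# with amplitude varying across observations
component = np.exp(-0.5 * ((np.arange(n_time) - 60) / 8) ** 2)
amps = rng.normal(3.0, 1.0, size=n_obs)
data[:, :8, :] += amps[:, None, None] * component

# Step 1: temporal PCA -- variables are time points, rows are observation x channel
X = data.reshape(n_obs * n_chan, n_time)
X = X - X.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
n_keep = 5                       # retain a few temporal factors
scores = X @ vt[:n_keep].T       # factor scores for each observation x channel

# Step 2: spatial PCA on the channel scores of the first temporal factor
scores_3d = scores.reshape(n_obs, n_chan, n_keep)
factor0 = scores_3d[:, :, 0]
factor0 = factor0 - factor0.mean(axis=0)
_, s2, vt2 = np.linalg.svd(factor0, full_matrices=False)
spatial_loading = vt2[0]         # dominant spatial pattern of temporal factor 1
```

The first temporal factor recovers the embedded waveform, and the subsequent spatial PCA recovers its frontal topography, separating "when" from "where" in the data.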
Prerequisite: None

HappyFeat – an interactive BCI framework for optimal feature selection
Arthur Desbois
Inria Paris, ICM, France
More information
Due to the high level of variability in EEG signals, the performance of a BCI system is closely linked to the choice of appropriate, customized classification features. The HappyFeat Python software simplifies BCI experiments by providing feature extraction, automation, visualization, and machine-learning tools, and by interfacing with established BCI software (OpenViBE, Timeflux), allowing experimenters to concentrate on the essentials: fine-tuning the BCI.
After a presentation of the constraints of motor imagery (MI)-based BCI in experimental and clinical settings, we will explain the main mechanics of HappyFeat, followed by a demonstration/tutorial that attendees will be able to follow and replicate on their own system. We will conclude with a more in-depth explanation of how to customize BCI pipelines in HappyFeat (using template scenarios), and an open discussion.
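To make "choosing appropriate classification features" concrete, here is a toy NumPy/SciPy sketch of band-power feature extraction and Fisher-score ranking for a simulated MI-style dataset. This is a generic illustration of the concept, not HappyFeat's API; the signal model (a mu rhythm that weakens during imagery) and all parameters are invented for the example:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, n_trials, n_sec = 250, 40, 3
t = np.arange(0, n_sec, 1 / fs)

def make_trial(mu_amp):
    """One single-channel trial: white noise plus a mu-band (11 Hz) rhythm."""
    return rng.normal(size=t.size) + mu_amp * np.sin(2 * np.pi * 11 * t)

# Motor imagery is classically accompanied by mu-band desynchronization:
# class 0 (rest) has a strong mu rhythm, class 1 (imagery) a weak one.
trials = [make_trial(2.0) for _ in range(n_trials)] + \
         [make_trial(0.5) for _ in range(n_trials)]
labels = np.array([0] * n_trials + [1] * n_trials)

def band_power(x, lo, hi):
    freqs, psd = signal.welch(x, fs=fs, nperseg=fs)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

# Candidate features: mean PSD in a few canonical bands
bands = {"theta": (4, 8), "mu": (8, 13), "beta": (13, 30)}
feats = np.array([[band_power(x, lo, hi) for lo, hi in bands.values()]
                  for x in trials])

# Rank features by Fisher score: between-class separation over within-class spread
m0, m1 = feats[labels == 0].mean(0), feats[labels == 1].mean(0)
v0, v1 = feats[labels == 0].var(0), feats[labels == 1].var(0)
fisher = (m0 - m1) ** 2 / (v0 + v1)
best = list(bands)[int(np.argmax(fisher))]
print("Most discriminative band:", best)
```

Ranking candidate features by a separability criterion like this, before training a classifier, is the kind of decision HappyFeat is designed to support interactively.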

Documenting Events in Time Series Recordings using HED Tools
Scott Makeig
Institute for Computational Neuroscience, UCSD, USA
More information
Cognitive neuroscience and functional neuroimaging seek to relate recorded brain dynamics to the concurrent sensory experience and behavior of the imaged participants, so documenting the details of this experience and behavior is essential. In the current era of growing public data archives, annotating sensory, behavioral, and other events in neuroimaging data using common terms and syntax allows efficient data search, retrieval, and joint analysis (including AI-empowered mega-analysis). Unfortunately, the need for a common event-annotation system has not been adequately addressed in emerging data storage standards (e.g., BIDS or NWB).
A dozen years ago, Nima Bigdely-Shamlo at UCSD proposed developing a standard for annotating events occurring during time series recordings, naming it the system of Hierarchical Event Descriptors (HED). Following a decade of development, the HED standard and its growing array of associated user tools were accepted in 2024 by the INCF as the (currently sole) international standard for event annotation of time series data.
The HED tutorial will consist of compact lectures on HED purpose and structure, the process of HED annotation, and using HED annotations in M/EEG data analysis. Example analyses will use EEGLAB and FieldTrip. These will alternate with HED tool demonstrations and periods for attendees to try applying the demonstrated tools to readily downloaded data. Tutors will be available to answer attendee questions (making use of whatever videochat options are available). HED tools now include a HED annotation assistant using AI resources.
We also hope to be able to report on a proposed NeurIPS competition using a very large (~3k subject) EEG dataset (Healthy Brain Network data, available on NEMAR.org). We hope that this competition will provide stimulating examples of using HED annotations to mine M/EEG data.
Prerequisite: Some understanding of current data archiving systems (BIDS or NWB).
Websites
https://www.HEDtags.org
https://www.youtube.com/@HierarchicalEventDescriptors
References
Makeig, S. and Robbins, K., 2024. Events in context—The HED framework for the study of brain, experience and behavior. Frontiers in Neuroinformatics, 18, p.1292667.
Robbins, K., Truong, D., Jones, A., Callanan, I. and Makeig, S., 2022. Building FAIR functionality: annotating events in time series data using hierarchical event descriptors (HED). Neuroinformatics, 20(2), pp.463-481.
Robbins, K., Truong, D., Appelhoff, S., Delorme, A. and Makeig, S., 2021. Capturing the nature of events and event context using hierarchical event descriptors (HED). NeuroImage, 245, p.118766.

Hidden multivariate patterns to locate cognitive events on a by-trial basis
Gabriel Weindel
Institut de psychologie – Université de Lausanne, Switzerland
More information
In this course, participants will learn how to use hidden multivariate pattern models (HMP, Weindel, van Maanen & Borst, 2024, Imag. Neuro.) to identify and locate cognitive events in time-series.
The HMP method assumes that task-relevant operations performed by the brain are represented as multivariate patterns in neural signals such as electro- or magneto-encephalographic data. Unlike typical multivariate pattern analysis methods, HMP assumes that events are variable in time over trials yet sequential to one another. Leveraging these assumptions, the method recovers the location of sequential brain responses on a by-trial basis. This estimation allows one to go beyond epoching data based on external events, such as stimulus or response onset, and to center analyses around a functional period of interest; it can thus serve as a starting point for many applications.
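A minimal NumPy sketch of the core intuition, using a simulated 50 ms half-sine event whose latency varies across trials. Everything here (topography, latency range, noise model) is invented for illustration, and the latencies are treated as known; HMP's actual contribution is to infer them from the data rather than assume them:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_trials, n_chan, n_time = 200, 50, 10, 400

# A 50 ms half-sine "event" pattern, similar in spirit to the patterns HMP assumes
width = int(0.05 * fs)
pattern = np.sin(np.pi * np.arange(width) / width)

# Hypothetical fixed topography and trial-varying latency of one cognitive event
topo = rng.normal(size=n_chan)
latencies = rng.integers(80, 240, size=n_trials)

data = rng.normal(size=(n_trials, n_chan, n_time))
for i, lat in enumerate(latencies):
    data[i, :, lat:lat + width] += np.outer(topo, pattern)

# Stimulus-locked averaging smears the event across its latency range...
erp = data.mean(axis=0)
# ...whereas realigning each trial to its (here, known) latency recovers it
aligned = np.stack([data[i, :, l:l + width] for i, l in enumerate(latencies)])
recovered = aligned.mean(axis=0)
```

The stimulus-locked average nearly erases the event, while the latency-aligned average recovers both its waveform and topography, which is exactly why by-trial localization of events matters.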
This flower of the toolbox bouquet consists of a lecture about the method and several tutorials. The tutorials will be based on the dedicated Python package and will guide participants in the use of HMP. First, participants will learn how to simulate events in EEG data using a dedicated simulation module. These simulations will then serve as input data for HMP to illustrate the method's benefits and limitations. Finally, we will use public EEG datasets to illustrate the use of HMP in the wild and show how participants can readily apply HMP to their own data.
Prerequisite: Python (>3.10)

Human Neocortical Neurosolver (HNN): An open-source software for cellular and circuit-level interpretation of human MEG/EEG
Stephanie Jones
Brown University, USA
More information
The Human Neocortical Neurosolver (HNN) is a user-friendly neural modeling software designed to provide a cell- and microcircuit-level interpretation of macroscale magneto- and electroencephalography (M/EEG) signals (https://hnn.brown.edu, Neymotin et al. 2020). The foundation of HNN is a biophysically-detailed neocortical model, representing a patch of neocortex receiving thalamic and corticocortical drive. The HNN model was designed to simulate the time course of primary current dipoles and enables direct comparison, in nAm units, to source-localized M/EEG data, along with layer-specific cellular activity. HNN workflows are constructed around simulating commonly measured Event Related Potentials (ERPs) and low-frequency oscillations. The HNN model can be accessed through a user-friendly interactive graphical user interface (GUI) or through a Python scripting interface.
The foundation of HNN, referred to as HNN-core (Jas et al. 2023), is a Python package containing all of the core functionality of HNN, and is implemented with a clear application programming interface (API). A new GUI has recently been implemented. Tutorials on how to simulate ERPs and low-frequency oscillations in the alpha, beta, and gamma bands are distributed for both the interactive GUI and the Python API. HNN was created with best practices in open-source software to allow the computational and human neuroscience communities to understand and contribute to its development. The HNN API contains additional functionality beyond that accessible through the GUI, including the ability to modify local network connectivity, perform parameter optimization, and simulate layer-specific local field potential signals and current source density. The package is available to install with a single command on PyPI (“pip install hnn_core”), is unit tested, and extensively documented. HNN is additionally accessible through computing resources offered by the Neuroscience Gateway (NSG), enabling large simulation workloads. Overall, HNN is a one-of-a-kind, openly-distributed tool designed for a broad community to develop and test hypotheses on the multiscale origins of localized human M/EEG signals.
In this session, we will begin with a didactic overview of the background and development of HNN. We will then introduce users to the GUI and Python API through lectures and demo investigations of ERPs.
Prerequisite: basic neuroscience background

HyPyP – the Hyperscanning Python Pipeline
Guillaume Dumas
Université de Montréal, Canada
More information
Discover the potential of hyperscanning analysis with HyPyP, an open-source Python toolbox designed specifically for multi-brain neuroscience research (EEG, MEG, & fNIRS). This 1-hour hands-on workshop will introduce researchers to practical computational methods for analyzing data collected simultaneously from multiple participants during social interactions.
Hyperscanning—the simultaneous recording of brain activity from multiple individuals—represents a paradigm shift in social neuroscience, allowing researchers to move beyond traditional single-brain stimulus-response approaches to study real-time neural dynamics during natural social exchanges between multiple individuals. However, these complex datasets require specialized analytic techniques that conventional neuroimaging software packages do not typically offer.
This workshop will provide participants with:
– An overview of hyperscanning methodologies and their analytical challenges
– Hands-on experience with HyPyP’s core functions for multi-brain data preprocessing
– Practical implementation of inter-brain connectivity measures
– Visualization techniques for inter-brain synchrony analysis
– Statistical approaches specific to hyperscanning experiments
The session will combine brief theoretical explanations with live coding demonstrations using sample datasets in EEG and fNIRS. Participants will work through practical examples illustrating HyPyP’s capabilities for capturing neural signatures of social coordination.
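As a toy illustration of one such inter-brain connectivity measure, the following NumPy/SciPy sketch computes the phase-locking value (PLV) between two simulated participants' signals. This is a generic implementation of PLV, not HyPyP's API, and the simulated coupling (a shared, slowly drifting 10 Hz rhythm with a quarter-cycle lag) is invented for the example:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
fs, n_sec = 128, 20
t = np.arange(0, n_sec, 1 / fs)

# Two hypothetical participants' signals sharing a slowly drifting 10 Hz rhythm
# with a stable quarter-cycle phase lag, plus independent noise
phase = 2 * np.pi * 10 * t + np.cumsum(rng.normal(0, 0.05, t.size))
sig_a = np.sin(phase) + 0.3 * rng.normal(size=t.size)
sig_b = np.sin(phase - np.pi / 4) + 0.3 * rng.normal(size=t.size)

def plv(x, y):
    """Phase-locking value: magnitude of the mean phase-difference vector."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

coupled = plv(sig_a, sig_b)
baseline = plv(sig_a, rng.permutation(sig_b))  # time-scrambled surrogate
print(f"PLV coupled: {coupled:.2f}, surrogate: {baseline:.2f}")
```

Comparing the observed PLV against a scrambled surrogate, as sketched on the last line, is the simplest version of the statistical approaches covered in the workshop.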
Prerequisite: a laptop with Python installed (Anaconda distribution recommended); basic knowledge of Python and neuroimaging concepts; pre-installation of HyPyP and dependencies (installation instructions will be provided to registered participants).

iElectrodes Toolbox: Fast, Robust, and Open-Source Localization of Intracranial Electrodes
Alejandro O. Blenkmann
RITMO, Department of Psychology, University of Oslo, Norway
More information
Precise anatomical localization of intracranial electrodes is crucial for interpreting invasive recordings in clinical and cognitive neuroscience research. The open-source iElectrodes toolbox offers a fast, semi-automated, and robust solution for localizing subdural grids, depth electrodes, and strips from MRI and CT images, supporting automatic anatomical labeling. iElectrodes was initially introduced in Blenkmann et al. (2017), and has been updated with major methodological innovations in Blenkmann et al. (2024). To date, it has >2000 downloads.
In this 90-minute session, I will first provide an introductory lecture on the core functionalities of iElectrodes, including image pre-processing steps, semi-automatic electrode localization, brain shift compensation, and standardized anatomical registration. We will cover the recent major upgrades to the toolbox: the GridFit algorithm for robust localization of SEEG and ECoG electrodes under challenging conditions (e.g., noise, overlaps, and high-density implants), and CEPA (Combined Electrode Projection Algorithm), a smooth brain-shift compensation method for grids based on mechanical modeling principles. These developments significantly enhance the robustness and precision of intracranial electrode localization.
In the second part of the session, we will move into a hands-on tutorial, where participants will learn how to use the toolbox through practical exercises. Using real patient datasets (anonymized), we will cover:
• Preprocessing MRI and CT images.
• Semi-automatic detection and localization of electrode coordinates using clustering and GridFit algorithms.
• Brain shift correction using CEPA.
• Automatic anatomical labeling of electrodes.
• Generation of an iElectrodes localization project file.
• Exporting electrode coordinates into formats compatible with Fieldtrip, EEGLAB, and text reports.
• Integration with further analysis workflows.
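To illustrate the geometry behind contact localization in a highly simplified form (this is not the GridFit or CEPA algorithm, and all coordinates, spacings, and noise levels are invented), the following NumPy sketch orders detected SEEG contact centroids along a known shaft trajectory:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical SEEG shaft: 8 contacts, 3.5 mm apart, along an entry-to-tip axis
entry, tip = np.array([60.0, 10.0, 40.0]), np.array([30.0, 20.0, 30.0])
axis = (tip - entry) / np.linalg.norm(tip - entry)
true_contacts = tip - np.arange(8)[:, None] * 3.5 * axis

# CT artifact centroids, detected in arbitrary order with ~0.4 mm jitter
detected = rng.permutation(true_contacts + rng.normal(0, 0.4, true_contacts.shape))

# Order contacts by their projection along the shaft axis, deepest (tip) first
depth = (detected - entry) @ axis
ordered = detected[np.argsort(depth)[::-1]]

spacing = np.linalg.norm(np.diff(ordered, axis=0), axis=1)
print("mean inter-contact spacing:", spacing.mean().round(2), "mm")
```

Real implants violate these idealizations (bent shafts, overlapping artifacts, brain shift), which is precisely what the toolbox's model-based methods are designed to handle.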
This session is intended for both clinical and cognitive neuroscience research users working with SEEG or ECoG. Attendees will leave with practical skills for reliable and reproducible electrode localization, ready to apply to their own datasets.
Required Materials:
• Participants should install MATLAB (requires a license) and download the open-source iElectrodes toolbox (available at https://sourceforge.net/projects/ielectrodes/) ahead of the session.
• Example datasets of pre-processed images will be provided before the event.
References:
• Blenkmann AO, et al. (2017). iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization. Frontiers in Neuroinformatics, 11:14. doi:10.3389/fninf.2017.00014
• Blenkmann AO, et al. (2024). Anatomical registration of intracranial electrodes. Robust model-based localization and deformable smooth brain-shift compensation methods. Journal of Neuroscience Methods, 404:110056. doi:10.1016/j.jneumeth.2024.110056

LaMEG: A toolbox for laminar MEG simulations and analyses
James Bonaiuto
CNRS, France
More information
Recent years have witnessed a transformative shift in studying human cortical circuit dynamics, propelled by advancements in magnetoencephalography (MEG) techniques. In particular, high-precision, head-cast MEG offers a tantalizing opportunity to measure neural activity in different cortical layers. This presentation introduces the laMEG (“la MEG”) toolbox, designed to allow laminar simulation and analyses of MEG data through a unified Python interface. laMEG seamlessly interfaces with the Statistical Parametric Mapping (SPM) toolbox via the MATLAB Python engine (no MATLAB license required), enabling users to leverage the powerful source reconstruction algorithms implemented in SPM using the flexibility of Python. This session will cover the core functionalities of the laMEG toolbox, including cortical surface processing, laminar signal simulation, and model comparison and ROI-based laminar inference techniques. I will then demonstrate its application to motor and visual event-related fields in human MEG data, and finally I will discuss the potential research applications and the impact of laMEG on current and future studies. By providing examples from recent and ongoing research, I aim to demonstrate the versatility and power of the laMEG toolbox in bridging the gap between circuit-level understanding in animal models and large-scale brain networks in humans.

MEEGsim: building blocks for simulating M/EEG activity and connectivity with MNE-Python
Nikolai Kapralov, Alina Studenova
Max Planck Institute for Human Cognitive and Brain Sciences, Germany
More information
Have you ever wondered what will happen to your results if you change a parameter of your analysis method? Did you want to test whether your results could be explained by a trivial effect? Or did you need to generate a toy example to illustrate your idea in a presentation? For all these questions, simulated M/EEG data can be of great help! And simulations are even more fun when they can be assembled easily and flexibly. This is exactly the aim of the MEEGsim toolbox, which provides building blocks for simulations, mostly focusing on connectivity (for now): template waveforms of source activity, simulation of phase-phase coupling, and adjustment of the signal-to-noise ratio. Come to the session to learn more about the toolbox and try it out in your (maybe even first) simulation! In the meantime, feel free to read more about the toolbox in the documentation: https://meegsim.readthedocs.io/en/stable/.
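In the same spirit, here is a minimal NumPy sketch of two such building blocks: a pair of sources coupled with a fixed phase lag, and exact adjustment of the signal-to-noise power ratio. This is a generic illustration of the ideas, not MEEGsim's API, and the frequencies, lag, and target SNR are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(5)
sfreq, duration = 250, 10
n = int(sfreq * duration)
t = np.arange(n) / sfreq

# Two coupled 10 Hz sources: a shared (slightly drifting) phase with a fixed
# quarter-cycle lag, i.e., perfect phase-phase coupling
phase = 2 * np.pi * 10 * t + np.cumsum(rng.normal(0, 0.1, n))
source1 = np.sin(phase)
source2 = np.sin(phase - np.pi / 2)

# Adjust the signal-to-noise ratio by scaling signal power against noise power
noise = rng.normal(size=n)
target_snr = 2.0  # desired power ratio, an arbitrary choice here
scale = np.sqrt(target_snr * noise.var() / source1.var())
measured = scale * source1 + noise
snr = (scale * source1).var() / noise.var()
```

Because the scale factor is derived from the empirical variances, the resulting power ratio matches the target exactly by construction.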
Prerequisite: Python >= 3.9 as well as MNE-Python and MEEGsim packages

MEGqc: A Standardized and Scalable Pipeline for MEG Data Quality Control
Karel Mauricio López Vilaret
Carl von Ossietzky Universität Oldenburg, Germany
More information
Magnetoencephalography (MEG) recordings are highly susceptible to noise and artifacts originating from environmental sources, physiological processes, and technical issues, all of which compromise data quality and interpretability. Current MEG quality control (QC) methods largely rely on manual and subjective procedures, reducing reproducibility and complicating data sharing. To address these challenges, we introduce MEGqc, an automated, open-source, and BIDS-compatible pipeline designed to provide standardized and scalable assessment of raw MEG signal quality.
MEGqc, developed in Python, leverages popular libraries such as MNE-Python, NumPy, and Plotly to compute established QC metrics, including signal variability (e.g., standard deviation, peak-to-peak amplitude), spectral noise (e.g., power-line interference), high-frequency muscle activity, and physiological artifacts from eye movements and cardiac activity. When available, head movement is also quantified. All metrics are saved as machine-readable BIDS derivatives and summarized in interactive HTML reports.
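A toy NumPy sketch of the variability-based part of such QC: per-channel standard deviation and peak-to-peak amplitude, with a robust z-score used to flag noisy and flat channels. This is a generic illustration of the idea; MEGqc's actual metrics, scaling, and thresholds differ, and the data and cutoff here are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
n_chan, n_time = 30, 5000
data = rng.normal(0, 1e-12, size=(n_chan, n_time))  # toy tesla-scale MEG data
data[7] *= 25         # one noisy channel
data[19] *= 0.01      # one (near-)flat channel

# Per-channel variability metrics, as used for automated QC
std = data.std(axis=1)
ptp = data.max(axis=1) - data.min(axis=1)

# Flag channels whose std deviates strongly from the channel population,
# using a median/MAD-based z-score that is robust to the outliers themselves
mad = np.median(np.abs(std - np.median(std))) * 1.4826
z = (std - np.median(std)) / mad
noisy = np.where(z > 5)[0]
flat = np.where(z < -5)[0]
print("noisy channels:", noisy, "flat channels:", flat)
```

Using the median and MAD rather than the mean and standard deviation keeps the bad channels from distorting the very baseline they are compared against.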
MEGqc efficiently identifies poor-quality recordings and integrates seamlessly into existing neuroimaging workflows, demonstrated through successful application on large datasets such as CamCAN (>600 subjects), promoting data transparency and reproducibility. Its modular architecture, user-friendly graphical interface, and parallel processing capabilities support effective QC in both small laboratories and large-scale consortia. MEGqc thus facilitates informed decision-making for data inclusion and enhances the quality of datasets used for advanced analyses, including machine learning.
Prerequisite: Windows or Linux (Ubuntu 16, 18, or 22), Python 3.10

Exploring EEG Signal Complexity with Modified Multiscale Entropy
Niels Kloosterman
Department of Psychology, University of Lübeck, Germany
More information
This workshop introduces participants to advanced M/EEG analysis techniques focused on quantifying neural signal complexity using modified Multiscale Entropy (mMSE). Drawing on the FieldTrip toolbox and an extended entropy analysis pipeline, I will explore how mMSE can capture meaningful temporal irregularities in brain activity that are often missed by traditional analysis approaches like spectral power or event-related potentials.
The session is structured into two main components: a conceptual lecture and a hands-on demonstration (~45 minutes each). In the lecture portion, I will begin by reviewing the theoretical basis of entropy as a measure of brain signal variability, emphasizing its relevance for characterizing brain states that dynamically change over time. I will then introduce the mMSE algorithm, a method designed to estimate moment-to-moment brain signal entropy over multiple timescales using discontinuous EEG data. Compared to conventional multiscale entropy (MSE), mMSE allows for computation of entropy over time by concatenating short segments of interest, using a similar sliding window approach as is done in traditional time-frequency (spectral) analysis. This makes mMSE especially useful for investigating short-lived cognitive processes as they occur in event-related designs.
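The entropy computation at the heart of mMSE builds on sample entropy. A minimal NumPy implementation of sample entropy itself (without the coarse-graining across timescales and the sliding-window machinery that mMSE adds, and with simplified edge handling) might look like this:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Minimal sample entropy: negative log of the conditional probability that
    sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, float)
    r = r_factor * x.std()          # tolerance, conventionally 0.2 * SD
    def count(length):
        # all overlapping templates of the given length
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        # pairwise Chebyshev (max-coordinate) distances between templates
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        n = templates.shape[0]
        return ((d <= r).sum() - n) / 2  # matching pairs, excluding self-matches
    return -np.log(count(m + 1) / count(m))

rng = np.random.default_rng(7)
t = np.arange(1000)
regular = np.sin(2 * np.pi * t / 25)   # highly predictable signal: low entropy
irregular = rng.normal(size=1000)      # white noise: high entropy
print(sample_entropy(regular), sample_entropy(irregular))
```

A predictable sine yields entropy near zero, while white noise yields a much higher value, which is the contrast that makes entropy informative about moment-to-moment signal irregularity.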
I will highlight recent applications of mMSE in cognitive and systems neuroscience to demonstrate its utility in linking entropy profiles to individual differences and task-related neural variability. The lecture will also cover key practical considerations, including preprocessing steps such as high-pass filtering, removal of the event-related potential, and segment selection, all of which are critical for reliable entropy estimation.
In the practical demonstration, participants will follow a step-by-step walkthrough of an mMSE analysis using a FieldTrip-based pipeline in MATLAB. I will begin with loading and preprocessing sample EEG data, proceed to entropy computation using the ft_entropyanalysis function, and finish with result visualization and interpretation. Special attention will be given to configuring the mMSE-specific parameters (e.g., time scales, pattern length and pattern similarity), and understanding how different parameter choices affect entropy estimates. Participants will gain hands-on experience that enables them to apply entropy-based analyses to their own FieldTrip-preprocessed datasets.
This session is aimed at M/EEG researchers and students with basic familiarity with EEG data processing and MATLAB. By the end of the workshop, attendees will have a solid conceptual and practical foundation for using mMSE as a tool to investigate brain signal variability. All code, example data, and links to the mMSE toolbox will be provided. See https://shorturl.at/NWhso for the online tutorial, and https://github.com/LNDG/mMSE for the MATLAB function.

OPM-MEG FLUX toolkit
Tara Ghafari, Arnab Rakshit
Department of Experimental Psychology, Department of Psychiatry, University of Oxford, UK
More information
The OPM-FLUX toolkit course provides a practical, hands-on introduction to analysing OPM-MEG data using OPM-FLUX—an advanced, Python-based analysis pipeline adapted from traditional SQUID-MEG workflows. Built on the MNE-Python framework, FLUX supports a wide range of analysis methods tailored for OPM data, including those from FieldLine and Cerca/QuSpin systems.
In this 4-hour session, participants will work through the FLUX material on their own laptops, guided step-by-step by the instructors. Each chapter of the FLUX toolkit will be introduced briefly before participants execute the corresponding code and analysis themselves. During each segment, we will ask targeted questions to reinforce key concepts, while also addressing any questions the participants may have.
The session will cover core aspects of OPM-MEG analysis, including BIDS formatting, preprocessing, event-related fields, spectral analysis, source modelling, and multivariate pattern analysis. While data acquisition itself will not be demonstrated hands-on, we will outline the general requirements and workflows involved.
This interactive format is designed to provide participants with a strong working knowledge of the FLUX pipeline and confidence in analysing their own OPM-MEG data.
Prerequisite: Basic familiarity with Python

Introducing PhysioEx, a new Python library for deep-learning based sleep staging
Guido Gagliardi
KU Leuven, Belgium
More information
In this lesson, we will explore PhysioEx, an open-source Python library designed to facilitate explainable deep learning for automated sleep staging. The session will guide participants through the core design principles of PhysioEx and demonstrate how it supports the complete deep learning sleep staging pipeline, from data loading and preprocessing to model training, evaluation, and explainability.
We will begin by discussing the motivation behind PhysioEx: the growing need for standardized, modular, and accessible tools to develop and evaluate sleep staging models that are both accurate and interpretable. Emphasis will be placed on how PhysioEx integrates Explainable AI (XAI) methods directly into the pipeline, bridging the gap between raw physiological data (EEG, EOG, EMG) and clinically meaningful decisions.
The lesson will then walk through the structure of the library, covering its extensible API and command-line interface to train, test, and fine-tune deep learning models on a large variety of datasets. We will detail how PhysioEx manages (big-)data loading and preprocessing, with a focus on how the library allows users to dynamically merge multiple datasets.
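As a minimal illustration of the epoching step that any sleep staging pipeline starts from, the following NumPy sketch splits a recording into the AASM-standard 30-second epochs and groups them into overlapping sequences, as consumed by sequence-to-sequence models. The array sizes and the sequence length L are illustrative choices, not PhysioEx defaults:

```python
import numpy as np

fs, hours = 100, 2                      # toy single-channel polysomnography
eeg = np.random.default_rng(8).normal(size=int(fs * 3600 * hours))

# Sleep staging operates on 30-second epochs (AASM convention)
epoch_len = 30 * fs
n_epochs = eeg.size // epoch_len
epochs = eeg[: n_epochs * epoch_len].reshape(n_epochs, epoch_len)

# Sequence-to-sequence models classify L consecutive epochs jointly,
# using the surrounding epochs as temporal context
L = 21
# sliding_window_view yields shape (n_seq, 1, L, epoch_len); drop the size-1 axis
sequences = np.lib.stride_tricks.sliding_window_view(epochs, (L, epoch_len))[:, 0]
print(epochs.shape, sequences.shape)
```

Feeding whole sequences rather than isolated epochs is what lets models such as SeqSleepNet exploit the strong temporal dependencies between neighboring sleep stages.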
We will explore the training and testing workflow, highlighting how PhysioEx supports a variety of state-of-the-art neural architectures for sleep staging with a focus on models resembling the sequence-to-sequence framework, such as SeqSleepNet, TinySleepNet and SleepTransformer. Particular attention will be given to cross-dataset training and generalization experiments, showing how PhysioEx enables fair and reproducible evaluation of model robustness across domains.
In the final part of the lesson, we will focus on explainability, introducing the set of post-hoc XAI algorithms implemented in the library and suited for time-series classification. These include techniques for saliency mapping, relevance propagation, and concept-based explanations that help interpret model predictions in alignment with AASM-defined sleep staging rules. We will show how these tools can provide meaningful insights into model behavior, promote transparency, and support clinical adoption.
By the end of the lesson, participants will have a comprehensive understanding of PhysioEx’s capabilities and its role in promoting reproducible, interpretable, and clinically aligned research in sleep medicine.
Prerequisite: Good familiarity with Python and PyTorch

Specparam 2.0: spectral parameterization with time-resolved estimates & updated models
Thomas Donoghue
University of Manchester, UK
More information
Spectral Parameterization (specparam; formerly fooof) is a method for parameterizing neural power spectra into aperiodic and periodic components, implemented and available in an open-source Python module. The original version of the tool (fooof v1.X) proposed an algorithm and model form for separating periodic (frequency-specific, putatively oscillatory) activity from aperiodic (broadband, present across all frequencies) activity. Both are physiologically interesting features, but because they overlap, dedicated methods are required to appropriately disentangle them. The tool is extensively supported by a documentation website (https://fooof-tools.github.io/), which features code-based tutorials, as well as examples, motivations, and an FAQ covering common topics and examining why parameterizing neural power spectra is a useful approach. This tool has been widely applied to M/EEG data, with applications across clinical and cognitive psychology and neuroscience.
The new version of the tool (specparam v2.0) extends this capacity in two main ways. First, it adds the ability to parameterize time-resolved spectral estimates (spectrograms), allowing for better analyses of spectral features across time and in relation to task events. Second, a rewrite of the module allows for more flexibility in model fitting, including new fit functions and a procedure for customizing the fitting algorithm. Collectively, this enables model testing and comparison between different potential models of spectral features (e.g., comparing different forms of the aperiodic component).
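The model form at the heart of the approach can be illustrated with a short, self-contained sketch. Note this is plain NumPy for illustration only, not the specparam API: a spectrum is built as an aperiodic 1/f component plus a Gaussian peak in log power, and the aperiodic exponent is then recovered with a naive log-log linear fit on peak-free frequencies.

```python
import numpy as np

# Illustrative sketch (NOT the specparam API): construct a power spectrum
# from the model form specparam assumes -- an aperiodic 1/f component plus
# a Gaussian "peak" in log10 power -- then recover the aperiodic exponent.
freqs = np.linspace(2, 40, 200)
exponent, offset = 1.5, 1.0                      # ground-truth aperiodic parameters
aperiodic = offset - exponent * np.log10(freqs)  # log10 power, no "knee"
peak = 0.6 * np.exp(-((freqs - 10) ** 2) / (2 * 1.5 ** 2))  # alpha-band peak
log_power = aperiodic + peak

# Naive fit: regress log10 power on log10 frequency, excluding the peak band.
# (specparam's actual algorithm fits aperiodic and periodic parts iteratively.)
mask = (freqs < 6) | (freqs > 15)
slope, intercept = np.polyfit(np.log10(freqs[mask]), log_power[mask], 1)
recovered_exponent = -slope  # close to the ground-truth 1.5
```

This toy fit works only because the peak band is excluded by hand; the point of specparam is to do that separation robustly and automatically.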
This presentation will start with an overview of spectral parameterization, examining the motivations for the approach, including showing how common methods can conflate aperiodic and periodic features of the data. After this introduction, a live code demo will introduce the method, with a focus on demonstrating the new functionality available in version 2.0 of the tool, including how to apply spectral parameterization to time-resolved and event-related analysis designs and how to fit and compare different model forms. Time permitting, the session will end with a hands-on section in which participants can try out the method, including on their own data if available.
Prerequisite: The tool is available as an open-source Python toolbox. Participants who wish to follow along with the live code will be provided download and installation instructions.

Spikeinterface
Samuel Garcia ♥
CNRS, CRNL, France

Building Brain-Computer Interfaces and multimodal applications with Timeflux
Pierre Clisson
Independent researcher, France
More information
In recent years, multimodal biosignal acquisition has become significantly easier and cheaper. Simultaneously, more and more people have been relying on the thriving Python data science and machine learning ecosystem. Attendees will learn about Timeflux, an open-source framework for real-time data processing, stimulus presentation, and machine learning pipelines.
This workshop offers practical guidelines for designing and conducting neurophysiological experiments using biosignals such as EEG, ECG, PPG, EDA, and EMG, as well as stimulus presentation.
We will first discuss the Timeflux theory of operation, including graph execution, latency management, writing processing and classification pipelines, and creating user interfaces. We will also review the main modules and how to write your own. Finally, we will discuss where Timeflux is headed and its upcoming new features.
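To give a flavor of how Timeflux graphs are declared, here is a minimal application sketch in YAML, modeled on the "hello world" example from the Timeflux documentation; the node module and class names are as documented there, but verify them against your installed version:

```yaml
# Minimal Timeflux application sketch (verify node names against your version):
# a random-data source node feeding a debug display node, executed at 1 Hz.
graphs:
  - id: hello
    nodes:
      - id: random
        module: timeflux.nodes.random
        class: Random
      - id: display
        module: timeflux.nodes.debug
        class: Display
    rate: 1
    edges:
      - source: random
        target: display
```

Each graph runs in its own process, and edges define how data frames flow between nodes; real applications swap the source node for a device driver and insert filtering and classification nodes along the edges.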
The second part will cover a practical application: a state-of-the-art, peer-reviewed cVEP speller that can be accessed through a web browser. It achieved 97.5% accuracy with just 88 seconds of calibration data using dry electrodes. We will discuss the general architecture of the application and the principles of code-modulated visual evoked potentials (cVEP), demonstrate sub-millisecond synchronization between stimuli and EEG data, and detail the classification pipeline and our probability accumulation method.
The workshop agenda will include dedicated time for a Q&A session. Participants are encouraged to discuss their own use cases.
Prerequisite: We invite anyone working with biosignals to join these presentations, including neuropsychologists and neurosociologists, BCI researchers, data scientists, and research engineers. Basic knowledge of the Python programming language is helpful but not strictly required.

Simulating continuous event-based EEG data using UnfoldSim.jl
Judith Schepers
University of Stuttgart, Germany
More information
When testing analysis pipelines, comparing different analysis approaches, or validating statistical methods, one often needs EEG data with a known ground truth. Simulating EEG data based on known parameters addresses this need.
Here, we present UnfoldSim.jl, a free and open-source Julia package designed for simulating continuous EEG data based on (potentially overlapping) event-related potentials. Using regression formulas, users can flexibly determine the relationship between the experimental design and the response functions. UnfoldSim.jl also provides support for multi-channel simulations via EEG-forward models and allows for the simulation of both single-subject and multi-subject data. One of its core design principles is modularity, enabling users to tailor the simulation to their specific research applications.
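The core idea, generating a continuous signal from potentially overlapping event-related responses, can be sketched in a few lines. This is plain NumPy for illustration only; UnfoldSim.jl itself is a Julia package with a much richer interface (regression formulas, forward models, multi-subject designs):

```python
import numpy as np

# Conceptual sketch (NOT the UnfoldSim.jl API): continuous "EEG" built by
# convolving an event impulse train with a known response kernel, plus noise.
rng = np.random.default_rng(0)
fs = 100                 # sampling rate (Hz)
n = 20 * fs              # 20 s of continuous signal

# Event onsets packed closer together than the response is long,
# so successive ERPs overlap in time.
onsets = np.sort(rng.choice(np.arange(fs, n - fs), size=40, replace=False))
impulses = np.zeros(n)
impulses[onsets] = 1.0

# Ground-truth response kernel: a damped oscillation, 600 ms long.
t = np.arange(int(0.6 * fs)) / fs
kernel = np.sin(2 * np.pi * 3 * t) * np.exp(-t / 0.15)

# Continuous signal = sum of overlapping responses + measurement noise.
eeg = np.convolve(impulses, kernel)[:n] + 0.1 * rng.standard_normal(n)
```

Because the ground-truth kernel and onsets are known, a simulation like this gives exactly the kind of reference signal needed to validate analysis pipelines.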
In this session, we will introduce UnfoldSim.jl and provide a brief overview of its key features. The core part of the session will guide you through a simple simulation example to introduce the different simulation ingredients and illustrate the simulation workflow. Afterwards, there will be a hands-on part in which you can explore the effect of different parameters on the simulated data and create your own simulations.
This workshop uses the Julia programming language and the practical part will be conducted using Pluto.jl notebooks. While programming experience in Julia is not required, experience in MATLAB, R, or Python is recommended.
If you want to learn more about UnfoldSim.jl already, have a look at its documentation (https://unfoldtoolbox.github.io/UnfoldDocs/UnfoldSim.jl/stable/) or our JOSS paper (https://joss.theoj.org/papers/10.21105/joss.06641).
Prerequisite: For the hands-on session, please install Julia (at least version 1.11) and Pluto.jl on your computer. We recommend you to follow this installation guide (from our Unfold workshop earlier this year): https://www.s-ccs.de/workshop_unfold_2025/installation.html

Unfold the Mysteries of EEG: Analyzing rERPs in Complex Paradigms with Unfold.jl
René Skukies
University of Stuttgart – Centre for Simulation Technology, Germany
More information
Are you interested in analyzing EEG while someone reads a book, listens to music while walking around, navigates the streets of a city, or appreciates a piece of art?
These naturalistic paradigms, where the experimenter does not control the subject’s sensory input, are becoming more popular, but they present difficult challenges for data analysis and interpretation. As a result, researchers are increasingly confronted with data where brain signals (such as ERPs) overlap in time and are confounded by continuous variables.
To address these complexities, we developed Unfold.jl[1], a toolbox operating within the regression ERP framework (rERP).
In this 2-hour workshop, we (René Skukies & Benedikt Ehinger) will cover, in theory and hands-on, mass-univariate rERPs, interactions and marginal effects, and overlap correction. Additionally, we will provide further material on continuous and non-linear effect modelling.
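The overlap-correction idea behind the rERP framework is linear deconvolution: build a time-expanded design matrix whose columns are time-shifted copies of the event train, then recover the full response with least squares. Below is a conceptual NumPy sketch under that assumption, not the Unfold.jl API:

```python
import numpy as np

# Conceptual sketch of rERP overlap correction (NOT the Unfold.jl API).
rng = np.random.default_rng(1)
fs, n, klen = 100, 2000, 60
onsets = np.sort(rng.choice(np.arange(klen, n - klen), size=60, replace=False))
t = np.arange(klen) / fs
true_kernel = np.sin(2 * np.pi * 3 * t) * np.exp(-t / 0.15)

impulses = np.zeros(n)
impulses[onsets] = 1.0
# Heavily overlapping responses plus noise.
eeg = np.convolve(impulses, true_kernel)[:n] + 0.05 * rng.standard_normal(n)

# Time-expanded design matrix: column k is the event train shifted by k
# samples (onsets are kept away from the edges, so np.roll never wraps).
X = np.column_stack([np.roll(impulses, k) for k in range(klen)])

# Least squares "unfolds" the overlapping responses into one clean kernel.
est_kernel, *_ = np.linalg.lstsq(X, eeg, rcond=None)
```

A naive epoch average would smear neighboring responses into each other here; the regression estimate recovers the ground-truth kernel despite the overlap.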
The workshop will be held in the Julia programming language. However, we provide notebooks that accommodate those with little coding experience while offering optional challenges for more experienced users, so you will have no trouble following the hands-on sessions if you have (some) experience in Python and/or MATLAB. Additionally, Julia and Unfold.jl can be called directly from Python, making it easy to apply the learned concepts within your existing workflows.
It’s time to unfold your potential(s)!
[1] https://unfoldtoolbox.github.io/UnfoldDocs/Unfold.jl/stable/
Prerequisite: To take part in the hands-on sessions, you must install Julia (at least version 1.11) and Pluto.jl on your computer. To do this, you can follow the installation guide from our workshop earlier this year: https://www.s-ccs.de/workshop_unfold_2025/installation.html

UnfoldMixedModels.jl – LMMs & EEG
Benedikt Ehinger
University of Stuttgart, Germany
More information
Linear mixed models are versatile and increasingly popular in cognitive psychology to analyze behavioral datasets with within-subject trial-repetitions. Some brave researchers have already applied these hierarchical models to EEG data, typically on the averaged space/time region of interest.

Canon Fire – A Barrage of Canonical Filters for Whole Head and Source Montaging
John Mosher
University of Texas Health Science Center at Houston, USA
More information
Canon Fire is a software tool for the rapid computation of linear reconstruction weights, designed to integrate seamlessly with Brainstorm’s Montage Panel. Given a baseline (noise) covariance, a data covariance, a head (forward) model, and labeled source-space regions of interest (ROIs), Canon Fire enables fast switching between visualizations based on distinct inverse methods: minimum norm, beamforming, or weighted subspace fitting.
Reconstructed activity can be visualized either at the sensor level (i.e., denoised or “cleaned” sensor data) or as ROI-based source time series, facilitating direct comparisons across modeling strategies. The data covariance may be as short as a single time slice, while the baseline covariance should typically span several seconds or more. Canon Fire supports arbitrary head models (e.g., spherical, BEM, FEM) and label parcellations (e.g., Desikan-Killiany, Schaefer 100), ranging from single dipoles to whole hemispheres.
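Since all of these inverse methods reduce to a fixed linear operator applied to the sensor data, switching visualizations is just swapping weight matrices. As a hedged illustration (generic minimum-norm formula with made-up dimensions, not Canon Fire's code), the weights can be computed like this:

```python
import numpy as np

# Generic minimum-norm weights sketch (NOT Canon Fire's implementation).
rng = np.random.default_rng(2)
n_sensors, n_sources = 32, 200

G = rng.standard_normal((n_sensors, n_sources))  # forward (lead-field) matrix
C = 0.1 * np.eye(n_sensors)                      # baseline (noise) covariance
lam = 1.0                                        # regularization parameter

# Classic minimum-norm reconstruction weights: W = G' (G G' + lam * C)^-1
W = G.T @ np.linalg.inv(G @ G.T + lam * C)

# Source estimates are a fixed linear combination of the sensor data, so
# "montaging" between inverse methods amounts to swapping W.
data = rng.standard_normal((n_sensors, 100))     # sensors x time
sources = W @ data                               # sources x time
```

Beamforming or subspace-fitting weights would be derived differently (from the data covariance), but they plug into the same `W @ data` step, which is what makes rapid switching between views cheap.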
By enabling rapid recomputation, Canon Fire allows users to focus on experimental design decisions, such as selecting baseline and data intervals or refining anatomical label definitions, rather than getting bogged down in computational overhead. This workflow supports more flexible, interpretable, and hypothesis-driven analysis of neurophysiological data.
We demonstrate Canon Fire across multiple modalities (MEG, EEG, SEEG), head models, and acquisition platforms, including data from optically pumped magnetometers (OPMs).
Prerequisite: Some familiarity with Brainstorm’s interface


“Take that bouquet of Alpha waves in your face”
Hans Berger (alleged)