{"id":2390,"date":"2022-10-25T17:47:55","date_gmt":"2022-10-25T15:47:55","guid":{"rendered":"https:\/\/practicalmeeg2022.org\/?page_id=2390"},"modified":"2025-11-25T14:23:54","modified_gmt":"2025-11-25T13:23:54","slug":"bouquet","status":"publish","type":"page","link":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/bouquet\/","title":{"rendered":"Bouquet"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; next_background_color=&#8221;#ffffff&#8221; _builder_version=&#8221;4.27.4&#8243; background_enable_color=&#8221;off&#8221; use_background_color_gradient=&#8221;on&#8221; background_color_gradient_direction=&#8221;169deg&#8221; background_color_gradient_stops=&#8221;#93004c 12%|rgba(163, 20, 83, 1) 25%|#ad2955 53%|#521354 100%&#8221; background_color_gradient_start=&#8221;#d17900&#8243; background_color_gradient_start_position=&#8221;56%&#8221; background_color_gradient_end=&#8221;#d36d00&#8243; bottom_divider_style=&#8221;curve2&#8243; global_colors_info=&#8221;{}&#8221;][et_pb_row _builder_version=&#8221;4.27.4&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.16&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.17.6&#8243; header_font=&#8221;Comfortaa|700||on|||||&#8221; background_layout=&#8221;dark&#8221; custom_padding=&#8221;20px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h1 style=\"text-align: center;\">Toolboxes Bouquet<\/h1>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;||1px|||&#8221; top_divider_color=&#8221;#70004f&#8221; bottom_divider_color=&#8221;#f7edd2&#8243; global_colors_info=&#8221;{}&#8221;][et_pb_row 
column_structure=&#8221;3_5,2_5&#8243; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;55px||13px||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;3_5&#8243; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;Quicksand||||||||&#8221; header_3_font=&#8221;Lato|900|||||||&#8221; header_4_font=&#8221;Oswald|600|||||||&#8221; header_4_font_size=&#8221;25px&#8221; header_4_line_height=&#8221;1.4em&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4>Instructions<\/h4>\n<p>The <em>Toolbox Bouquet<\/em> is a half-day online session running on <strong>Oct 30th PM<\/strong> (CET).\u00a0 Several exciting practical courses are offered on selected <em>flowers<\/em> (toolboxes) for cutting-edge MEEG data analysis.<\/p>\n<p>This event will take place online. More information is coming soon; in the meantime, check the previous <a href=\"https:\/\/practicalmeeg2022.org\/bouquet\/\">PracticalMEEG <strong>2022<\/strong> bouquet<\/a> to get an impression of what this entails.<\/p>\n<p>[\/et_pb_text][et_pb_button button_url=&#8221;https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLSf88Q8At6QSssaX52rsLNxvlWIdZ5oB6efu3za_ncDnuL0o3g\/viewform&#8221; url_new_window=&#8221;on&#8221; button_text=&#8221;Online Bouquet Registration&#8221; button_alignment=&#8221;left&#8221; disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_button=&#8221;on&#8221; button_text_color=&#8221;#fbe8da&#8221; button_bg_color=&#8221;#008caf&#8221; button_bg_use_color_gradient=&#8221;on&#8221; button_bg_color_gradient_direction=&#8221;174deg&#8221; button_bg_color_gradient_stops=&#8221;#93004c 21%|#ba1655 46%|#93004c 71%&#8221; 
button_border_width=&#8221;3px&#8221; button_border_color=&#8221;#fbe8da&#8221; button_border_radius=&#8221;11px&#8221; button_font=&#8221;|600||||on|||&#8221; background_layout=&#8221;dark&#8221; disabled=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221; custom_margin=&#8221;||-11px|||&#8221;][\/et_pb_button][\/et_pb_column][et_pb_column type=&#8221;2_5&#8243; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; header_3_font=&#8221;Lato|900|||||||&#8221; header_4_font=&#8221;Oswald|600||on|||||&#8221; header_4_line_height=&#8221;1.4em&#8221; custom_margin=&#8221;40px||0px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4>Legend<\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;Quicksand||||||||&#8221; header_3_font=&#8221;Lato|700|||||||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><em>Flowers<\/em> (courses) have different teaching approaches composed of the following attributes:<\/p>\n<p>[\/et_pb_text][et_pb_blurb title=&#8221;Lecture&#8221; image=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/L-1.png&#8221; icon_placement=&#8221;left&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; header_font=&#8221;Quicksand||||||||&#8221; header_font_size=&#8221;16px&#8221; header_line_height=&#8221;2em&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_blurb][et_pb_blurb title=&#8221;Hands-On&#8221; image=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/H-1.png&#8221; icon_placement=&#8221;left&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; 
header_font=&#8221;Quicksand||||||||&#8221; header_font_size=&#8221;16px&#8221; header_line_height=&#8221;2em&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_blurb][et_pb_blurb title=&#8221;Demo&#8221; image=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/D.png&#8221; icon_placement=&#8221;left&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; header_font=&#8221;Quicksand||||||||&#8221; header_font_size=&#8221;16px&#8221; header_line_height=&#8221;2em&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_blurb][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; width=&#8221;72%&#8221; hover_enabled=&#8221;0&#8243; custom_css_free_form=&#8221;.selector {||    display: flex;||    align-items: center;||}||.selector .et_pb_column {||    display: flex;||    flex-direction: column;||    justify-content: center;||}&#8221; global_colors_info=&#8221;{}&#8221; sticky_enabled=&#8221;0&#8243; custom_padding=&#8221;0px|||||&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/11\/Capture-decran-2025-11-25-112400.png&#8221; title_text=&#8221;Capture d&#8217;\u00e9cran 2025-11-25 112400&#8243; url=&#8221;https:\/\/play.workadventu.re\/@\/cuttingeeg\/universes\/bouquet-toolboxes-2025\/office&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; width=&#8221;63%&#8221; module_alignment=&#8221;right&#8221; hover_enabled=&#8221;0&#8243; border_radii=&#8221;on|500px|500px|500px|500px&#8221; 
box_shadow_style=&#8221;preset2&#8243; global_colors_info=&#8221;{}&#8221; sticky_enabled=&#8221;0&#8243;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;Quicksand||||||||&#8221; header_3_font=&#8221;Lato|900|||||||&#8221; header_4_font=&#8221;Oswald|600|||||||&#8221; header_4_font_size=&#8221;25px&#8221; header_4_line_height=&#8221;1.4em&#8221; custom_margin=&#8221;|1vw|||false|false&#8221; hover_enabled=&#8221;0&#8243; global_colors_info=&#8221;{}&#8221; sticky_enabled=&#8221;0&#8243;]<\/p>\n<h4>Online world<\/h4>\n<p style=\"text-align: justify;\">Want to get a fresh smell of the Toolbox flowers? Just click on the image to go back to the bouquet. Feel free to use this space to meet on flowery topics (up to 15 concurrent people).<\/p>\n<p style=\"text-align: justify;\"><span style=\"text-decoration: underline;\">Note<\/span>: This session wasn&#8217;t recorded; contact the florists of each toolbox for further details.<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; prev_background_color=&#8221;#ffffff&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.27.4&#8243; background_color=&#8221;#f7edd2&#8243; use_background_color_gradient=&#8221;on&#8221; background_color_gradient_direction=&#8221;153deg&#8221; background_color_gradient_stops=&#8221;#f7edd2 30%|#ffe5ad 100%&#8221; background_enable_image=&#8221;off&#8221; parallax=&#8221;on&#8221; parallax_method=&#8221;off&#8221; background_blend=&#8221;hue&#8221; z_index=&#8221;14&#8243; custom_padding=&#8221;54px|0px|53px|0px&#8221; top_divider_style=&#8221;waves2&#8243; top_divider_flip=&#8221;horizontal|vertical&#8221; saved_tabs=&#8221;all&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_row 
column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;60px||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>AnyWave and Epitools<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Bruno Colombet<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Aix Marseille Universit\u00e9, France<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; 
box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>In the field of cognitive and clinical neuroscience, handling and analyzing EEG and MEG data from various acquisition systems is often a technical challenge due to heterogeneous formats and processing requirements. To address this, we developed a software tool designed to provide intuitive visualization, flexible pre- and post-processing, and interoperability with current neuroimaging standards. The software is fully compatible with the Brain Imaging Data Structure (BIDS) for EEG\/MEG, enabling structured data organization and seamless integration with standard analysis pipelines. By integrating with FreeSurfer and GARDEL (shown in the second part), the software allows visualization of brain activity on subject-specific cortical surfaces. We will present various processing operations:<br \/>\u2022 Bandpass and notch filters<br \/>\u2022 Independent component analysis<br \/>\u2022 Power Spectral Density (PSD) estimation<br \/>\u2022 Time-frequency analysis (wavelets, STFT)<br \/>\u2022 BIDS interaction with SEEG activity mapping (raw signal or ICA topographies) and interactive visualization<\/p>\n<p>In a second part, we will present GARDEL, a companion tool for co-registration of CT and MRI scans and semi-automatic detection of intracranial electrodes. GARDEL facilitates accurate anatomical localization of implanted electrodes by segmenting and mapping them onto brain structures.<\/p>\n<p>We will also present the ability to create and execute custom analysis modules written in MATLAB or Python directly from within the software. 
This offers flexibility for users who rely on existing scripts or research pipelines.<br \/>A quick demonstration\/tutorial will show how to create a plugin in MATLAB\/Python.<\/p>\n<p>We will finish with a demo of available signal-processing plugins, such as the Delphos module, which detects spikes and fast oscillations in EEG signals\u2014an essential tool for clinical and research applications in epilepsy.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: None<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/D.png&#8221; title_text=&#8221;D&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; 
custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Improving ERP Method Reporting with ARTEM-IS: A Hands-On Introduction<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Katarina Steki\u0107, Nastassja Lopes Fischer, Dejan Paji\u0107<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">University of Belgrade, Serbia<\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Transparent and detailed reporting of ERP methods is essential but often insufficiently addressed, impacting clarity and reproducibility of research. 
Existing guidelines and checklists have not fully resolved these issues.<\/p>\n<p>This workshop presents ARTEM-IS (Agreed Reporting Template for EEG Methodology \u2013 International Standard), a community-driven, web-based tool designed to help researchers systematically document ERP methodologies using a standardized metadata template.<\/p>\n<p>We will begin by sharing the story behind ARTEM-IS, its origins, challenges, and the collaborative effort shaping it, emphasizing why better ERP method documentation represents both a technical need and a cultural shift toward scientific transparency.<\/p>\n<p>Next, we\u2019ll provide a guided walkthrough of the ARTEM-IS tool, demonstrating how to input detailed study information from design to visualization and generate both human- and machine-readable reports. We\u2019ll also discuss current features and planned extensions, including support for complex designs and open science integration.<\/p>\n<p>The core of the workshop is a practical challenge. Participants will use ARTEM-IS to document one of their own ERP studies in real time, with guidance throughout. 
Attendees should prepare a relevant paper to efficiently extract methodological details.<\/p>\n<p>By the end, participants will have hands-on experience, a completed or nearly completed documentation template for their study, and insights on integrating ARTEM-IS into future publications.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite:\u00a0Everyone should prepare one ERP research paper in advance for populating the ARTEM-IS template.<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; 
header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Braindecode: harnessing deep learning and foundation models for brain signals decoding<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Pierre Guetschel<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Donders Institute for Brain, Cognition and Behaviour, Radboud University, The Netherlands<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Braindecode is an open-source toolbox for training and deploying deep learning decoding models on M\/EEG signals. 
It contains a large collection of state-of-the-art deep learning models.<\/p>\n<p>In this session, we will see how to harness deep learning methods for M\/EEG decoding.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Python<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>DISCOVER-EEG: an open, fully automated EEG pipeline for biomarker discovery in clinical neuroscience<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Cristina Gil Avila<\/strong><\/h4>\n<h6><i><span 
style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Universidad Complutense de Madrid, Spain<\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Biomarker discovery in neurological and psychiatric disorders critically depends on reproducible and transparent methods applied to large-scale datasets. Electroencephalography (EEG) is a promising tool for identifying biomarkers. However, recording, preprocessing, and analysis of EEG data is time-consuming and researcher-dependent. Therefore, we developed DISCOVER-EEG, an open and fully automated pipeline that enables easy and fast preprocessing, analysis, and visualization of resting state EEG data. 
Data in the Brain Imaging Data Structure (BIDS) standard are automatically preprocessed, and physiologically meaningful features of brain function (including oscillatory power, connectivity, and network characteristics) are extracted and visualized using two open-source and widely used Matlab toolboxes (EEGLAB and FieldTrip). We tested the pipeline in two large, openly available datasets containing EEG recordings of healthy participants and patients with a psychiatric condition. Additionally, we performed an exploratory analysis that could inspire the development of biomarkers for healthy aging. Thus, the DISCOVER-EEG pipeline facilitates the aggregation, reuse, and analysis of large EEG datasets, promoting open and reproducible research on brain function.<\/p>\n<p>This session will demonstrate the use of DISCOVER-EEG in a small EEG dataset and invite users to test it on their own.<\/p>\n<p>&nbsp;<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite:\u00a0MATLAB 2019 or higher, EEGLAB, FieldTrip<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/H-1.png&#8221; title_text=&#8221;H&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; 
global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Introduction to the EP Toolkit<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Joseph Dien<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">University of Maryland, College Park, USA<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: justify;\">This three session 
workshop will demonstrate how to use my free open-source Matlab EEG analysis suite (Dien, 2010) to analyze ERP data, with an emphasis on its strengths for performing cutting edge artifact correction (Dien, 2024), robust ANOVA (Dien, 2017), and two-step PCA (Dien, 2012). Each session will consist of a brief presentation of the core concepts, followed by a demonstration of how to perform them using the EP Toolkit, and ending in a short hands-on period allowing for questions and answers.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: None<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; 
custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>HappyFeat &#8211; an interactive BCI framework for optimal feature selection<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Arthur Desbois<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Inria Paris, ICM, France<\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Due to the high level of variability in EEG signals, the performance of a BCI system is closely linked to the choice of appropriate, customized classification features. 
The HappyFeat Python software simplifies BCI experiments, providing extraction, automation, visualization and machine-learning tools, and interfacing with recognized BCI software (OpenViBE, Timeflux), allowing experimenters to concentrate on the essentials: fine-tuning the BCI.<\/p>\n<p>After a presentation of the constraints of Motor imagery (MI)-based BCI in experimental and clinical settings, we will explain the main mechanics of HappyFeat, followed by a demonstration\/tutorial, which spectators will be able to follow and replicate on their own system. We will conclude with a more in-depth explanation of how to customize BCI pipelines in HappyFeat (using template scenarios), and an open discussion.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Some basic knowledge in BCI and Motor Imagery<\/em><\/span><a href=\"https:\/\/practicalmeeg2022.discourse.group\/c\/git-and-github\/11\" target=\"_blank\" rel=\"noopener\"><\/a><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/LD-1.png&#8221; title_text=&#8221;LD-1&#8243; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; 
scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Documenting Events in Time Series Recordings using HED Tools<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Scott Makeig<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Institute for Computational Neuroscience, UCSD, USA<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Documenting Events in Time Series Data using HED Tools<br \/>Cutting M\/EEG Tutorials,<br \/>October, 2025<\/p>\n<p>Cognitive neuroscience and functional neuroimaging seek to relate brain dynamics to experience and to behavior. 
Specifically, neuroimaging seeks to model the relationship of the recorded time series data to the concurrent sensory experience and behavior of the imaged participants. Documenting details of this experience and behavior is thus essential. In the current era of growing public data archives, documenting sensory, behavioral, and other events in neuroimaging data using common terms and syntax allows efficient data search, retrieval, and joint analysis (including AI-empowered mega-analysis). Unfortunately, the need for a common event annotation system has not been adequately addressed in emerging data storage standards (e.g., BIDS or NWB).<br \/>A dozen years ago, Nima Bigdely-Shamlo at UCSD proposed the development of a standard for annotating events occurring during time series recordings, naming it the system of Hierarchical Event Descriptors (HED). Following a decade of development, the HED standard, together with its growing array of associated user tools, was accepted in 2024 by the INCF as the sole international standard for event annotation of time series data.<br \/>The HED tutorial will consist of compact lectures on the purpose and structure of HED, the process of HED annotation, and the use of HED annotations in M\/EEG data analysis. Example analyses will use EEGLAB and FieldTrip. These will alternate with HED tool demonstrations and periods for attendees to test the demonstrated tools on readily downloadable data. Tutors will be available to answer attendee questions (making use of whatever videochat options are available). HED tools now include a HED annotation assistant that draws on AI resources.<br \/>We also hope to be able to report on a proposed NeurIPS competition using a very large (~3k subject) EEG dataset (Healthy Brain Network data, available on NEMAR.org). 
We hope that this competition will provide stimulating examples of using HED annotations to mine M\/EEG data.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Some understanding of current data archiving systems (BIDS or NWB).<br \/><\/em><\/span><\/p>\n<p><span style=\"text-decoration: underline;\"><span style=\"color: #999999; text-decoration: underline;\"><em>Websites<\/em><\/span><\/span><br \/><a href=\"https:\/\/www.HEDtags.org\" target=\"_blank\" rel=\"noopener\"><em>https:\/\/www.HEDtags.org<\/em><\/a><br \/><a href=\"https:\/\/www.youtube.com\/@HierarchicalEventDescriptors\" target=\"_blank\" rel=\"noopener\"><em>https:\/\/www.youtube.com\/@HierarchicalEventDescriptors<\/em><\/a><\/p>\n<p><span style=\"text-decoration: underline;\"><span style=\"color: #999999; text-decoration: underline;\"><em>References<\/em><\/span><\/span><br \/><span style=\"color: #999999;\"><em>Makeig, S. and Robbins, K., 2024. Events in context\u2014The HED framework for the study of brain, experience and behavior. Frontiers in Neuroinformatics, 18, p.1292667.<\/em><\/span><br \/><span style=\"color: #999999;\"><em>Robbins, K., Truong, D., Jones, A., Callanan, I. and Makeig, S., 2022. Building FAIR functionality: annotating events in time series data using hierarchical event descriptors (HED). Neuroinformatics, 20(2), pp.463-481.<\/em><\/span><br \/><span style=\"color: #999999;\"><em>Robbins, K., Truong, D., Appelhoff, S., Delorme, A. and Makeig, S., 2021. Capturing the nature of events and event context using hierarchical event descriptors (HED). 
NeuroImage, 245, p.118766.<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/LD-1.png&#8221; title_text=&#8221;LD-1&#8243; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Hidden multivariate patterns to locate cognitive events on a by-trial basis<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Gabriel Weindel<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Institut de psychologie &#8211; Universit\u00e9 de Lausanne, Switzerland <\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span 
data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>In this course, participants will learn how to use hidden multivariate pattern models (HMP, Weindel, van Maanen &amp; Borst, 2024, Imag. Neuro.) to identify and locate cognitive events in time series. <br \/>The HMP method assumes that task-relevant operations performed by the brain are represented as multivariate patterns in neural signals such as electro- or magnetoencephalographic data. Unlike typical multivariate pattern analysis methods, HMP assumes that events are variable in time over trials yet sequential to one another. Leveraging these assumptions, the method recovers the location of sequential brain responses on a by-trial basis. This estimation allows one to go beyond epoching the data based on external events, such as stimulus or response onset, and instead center analyses on a functional period of interest, which can serve as a starting point for many applications.<\/p>\n<p>This flower of the toolbox bouquet consists of a lecture on the method and several tutorials. The tutorials will be based on the dedicated Python package and will guide participants in the use of HMP. 
First, participants will learn how to simulate events in EEG data using a dedicated simulation module. These simulations will then serve as input data for HMP to illustrate the method&#8217;s benefits and limitations. Finally, we will use public EEG datasets to illustrate the use of HMP in the wild and how participants can readily apply HMP to their own data.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite:\u00a0Python (&gt;3.10) <\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LH-1.png&#8221; title_text=&#8221;LH&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; 
header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand|700|||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Human Neocortical Neurosolver (HNN): An open-source software for cellular and circuit-level interpretation of human MEG\/EEG<\/h3>\n<h4 style=\"text-align: justify;\">Stephanie Jones<\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Brown University USA<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>The Human Neocortical Neurosolver (HNN) is a user-friendly neural modeling software designed to provide a cell- and microcircuit-level interpretation of macroscale magneto- and electroencephalography (M\/EEG) signals (<a href=\"https:\/\/hnn.brown.edu\" target=\"_blank\" rel=\"noopener\">https:\/\/hnn.brown.edu<\/a>, <a href=\"https:\/\/doi.org\/10.7554\/eLife.51214\" target=\"_blank\" rel=\"noopener\">Neymotin et al. 2020<\/a>). 
The foundation of HNN is a biophysically detailed neocortical model, representing a patch of neocortex receiving thalamic and corticocortical drive. The HNN model was designed to simulate the time course of primary current dipoles and enables direct comparison, in nAm units, to source-localized M\/EEG data, along with layer-specific cellular activity. HNN workflows are constructed around simulating commonly measured event-related potentials (ERPs) and low-frequency oscillations. The HNN model can be accessed through a user-friendly interactive graphical user interface (GUI) or through a Python scripting interface.<\/p>\n<p>The software core of HNN, referred to as HNN-core (<a href=\"https:\/\/doi.org\/10.21105\/joss.05848\" target=\"_blank\" rel=\"noopener\">Jas et al. 2023<\/a>), is a Python package containing all of the core functionality of HNN, implemented with a clear application programming interface (API). A new GUI has recently been implemented. Tutorials on how to simulate ERPs and low-frequency oscillations in the alpha, beta, and gamma bands are distributed for both the interactive GUI and the Python API. HNN was created with best practices in open-source software to allow the computational and human neuroscience communities to understand and contribute to its development. The HNN API contains additional functionality beyond that accessible through the GUI, including the ability to modify local network connectivity, perform parameter optimization, and simulate layer-specific local field potential signals and current source density. The package can be installed with a single command from PyPI (\u201cpip install hnn_core\u201d), is unit-tested, and is extensively documented. HNN is additionally accessible through computing resources offered by the Neuroscience Gateway (NSG), enabling large simulation workloads. 
Overall, HNN is a one-of-a-kind, openly-distributed tool designed for a broad community to develop and test hypotheses on the multiscale origins of localized human M\/EEG signals.<\/p>\n<p>In this session, we will begin with a didactic overview of the background and development of HNN. We will then introduce users to the GUI and Python API through lectures and demo investigations of ERPs.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: basic neuroscience background<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>HyPyP \u2013 the 
Hyperscanning Python Pipeline<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Guillaume Dumas<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Universit\u00e9 de Montr\u00e9al, Canada<\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Discover the potential of hyperscanning analysis with HyPyP, an open-source Python toolbox designed specifically for multi-brain neuroscience research (EEG, MEG, &amp; fNIRS). 
This 1-hour hands-on workshop will introduce researchers to practical computational methods for analyzing data collected simultaneously from multiple participants during social interactions.<\/p>\n<p>Hyperscanning\u2014the simultaneous recording of brain activity from multiple individuals\u2014represents a paradigm shift in social neuroscience, allowing researchers to move beyond traditional single-brain stimulus-response approaches to study real-time neural dynamics during natural social exchanges between multiple individuals. However, these complex datasets require specialized analytic techniques that conventional neuroimaging software packages do not typically offer.<\/p>\n<p>This workshop will provide participants with:<br \/>&#8211; An overview of hyperscanning methodologies and their analytical challenges<br \/>&#8211; Hands-on experience with HyPyP&#8217;s core functions for multi-brain data preprocessing<br \/>&#8211; Practical implementation of inter-brain connectivity measures<br \/>&#8211; Visualization techniques for inter-brain synchrony analysis<br \/>&#8211; Statistical approaches specific to hyperscanning experiments<\/p>\n<p>The session will combine brief theoretical explanations with live coding demonstrations using sample datasets in EEG and fNIRS. 
Participants will work through practical examples illustrating HyPyP&#8217;s capabilities for capturing neural signatures of social coordination.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite:\u00a0A laptop with Python installed (Anaconda distribution recommended); basic knowledge of Python and neuroimaging concepts; pre-installation of HyPyP and dependencies (installation instructions will be provided to registered participants)<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; 
header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>iElectrodes Toolbox: Fast, Robust, and Open-Source Localization of Intracranial Electrodes <b><br \/><\/b><\/h3>\n<h4 style=\"text-align: justify;\"><strong>Alejandro O. Blenkmann<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">RITMO, Department of Psychology, University of Oslo, Norway<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: justify;\">Precise anatomical localization of intracranial electrodes is crucial for interpreting invasive recordings in clinical and cognitive neuroscience research. The open-source iElectrodes toolbox offers a fast, semi-automated, and robust solution for localizing subdural grids, depth electrodes, and strips from MRI and CT images, supporting automatic anatomical labeling. iElectrodes was initially introduced in Blenkmann et al. (2017), and has been updated with major methodological innovations in Blenkmann et al. (2024). 
To date, it has &gt;2000 downloads.<br \/>In this 90-minute session, I will first provide an introductory lecture on the core functionalities of iElectrodes, including image pre-processing steps, semi-automatic electrode localization, brain shift compensation, and standardized anatomical registration. We will cover the recent major upgrades to the toolbox: the GridFit algorithm for robust localization of SEEG and ECoG electrodes under challenging conditions (e.g., noise, overlaps, and high-density implants), and CEPA (Combined Electrode Projection Algorithm), which provides smooth brain-shift compensation for grids by modeling brain deformations on mechanical principles. These developments significantly enhanced the robustness and precision of intracranial electrode localization.<\/p>\n<p style=\"text-align: justify;\">In the second part of the session, we will move into a hands-on tutorial, where participants will learn how to use the toolbox through practical exercises. Using real, anonymized patient datasets, we will cover:<\/p>\n<p style=\"text-align: justify; padding-left: 40px;\">\u2022 Preprocessing MRI and CT images.<br \/>\u2022 Semi-automatic detection and localization of electrode coordinates using clustering and GridFit algorithms.<br \/>\u2022 Brain shift correction using CEPA.<br \/>\u2022 Automatic anatomical labeling of electrodes.<br \/>\u2022 Generation of an iElectrodes localization project file.<br \/>\u2022 Exporting electrode coordinates into formats compatible with FieldTrip, EEGLAB, and text reports.<br \/>\u2022 Integration with further analysis workflows.<\/p>\n<p style=\"text-align: justify;\">This session is intended for both clinical and cognitive neuroscience research users working with SEEG or ECoG. 
Attendees will leave with practical skills for reliable and reproducible electrode localization, ready to apply to their own datasets.<br \/><span style=\"text-decoration: underline;\">Required Materials<\/span>:<\/p>\n<p style=\"text-align: justify; padding-left: 40px;\">\u2022 Participants should install MATLAB (requires a license) and download the open-source iElectrodes toolbox (available at https:\/\/sourceforge.net\/projects\/ielectrodes\/) ahead of the session.<br \/>\u2022 Example datasets of pre-processed images will be provided before the event.<br \/>References:<br \/>\u2022 Blenkmann AO, et al. (2017). iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization. Frontiers in Neuroinformatics, 11:14. doi:10.3389\/fninf.2017.00014<br \/>\u2022 Blenkmann AO, et al. (2024). Anatomical registration of intracranial electrodes. Robust model-based localization and deformable smooth brain-shift compensation methods. Journal of Neuroscience Methods, 404:110056. 
doi:10.1016\/j.jneumeth.2024.110056<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Matlab license required<\/em><\/span><a href=\"https:\/\/practicalmeeg2022.discourse.group\/c\/apice\/5\" target=\"_blank\" rel=\"noopener\"><\/a><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LH-1.png&#8221; title_text=&#8221;LH&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><b>LaMEG: A toolbox for laminar MEG simulations and analyses<\/b><\/h3>\n<h4 style=\"text-align: justify;\"><strong>James Bonaiuto<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span 
data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">CNRS, France<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: justify;\">Recent years have witnessed a transformative shift in studying human cortical circuit dynamics, propelled by advancements in magnetoencephalography (MEG) techniques. In particular, high-precision, head-cast MEG offers a tantalizing opportunity to measure neural activity in different cortical layers. This presentation introduces the laMEG (\u201cla MEG\u201d) toolbox, designed to allow laminar simulation and analyses of MEG data through a unified Python interface. laMEG seamlessly interfaces with the Statistical Parametric Mapping (SPM) toolbox via the MATLAB Python engine (no MATLAB license required), enabling users to leverage the powerful source reconstruction algorithms implemented in SPM using the flexibility of Python. This session will cover the core functionalities of the laMEG toolbox, including cortical surface processing, laminar signal simulation, and model comparison and ROI-based laminar inference techniques. 
I will then demonstrate its application to motor and visual event-related fields in human MEG data, and finally I will discuss the potential research applications and the impact of laMEG on current and future studies. By providing examples from recent and ongoing research, I aim to demonstrate the versatility and power of the laMEG toolbox in bridging the gap between circuit-level understanding in animal models and large-scale brain networks in humans.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite:\u00a0\u00a0<\/em><\/span><a href=\"https:\/\/practicalmeeg2022.discourse.group\/c\/model-based-neuroscience\/16\" target=\"_blank\" rel=\"noopener\"><\/a><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; 
header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>MEEGsim: building blocks for simulating M\/EEG activity and connectivity with MNE-Python<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Nikolai Kapralov, Alina Studenova<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Max Planck Institute for Human Cognitive and Brain Sciences, Germany<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Have you ever wondered what would happen to your results if you changed a parameter of your analysis method? Did you want to test whether your results could be explained by a trivial effect? Or did you need to generate a toy example to illustrate your idea in a presentation? For all these questions, simulated M\/EEG data can be of great help! And it is all the more fun when the simulations can be assembled easily and flexibly. 
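Assembling even a small simulation by hand shows the ingredients involved; the following NumPy sketch (purely illustrative, not the MEEGsim API — the frequencies, phase lag, and SNR are invented) builds two phase-coupled oscillatory sources and embeds one of them in noise at a chosen signal-to-noise ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq, n_sec = 250, 10                        # sampling rate (Hz), duration (s)
t = np.arange(n_sec * sfreq) / sfreq

# Source 1: a 10 Hz oscillation whose phase drifts slowly (random walk).
phase = 2 * np.pi * 10 * t + np.cumsum(rng.normal(0, 0.05, t.size))
s1 = np.cos(phase)

# Source 2: phase-coupled to source 1 with a fixed phase lag of pi/4.
s2 = np.cos(phase - np.pi / 4)

# Broadband noise, scaled so that source 1 sits at a chosen
# signal-to-noise ratio (here SNR = 2, in terms of power).
snr = 2.0
noise = rng.normal(size=t.size)
noise *= np.sqrt(np.mean(s1 ** 2) / (snr * np.mean(noise ** 2)))
mixed = s1 + noise
```

Writing this out by hand for every source, coupling scheme, and noise level quickly becomes unwieldy, which is precisely where reusable building blocks pay off.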
This is exactly the aim of the MEEGsim toolbox, which provides building blocks for simulations, mostly focusing on connectivity (for now): template waveforms of source activity, simulation of phase-phase coupling, and adjustment of the signal-to-noise ratio. Come to the session to learn more about the toolbox and try it out in your (maybe even first) simulation! In the meantime, feel free to read more about the toolbox in the documentation: https:\/\/meegsim.readthedocs.io\/en\/stable\/.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Python &gt;= 3.9 as well as MNE-Python and MEEGsim packages<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/LH-1.png&#8221; title_text=&#8221;LH-1&#8243; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; 
custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>MEGqc: A Standardized and Scalable Pipeline for MEG Data Quality Assessment<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Karel Mauricio L\u00f3pez Vilaret<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Carl von Ossietzky Universit\u00e4t Oldenburg, Germany<\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Magnetoencephalography (MEG) data are notoriously sensitive to environmental noise, physiological artifacts, and instrumental instabilities that can compromise signal quality. 
Yet, most current quality assessment (QA) and control (QC) practices remain manual, subjective, and difficult to scale\u2014limiting comparability across studies and laboratories.<\/p>\n<p>MEGqc offers a solution: an automated, open-source, and BIDS-compatible Python pipeline that performs standardized and reproducible QA of raw MEG data. Built on MNE-Python, NumPy, and Plotly, MEGqc computes a broad set of metrics\u2014signal variability, spectral noise, high-frequency muscle activity, ocular and cardiac contamination, and head motion (when available). All results are saved as machine-readable BIDS derivatives and summarized in interactive HTML reports for transparent, visual inspection.<\/p>\n<p>MEGqc performs QA, not correction\u2014but this upstream standardization directly improves subsequent QC decisions, helping researchers objectively flag, compare, and document data issues. With its dual interface\u2014a graphical user interface (GUI) for effortless exploration and a command-line interface (CLI) for scripting and integration\u2014MEGqc adapts to both new users and experienced analysts. 
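As a flavor of the kind of per-channel variability metric such a pipeline computes, here is a hand-rolled robust outlier check (an illustration only, not the MEGqc implementation; the channel counts, corrupted channels, and threshold are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 20, 5000
data = rng.normal(0.0, 1.0, (n_channels, n_samples))  # fake recording
data[3] *= 8.0                                        # one noisy channel
data[7] *= 0.05                                       # one near-flat channel

# Robust z-score of each channel's standard deviation: distance from the
# median in units of the median absolute deviation (MAD), so the bad
# channels themselves do not inflate the scale estimate.
stds = data.std(axis=1)
med = np.median(stds)
mad = np.median(np.abs(stds - med)) * 1.4826          # ~= std under normality
z = (stds - med) / mad

flagged = np.flatnonzero(np.abs(z) > 5)               # channels to review
```

A real QA pipeline reports many such metrics side by side; the point here is only that a robust scale estimate keeps one bad channel from masking another.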
Its parallel processing and modular design make it scalable from individual projects to large-scale initiatives.<\/p>\n<p>Applied to large datasets such as CamCAN (&gt;600 participants), MEGqc demonstrates how harmonized QA enables more reliable and reproducible analyses across multi-site collaborations, paving the way for consistent preprocessing, group-level modeling, and machine-learning applications.<\/p>\n<p><img decoding=\"async\" data-emoji=\"\ud83e\udde0\" class=\"an1\" alt=\"\ud83e\udde0\" aria-label=\"\ud83e\udde0\" draggable=\"false\" src=\"https:\/\/fonts.gstatic.com\/s\/e\/notoemoji\/16.0\/1f9e0\/72.png\" loading=\"lazy\" \/> Hands-on session:<br \/>Participants will install MEGqc, run it on a ready-to-download BIDS sample dataset, and learn to interpret the Global Quality Index (GQI) and component metrics in practice.<\/p>\n<p><img decoding=\"async\" data-emoji=\"\u2699\ufe0f\" class=\"an1\" alt=\"\u2699\ufe0f\" aria-label=\"\u2699\ufe0f\" draggable=\"false\" src=\"https:\/\/fonts.gstatic.com\/s\/e\/notoemoji\/16.0\/2699_fe0f\/72.png\" loading=\"lazy\" \/> Please install before the workshop:<br \/><img decoding=\"async\" data-emoji=\"\ud83d\udc49\" class=\"an1\" alt=\"\ud83d\udc49\" aria-label=\"\ud83d\udc49\" draggable=\"false\" src=\"https:\/\/fonts.gstatic.com\/s\/e\/notoemoji\/16.0\/1f449\/72.png\" loading=\"lazy\" \/> <a href=\"https:\/\/ancplaboldenburg.github.io\/megqc_documentation\/book\/installation.html\" target=\"_blank\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/ancplaboldenburg.github.io\/megqc_documentation\/book\/installation.html&amp;source=gmail&amp;ust=1761057279071000&amp;usg=AOvVaw3hSnsUChKoNKc9EFR5gvZK\" rel=\"noopener\"> https:\/\/ancplaboldenburg.<wbr \/>github.io\/megqc_documentation\/<wbr \/>book\/installation.html<\/a><\/p>\n<p><img decoding=\"async\" data-emoji=\"\ud83d\udce6\" class=\"an1\" alt=\"\ud83d\udce6\" aria-label=\"\ud83d\udce6\" draggable=\"false\" 
src=\"https:\/\/fonts.gstatic.com\/s\/e\/notoemoji\/16.0\/1f4e6\/72.png\" loading=\"lazy\" \/> A sample BIDS dataset:\u00a0<a href=\"https:\/\/cloud.uol.de\/s\/FRpRwyf3P2dbwNS\" id=\"m_8138866366409579598LPlnk716448\" style=\"font-size: 12pt;\" target=\"_blank\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/cloud.uol.de\/s\/FRpRwyf3P2dbwNS&amp;source=gmail&amp;ust=1761057279071000&amp;usg=AOvVaw1JSWNE5yR8nEag4cGyd0Nj\" rel=\"noopener\">https:\/\/cloud.uol.de\/<wbr \/>s\/FRpRwyf3P2dbwNS<\/a><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Windows OS, Linux Ubuntu (16, 18, 22), python 3.10<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; 
header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>UnfoldMixedModels.jl &#8211; LMMs &amp; EEG <b><br \/><\/b><\/h3>\n<h4 style=\"text-align: justify;\"><strong>Benedikt Ehinger<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">University of Stuttgart, Germany<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LH-1.png&#8221; title_text=&#8221;LH&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: justify;\">Linear mixed models are versatile and increasingly 
popular in cognitive psychology to analyze behavioral datasets with within-subject trial-repetitions. Some brave researchers have already applied these hierarchical models to EEG data, typically on the averaged space\/time region of interest.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Basics of multiple regression<\/em><\/span><a href=\"https:\/\/practicalmeeg2022.discourse.group\/c\/apice\/5\" target=\"_blank\" rel=\"noopener\"><\/a><\/p>\n<p>[\/et_pb_toggle][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><b>OPM-MEG FLUX toolkit<\/b><\/h3>\n<h4 style=\"text-align: justify;\"><strong>Tara Ghafari, Arnab Rakshit<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">Department of Experimental Psychology, Department of Psychiatry, University of Oxford, UK<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; 
custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>The OPM-FLUX toolkit course provides a practical, hands-on introduction to analysing OPM-MEG data using OPM-FLUX\u2014an advanced, Python-based analysis pipeline adapted from traditional SQUID-MEG workflows. Built on the MNE-Python framework, FLUX supports a wide range of analysis methods tailored for OPM data, including those from FieldLine and Cerca\/QuSpin systems.<\/p>\n<p>In this 4-hour session, participants will work through the FLUX material on their own laptops, guided step-by-step by the instructors. Each chapter of the FLUX toolkit will be introduced briefly before participants execute the corresponding code and analysis themselves. During each segment, we will ask targeted questions to reinforce key concepts, while also addressing any questions the participants may have.<\/p>\n<p>The session will cover core aspects of OPM-MEG analysis, including BIDS formatting, preprocessing, event-related fields, spectral analysis, source modelling, and multivariate pattern analysis. 
While data acquisition itself will not be demonstrated hands-on, we will outline the general requirements and workflows involved.<\/p>\n<p>This interactive format is designed to provide participants with a strong working knowledge of the FLUX pipeline and confidence in analysing their own OPM-MEG data.<a href=\"https:\/\/www.neuosc.com\/fluxtoolkit2025\" target=\"_blank\" rel=\"noopener\"><\/a><\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Basic familiarity with python<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/H-1.png&#8221; title_text=&#8221;H-1&#8243; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; 
header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Introducing PhysioEx, a new Python library for deep-learning based sleep staging<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Guido Gagliardi<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">KU Leuven, Belgium<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>In this lesson, we will explore PhysioEx, an open-source Python library designed to facilitate explainable deep learning for automated sleep staging. The session will guide participants through the core design principles of PhysioEx and demonstrate how it supports the complete deep learning sleep staging pipeline, from data loading and preprocessing to model training, evaluation, and explainability.<br \/>We will begin by discussing the motivation behind PhysioEx: the growing need for standardized, modular, and accessible tools to develop and evaluate sleep staging models that are both accurate and interpretable. 
Emphasis will be placed on how PhysioEx integrates Explainable AI (XAI) methods directly into the pipeline, bridging the gap between raw physiological data (EEG, EOG, EMG) and clinically meaningful decisions.<br \/>The lesson will then walk through the structure of the library, covering its extensible API and command-line interface to train, test, and fine-tune deep learning models on a large variety of datasets. We will detail how PhysioEx manages (big-)data loading and preprocessing, with a focus on how the library allows multiple datasets to be merged dynamically.<br \/>We will explore the training and testing workflow, highlighting how PhysioEx supports a variety of state-of-the-art neural architectures for sleep staging, with a focus on models following the sequence-to-sequence framework, such as SeqSleepNet, TinySleepNet and SleepTransformer. Particular attention will be given to cross-dataset training and generalization experiments, showing how PhysioEx enables fair and reproducible evaluation of model robustness across domains.<br \/>In the final part of the lesson, we will focus on explainability, introducing the set of post-hoc XAI algorithms implemented in the library and suited for time-series classification. These include techniques for saliency mapping, relevance propagation, and concept-based explanations that help interpret model predictions in alignment with AASM-defined sleep staging rules. 
We will show how these tools can provide meaningful insights into model behavior, promote transparency, and support clinical adoption.<br \/>By the end of the lesson, participants will have a comprehensive understanding of PhysioEx\u2019s capabilities and its role in promoting reproducible, interpretable, and clinically aligned research in sleep medicine.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: Good familiarity with Python and PyTorch<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/LH-1.png&#8221; title_text=&#8221;LH-1&#8243; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Specparam 2.0: spectral 
parameterization with time-resolved estimates &amp; updated models<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Thomas Donoghue<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">University of Manchester, UK<\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Spectral Parameterization (specparam; formerly fooof) is a method for parameterizing neural power spectra into aperiodic and periodic components. This tool is implemented and available in an open-source Python module. The original version of the tool (fooof v1.X) proposed an algorithm and model form for separating periodic (frequency specific; putatively oscillatory) activity from aperiodic (across all frequencies; broadband) activity, each of which are physiologically interesting features, but which require dedicated methods to appropriately disentangle these overlapping features. 
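The model form itself is compact; the following NumPy sketch (parameter values invented for illustration, following the "fixed" aperiodic mode described in the specparam literature) shows why raw band power conflates the two components:

```python
import numpy as np

freqs = np.linspace(1, 40, 200)              # frequency axis (Hz)

# Aperiodic component ('fixed' mode): log10 power falls off linearly
# in log10 frequency, parameterized by an offset and an exponent.
offset, exponent = 2.0, 1.5
aperiodic = offset - exponent * np.log10(freqs)

# Periodic component: a Gaussian peak in log-power space, parameterized
# by center frequency, power above the aperiodic fit, and bandwidth.
cf, pw, bw = 10.0, 0.6, 1.5
periodic = pw * np.exp(-((freqs - cf) ** 2) / (2 * bw ** 2))

log_power = aperiodic + periodic             # full model spectrum

# Mean 8-12 Hz log power mixes both components; subtracting the aperiodic
# fit isolates the genuinely oscillatory contribution.
alpha = (freqs >= 8) & (freqs <= 12)
total_alpha = log_power[alpha].mean()
periodic_alpha = periodic[alpha].mean()
```

A steeper aperiodic exponent alone would change `total_alpha` with no change in the oscillation, which is exactly the conflation that parameterization avoids.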
The tool is extensively supported by a documentation website (https:\/\/fooof-tools.github.io\/) which features code-based tutorials, as well as examples, motivations, and an FAQ covering common topics and examining why parameterizing neural power spectra is a useful approach. This tool has been widely applied to M\/EEG data, with applications across clinical and cognitive psychology and neuroscience.<\/p>\n<p>The new version of the tool (specparam v2.0) extends this capacity in two main ways: first, by adding support for parameterizing time-resolved spectral estimates (spectrograms), allowing for better analyses of spectral features across time and in relation to task events; and second, through a rewrite of the module that allows for more flexibility in model fitting, including the addition of new fit functions and a new procedure for customizing the fitting algorithm. Collectively, this allows for model testing and comparison between different potential models of spectral features (e.g. comparing between different forms of the aperiodic component).<\/p>\n<p>This presentation will start with an overview of the topic of spectral parameterization, examining the motivations for this approach, including showing how common methods can potentially conflate aperiodic and periodic features of the data. After this introduction, a live code demo will introduce the method, with a focus on demonstrating the new functionality available in version 2.0 of the tool, including how to apply spectral parameterization to time-resolved and event-related analysis designs and how to fit and compare different model forms. Time permitting, the session will end with a hands-on section in which participants can try out the method, including with their own data if available.<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: The tool is available as an open-source Python toolbox. 
Participants who wish to follow along with the live code will be provided download and installation instructions. <\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Spikeinterface <b><br \/><\/b><\/h3>\n<h4 style=\"text-align: justify;\"><strong>Samuel Garcia<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span 
data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">CNRS, CRNL, France<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>SpikeInterface is a Python module to analyze extracellular electrophysiology data.<\/p>\n<p>With a few lines of code, SpikeInterface enables you to load and pre-process the recording, run several state-of-the-art spike sorters, post-process and curate the output, compute quality metrics, and visualize the results.<\/p>\n<p>https:\/\/spikeinterface.readthedocs.io\/en\/stable\/<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: <\/em><\/span><a href=\"https:\/\/practicalmeeg2022.discourse.group\/c\/apice\/5\" target=\"_blank\" rel=\"noopener\"><\/a><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/LD-1.png&#8221; title_text=&#8221;LD-1&#8243; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; 
global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#b28800&#8243; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Unfold the Mysteries of EEG: Analyzing rERPs in Complex Paradigms with Unfold.jl<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Ren\u00e9 Skukies<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">University of Stuttgart &#8211; Centre for Simulation Technology, Germany<\/span><\/span><\/i><\/h6>\n<p><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\"><\/span><\/span><\/i><\/p>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; 
custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Are you interested in analyzing EEG while someone reads a book, listens to music while walking around, navigates the streets of a city, or appreciates a piece of art?<br \/>These naturalistic paradigms, in which the experimenter does not control the subject&#8217;s sensory input, are becoming increasingly popular, but they pose difficult challenges for data analysis and interpretation: researchers are confronted with data in which brain signals (such as ERPs) overlap in time and are confounded by continuous variables.<br \/>To address these complexities, we developed Unfold.jl[1], a toolbox operating within the regression ERP (rERP) framework.<br \/>In this two-hour workshop, we (Ren\u00e9 Skukies &amp; Benedikt Ehinger) will cover &#8211; in theory and hands-on &#8211; mass-univariate rERPs, interactions and marginal effects, and overlap correction. We will also provide further material on continuous and non-linear effect modelling.<br \/>The workshop will be held in the Julia programming language. However, the notebooks we provide are accessible to those with little coding experience while offering optional challenges to more experienced users, so you will have no problem following the hands-on sessions if you have (some) experience in Python and\/or MATLAB. 
Additionally, Julia and Unfold.jl can optionally be called directly from Python, making it easy to apply the concepts you learn to your existing workflows.<br \/>It\u2019s time to unfold your potential(s)!<br \/>[1] https:\/\/unfoldtoolbox.github.io\/UnfoldDocs\/Unfold.jl\/stable\/<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: To take part in the hands-on sessions, you must install Julia (at least version 1.11) and Pluto.jl on your computer. To do this, you can follow the installation guide from our workshop earlier this year: https:\/\/www.s-ccs.de\/workshop_unfold_2025\/installation.html<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/11\/LD-1.png&#8221; title_text=&#8221;LD&#8221; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; 
header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Simulating continuous event-based EEG data using UnfoldSim.jl<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Judith Schepers<\/strong><\/h4>\n<h6><i><span style=\"font-weight: 400;\"><span data-sheets-formula-bar-text-style=\"font-size:13px;color:#4a86e8;font-weight:normal;text-decoration:none;font-family:'Arial';font-style:normal;text-decoration-skip-ink:none;\">University of Stuttgart, Germany<\/span><\/span><\/i><\/h6>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;More information&#8221; open_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; closed_toggle_background_color=&#8221;rgba(255,255,255,0.9)&#8221; icon_color=&#8221;#0C71C3&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;15px||10px||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; border_radii=&#8221;off||20px||20px&#8221; border_width_all=&#8221;0px&#8221; box_shadow_style=&#8221;preset2&#8243; box_shadow_blur=&#8221;6px&#8221; box_shadow_spread=&#8221;-7px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>When testing analysis pipelines, comparing different analysis approaches, or validating statistical methods, one often needs EEG data with a known ground truth. Simulating EEG data based on known parameters addresses this need.<\/p>\n<p>Here, we present UnfoldSim.jl, a free and open-source Julia package designed for simulating continuous EEG data based on (potentially overlapping) event-related potentials. Using regression formulas, users can flexibly determine the relationship between the experimental design and the response functions. 
UnfoldSim.jl also provides support for multi-channel simulations via EEG-forward models and allows for the simulation of both single-subject and multi-subject data. One of its core design principles is modularity, enabling users to tailor the simulation to their specific research applications.<\/p>\n<p>In this session, we will introduce UnfoldSim.jl and provide a brief overview of its key features. The core part of the session will guide you through a simple simulation example to introduce the different simulation ingredients and illustrate the simulation workflow. Afterwards, there will be a hands-on part in which you can explore the effect of different parameters on the simulated data and create your own simulations.<\/p>\n<p>This workshop uses the Julia programming language, and the practical part will be conducted using Pluto.jl notebooks. While programming experience in Julia is not required, experience in MATLAB, R, or Python is recommended.<\/p>\n<p>If you would like to learn more about UnfoldSim.jl in advance, have a look at its documentation (https:\/\/unfoldtoolbox.github.io\/UnfoldDocs\/UnfoldSim.jl\/stable\/) or our JOSS paper (https:\/\/joss.theoj.org\/papers\/10.21105\/joss.06641).<\/p>\n<p style=\"text-align: justify;\"><span style=\"color: #ff6600;\"><em>Prerequisite: For the hands-on session, please install Julia (at least version 1.11) and Pluto.jl on your computer. 
We recommend following this installation guide (from our Unfold workshop earlier this year): https:\/\/www.s-ccs.de\/workshop_unfold_2025\/installation.html<\/em><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_image src=&#8221;https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-content\/uploads\/2025\/07\/H-1.png&#8221; title_text=&#8221;H-1&#8243; show_bottom_space=&#8221;off&#8221; align=&#8221;right&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; positioning=&#8221;relative&#8221; vertical_offset=&#8221;0px&#8221; max_width=&#8221;100%&#8221; module_alignment=&#8221;right&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=&#8221;4.17.6&#8243; 
custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/10\/bansky2.png&#8221; title_text=&#8221;bansky2&#8243; align=&#8221;center&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; width=&#8221;32%&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][et_pb_text _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;-16px||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: center;\"><i><span style=\"font-weight: 400;\">\u201cTake that bouquet of Alpha waves in your face\u201d<br \/>Hans Berger (alleged)<\/span><\/i><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; prev_background_color=&#8221;#f7edd2&#8243; disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.27.4&#8243; background_color=&#8221;#f7edd2&#8243; use_background_color_gradient=&#8221;on&#8221; background_color_gradient_direction=&#8221;153deg&#8221; background_color_gradient_stops=&#8221;#f7edd2 30%|#ffe5ad 100%&#8221; background_enable_image=&#8221;off&#8221; parallax=&#8221;on&#8221; parallax_method=&#8221;off&#8221; background_blend=&#8221;hue&#8221; z_index=&#8221;14&#8243; custom_padding=&#8221;54px|0px|53px|0px&#8221; top_divider_style=&#8221;waves2&#8243; top_divider_flip=&#8221;horizontal|vertical&#8221; disabled=&#8221;on&#8221; saved_tabs=&#8221;all&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.17.6&#8243; 
custom_margin=&#8221;60px||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; text_font=&#8221;Quicksand||||||||&#8221; header_font=&#8221;||||||||&#8221; header_3_font=&#8221;Comfortaa|700|||||||&#8221; header_3_text_color=&#8221;#00659d&#8221; header_3_font_size=&#8221;20px&#8221; header_4_font=&#8221;Quicksand||||||||&#8221; header_4_text_color=&#8221;#000000&#8243; header_6_font=&#8221;Quicksand||||||||&#8221; header_6_text_color=&#8221;#7c7c7c&#8221; header_6_line_height=&#8221;0.3em&#8221; custom_margin=&#8221;||0px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>The beautiful flowers of the Toolbox Bouquet will be shown here a bit later.<\/h3>\n<h4 style=\"text-align: justify;\"><strong>Stay tuned!<\/strong><\/h4>\n<p>[\/et_pb_text][\/et_pb_column][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion_enable=&#8221;on&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: center;\"><iframe 
loading=\"lazy\" src=\"https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLScHPNXODfs_FHwqWWnyr1pROMgLf_wf6u3m49Fso3gT5iDBnA\/viewform?embedded=true\" width=\"640\" height=\"3300\" frameborder=\"0\" marginheight=\"0\" marginwidth=\"0\">Loading\u2026<\/iframe><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=&#8221;4.17.6&#8243; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.17.6&#8243; custom_padding=&#8221;|||&#8221; scroll_horizontal_motion=&#8221;0|50|50|100|-1|0|0&#8243; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_image src=&#8221;https:\/\/practicalmeeg2022.org\/wp-content\/uploads\/2022\/10\/bansky2.png&#8221; title_text=&#8221;bansky2&#8243; align=&#8221;center&#8221; _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; width=&#8221;32%&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_image][et_pb_text _builder_version=&#8221;4.17.6&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;-16px||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: center;\"><i><span style=\"font-weight: 400;\">\u201cTake that bouquet of Alpha waves in your face\u201d<br \/>Hans Berger (alleged)<\/span><\/i><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Toolboxes BouquetInstructions The Toolbox Bouquet is half a day online session running on Oct 30th PM (CET).\u00a0 Several exciting practical courses are offered on selected flowers (toolboxes) for cutting-edge MEEG data analysis This event will take place Online. 
More information coming soon in the meantime check the previous PracticalMEEG 2022 bouquet to get an impression [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"class_list":["post-2390","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/pages\/2390","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/comments?post=2390"}],"version-history":[{"count":34,"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/pages\/2390\/revisions"}],"predecessor-version":[{"id":3561,"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/pages\/2390\/revisions\/3561"}],"wp:attachment":[{"href":"https:\/\/cuttingeeg.org\/practicalmeeg2025\/wp-json\/wp\/v2\/media?parent=2390"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}