SiROP

Explainable Transformer Pipelines for Imagined-Speech and Motor-Imagery EEG BCIs

Noisy signals, scarce labels, and black-box models hinder EEG-based BCIs for imagined speech and limb movement. We will tackle these issues with hybrid convolution-transformer networks—CTNet (github.com/snailpt/CTNet) and MSCFormer (github.com/snailpt/MSCFormer)—augmented by transfer learning and few-shot adaptation. Attention heat-maps and SHAP explanations (github.com/slundberg/shap) will expose which channels and time windows drive each decision. The work aligns with ViTFOX’s goal of low-power, explainable neuro-AI (vitfox.eu).

Keywords: Transformer, EEG, BCI, AI, Signal processing, Explainable AI, Machine learning, Vision transformer


    We will train the models on the multi-session imagined-speech / grasp-imagery dataset published in Frontiers in Human Neuroscience (10.3389/fnhum.2022.898300).

    - Architectures: CTNet couples an EEGNet-style CNN front-end with a transformer encoder; MSCFormer adds parallel multi-scale convolutions before the attention block.

    - Transfer & Few-Shot: Models will be pre-trained on larger EEG corpora, then fine-tuned; meta-learning will enable < 10-trial personalisation for new users.

    - Explainability: Transformer attention, gradient saliency, and SHAP will highlight task-relevant electrodes and time segments.

    - Efficiency: Pruning and token-length reduction will keep inference light for future wearable deployment on ViTFOX hardware.
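The CNN-plus-transformer design described above can be sketched as follows. This is a minimal illustrative model in PyTorch; layer sizes, kernel widths, and hyper-parameters are placeholders, not the published CTNet/MSCFormer configurations:

```python
import torch
import torch.nn as nn

class CNNTransformerEEG(nn.Module):
    """Toy CTNet-style hybrid: EEGNet-like conv front-end + transformer encoder."""

    def __init__(self, n_channels=22, n_classes=4, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        # Temporal convolution, then depthwise spatial convolution over electrodes
        self.frontend = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12), bias=False),
            nn.BatchNorm2d(d_model),
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1),
                      groups=d_model, bias=False),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),  # token-length reduction along the time axis
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                     # x: (batch, 1, channels, time)
        z = self.frontend(x)                  # (batch, d_model, 1, time')
        z = z.squeeze(2).transpose(1, 2)      # (batch, time', d_model) token sequence
        z = self.encoder(z)                   # self-attention across time tokens
        return self.head(z.mean(dim=1))       # pool tokens, then classify

model = CNNTransformerEEG()
logits = model(torch.randn(8, 1, 22, 1000))  # 8 epochs, 22 channels, 1000 samples
```

Pooling after the convolutions shortens the token sequence before attention, which is also the lever the efficiency bullet refers to.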


    - Generalisation – maintain high cross-subject accuracy and enable < 10-trial few-shot adaptation.

    - Explainability – provide attention maps and SHAP reports that reveal physiologically plausible EEG features.

    - Community Impact – release code, pretrained weights, and low-power deployment guidelines to accelerate ViTFOX-compatible BCIs.
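As a concrete example of the gradient-saliency component of the explainability goal, the sketch below computes per-electrode, per-time-point input gradients from the model's predicted class score; the stand-in linear classifier is purely illustrative (in the project it would be the trained CTNet/MSCFormer model):

```python
import torch
import torch.nn as nn

# Stand-in classifier for 22-channel, 1000-sample EEG epochs (illustrative only)
model = nn.Sequential(nn.Flatten(), nn.Linear(22 * 1000, 4))

def saliency_map(model, x):
    """Return |d(predicted-class score) / d(input)| per electrode and time point."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Back-propagate from each sample's top-class score
    logits[torch.arange(len(x)), logits.argmax(dim=1)].sum().backward()
    return x.grad.abs().squeeze(1)            # (batch, channels, time)

sal = saliency_map(model, torch.randn(2, 1, 22, 1000))
channel_importance = sal.mean(dim=2)          # average over time -> electrode ranking
```

Averaging the map over time ranks electrodes by task relevance, which is the kind of physiologically checkable report the project aims to release alongside attention maps and SHAP values.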

  • Contact: Dr. Nikhil Garg (nigarg@ethz.ch), ETZ H69. Also available as a Bachelor or Semester Thesis (milestones and project goals will be adjusted accordingly). Supervisor: Prof. Dr. Laura Bégon-Lours.


Calendar

Earliest start: 2025-05-25
Latest end: 2025-11-01

Location

Neuromorphic Electronics with Oxides (ETHZ)

Labels

Semester Project

Collaboration

Master Thesis

ETH Zurich (ETHZ)

Topics

  • Information, Computing and Communication Sciences
  • Engineering and Technology