September 16, 2021 09:44
Information Integration for Neuroscience Team (PI: Motoaki Kawanabe)

Description

Information Integration for Neuroscience Team (https://aip.riken.jp/labs/goalorient_tech/inf_integr_neurosci/) at RIKEN AIP

Speaker 1: Motoaki Kawanabe (15 min)
Title: Overview of the Information Integration for Neuroscience Team
Abstract: The goal of our team is to develop monitoring techniques of brain and biomedical information for mental health. In this talk, I will briefly give an overview of our research activities on decoding methods for neuro-imaging data (EEG, fMRI, etc.).

Speaker 2: Reinmar Kobler (30 min)
Title: Inter-session and application transfer to facilitate mutual learning in an EEG-based BCI
Abstract: Inherent variability in magnetoencephalographic (MEG) and electroencephalographic (EEG) activity across subjects, recording sessions and tasks limits the generalization of brain-computer interfaces (BCIs) and biomarker development. A key challenge is to disentangle task-related activity in the presence of non-stationary background activity and artifacts. Here, we address this challenge from two perspectives. First, we present the results of a long-term case study with a tetraplegic end-user in which we investigated the effect of mutual training on BCI performance. Over a 14-month period, mutual training significantly improved performance in a 4-class discrimination task (7% increase). The improvement was driven by both user learning and transfer learning. Second, we studied linear Riemannian tangent space (RTS) methods, which have recently been shown to be more robust to inter-session/-subject variability than linear component-based methods. However, RTS methods have so far offered limited model interpretability. We proposed a method to transform the parameters of linear RTS models into interpretable patterns. Under typical assumptions, we showed that this approach identifies the true patterns of latent sources. In simulations and two real MEG and EEG datasets, we demonstrated the validity of the proposed approach and investigated its behavior when the model assumptions are violated. We found that the robustness property of linear RTS models also transfers to the associated patterns.
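The RTS construction mentioned in the abstract can be sketched briefly: each trial's channel covariance matrix is log-mapped at a reference point on the SPD manifold and vectorized, after which ordinary linear models operate on the resulting features. The following is a minimal numpy sketch under simplifying assumptions (the arithmetic mean is used as reference; in practice the Riemannian mean is typical, and all names and the toy data are illustrative, not the speaker's code):

```python
import numpy as np

def _sym_pow(C, p):
    # matrix power of a symmetric positive-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(C)
    return (V * w**p) @ V.T

def _logm(C):
    # matrix logarithm of a symmetric positive-definite matrix
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def tangent_space_features(covs, C_ref):
    """Map SPD covariance matrices to the tangent space at C_ref."""
    iref = _sym_pow(C_ref, -0.5)
    feats = []
    for C in covs:
        M = iref @ C @ iref
        M = (M + M.T) / 2                 # enforce symmetry numerically
        S = _logm(M)                      # log-map at the reference point
        n = S.shape[0]
        iu = np.triu_indices(n, k=1)
        # vectorize: diagonal plus sqrt(2)-scaled upper triangle (norm-preserving)
        feats.append(np.concatenate([np.diag(S), np.sqrt(2) * S[iu]]))
    return np.stack(feats)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4, 100))     # 20 trials, 4 channels, 100 samples
covs = np.einsum('ict,idt->icd', X, X) / X.shape[-1]
C_ref = covs.mean(axis=0)                 # simplification: arithmetic mean as reference
F = tangent_space_features(covs, C_ref)
print(F.shape)                            # (20, 10): 4 diagonal + 6 off-diagonal features
```

A linear classifier trained on `F` then corresponds to a linear RTS model of the kind whose parameters the proposed method transforms into interpretable patterns.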

Speaker 3: Jun-ichiro Hirayama (30 min)
Title: Investigating multimodal interactions among brain areas by demixed shared component analysis
Abstract: Recent neuroscience research puts emphasis on investigating functional interactions of neural populations across brain areas, with advances in both measurement and data analysis techniques. As a basic approach, linear dimensionality reduction techniques such as canonical correlation analysis and reduced-rank regression have been used to efficiently identify cross-areal interactions from paired multivariate neural signals, representing each neural population by a small number of components. However, since neuroscience experiments often involve multiple task or stimulus factors by design, the obtained components tend to reflect mixed neural responses to these factors and are thus typically hard to interpret. To overcome this issue, we recently developed demixed shared component analysis (dSCA), a novel supervised dimensionality reduction technique for paired multivariate signals, and demonstrated its validity through several applications. In this talk, I will illustrate the idea of dSCA and show some results on neural population datasets from previous animal studies. I will also briefly introduce its nonlinear extension, which enables applications in wider contexts.
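Reduced-rank regression, one of the linear building blocks named in the abstract, finds a low-rank linear map between paired population signals. A minimal numpy sketch of the standard construction (fit ordinary least squares, then project the fitted values onto their top principal directions) is shown below; the toy data and names are illustrative, not the speaker's code:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained linear map from X to Y (standard RRR via SVD)."""
    B_ols = np.linalg.pinv(X) @ Y           # full-rank least-squares solution
    Yhat = X @ B_ols
    # project fitted values onto their top-`rank` right singular directions
    _, _, Vt = np.linalg.svd(Yhat, full_matrices=False)
    V = Vt[:rank].T
    return B_ols @ V @ V.T                  # coefficient matrix of rank `rank`

rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 2))           # 2 shared latent components
X = Z @ rng.standard_normal((2, 8)) + 0.1 * rng.standard_normal((200, 8))
Y = Z @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((200, 6))
B2 = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(B2))            # 2
```

dSCA extends this kind of shared low-dimensional description by demixing the components according to the experiment's task and stimulus factors, so each component reflects a single factor.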

Speaker 4: Aapo Hyvärinen (30 min)
Title: Nonlinear Independent Component Analysis: Identifiability, Self-Supervised Learning, and Likelihood
Abstract: Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been used successfully in the linear case, especially in the form of independent component analysis (ICA). However, extending ICA to the nonlinear case has proven extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Recently, we have shown that this problem can be solved by using additional information, in particular in the form of temporal structure or some additional observed variable. Our methods were originally based on the “self-supervised” learning increasingly used in deep learning, but in more recent work, we have provided likelihood-based approaches. In particular, we have developed computational methods for efficient maximization of the likelihood for two variants of the model, based on variational inference and Riemannian relative gradients, respectively.
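As a concrete baseline for the identifiable linear case that the abstract contrasts with the nonlinear one, the sketch below implements a minimal symmetric FastICA in numpy (whitening, tanh contrast, symmetric decorrelation). This illustrates standard linear ICA only, not the speaker's nonlinear methods; the toy sources are illustrative:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with tanh contrast for the linear case."""
    X = X - X.mean(axis=1, keepdims=True)
    # whiten the observations
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(d ** -0.5) @ E.T
    Z = K @ X
    n = Z.shape[0]
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        Gp = 1.0 - G ** 2                       # derivative of tanh
        # fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, row-wise
        W = G @ Z.T / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        # symmetric decorrelation: W <- (W W')^{-1/2} W via SVD
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Z, W @ K                          # sources, unmixing matrix

t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), np.sin(5 * t)])   # two non-Gaussian sources
A = np.array([[1.0, 0.5], [0.5, 1.0]])                   # mixing matrix
S_hat, _ = fastica(A @ S)
```

Here the mixing is linear, so the sources are recovered up to permutation and sign; the talk's point is that once the mixing is an arbitrary nonlinearity, this recovery fails without the additional information (temporal structure or an auxiliary variable) described in the abstract.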

last updated on April 1, 2022 00:07