Jarod Lévy

PhD Student — AI × Neuroscience

MIND @ Inria × Brain&AI @ Meta FAIR Paris

Decoding communication through non-invasive brain recordings.

Paris, France • jarod@meta.com

Hi! I’m Jarod, a PhD student at Inria and Meta in Paris. I am supervised by Stéphane d’Ascoli and Thomas Moreau, and I work with the Brain&AI team led by Jean-Rémi King.

My research centers on building AI models to better understand and decode the brain through non-invasive recordings. Before this, I was a research intern at Meta, the Institut Pasteur, and the New York Genome Center.


Publications

Lead Publications

Under Review • 2025

Brain-to-text decoding: A non-invasive approach via typing

J. Lévy, M. Zhang, S. Pinet, J. Rapin, H. Banville, S. d’Ascoli, J.-R. King

We present Brain2Qwerty, a deep learning model that decodes full sentences from non-invasive brain recordings (EEG and MEG) as participants type memorized sentences. In a cohort of 35 volunteers, Brain2Qwerty achieved an average character error rate of 32% with MEG and 67% with EEG, reaching 19% for the best individuals and generalizing to unseen sentences. Analyses reveal contributions from both motor and higher cognitive processes. These results bring non-invasive brain decoding closer to invasive neuroprostheses, paving the way for safe communication interfaces for patients unable to speak or move.
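The character error rates reported above are edit distances between the decoded and true text, normalized by the length of the reference. As a minimal illustration of the metric itself (not the paper's evaluation code), it can be computed with a standard Levenshtein distance:

```python
def character_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein edit distance between the two strings,
    normalized by the length of the reference."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution (or match)
        prev = curr
    return prev[n] / m

# A 32% CER means roughly one in three characters needs editing:
print(character_error_rate("brain decoding", "braim decodnig"))
```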

GRETSI • 2025

Deep Learning on M/EEG signals: Adapt your model, not your preprocessing

J. Lévy, H. J. Banville, J.-R. King, S. Pinet, J. Rapin, S. d’Ascoli, T. Moreau

This study investigates the impact of preprocessing EEG (electroencephalography) and MEG (magnetoencephalography) signals on the performance of deep learning models. Our results show that minimal preprocessing significantly reduces computational cost while maintaining performance comparable to more complex approaches, across datasets and models. Our observations suggest that model choice has a more decisive influence on the outcome than the complexity of the applied preprocessing.
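For intuition, "minimal preprocessing" here can be as light as a per-channel robust scaling with outlier clamping. The sketch below is an illustrative example of such a step, assuming an `(n_channels, n_times)` array; it is not the paper's actual pipeline:

```python
import numpy as np

def minimal_preprocess(x: np.ndarray) -> np.ndarray:
    """Robust-scale each channel of an (n_channels, n_times) recording:
    center on the channel median, divide by the interquartile range,
    then clamp extreme values. One example of a lightweight step."""
    median = np.median(x, axis=1, keepdims=True)
    q75, q25 = np.percentile(x, [75, 25], axis=1)
    iqr = (q75 - q25).reshape(-1, 1)
    scaled = (x - median) / np.where(iqr == 0, 1.0, iqr)  # avoid div-by-zero
    return np.clip(scaled, -20.0, 20.0)
```

The clamp bound (here ±20) is a hypothetical choice; the point is that a few array operations replace a multi-stage filtering pipeline.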

Collaborations

Under Review • 2025

From Thought to Action: Hierarchy of Neural Dynamics for Language Production

M. Zhang, J. Lévy, S. d’Ascoli, J. Rapin, F. Alario, P. Bourdillon, S. Pinet, J.-R. King

We used magnetoencephalography (MEG) and electroencephalography (EEG) to record the brain activity of 35 skilled typists as they composed sentences. This unique approach reveals how the brain organizes language production across multiple levels—from context to words, syllables, and letters. Each level of representation emerges and overlaps in time, forming a dynamic hierarchy of neural codes that orchestrates the transformation of thought into language.

Brain • 2022

Silencing of amygdala circuits during sepsis prevents anxiety-related behaviours

L. Bourhy, A. Mazeraud, L. H. A. Costa, J. Lévy, D. Rei, E. Hecquet, et al.

Sepsis can trigger lasting psychiatric effects such as anxiety and PTSD, but the brain mechanisms remain unclear. In mice, sepsis caused overactivation of a specific fear circuit linking the amygdala to the bed nucleus of the stria terminalis. This rewiring persisted after recovery and led to anxiety-like behaviors. Temporarily silencing this circuit during the acute phase of sepsis or treating with levetiracetam prevented these effects. Targeting fear circuits early may block post-sepsis psychiatric disorders.

JCO CCI • 2024

Patients facing large language models in oncology: A narrative review

C. Raynaud, D. Wu, J. Lévy, M. Marengo, J.-E. Bibault

The integration of large language models (LLMs) into oncology is transforming patients' journeys through education, diagnosis, treatment monitoring, and follow-up. This review examines the current landscape, potential benefits, and associated ethical and regulatory considerations of the application of LLMs for patients in the oncologic domain.

Under Review • 2025

Benchmarking LLMs and SLMs for patient-reported outcomes

M. Marengo, J. Lévy, J.-E. Bibault

Large language models (LLMs) like GPT-4 can already turn patient-reported outcomes into clear medical summaries, helping clinicians focus on what matters most. But privacy remains a major hurdle. This study compares compact small language models (SLMs) to LLMs for summarizing patient Q&A forms in radiotherapy. The results show that while SLMs still trail in accuracy, they offer a compelling path toward secure, efficient, and privacy-preserving AI in healthcare.

Other Works

Towards robust fetus segmentation from MRI imaging: Accelerating annotations and diving into high semantic features

Jarod Lévy, Charlotte Godard, Jean-Baptiste Masson

We designed a pipeline to make the creation and analysis of medical databases more efficient, using a tablet app to ease data annotation and a self-supervised inpainting model to extract meaningful features. Applied to the LUMIERE pregnancy MRI dataset, this work contributes to fetal development research and could be extended to other medical imaging projects.

Writing

During the first 20 weeks, I wrote short articles about what makes me think, smile, or question things during my PhD journey. You can click on them to read!

I also have a Substack account: substack.com/@jarodlevy

Outside Work

This section is under construction. Stay tuned for updates!

Curriculum Vitae

View CV (PDF)