Official repository for "PulseMind: A Multi-Modal Medical Model for Real-World Clinical Diagnosis", accepted as an Oral paper at AAAI 2026.
Datasets, models, and benchmarks for PulseMind.
This repository provides the official codebase and evaluation scripts for the PulseMind project, together with:
- 🧪 MediScope: a large-scale multimodal medical dataset. In this release, we provide a curated subset of ~1,000 cases (JSON + images); the full dataset is larger and will be released gradually.
- 🧠 Models:
PulseMind-72B
- 📊 Benchmarks:
  - MedDiagnose: 237-sample test set (JSON + images)
  - CMtMedQA-test: 1,000-sample test set (JSON)
  - MedDiagnose-plus: 937-sample extended test set (JSON + images)
⚠️ Due to size and privacy considerations, all datasets and model checkpoints are hosted externally and are not stored in this GitHub repository.
This repo mainly contains evaluation code.
- MediScope (curated ~1k subset)
- MedDiagnose (237 samples)
- CMtMedQA-test (1,000 samples)
- MedDiagnose-plus (937 samples)
- PulseMind-72B checkpoint: Download link
After downloading, please follow the recommended directory layout
(e.g., place raw data under `data/`, benchmark test sets under `Benchmark/`,
and model checkpoints under `model/`), so that the provided evaluation scripts can run out of the box.
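To prepare that layout from the repository root, a minimal sketch (directory names follow the tree below; the commented archive names are placeholders, not actual release file names):

```shell
# Create the expected directory skeleton from the repo root.
mkdir -p data
mkdir -p Benchmark/CMtMedQA-test Benchmark/MedDiagnose Benchmark/MedDiagnose-plus
mkdir -p model

# Then unpack the external downloads into place, e.g. (placeholder names):
#   unzip MediScope-subset.zip -d data/
#   unzip MedDiagnose.zip -d Benchmark/MedDiagnose/
#   cp -r PulseMind-72B/ model/
```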
The GitHub repository mainly contains evaluation code and auxiliary configs:
```
.
├── data/                  # (empty by default) place downloaded datasets here
│
├── Benchmark/
│   ├── CMtMedQA-test/     # Folder for CMtMedQA-test data (JSON, etc.)
│   ├── MedDiagnose/       # Folder for MedDiagnose data (JSON + images)
│   ├── MedDiagnose-plus/  # Folder for MedDiagnose-plus data (JSON + images)
│   └── Eval/              # Optional: extra evaluation utilities / configs
│
├── model/                 # Place downloaded model checkpoints here
│
└── README.md
```
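The benchmark JSON schema is not documented in this README. As a hedged sketch, assuming each test set is a JSON list of records with `question` / `answer` fields and optional image paths (field names are assumptions; check the downloaded files for the actual schema), a minimal loader might look like:

```python
import json
from pathlib import Path


def load_benchmark(json_path):
    """Load a benchmark test set assumed to be a JSON list of records.

    NOTE: the field names below ('question', 'answer', 'images') are
    illustrative assumptions, not a documented schema.
    """
    records = json.loads(Path(json_path).read_text(encoding="utf-8"))
    return [
        {
            "question": r.get("question", ""),
            "answer": r.get("answer", ""),
            "images": r.get("images", []),
        }
        for r in records
    ]


# Self-contained demo on an inline sample (no download required):
sample = '[{"question": "Q1?", "answer": "A1", "images": ["img/0001.png"]}]'
demo_path = Path("demo_testset.json")
demo_path.write_text(sample, encoding="utf-8")
print(load_benchmark(demo_path)[0]["answer"])  # prints "A1"
```

Once the real test sets are in `Benchmark/`, the same function can be pointed at those JSON files after adjusting the field names to the actual schema.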