MRI Brain Screening Tool — Automated tumor-like pathology screening for clinical MRI

A friendly, research-grade software tool that automatically screens brain MRI exams (FLAIR / T2 / T1ce) and highlights tumor-like anomalies so radiologists can prioritize urgent cases.


At a glance


Why researchers & clinicians use it

  1. Faster triage of urgent cases — flags exams likely to contain tumor-like pathology so radiologists can prioritize them.
  2. High sensitivity for tumors — a demonstrated high detection rate on the evaluation set, minimizing missed severe cases.
  3. Privacy-preserving collaboration — designed for Federated Learning, so institutions contribute model updates without exposing raw patient data.
  4. Lightweight, practical architecture — a 2D slice U-Net with an Inception-v3 encoder keeps resource needs low at client sites (see the model sketch after this list).
  5. Explainable output for radiologists — slice-level anomaly scores, a ranked set of the most relevant slices, and bounding boxes to support human review.
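
Item 4 above describes the model only at a high level. Below is a minimal, illustrative PyTorch sketch of that idea: a U-Net-style decoder over a pretrained Inception-v3 encoder operating on 2D slices. The class name, the tapped encoder stages, and the decoder widths are our assumptions for illustration, not the published implementation.

```python
# Illustrative sketch of a 2D slice U-Net with a pretrained Inception-v3 encoder.
# The tapped stages and decoder widths are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import inception_v3, Inception_V3_Weights
from torchvision.models.feature_extraction import create_feature_extractor


class DecoderBlock(nn.Module):
    """Upsample, concatenate the encoder skip feature, then refine with convs."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        # Inception feature maps are not exact power-of-two fractions of the
        # input, so resize to the skip's spatial size before concatenating.
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([x, skip], dim=1))


class SliceUNet(nn.Module):
    """2D slice U-Net with a pretrained Inception-v3 encoder (illustrative)."""
    def __init__(self, out_classes=1):
        super().__init__()
        backbone = inception_v3(weights=Inception_V3_Weights.IMAGENET1K_V1)
        # Tap intermediate Inception-v3 stages as U-Net skip connections.
        self.encoder = create_feature_extractor(backbone, return_nodes={
            "Conv2d_2b_3x3": "s1",   #   64 channels
            "Conv2d_4a_3x3": "s2",   #  192 channels
            "Mixed_5d": "s3",        #  288 channels
            "Mixed_6e": "s4",        #  768 channels
            "Mixed_7c": "s5",        # 2048 channels
        })
        self.up4 = DecoderBlock(2048, 768, 256)
        self.up3 = DecoderBlock(256, 288, 128)
        self.up2 = DecoderBlock(128, 192, 64)
        self.up1 = DecoderBlock(64, 64, 32)
        self.head = nn.Conv2d(32, out_classes, 1)

    def forward(self, x):
        f = self.encoder(x)          # x: one slice, channels = FLAIR / T2 / T1ce
        y = self.up4(f["s5"], f["s4"])
        y = self.up3(y, f["s3"])
        y = self.up2(y, f["s2"])
        y = self.up1(y, f["s1"])
        # Resize the logits back to the input slice resolution.
        return F.interpolate(self.head(y), size=x.shape[-2:],
                             mode="bilinear", align_corners=False)


model = SliceUNet()
logits = model(torch.randn(1, 3, 320, 320))  # one 3-channel MRI slice
```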

How it works — technical summary


Evaluation & validation (concise)

Note: Comparison experiments showed that federated training improved over models trained on small single-site datasets alone, and delivered performance comparable to strong baselines on the available dataset.

Federated Learning and Dataset scheme
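
The training paradigm is Federated Learning with parameter averaging across client sites (see the specs table below): each hospital trains on its own exams and shares only model parameters, never raw images. Below is a minimal sketch of one aggregation round; the site names, the dataset-size weighting, and the plain-dict parameter representation are illustrative assumptions.

```python
# Minimal sketch of one parameter-averaging (FedAvg-style) round.
# Sites, weighting, and the dict-of-arrays model format are assumptions.
import numpy as np

def federated_average(client_params, client_sizes):
    """Average per-layer parameters across clients, weighted by local dataset size.

    client_params: list of dicts mapping layer name -> np.ndarray of weights
    client_sizes:  list of ints, number of local training exams per client
    """
    total = sum(client_sizes)
    averaged = {}
    for name in client_params[0]:
        averaged[name] = sum(
            (n / total) * params[name]
            for params, n in zip(client_params, client_sizes)
        )
    return averaged

# One round: each site trains locally (raw images never leave the hospital),
# then only the resulting parameters are sent for aggregation.
site_a = {"conv1.weight": np.random.randn(8, 3, 3, 3)}
site_b = {"conv1.weight": np.random.randn(8, 3, 3, 3)}
global_params = federated_average([site_a, site_b], client_sizes=[40, 120])
```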


Technical specs (table)

| Item | Specification / notes |
|---|---|
| Supported input | FLAIR, T2, T1ce (T1 is an acceptable substitute) |
| Model architecture | 2D U-Net with Inception-v3 encoder (pretrained weights) |
| Preprocessing | Skull-stripping, co-registration, intensity CDF-matching, resampling to ~0.72 mm/px |
| Training paradigm | Federated Learning (parameter averaging) across multiple clients; heavy augmentation |
| Typical inference time | ≈ 9 s/exam on GTX 1080 + i7-class CPU (example) |
| Validated pathologies | HGG, LGG, lymphoma, meningioma, metastases (tumor-like lesions) |
| Validation set | 102 held-out exams (total dataset ≈ 160 exams) |
| Output | Pre-diagnostic report: slice-level scores, ranked slices, bounding boxes, optional segmentation masks |
| Intended users | Radiology departments, research groups, multi-site federated ML projects |
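
The preprocessing row above names intensity CDF-matching. A minimal NumPy sketch of the idea: each voxel is mapped through the source volume's empirical CDF and then through the inverse CDF of a reference volume. The choice of reference and any brain-mask handling are assumptions.

```python
# Intensity CDF-matching sketch for the preprocessing step listed above.
# The reference volume and the absence of a brain mask are assumptions.
import numpy as np

def cdf_match(volume, reference):
    """Remap `volume` intensities so their distribution matches `reference`."""
    src = volume.ravel()
    ref = np.sort(reference.ravel())
    # Empirical CDF value (quantile) of every source voxel...
    ranks = np.argsort(np.argsort(src))
    quantiles = ranks / max(src.size - 1, 1)
    # ...pushed through the inverse reference CDF.
    matched = np.interp(quantiles, np.linspace(0, 1, ref.size), ref)
    return matched.reshape(volume.shape)
```

In practice a library routine such as `skimage.exposure.match_histograms` performs the same mapping.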


Deployment & requirements


Use cases


Pricing & availability


Sample outputs (what the radiologist sees)

Figure: radiological preview of the results.


Limitations & caveats


FAQ (short)

Q1 — Can the tool replace a radiologist?
No. It produces pre-diagnostic reports to help prioritize and flag exams; the final diagnosis remains with a qualified radiologist.

Q2 — What MRI sequences are required?
FLAIR, T2 and T1ce are the inputs the tool was designed for; non-contrast T1 can substitute when T1ce is missing.
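
A hypothetical helper illustrating the substitution (the function name and array conventions are ours, not part of the tool):

```python
import numpy as np

def build_input(flair, t2, t1ce=None, t1=None):
    """Stack the three designed sequences into one 3-channel model input.
    Falls back to non-contrast T1 when T1ce is unavailable (illustrative only)."""
    contrast = t1ce if t1ce is not None else t1
    if contrast is None:
        raise ValueError("either T1ce or T1 is required")
    return np.stack([flair, t2, contrast], axis=0)
```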

Q3 — How accurate is it on tumors?
On the held-out evaluation set it achieved high sensitivity (≈98%) and high precision (≈91%), with a global Dice score above 0.88.
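
These figures follow the standard definitions, sketched generically below (not the authors' evaluation code):

```python
import numpy as np

def screening_metrics(pred, truth):
    """pred, truth: boolean arrays (exam-level flags or voxel masks)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    sensitivity = tp / (tp + fn)        # share of true tumors that are flagged
    precision = tp / (tp + fp)          # share of flags that are true tumors
    dice = 2 * tp / (2 * tp + fp + fn)  # overlap score used for segmentations
    return sensitivity, precision, dice
```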

Q4 — Is patient data shared during federated training?
No. Only model updates are shared; raw images remain on the local site.

Q5 — Can we run it offline / behind hospital firewall?
Yes — the model and preprocessing run entirely locally; network access is needed only to exchange model updates during federated training.

Q6 — How do we request an evaluation copy?
Contact the presenting author: Roman Stoklasa — stoklasa@fi.muni.cz. Include your institution, intended use (evaluation / research), and technical contact.

References

If you use this tool, please cite the following paper:

Acknowledgement

This project was partially funded by the CERN budget for Knowledge Transfer for the benefit of Medical Applications, by the Ministry of Health of the Czech Republic (Grant No. NU21-08-00359) and the Ministry of Education, Youth and Sports of the Czech Republic (Project No. LM2023050).

Credits