Generative modeling of living cells with implicit neural representations

David Wiesner, Julian Suk, Sven Dummer, Tereza Nečasová, Vladimír Ulman, David Svoboda, and Jelmer M. Wolterink

Journal paper  |  Conference paper  |  Conference slides  |  Conference poster  |  Source code (GitHub)

This is the official web page of the MedIA 2024 paper "Generative modeling of living cells with SO(3)-equivariant implicit neural representations" and the MICCAI 2022 paper "Implicit Neural Representations for Generative Modeling of Living Cell Shapes". Here, we make available the complete source code, pre-trained models, and data sets presented in these papers.

Introduction

Contemporary methods for cell shape synthesis in biomedical imaging often sample from a pre-determined database or rely on simple geometric primitives such as ellipsoids. These approaches considerably narrow the variability of the synthetic images. In addition, they are usually difficult to adapt to different cell types, which limits their usability. Methods that use voxel-based representations are further limited by computational and memory demands that prohibit generating shapes at high spatial and temporal resolutions.

In this work, we optimize a neural network as an implicit neural representation of arbitrary time-evolving living cell shapes. We describe these time-evolving cell shapes by 3D+time signed distance functions (SDFs) rather than directly by voxel volumes. The SDF representation allows for non-uniform sampling, which makes it possible to represent a complex 3D scene with far fewer data points than traditional voxel-based representations. Instead of traditional ReLU activations, we use periodic activation functions, which converge faster and can fit high-frequency shapes with sharp edges. We demonstrate our approach on three distinct cell lines: Platynereis dumerilii embryo cells that exhibit rapid growth and shape deformations, C. elegans embryo cells that grow and divide, and A549 human lung carcinoma cells with growing and branching filopodial protrusions.
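
To make this concrete, the sketch below shows a minimal conditioned implicit network in PyTorch, in the spirit of DeepSDF and SIREN: a latent code selects a shape from the learned family, and the network maps that code together with a space-time coordinate (x, y, z, t) to a signed distance. The layer widths, latent dimension, and frequency factor omega_0 = 30 are illustrative assumptions, not the exact configuration used in the papers.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, initialized as in SIREN."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class ImplicitSDF(nn.Module):
    """Maps a latent code and a space-time point (x, y, z, t) to an SDF value."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            SineLayer(latent_dim + 4, hidden, is_first=True),
            SineLayer(hidden, hidden),
            SineLayer(hidden, hidden),
            nn.Linear(hidden, 1),  # signed distance, negative inside the cell
        )

    def forward(self, latent, coords):
        # latent: (N, latent_dim), coords: (N, 4) = (x, y, z, t)
        return self.net(torch.cat([latent, coords], dim=-1))

model = ImplicitSDF()
z = torch.randn(1024, 64)          # in practice, one code repeated per sample point
pts = torch.rand(1024, 4) * 2 - 1  # space-time points in [-1, 1]^4
sdf = model(z, pts)                # (1024, 1) signed distances
```

At inference time, novel shape sequences are obtained by sampling new latent codes and evaluating the network over a dense space-time grid.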


Implementation of the Method

The following guide applies to Linux-based systems. Library versions and command-line parameters may differ slightly on Windows or macOS.

Requirements and Dependencies

The implementation was tested on an AMD EPYC 7713 64-core processor, 512 GB of RAM, an NVIDIA A100 80 GB GPU, and Ubuntu 20.04 LTS with the following software versions:

Downloads

Quick Start Guide

To follow this guide, please download and extract the Source code, pre-trained models, and examples (1.2 GB) archive and, optionally, the training data sets.


Produced Datasets

We used the optimized neural network and randomly sampled latent codes to produce new spatio-temporal SDFs of evolving shapes. A separate model was optimized for each cell line. The produced SDFs were converted to binary voxel volumes representing the cell shapes. These shapes were subsequently textured using a conditional generative model to produce synthetic datasets suitable for benchmarking image analysis algorithms.
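
As an illustration of the conversion step, the sketch below evaluates an SDF on a regular grid and thresholds it at zero to obtain a binary volume. The toy sphere SDF stands in for the trained network, and the grid resolution and output file name are assumptions for the demo.

```python
import numpy as np
import tifffile

def sdf_to_binary_volume(sdf_fn, resolution=256, t=0.0):
    """Evaluate a vectorized SDF on a regular grid and threshold at zero."""
    axis = np.linspace(-1.0, 1.0, resolution, dtype=np.float32)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    pts = np.stack([xs, ys, zs, np.full_like(xs, t)], axis=-1).reshape(-1, 4)
    sdf = sdf_fn(pts).reshape(resolution, resolution, resolution)
    return (sdf <= 0.0).astype(np.uint8)  # 1 inside the cell, 0 outside

# Stand-in SDF: a sphere whose radius grows with time.
def toy_sdf(pts):
    return np.linalg.norm(pts[:, :3], axis=1) - (0.4 + 0.1 * pts[:, 3])

volume = sdf_to_binary_volume(toy_sdf, resolution=64)  # 64 keeps the demo light
tifffile.imwrite("cell_t000.tif", (volume * 255).astype(np.uint8))  # lossless TIFF
```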

Randomly Generated Synthetic Cell Shapes

Each of the following archives contains 33 synthetic 3D time-lapse sequences of the respective cell line, generated using the proposed method. Each sequence captures an evolving cell shape at 30 time points, and the cell shape at each time point is represented by a binary volume of 256×256×256 voxels saved as lossless TIFF.
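
A minimal sketch for loading such a sequence with tifffile is shown below; the directory layout and file naming pattern are hypothetical and may differ from the actual archives.

```python
import glob
import tifffile

# Hypothetical layout: one 3D TIFF (z-stack) per time point of a sequence.
frames = sorted(glob.glob("sequence_001/shape_t*.tif"))
sequence = [tifffile.imread(f) for f in frames]   # each frame: (256, 256, 256)

# Example use: track the cell volume (in voxels) over the 30 time points.
voxels = [int((frame > 0).sum()) for frame in sequence]
print(voxels)
```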

Synthetic Segmentation Benchmark

The benchmark datasets were produced from the synthetic cell shape sequences described in the previous section. We computed a maximum intensity projection of the 3D voxel volume at each time point to obtain a binary mask. We then used pix2pixHD, a conditional generative model trained on real images acquired with a microscope, to produce plausible-looking grayscale textures for these masks. The masks serve as ground truth for segmentation of the synthetic microscopy images, making the datasets suitable for benchmarking image analysis algorithms. Each archive contains 33 2D time-lapse sequences; each time point is represented by a pair of a synthetic texture and the corresponding binary mask, each with a resolution of 256×256 pixels and saved as lossless TIFF.
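
The projection step itself is a one-liner in NumPy. The sketch below assumes the volume is stored depth-first, so the projection is taken along axis 0, and the file names are hypothetical.

```python
import numpy as np
import tifffile

volume = tifffile.imread("shape_t000.tif")   # (256, 256, 256) binary volume
mip = volume.max(axis=0)                     # maximum intensity projection -> (256, 256)
mask = (mip > 0).astype(np.uint8) * 255      # 2D binary ground-truth mask
tifffile.imwrite("mask_t000.tif", mask)      # input mask for pix2pixHD texturing
```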


Visual Comparison Between Periodic and ReLU Activation Functions

Here, we present a visual comparison of a single filopodial cell reconstructed by models using periodic (sine) activation functions and by models using ReLU activation functions. The models were trained to fit the ground-truth sequence and to optimize a latent code representing this time-evolving cell shape. The latent code was then used to reconstruct the cell shape. Notice that the time-evolving shape reconstructed using ReLU activations exhibits perceptible noise on the cell surface that is not present in the ground-truth sequence, and that its filopodial protrusions are thicker and less detailed.
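
Such a like-for-like comparison can be set up by swapping only the activation function while keeping the architecture fixed, as in the sketch below; note that the sine variant additionally needs the SIREN initialization from the earlier sketch to train well.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Sine activation with the frequency factor commonly used in SIREN."""
    def forward(self, x):
        return torch.sin(30.0 * x)

def mlp(act_cls, in_dim=4, hidden=256, depth=3):
    """Identical architecture up to the activation, for a fair comparison."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), act_cls()]
        d = hidden
    layers.append(nn.Linear(d, 1))
    return nn.Sequential(*layers)

sine_net = mlp(Sine)     # with SIREN init, fits sharp, high-frequency detail
relu_net = mlp(nn.ReLU)  # same capacity, but tends to lose fine surface detail
```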


Citation

If you find our work useful in your research, please cite:


Acknowledgements

This work was partially funded by the 4TU Precision Medicine programme supported by High Tech for a Sustainable Future, a framework commissioned by the four Universities of Technology of the Netherlands. Jelmer M. Wolterink was supported by the NWO domain Applied and Engineering Sciences VENI grant (18192). We acknowledge the support of the Ministry of Education, Youth and Sports of the Czech Republic (MEYS CR) (Czech-BioImaging Projects LM2023050 and CZ.02.1.01/0.0/0.0/18_046/0016045). This project has received funding from the European High-Performance Computing Joint Undertaking (JU) and from BMBF/DLR under grant agreement No 955811. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and France, the Czech Republic, Germany, Ireland, Sweden and the United Kingdom.

The data set of Platynereis dumerilii embryo cells is courtesy of Mette Handberg-Thorsager and Manan Lalit, who kindly shared it with us.

The shape descriptors in the paper were computed and plotted using an online tool for quantitative evaluation, Compyda, available at https://cbia.fi.muni.cz/compyda. We thank its authors Tereza Nečasová and Daniel Múčka for kindly giving us early access to this tool and facilitating the evaluation of the proposed method.

The neural network implementation is based on DeepSDF, MeshSDF, and SIREN.