- Generate image+label pairs from the raw TIF stacks (100 frames each) and XML files (particle coordinates)
- Use these pairs to train StarDist models
- Detect particles with the model of choice.
Implementation of StarDist for the detection of small particles.
- Raw data: synthetic time-lapses of different particle types (microtubules, vesicles) at different SNRs, with ground-truth particle coordinates stored in XML files.
- Step 1: generate the training data (image + labeled image). The images are taken directly from the raw dataset; each labeled image is generated from the coordinates stored in the corresponding XML file.
- Step 2: use the image + labeled-image pairs to train StarDist models.
- Step 3: test the models.
Organization:
./data/1_raw_movies_and_xmls/
Original tifs and xmls.
./data/2_generated/
Training data generated from original tifs and xmls.
./docs/
All related documentation
./models
Contains trained models
./scripts
All related notebooks and scripts
./stardist
Contains the original StarDist implementation
1. Notebook "0_generate_training_data":
- This notebook generates the training data.
- It is well commented; short recap: read a TIF + XML pair, convert the XML coordinates to a labeled image, and save the original TIF together with the labeled image.
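As a sketch of this conversion, assuming the XML has already been parsed into (y, x) coordinates (the exact XML schema and the disk radius here are assumptions to adapt to the real data), the labeled image can be rasterized with NumPy:

```python
import numpy as np

def labels_from_coords(coords, shape, radius=2):
    """Rasterize particle coordinates into a label image.

    Each particle becomes a small disk with a unique integer label
    (0 = background), which is the label format StarDist expects.
    """
    lbl = np.zeros(shape, dtype=np.uint16)
    yy, xx = np.mgrid[: shape[0], : shape[1]]
    for i, (y, x) in enumerate(coords, start=1):
        lbl[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = i
    return lbl

# Example: two particles in a 64x64 frame
lbl = labels_from_coords([(5, 5), (20, 30)], (64, 64))
```

The disk radius controls how large each particle's label blob is; it should roughly match the apparent particle size in the synthetic movies.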
2. Notebook "1_data-test":
- The idea of this notebook is identical to the original StarDist notebook "stardist/examples/2D/1_data.ipynb"; read that notebook for more details. Part of the code is omitted in our implementation, since our data is more straightforward than theirs.
- That notebook inspects the labels and studies how well they can be approximated by star-convex polygons with 8, 16, or 32 rays.
- We simply plug in our data (generated with the notebook above) and do exactly what they did with their nuclei labels.
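To make the ray-casting study concrete, here is a pure-NumPy sketch (not the StarDist library code the notebook actually uses) that approximates a single binary mask by a star-convex polygon with a given number of rays and reports the IoU of the reconstruction against the original:

```python
import numpy as np

def star_convex_iou(mask, n_rays):
    """Approximate a binary mask by a star-convex polygon with n_rays
    rays cast from its centroid, and return the IoU between the
    reconstruction and the original mask (higher = better fit)."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    # Cast each ray outward in small steps until it leaves the mask.
    dists = []
    for a in angles:
        r = 0.0
        while True:
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            if not (0 <= y < h and 0 <= x < w and mask[y, x]):
                break
            r += 0.5
        dists.append(max(r - 0.5, 0.0))
    d = np.asarray(dists)
    # Rasterize the polygon: a pixel lies inside if its distance from
    # the centroid is below the ray length interpolated at its angle.
    yy, xx = np.mgrid[:h, :w]
    ang = np.arctan2(yy - cy, xx - cx) % (2 * np.pi)
    rad = np.hypot(yy - cy, xx - cx)
    idx = ang / (2 * np.pi) * n_rays
    lo = np.floor(idx).astype(int) % n_rays
    frac = idx - np.floor(idx)
    bound = (1 - frac) * d[lo] + frac * d[(lo + 1) % n_rays]
    approx = rad <= bound
    inter = np.logical_and(mask, approx).sum()
    union = np.logical_or(mask, approx).sum()
    return inter / union
```

Running this for n_rays in (8, 16, 32) over our generated labels mimics the reconstruction-quality comparison in the notebook; small, roughly round particles should already be well approximated with few rays.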
3. Notebooks with prefix 2,3,4:
- Training and testing the models. The implementation is exactly the same as the original StarDist notebook "stardist/examples/2D/2_training.ipynb"; read that notebook for more details. Part of the code is omitted in our implementation, since our data is more straightforward than theirs.
- Three training regimes have been tested so far: training on the full dataset (mixing all SNRs, particle types, and densities), training only on vesicles of all SNRs and densities, and training on vesicles with SNR=7 and medium density. These three notebooks differ only in how the training data is loaded.
- Each of the three training notebooks has a corresponding testing notebook.
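The regime-specific data loading can be sketched as a simple filename filter. The naming pattern `PARTICLE_snrSNR_DENSITY_*.tif` used below is hypothetical and must be adapted to the actual file names in the generated dataset:

```python
from pathlib import Path

def select_training_files(root, particle="*", snr="*", density="*"):
    """Collect the TIFs for one training regime by filename filtering.

    Assumes a hypothetical naming scheme 'PARTICLE_snrSNR_DENSITY_*.tif';
    adjust the pattern to whatever the generated files are really called.
    Defaults select the full dataset (all particles, SNRs, densities).
    """
    pattern = f"{particle}_snr{snr}_{density}_*.tif"
    return sorted(Path(root).glob(pattern))
```

For example, `select_training_files(root, particle="vesicle", snr="7", density="mid")` would load only the third regime's data, while the defaults load everything.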
4. Notebook "5_generate_csvs":
- This notebook loads a model, detects particles in the test images, and generates a CSV with the predictions.
- The format of this CSV should match TrackMate-style formatting.
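A minimal sketch of the CSV export is given below. A real TrackMate spot table carries many more columns; the three headers used here (FRAME, POSITION_X, POSITION_Y) are an assumed minimal subset and should be checked against an actual TrackMate export before relying on the format:

```python
import csv

def write_detections_csv(path, detections):
    """Write detections as (frame, x, y) rows to a CSV file.

    Column names follow TrackMate conventions only loosely; verify the
    exact header set against a genuine TrackMate CSV export.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["FRAME", "POSITION_X", "POSITION_Y"])
        for frame, x, y in detections:
            writer.writerow([frame, x, y])
```

With per-frame model predictions collected as `(frame, x, y)` tuples, this produces one CSV per test movie that downstream tracking or evaluation tools can ingest.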