Commit 9ae246f6 authored by amichaut

updated user_guide

parent 5b543bd2
Pipeline #55490 passed with stages in 16 seconds
@@ -25,13 +25,19 @@
Welcome to Track Analyzer's documentation!
==========================================
**Track Analyzer** is a Python-based data visualization pipeline for tracking data.
It *does not* perform any tracking, but takes as input any kind of tracked data.
It analyzes trajectories by computing standard parameters such as velocity,
acceleration, diffusion coefficient, divergence and curl maps, etc.
This pipeline also offers a trajectory visualization in 2D (and soon in 3D rendering),
using a selection tool that allows fate mapping and back-tracking.
**Track Analyzer** can be run by means of a Jupyter notebook based graphical interface, so no programming knowledge is required.
| **Track Analyzer** is a Python-based data visualization pipeline for tracking data.
It *does not* perform any tracking, but visualizes and quantifies any kind of tracked data.
It analyzes trajectories by computing standard quantities such as velocity,
acceleration, diffusion coefficient, local density, etc.
Trajectories can also be plotted on the original image in 2D or 3D using custom color coding.
| **Track Analyzer** provides a filtering section that can extract subsets of data based on spatiotemporal criteria.
The filtered subsets can then be analyzed either independently or compared. This filtering section also provides a tool
for selecting specific trajectories based on spatiotemporal criteria, which can be useful for fate mapping and back-tracking.
| **Track Analyzer** can be run without any programming knowledge using its graphical interface. The interface is launched by
running a Jupyter notebook containing widgets that allow the user to load data and set parameters without writing any code.
==========
User Guide
......
@@ -48,8 +48,8 @@ with a virtualenv
you can also use a `virtual environment <https://virtualenv.pypa.io/en/stable/>`_. ::
python3 -m venv track-analyzer
cd track-analyzer
python3 -m venv pyTA
cd pyTA
source bin/activate
pip install track-analyzer
@@ -59,7 +59,7 @@ to exit from the virtualenv ::
To run track-analyzer ::
cd track-analyzer
cd pyTA
source bin/activate
......
@@ -31,50 +31,160 @@ Quickstart
Data requirements
=================
**Track Analyzer** needs as input a text file of tracked data containing the position coordinates (in 2D or 3D) along time and the track identifiers.
Optional: data can be plotted on the original image. **Track Analyzer** needs a grayscale image file which can be a 3D or 4D tiff stack (2D timelapse or 3D timelapse). Other metadata such as time and length scales will be provided by the user through the graphical interface.
**Track Analyzer** needs as input a text file (csv or txt file) of tracked data containing the position coordinates (in 2D or 3D) along time and the track identifiers.
Optionally, data can be plotted on the original image provided as a 3D or 4D tiff stack (i.e. 2D+time or 3D+time). If the format of your movie is
different (list of images), please convert it to a tiff stack using `Fiji <https://fiji.sc/>`_ for instance.
The position file must contain columns with the x, y, (z) positions, a frame column and a track id column. The position coordinates can be given in
pixels or in scaled units. The information about the scaling and other metadata such as time and length scales will be provided by the user through the graphical interface.
If **Track Analyzer** is run in command line (see below), the data directory must contain:
- a comma-separated csv file named positions.csv whose column names are: x, y, (z), frame, track (see the example below)
- a text file named info.txt containing the metadata (see example)
- (optional) a tiff file named stack.tif
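A minimal :code:`positions.csv` could look like this (the values are arbitrary, and the z column is only needed for 3D data) ::

    x,y,z,frame,track
    10.2,35.6,4.0,0,1
    11.0,34.9,4.1,1,1
    80.4,12.5,7.2,0,2
    79.9,13.1,7.0,1,2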
..
   add a section describing config and info files
================
Analysis modules
================
**Track Analyzer** contains a data selection module and three main analysis modules.
- Data selection module
Subsets of the datasets can be selected by spatial or temporal criteria, or by track duration.
A drawing tool offers the possibility to precisely select trajectories at a given frame and inspect either their past (back-tracking)
or their future (fate-mapping).
- Trajectory-based analysis module
It offers trajectory visualization and computes trajectory parameters such as instantaneous velocities and accelerations, MSD analysis, and trajectory averaging.
- Map-based analysis module
It averages velocity and acceleration data on a regular grid. These averaged maps can be used to compute 2D divergence and curl maps.
- Comparator module
A series of previously run analyses can be compared by plotting parameters together on the same plot.
=================================
Launching the graphical interface
=================================
Start a Jupyter notebook:
- go to the project folder: run `cd <path_to_the_project_folder>`
- if not done yet, activate the environment: run `conda activate pyTA`
- launch a Jupyter notebook, run `jupyter notebook`
- a web browser opens, click on `analyze_traj_gui.ipynb`
====================
Running the pipeline
====================
A `Jupyter notebook <https://jupyter.org/>`_ comprises a series of 'cells' which are blocks of Python code to be run. Each cell can be run by pressing Shift+Enter.
There are two ways of running **Track Analyzer**:
- using a Jupyter notebook based graphical interface (highly recommended)
- using terminal command lines
Using a notebook
================
Documentation about Jupyter notebooks can be found `here <https://jupyter.org/>`_. Briefly, a notebook comprises a series of 'cells' which are blocks
of Python code to be executed. Each cell can be run by pressing Shift+Enter.
Each cell executes a piece of code generating the pipeline's graphical interface. The cells all depend on each other; therefore, they MUST be run in order.
By default, the code of each cell is hidden but it can be shown by pressing the button at the top of the notebook: 'Click here to toggle on/off the raw code'.
Once the code is hidden, it is easy to miss a cell; this is a common cause of errors. If this happens, go back a couple of cells and run them again in order.
To launch a notebook (the full command sequence is also shown after this list):
- go to the project folder: run :code:`cd <path_to_the_project_folder>`
- if not done yet, activate the environment: run :code:`conda activate <env_name>` (if you use conda)
- launch a Jupyter notebook, run :code:`jupyter notebook`
- a web browser opens, click on :code:`analyze_traj_gui.ipynb`
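Put together, the launch sequence looks like this (assuming a conda environment, here named pyTA as in the installation section) ::

    cd <path_to_the_project_folder>
    conda activate pyTA
    jupyter notebook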
Using command lines
====================
**Track Analyzer** can also be run from a terminal without any graphical interface, but you will not benefit from the interactive
modules. Data filtering and analysis parameters will then need to be passed through config files (see examples). **Track Analyzer** comes with two commands (example calls are given after this list):
- :code:`traj_analysis` which runs the trajectory analysis section (see below).
It takes as argument the path to the data directory (optional: use the flag -r or --refresh to refresh the database)
- :code:`map_analysis` which runs the map analysis section (see below).
It takes as argument the path to the data directory (optional: use the flag -r or --refresh to refresh the database)
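For instance, assuming the data directory is located at :code:`/path/to/data_dir` (a placeholder path), the two commands could be called as ::

    traj_analysis /path/to/data_dir
    map_analysis /path/to/data_dir --refresh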
..
add third way with galaxy
==================
Analysis procedure
==================
**Track Analyzer** contains a data filtering section and three main analysis sections.
Data filtering section
======================
Subsets of the datasets can be filtered using spatiotemporal criteria: x, y, z position, time window, and track duration.
A drawing tool also offers the possibility to hand-draw regions of interest.
Additionally, specific trajectories can be selected by using their position in a region of interest at a specific time. This feature can be
useful to inspect either their past (back-tracking) or their future (fate-mapping). Trajectories can also be selected just using their ids.
These subsets can then be analyzed separately. The analysis will be run independently on each of them.
Alternatively, they can be analyzed together. Trajectories and computed quantities will then be plotted together using color-coding.
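As an illustration of these criteria (this is only a sketch on the positions table, not **Track Analyzer**'s internal code, and all bounds are arbitrary), a subset could be built with `pandas <https://pandas.pydata.org/>`_ as follows ::

    import pandas as pd

    # positions table following the positions.csv convention: x, y, (z), frame, track
    df = pd.read_csv("positions.csv")

    # spatial criterion: keep positions inside a rectangular region
    in_region = df["x"].between(0, 200) & df["y"].between(0, 100)

    # temporal criterion: keep a time window (frames 10 to 50)
    in_time = df["frame"].between(10, 50)
    subset = df[in_region & in_time]

    # track-duration criterion: keep tracks observed for at least 20 frames
    durations = subset.groupby("track")["frame"].agg(lambda f: f.max() - f.min())
    subset = subset[subset["track"].isin(durations[durations >= 20].index)]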
Trajectory analysis section
===========================
Trajectories can be plotted over the original image, frame by frame, with some custom color-coding (z color-coded, t color-coded, subset, or random).
All trajectories can also be plotted together, with the option of centering their origins. This can be useful to detect patterns in the trajectories.
Several quantities can be computed and plotted: velocities and accelerations (spatial components and moduli).
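Conceptually, instantaneous velocities and accelerations are finite differences along each track. A minimal sketch (illustration only, not the pipeline's internal code, assuming a 2D positions table following the positions.csv convention and a known time step) ::

    import numpy as np
    import pandas as pd

    def add_kinematics(df, dt):
        """Add velocity and acceleration columns computed by finite differences along each track."""
        df = df.sort_values(["track", "frame"]).copy()
        for comp in ["x", "y"]:  # add "z" for 3D data
            df["v" + comp] = df.groupby("track")[comp].diff() / dt
            df["a" + comp] = df.groupby("track")["v" + comp].diff() / dt
        df["v"] = np.sqrt(df["vx"] ** 2 + df["vy"] ** 2)  # velocity modulus
        return df

    # usage: dt is the time between frames, taken from your metadata
    df = add_kinematics(pd.read_csv("positions.csv"), dt=1.0)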
The local cell density can be estimated by performing a Voronoi tessellation. The Voronoi diagram can be plotted, and the area of each Voronoi cell can
be calculated and plotted. Currently, only 2D Voronoi tessellation is available (even if the data are 3D).
All these quantities can also be averaged over the whole trajectory and plotted.
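A minimal sketch of the 2D Voronoi-based density estimate mentioned above, using `scipy <https://scipy.org/>`_ (for illustration only; the pipeline's exact implementation may differ) ::

    import numpy as np
    from scipy.spatial import Voronoi

    def voronoi_cell_areas(points):
        """Return the area of each finite Voronoi cell (np.nan for open cells at the edges).

        points: (N, 2) array of x, y positions at one frame.
        """
        vor = Voronoi(points)
        areas = np.full(len(points), np.nan)
        for i, region_index in enumerate(vor.point_region):
            region = vor.regions[region_index]
            if len(region) == 0 or -1 in region:
                continue  # open cell at the edge of the field of view
            polygon = vor.vertices[region]
            x, y = polygon[:, 0], polygon[:, 1]
            # shoelace formula for the polygon area
            areas[i] = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        return areas  # the local density can then be taken as 1 / area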
Trajectories can also be quantified using the Mean Squared Displacement (MSD) analysis. The MSD can be plotted and fitted with some diffusion models
to compute the diffusion coefficient.
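As a sketch of the kind of fit involved, for simple free diffusion the model is MSD(t) = 2*dim*D*t (i.e. 4*D*t in 2D), and D can be estimated by a least-squares fit through the origin. This is an illustration only; the diffusion models offered by the pipeline may differ ::

    import numpy as np

    def msd(positions):
        """MSD of one track; positions is an (N, dim) array ordered by frame."""
        n = len(positions)
        lags = np.arange(1, n)
        values = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=1))
                           for lag in lags])
        return lags, values

    def fit_diffusion_coefficient(lags, msd_values, dt=1.0, dim=2):
        """Least-squares fit of MSD = 2*dim*D*t through the origin; returns D."""
        t = lags * dt
        return np.sum(msd_values * t) / (2 * dim * np.sum(t ** 2))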
Map analysis section
====================
Data can be averaged on a regular grid to produce maps of the computed quantities. Two kinds of maps can be plotted: vector fields and scalar fields.
Vector fields
-------------
Velocity and acceleration vectors can be plotted on 2D maps. For 3D data, the z dimension can be color-coded.
Such maps can be superimposed on a scalar field.
Scalar fields
-------------
The velocity and acceleration components and moduli can be plotted as color-coded maps. The vector average moduli can also be computed.
The difference between the velocity mean and the vector average modulus is that the velocity mean is the mean of the velocity moduli within
the grid unit, while the vector average modulus is the modulus of the vector-averaged velocity within the grid unit.
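For the velocities falling in one grid unit, this difference can be pictured in a few lines (arbitrary values, for illustration only) ::

    import numpy as np

    # velocities (vx, vy) of the data points falling in one grid unit
    v = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])

    velocity_mean = np.mean(np.linalg.norm(v, axis=1))       # mean of the moduli -> 1.0
    vector_average_modulus = np.linalg.norm(v.mean(axis=0))  # modulus of the mean vector -> ~0.33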
Divergence (contraction and expansion) maps and curl (rotation) maps can also be plotted.
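A sketch of how divergence and curl can be obtained from a gridded 2D velocity field with numpy (illustration only; the pipeline's exact implementation may differ) ::

    import numpy as np

    def divergence_and_curl(vx, vy, dx, dy):
        """2D divergence and curl (z component) of a velocity field on a regular grid.

        vx, vy: 2D arrays indexed as [row, col] = [y, x]; dx, dy: grid spacings.
        """
        dvx_dx = np.gradient(vx, dx, axis=1)
        dvx_dy = np.gradient(vx, dy, axis=0)
        dvy_dx = np.gradient(vy, dx, axis=1)
        dvy_dy = np.gradient(vy, dy, axis=0)
        div = dvx_dx + dvy_dy   # contraction (<0) / expansion (>0)
        curl = dvy_dx - dvx_dy  # local rotation
        return div, curl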
Comparator section
==================
Data previously generated by the trajectory analysis section can be compared by plotting parameters together on the same plots.
======
Output
======
**Track Analyzer** generates several kinds of output: plots, data points, and database and configuration files.
Database and configuration files
================================
Some files are necessary for the pipeline processing:
- data_base.p is a binary collection of python objects generated when the initial tracking file is loaded. It allows the initial loading to be skipped if the pipeline is run several times on the same tracking data. It can be refreshed if necessary.
- info.txt is a text file containing important metadata: 'lengthscale', 'timescale', 'z_step', 'image_width', 'image_height', 'length_unit', 'time_unit', 'table_unit', 'separator'. It can be interactively generated using the notebook (an example layout is shown after this list)
- if the original image stack is 4D (3D+t), a stack_maxproj.tif is generated by performing a maximum projection over the z dimension, so that a 2D image can be used for 2D-based plotting
- if run using command lines, the parameters are passed using several configuration files stored in the config folder of the output data directory
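Assuming a simple key/value layout (the exact formatting may differ; refer to the example file distributed with the project), an info.txt file could look like ::

    lengthscale : 0.5
    timescale : 5
    z_step : 1
    image_width : 1024
    image_height : 1024
    length_unit : um
    time_unit : min
    table_unit : px
    separator : ,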
Data output
===========
The trajectory analysis and map analysis outputs are saved in a traj_analysis and a map_analysis directory, respectively. Each subset's analysis is saved in a new folder.
In each subset's directory:
- a config folder is generated with the configuration parameters used for this specific analysis
- all_data.csv stores the subset's table of positions
- track_prop.csv stores the averaged quantities along trajectories
- each plot is saved using an image format, size and resolution that can be chosen when the plotting parameters are set in the notebook. Additionally, the default colors and color maps can be customized in the plotting parameters section.
- the data points of each plot are saved in a csv file with the same name as the image file, so you can replot the data using your favorite plotting software
===============
Troubleshooting
===============
- (OUTDATED) The drawing tool depends on the [napari](https://github.com/napari/napari) project.
The installation of this project can be tricky depending on your system.
If you are not able to solve this installation, you can still use **Track Analyzer** without the drawing tool.
You will then have to comment the `import napari` line in `codes/analyze_traj.py` and will not be able to use the ROI option in the data selection module.
\ No newline at end of file
The 3D visualization and the drawing selection tool depend on the `napari <https://napari.org/>`_ package.
The installation of this package can lead to issues depending on your system.
If you are not able to solve these installation issues, you will not have access to 3D rendering. However, you will still be able to
use **Track Analyzer** without the drawing tool, by using the coordinate sliders in the graphical interface.
@@ -1240,7 +1240,7 @@ def plot_total_traj(data_dir, df, dim=3, plot_dir=None, plot_fn=None, plot_confi
# color
if color_code == "z" and dim == 3:
colors = tpr.get_cmap_color(z, cmap, vmin=z_lim[0], vmax=z_lim[1])
colors = tpr.get_cmap_color(z, cmap, vmin=cmap_lim[0], vmax=cmap_lim[1])
elif color_code == "random":
colors = color_list[i % len(color_list)]
elif color_code == "none":
......