# midline

**Repository Path**: Killjoyss/midline

## Basic Information

- **Project Name**: midline
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: GPL-3.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-10-18
- **Last Updated**: 2020-12-19

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Fish midline extraction from idtracker.ai videos

This repository includes scripts to extract the posture angles (midline) of the animals tracked with [idtracker.ai](https://idtracker.ai). The pipeline only requires the *video_object.npy* and the *blobs_collection.npy* files generated after tracking a video with [idtracker.ai](https://idtracker.ai) (see the data folder).

The general pipeline is as follows (see GIF):

![GIF_2](fishmidline/scripts/midline.gif)

The head and tail of the animal are extracted per blob from the end points of the skeleton (blue and red points in the top-right panel). In the general case, the head and the tail are indistinguishable from the skeleton or the blob of pixels. Hence, we assign the end point whose Y coordinate is lowest to be the head, and the other end point to be the tail. One of these end points is used to order the midline points before the interpolation and the computation of the equidistant points.

For blobs that look like fish, we coded the class *FishContour*, which detects the nose and the tail based on the first and second maxima of the curvature of the contour (see the top-right panel in the GIF below). In particular, the nose is used to order the points of the skeleton from nose to tail before the interpolation. This way, the midline and the angles are always defined with respect to the nose of the animal.

![GIF_1](fishmidline/scripts/midline_nose.gif)

We invite users to code their own nose and tail detectors.
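The ordering, interpolation, and equidistant-resampling steps described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function name, the crude head-to-tail ordering by Y coordinate, and the spline parameters are all assumptions.

```python
import numpy as np
from scipy import interpolate


def equidistant_midline(points, n_points=20):
    """Order raw skeleton points head to tail, fit a spline, and
    resample it at n_points roughly equally spaced positions.

    `points` is an (N, 2) array of (x, y) skeleton coordinates.
    Hypothetical sketch: the repository's real pipeline orders the
    points along the skeleton starting from the detected head/nose.
    """
    points = np.asarray(points, dtype=float)
    # Head and tail are indistinguishable from the blob alone, so by
    # the convention above the end point with the lowest Y coordinate
    # is the head. Here we crudely order all points by Y, which only
    # works for skeletons that are monotonic in Y.
    ordered = points[np.argsort(points[:, 1])]
    # Fit an interpolating parametric spline through the ordered
    # points (increase s to smooth noisy skeletons) ...
    tck, _ = interpolate.splprep([ordered[:, 0], ordered[:, 1]], s=0)
    # ... and evaluate it at uniformly spaced parameter values, which
    # yields approximately equidistant midline points.
    u = np.linspace(0.0, 1.0, n_points)
    x, y = interpolate.splev(u, tck)
    return np.column_stack([x, y])


# Usage on a synthetic, fish-like curve:
t = np.linspace(0.0, 1.0, 15)
skeleton_points = np.column_stack([np.sin(3 * t), t])
midline = equidistant_midline(skeleton_points, n_points=10)
```

Posture angles can then be computed from the differences between consecutive resampled points, always starting from the head end.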
If the developer requires information about the inner structure of the animals' bodies, the pixel values of the segmented blobs can be extracted from *blob_collection_segmented.py* (in the preprocessing folder of [idtracker.ai](https://idtracker.ai)) or from the frames of the original video.

## Requirements

* numpy
* scipy
* skimage
* mahotas (for pruning the skeleton and detecting its end points)
* matplotlib
* tqdm
* [idtracker.ai](https://gitlab.com/polavieja_lab/idtrackerai)

## TODO

* Optimize:
  * Currently all the operations are performed on the whole frame, which is very expensive. It would be better to work on a bounding box around the blob and then translate the midline back to full-frame coordinates.
  * The function to prune the skeleton is also quite slow. A faster pruning method would help.
* Eigenvalues
* Eigenshapes

## Contributors

Francisco Romero-Ferrero (francisco.romero@neuro.fchampalimaud.org)
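The bounding-box optimization proposed in the TODO list could be sketched like this. All names here are hypothetical and not taken from the repository; the idea is simply to crop the frame around the blob, run the midline extraction on the small crop, and add the crop offset back to recover full-frame coordinates.

```python
import numpy as np


def crop_to_bounding_box(frame, blob_pixels, margin=5):
    """Crop `frame` to the bounding box of a blob's pixels.

    `blob_pixels` is an (N, 2) array of (row, col) pixel coordinates.
    Returns the crop and the (row, col) offset needed to translate
    coordinates computed on the crop back to full-frame coordinates.
    Illustrative sketch only; names are not from the repository.
    """
    rows, cols = blob_pixels[:, 0], blob_pixels[:, 1]
    # Expand the tight bounding box by a margin, clipped to the frame.
    r0 = max(int(rows.min()) - margin, 0)
    c0 = max(int(cols.min()) - margin, 0)
    r1 = min(int(rows.max()) + margin + 1, frame.shape[0])
    c1 = min(int(cols.max()) + margin + 1, frame.shape[1])
    crop = frame[r0:r1, c0:c1]
    return crop, (r0, c0)


# Usage: midline points computed inside the crop translate back to
# full-frame coordinates by adding the returned offset.
frame = np.zeros((100, 100))
blob_pixels = np.array([[10, 20], [30, 40]])
crop, offset = crop_to_bounding_box(frame, blob_pixels, margin=5)
```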