Two small GUIs for annotating fetal heart ultrasound videos. Based on OpenCV.
GPL-3.0 License
This repository contains two small annotation tools for annotating fetal heart ultrasound videos with certain variables of interest. It relates to work performed towards my DPhil thesis (see my website for more information).
There are two tools:
`heart_annotations`
- For annotating heart videos with 'global' information in each frame: specifically, the heart visibility, heart centre location, heart radius, viewing plane, orientation, and cardiac phase variables.

`substructure_annotations`
- For annotating the locations of cardiac structures.

First ensure you have installed the dependencies as above.
Compile the two files `heart_annotations.cpp` and `substructure_annotations.cpp` using your C++ compiler, making sure to specify C++11 if necessary, and link against the OpenCV libraries and the Boost program options, filesystem and system libraries.
There is a Makefile in the `build/` directory to simplify this process for users with GNU/Linux operating systems or similar. To use this, issue one of the following commands from the `build/` directory.
To build both tools:
$ make
To build just `heart_annotations`:
$ make heart_annotations
To build just `substructure_annotations`:
$ make substructure_annotations
To remove any/all compiled software, just use:
$ make clean
This tool allows you to annotate a `.avi` video and store the results in a `.tk` track file (a plain text file with the `.tk` extension).
To open a video in the annotation tool from the command line, you need to provide two inputs: the path to the video file (the `-v` option) and the location of the directory to store the track in (the `-t` option). These options can appear in any order. For example, to annotate a video at `/path/to/a/video.avi` and store the results in `/another/path/to/tracks/`, issue the following:
$ ./heart_annotations -v /path/to/a/video.avi -t /another/path/to/tracks/
This will store the output in a `.tk` file in the specified directory, and the name of this file will be the same as the video's (minus the path and `.avi` extension). For example, in the above case, the output file would be called `/another/path/to/tracks/video.tk`. If there is already a file of that name in the directory, the software will read in this file and allow you to edit a previous set of annotations and save the changes, overwriting the previous file.
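The naming convention above can be expressed mechanically; a small sketch (using the example paths from above) of how the output path is derived:

```python
from pathlib import Path

# The output track file takes the video's base name with a .tk extension,
# placed in the directory given with the -t option
video = Path('/path/to/a/video.avi')
track_dir = Path('/another/path/to/tracks')
track_file = track_dir / video.with_suffix('.tk').name
print(track_file)  # /another/path/to/tracks/video.tk
```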
If you do not provide an output directory, the current working directory is used.
When you open the tool, you will see the first frame of the video appear with a circle in the middle. The different variables are displayed as follows:
Within a frame you can use the keyboard to manipulate the variables using the following keys:
The tool stores an array of the annotations for each frame in the video in memory (the 'buffer'). This also remembers which frames you have previously annotated, and which you have not.
You can move through the frames with the following keys:
Whenever you use return or backspace to move to a frame that has no previously stored annotation, the initial value for that annotation will be copied from the value that was just stored in the frame that was previously being annotated. This does not apply to the end-diastole and end-systole frame labellings, which are reset in the new frame. In this way annotations are propagated through the video, allowing you to make lots of similar annotations quickly in sequences where the heart orientation/position/view does not change, by just repeatedly tapping or holding down return/backspace. However, if you move to a frame that has a previous annotation stored in the buffer, this previous annotation will be restored instead of propagating the annotation from the neighbouring frame.
Occasionally you may want to propagate annotations through sequences of frames even when those frames do have previously stored annotations in the buffer. This may happen, for example, when correcting a mistake you have made over a number of frames. You can do this by activating overwrite mode by pressing the o key. When this mode is active, annotations will always be propagated from one frame to the next when you press return or backspace. Use this with caution, however, as it is easy to mistakenly overwrite previously annotated frames. You can see when you are in overwrite mode as "OVERWRITE MODE" will appear in yellow text in the bottom right of the image; return to normal behaviour by pressing o again.
The values for the cardiac phase are not directly annotated but instead are inferred from your end-diastole (ED) and end-systole (ES) labels by interpolating. To do this, annotate the ED/ES frames, and then tap the z key to calculate the circular-valued cardiac phase values. These can then be seen by the arrowhead moving in and out along the orientation line (in = diastole, out = systole), but only after they have been calculated for the first time. Note that if you then make subsequent changes to the ED/ES frames, you will need to tap z again to update the cardiac phase values.
As well as interpolating values for the cardiac phase, the routine also extrapolates estimated positions for ED and ES frames by assuming a constant phase rate. Therefore you do not need to label every single ED/ES frame in the video, although it is strongly recommended that you manually annotate as many as you can. You can see the automatically selected ED/ES frames appear with the text "ED"/"ES" in brackets. If the routine has insufficient annotated frames, you will see a warning message output to the terminal and the values will not be updated.
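The interpolation and extrapolation described above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the tool's actual routine: it assumes phase 0 at end-diastole and π at end-systole, advances the unwrapped phase by π between successive labelled events, interpolates linearly in between, and extrapolates at a constant phase rate beyond the first and last labels:

```python
import numpy as np

def interpolate_cardiac_phase(n_frames, ed_frames, es_frames):
    """Illustrative sketch: circular cardiac phase from ED/ES frame labels."""
    # Merge the labels, taking phase 0 at ED and pi at ES (an assumption)
    events = sorted([(f, 'ED') for f in ed_frames] + [(f, 'ES') for f in es_frames])
    if len(events) < 2:
        raise ValueError('need at least two labelled ED/ES frames')
    frames = np.array([f for f, _ in events], dtype=float)
    # Each successive event advances the unwrapped phase by half a cycle (pi)
    start = 0.0 if events[0][1] == 'ED' else np.pi
    unwrapped = start + np.pi * np.arange(len(events))
    all_frames = np.arange(n_frames, dtype=float)
    # Linear interpolation between labelled events...
    phase = np.interp(all_frames, frames, unwrapped)
    # ...and constant-rate extrapolation outside the labelled range
    before, after = all_frames < frames[0], all_frames > frames[-1]
    phase[before] = unwrapped[0] + (np.pi / (frames[1] - frames[0])) * (all_frames[before] - frames[0])
    phase[after] = unwrapped[-1] + (np.pi / (frames[-1] - frames[-2])) * (all_frames[after] - frames[-1])
    # Wrap back onto the circle
    return np.mod(phase, 2.0 * np.pi)

phase = interpolate_cardiac_phase(21, ed_frames=[0, 20], es_frames=[10])
```

Here frame 10 gets phase π and frame 5 gets π/2, with the phase wrapping back to 0 at the second ED label.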
When you are finished, you can exit the application in two ways:
Alternatively, you will exit automatically when you hit Return on the last frame.
The track files that are created by the tool are simple text files that follow the format described on this page (.tk files) and this page (.stk files).
In both cases, there are C++ functions in `thesisUtilities.cpp` to read the information from a track file. These should be self-explanatory to use.
Furthermore, Python functions to read the files are provided in `python-utilities/heart_annotation_python_utilities.py`. The function `readHeartTrackFile` takes a filename and returns the track data as a numpy array in which each row corresponds to a frame and each column corresponds to one of the variables.
```python
# Import the module
import heart_annotation_python_utilities as hapu

# An example path to a track file of interest
filename = 'path/to/testvideo.tk'

# Read in the file
data_table, image_dimensions, image_flip, heart_radius = hapu.readHeartTrackFile(filename)

# Now the data can be accessed with the column index variables
# For example, to access all the variables relating to the first frame (index 0), use
data_table[0, hapu.tk_frameCol]        # the frame number
data_table[0, hapu.tk_labelledCol]     # the Boolean 'labelled' variable
data_table[0, hapu.tk_presentCol]      # indicates whether the heart is present
data_table[0, hapu.tk_yposCol]         # y location of the heart centre
data_table[0, hapu.tk_xposCol]         # x location of the heart centre
data_table[0, hapu.tk_oriCol]          # heart orientation
data_table[0, hapu.tk_viewCol]         # heart view
data_table[0, hapu.tk_phasePointsCol]  # flags for end-diastole/systole frames
data_table[0, hapu.tk_cardiacPhaseCol] # the circular cardiac phase variable
```
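The column constants make it easy to filter the table with standard numpy indexing. A toy follow-on sketch, using hypothetical column positions purely for illustration (in practice, use the real `hapu.tk_*Col` constants):

```python
import numpy as np

# Hypothetical column indices for illustration only; in practice use the
# hapu.tk_*Col constants from heart_annotation_python_utilities
FRAME_COL, LABELLED_COL, PRESENT_COL, YPOS_COL, XPOS_COL = 0, 1, 2, 3, 4

# A toy three-frame table (frame, labelled, present, y, x)
data_table = np.array([
    [0.0, 1.0, 1.0, 120.0, 150.0],
    [1.0, 0.0, 0.0,   0.0,   0.0],  # unlabelled frame
    [2.0, 1.0, 1.0, 122.0, 149.0],
])

# Keep only the frames that were actually annotated
labelled = data_table[data_table[:, LABELLED_COL] == 1]

# Heart centre trajectory (x, y) over the labelled frames
centres = labelled[:, [XPOS_COL, YPOS_COL]]
```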
Before you begin annotating the locations of structures of interest in a video, you first need to define which structures you are interested in and which of the cardiac views they are present in.
This information is stored in a simple text file that you can create in any text editor. The file should consist of a list of structures with each structure appearing on one line of the file. At the start of each line, the structure's name appears, which should contain no spaces. Then after this, the following information should appear, separated by spaces:
An example structure list file:
```
crux 3 0 1
apex 0 0 1 2
base 0 0 1
mitral_valve_end 3 0 1 2
mitral_valve_centre 3 1 1
tricuspid_valve_end 3 0 1
tricuspid_valve_centre 3 1 1
aortic_valve 3 0 2
aorta_3V 1 0 3
pulmonary_valve 1 0 3
trachea 0 0 3
SVC 1 0 3
spine 0 0 1 2 3
descending_aorta 0 0 1 2 3
```
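The repository's own reader for this format is the `readStructureList` function mentioned in the Python utilities. Purely as an illustration of the file layout, a minimal standalone parser might look like the following; it treats each line as a name followed by integer fields and makes no assumptions about what those fields mean:

```python
def read_structure_list_text(text):
    """Parse a structure list: one structure per line, name then integer fields."""
    structures = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        name, *fields = line.split()
        structures[name] = [int(f) for f in fields]
    return structures

example = """crux 3 0 1
apex 0 0 1 2
spine 0 0 1 2 3
"""
parsed = read_structure_list_text(example)
# parsed['apex'] == [0, 0, 1, 2]
```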
You can open the structures annotation tool from the command line. You need to provide the name of the video using the `-v` flag, the directory to store the resulting 'structure track files' (`.stk` files) using the `-t` flag, the location of the directory containing the heart track files (`.tk` files) using the `-d` flag, and the structures list file (as above) with the `-s` flag. The names of the `.tk` and `.stk` files within their respective directories are automatically chosen to match the name of the video file (but changing the extension). The heart track file must already exist within the directory because the structure annotation process depends upon information from the whole-heart annotation process. The loaded heart track file is not changed by the structures annotation tool. By contrast, the structures annotation file (`.stk`) need not already exist. If it does already exist, the existing file will be loaded for editing; if it does not, it will be created. For example:
$ ./substructure_annotations -v /path/to/a/video.avi -t /another/path/to/structure/tracks/ -d /yet/another/path/to/heart/tracks/ -s /path/to/a/structure/list/file.txt
Annotation consists of labelling the following information for each structure in each frame of the video:
Within a frame you can use the keyboard and mouse to manipulate these variables in the following way:
This works in the same way as in the `heart_annotations` tool, with the same shortcuts. In addition, there is an optional motion prediction mode, toggled via the m key. When it is turned on, the structure locations for the next frame are predicted using a motion estimate.
There are Python functions in the `heart_annotation_python_utilities.py` file that read the structure list and structure track files:
- `readStructureList` reads the structure list file.
- `readStructure` reads the annotations for a given structure from an `.stk` file.

Read their built-in Python docstrings for more information.
This software is licensed under the GNU General Public License. See the licence file for more information.
This software was written by Christopher Bridge at the Institute of Biomedical Engineering, University of Oxford.