Blood-Cell-Detection-using-TFOD-API

This project demonstrates the use of TensorFlow Object Detection API (along with GCP ML Engine) to automatically detect Red Blood Cells (RBCs), White Blood Cells (WBCs), and Platelets in each image taken via microscopic image readings.

The dataset used in this project was collected from here. Note that I removed some files from the original dataset directory that turned out to be unnecessary for this project.

The directory structure looks like so:

```
 BCCD
    Annotations [364 entries exceeds filelimit, not opening dir]
    ImageSets
       Main
           test.txt
           train.txt
           trainval.txt
           val.txt
    JPEGImages [364 entries exceeds filelimit, not opening dir]
 Exported_Graph
    frozen_inference_graph.pb
 Model_Checkpoints
    model.ckpt-50007.data-00000-of-00003
    model.ckpt-50007.data-00001-of-00003
    model.ckpt-50007.data-00002-of-00003
    model.ckpt-50007.index
    model.ckpt-50007.meta
 Notebooks
    Exploration.ipynb
    Inference.ipynb
 Records_and_CSVs
 Sample_Images
    Screen\ Shot\ 2019-08-28\ at\ 1.36.46\ PM.png
    Screen\ Shot\ 2019-08-28\ at\ 1.38.07\ PM.png
    Screen\ Shot\ 2019-08-28\ at\ 1.38.37\ PM.png
 images
    test
    train
 LICENSE
 README.md
 directory_structure.txt
 faster_rcnn_inception_v2_coco.config
 generate_tfrecord.py
 label_map.pbtxt
 xml_to_csv.py
```
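
For reference, a label map covering the three classes would look something like the following (the class IDs shown here are assumptions; check label_map.pbtxt in the repository for the actual values):

```
item {
  id: 1
  name: 'RBC'
}
item {
  id: 2
  name: 'WBC'
}
item {
  id: 3
  name: 'Platelets'
}
```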

I have intentionally left out some files, such as:

- the TFRecord files, which can be generated by running the generate_tfrecord.py script accordingly
- the .csv files, which can be generated using the xml_to_csv.py script
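
To give an idea of what the XML-to-CSV step involves (the function names below are illustrative, not taken from xml_to_csv.py itself), each Pascal VOC annotation file is flattened into one CSV row per bounding box:

```python
import csv
import xml.etree.ElementTree as ET

def voc_xml_to_rows(xml_path):
    """Flatten one Pascal VOC annotation file into one dict per bounding box."""
    root = ET.parse(xml_path).getroot()
    filename = root.find("filename").text
    size = root.find("size")
    width = int(size.find("width").text)
    height = int(size.find("height").text)
    rows = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        rows.append({
            "filename": filename,
            "width": width,
            "height": height,
            "class": obj.find("name").text,
            "xmin": int(box.find("xmin").text),
            "ymin": int(box.find("ymin").text),
            "xmax": int(box.find("xmax").text),
            "ymax": int(box.find("ymax").text),
        })
    return rows

def write_csv(rows, csv_path):
    """Write the flattened annotation rows to a CSV file."""
    fieldnames = ["filename", "width", "height", "class",
                  "xmin", "ymin", "xmax", "ymax"]
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```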

Notebooks/Exploration.ipynb takes care of putting the images and annotations into the right directories in the right way. This process follows this tutorial, as does the generation of the .csv files and TFRecords.
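
In rough terms, the notebook's job amounts to something like the sketch below, which copies images into images/train and images/test according to the split files under ImageSets/Main (the paths follow the directory tree above; the function name is mine, not the notebook's):

```python
import shutil
from pathlib import Path

def split_images(bccd_dir, out_dir):
    """Copy JPEGs into <out_dir>/train and <out_dir>/test based on the
    train.txt / test.txt split files in the BCCD dataset layout."""
    bccd = Path(bccd_dir)
    for split in ("train", "test"):
        dest = Path(out_dir) / split
        dest.mkdir(parents=True, exist_ok=True)
        split_file = bccd / "ImageSets" / "Main" / f"{split}.txt"
        for image_id in split_file.read_text().split():
            src = bccd / "JPEGImages" / f"{image_id}.jpg"
            shutil.copy(src, dest / f"{image_id}.jpg")
```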

I followed the official TensorFlow Object Detection API documentation and this article to kickstart the training process on GCP using ML Engine and Cloud TPUs, and also to export the inference graph.

I used a Faster R-CNN-based architecture since it elegantly replaces slow selective search with a learned Region Proposal Network and yields a good mAP of 86%. The Model_Checkpoints folder contains the latest checkpoint files from the training process.
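
For context on the mAP figure: detections are matched to ground-truth boxes via intersection-over-union (IoU). A minimal, illustrative IoU helper (not part of this repository), with boxes given as (xmin, ymin, xmax, ymax):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Coordinates of the intersection rectangle (empty if boxes don't overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.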

Demo inference

Here are some results after running the trained model on some test images:


Additional references

On the roadmap

I plan to further optimize this model using the OpenVINO toolkit and deploy it on a Neural Compute Stick.

A note of thanks :)

I used GCP ML Engine for training the custom object detection model. I am very thankful to the Google Developers Expert Program for providing me with GCP credits, and for the Qwiklabs credits that helped me learn more about GCP. I am also thankful to the TensorFlow Research Cloud team for providing me with Cloud TPU access, which sped up the training process to a great extent.

I presented this work at TensorFlow Roadshow Bengaluru: