CellQuant·AI

By Harshitha Govindaraju · Hassan Lab · Rutgers ECE

Counting fluorescent leukocytes,
one tile at a time.

A convolutional neural network trained on micrographs of human leukocytes and spherical particles. Drop in a fluorescent image: the model slices it into roughly 100 px tiles, regresses a per-tile count, and sums the results.

  • 7,878,529 parameters
  • 128 × 128 tile input
  • 12 conv layers

Method

A small VGG with a regression head.

The architecture is a sequential convolutional network with four progressively deeper blocks. Each block stacks two or three 3×3 convolutions, followed by 2×2 max pooling and dropout at rate 0.3. The dense head regresses a single non-negative scalar: the predicted cell count for that 128×128 tile.

A ZeroPadding2D(40, 40) layer after the first convolution enlarges the feature map from 126 to 206 px per side, giving the deeper layers more spatial extent to compress. The full image is split into a uniform grid of roughly 100 px tiles; each tile is resized to 128×128×3 and batched through the network. The total cell count is the ceiling of the summed per-tile predictions.
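The tile-and-sum inference step can be sketched in a few lines of numpy. The exact grid logic (`tile_image` and its rounding rule) is an assumption for illustration, not the thesis code; resizing each tile to 128×128×3 with an image library is omitted.

```python
import math
import numpy as np

def tile_image(img, tile=100):
    # Split an H×W×3 image into a uniform grid of roughly `tile` px tiles.
    h, w = img.shape[:2]
    ny, nx = max(1, round(h / tile)), max(1, round(w / tile))
    ys = np.linspace(0, h, ny + 1, dtype=int)
    xs = np.linspace(0, w, nx + 1, dtype=int)
    return [img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            for i in range(ny) for j in range(nx)]

def total_count(per_tile_predictions):
    # Total cell count = ceiling of the summed per-tile regressions.
    return math.ceil(sum(per_tile_predictions))
```

For a 1024×1024 image this yields a 10×10 grid of ~102 px tiles; `total_count([1.2, 0.4, 0.9])` returns 3.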

Training used MSE loss with the Adam optimizer (lr = 1e-3) and batch size 16, on fluorescent micrographs of human leukocytes and spherical particles. Architecture, dataset curation, and training pipeline by Harshitha Govindaraju (M.S. thesis, Rutgers, 2021).
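As a minimal sketch of the stated objective, the loop below optimizes MSE with a hand-rolled Adam update at lr = 1e-3. Only the loss and optimizer mirror the document; the one-weight linear "model" and synthetic data are stand-ins for the CNN and the micrograph dataset.

```python
import numpy as np

def adam_step(w, g, state, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update: biased first/second moments, bias correction, step.
    m, v = state
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v)

# Fit y = 2x with a single weight under MSE, one "batch" of 16 samples.
x = np.linspace(0.0, 1.0, 16)
y = 2.0 * x
w, state = 0.0, (0.0, 0.0)
mse0 = np.mean((w * x - y) ** 2)
for t in range(1, 4001):
    grad = np.mean(2.0 * (w * x - y) * x)   # d(MSE)/dw
    w, state = adam_step(w, grad, state, t)
mse = np.mean((w * x - y) ** 2)
```

Because Adam's per-step displacement is bounded by roughly the learning rate, a few thousand steps at 1e-3 suffice to drive `w` near 2.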

Layer            Output
input                128 × 128 ×   3
conv2d   32f 3×3     126 × 126 ×  32
zeropad      +40     206 × 206 ×  32
conv2d   32f 3×3     204 × 204 ×  32
conv2d   32f 3×3     202 × 202 ×  32
maxpool  2×2         101 × 101 ×  32
dropout  0.3
─────────────────────────────────────
conv2d   64f 3×3      99 ×  99 ×  64
conv2d   64f 3×3      97 ×  97 ×  64
conv2d   64f 3×3      95 ×  95 ×  64
maxpool  2×2          47 ×  47 ×  64
dropout  0.3
─────────────────────────────────────
conv2d  128f 3×3      45 ×  45 × 128
conv2d  128f 3×3      43 ×  43 × 128
conv2d  128f 3×3      41 ×  41 × 128
maxpool  2×2          20 ×  20 × 128
dropout  0.3
─────────────────────────────────────
conv2d  128f 3×3      18 ×  18 × 128
conv2d  128f 3×3      16 ×  16 × 128
conv2d  128f 3×3      14 ×  14 × 128
maxpool  2×2           7 ×   7 × 128
dropout  0.3
─────────────────────────────────────
flatten                          6272
dense  + LReLU + BN              1024
dense  + LReLU + BN               512
dense  + ReLU                       1
─────────────────────────────────────
total parameters          7,878,529
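The table's shapes and the parameter total follow from simple arithmetic. This check assumes 'valid' 3×3 convolutions, stride-2 pooling, and Keras-style BatchNorm counting (4 parameters per feature, including the non-trainable moving statistics); LeakyReLU, ReLU, and dropout contribute no parameters.

```python
def conv_out(n): return n - 2            # 'valid' 3×3 convolution
def pool_out(n): return n // 2           # 2×2 max pool, stride 2
def conv_params(cin, cout): return cin * cout * 9 + cout
def dense_params(fin, fout): return fin * fout + fout
def bn_params(f): return 4 * f           # gamma, beta + moving mean/variance

n, params = 128, 0
# Block 1: conv(3→32), zero-pad 40 px per side, conv, conv, pool.
n = conv_out(n); params += conv_params(3, 32)      # 126
n += 2 * 40                                        # 206 after ZeroPadding2D
for _ in range(2):
    n = conv_out(n); params += conv_params(32, 32) # 204, 202
n = pool_out(n)                                    # 101
# Blocks 2-4: three convs each, then pool.
for cin, cout in [(32, 64), (64, 128), (128, 128)]:
    n = conv_out(n); params += conv_params(cin, cout)
    for _ in range(2):
        n = conv_out(n); params += conv_params(cout, cout)
    n = pool_out(n)                                # 47, 20, 7
flat = n * n * 128                                 # 7 × 7 × 128 = 6272
params += dense_params(flat, 1024) + bn_params(1024)
params += dense_params(1024, 512) + bn_params(512)
params += dense_params(512, 1)
```

Running this reproduces the table: a final 7×7×128 map, 6,272 flattened features, and 7,878,529 parameters in total.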

Best results

Best results come on images that resemble the training data: fluorescent micrographs of human leukocytes and spherical particles at resolutions above 1024² px. This is a research demonstration, not a clinical instrument.

Author

Designed and built by Harshitha Govindaraju.


Lead author · Architecture · Training · Web application

Harshitha Govindaraju

M.S., Biomedical Engineering · Rutgers University

Designed the convolutional architecture, curated and labeled the training dataset, ran the training pipeline, and built the original web application. The cell_count_v2 model and the 2021 Rutgers M.S. thesis on which this site is based are her work.

Acknowledgments

  • Muhammad Ahsan Sami

    Image acquisition support · Ph.D., Electrical & Computer Engineering, Rutgers

  • Dr. Umer Hassan

    Faculty advisor · Assistant Professor, Hassan Lab, Rutgers ECE