MobileDet-EdgeTPU

Environment Setup

Setup AI Model Efficiency Toolkit (AIMET)

Please install and set up AIMET before proceeding further. This evaluation was run using AIMET 1.25 for TensorFlow 2.4, i.e. set release_tag="1.25" and AIMET_VARIANT="tf_gpu" in the above instructions.

Additional dependencies:

pip install pycocotools
pip install --upgrade tf_slim
pip install numpy==1.19.5

Append the repo location to your PYTHONPATH by doing the following:

export PYTHONPATH=$PYTHONPATH:/<path to parent>/aimet-model-zoo

Dataset

The 2017 COCO dataset in TFRecord format is needed. There are two options for downloading and processing the MSCOCO dataset:

  • Option 1: Download and process the MSCOCO dataset with the provided script:

cd models/research/object_detection/dataset_tools
./download_and_preprocess_mscoco.sh <mscoco_dir>

  • Option 2: If the COCO dataset is already available, or you want to download it separately:

python object_detection/dataset_tools/create_coco_tf_record.py \
 --logtostderr \
 --include_masks \
 --train_image_dir=./MSCOCO_PATH/images/train2017/ \
 --val_image_dir=./MSCOCO_PATH/images/val2017/ \
 --test_image_dir=./MSCOCO_PATH/images/test2017/ \
 --train_annotations_file=./MSCOCO_PATH/annotations/instances_train2017.json \
 --val_annotations_file=./MSCOCO_PATH/annotations/instances_val2017.json \
 --testdev_annotations_file=./MSCOCO_PATH/annotations/image_info_test2017.json \
 --output_dir=./OUTPUT_DIR/

Note: The --include_masks option must be used.
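To sanity-check the generated TFRecords without loading TensorFlow, the shards can be counted by walking the TFRecord framing directly (a minimal sketch; the shard path passed in is an assumption, adjust it to your --output_dir file names):

```python
import struct

def count_tfrecords(path):
    """Count records in one TFRecord shard by walking its framing:
    each record is [8-byte little-endian length][4-byte CRC][payload][4-byte CRC]."""
    count = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # end of file (or truncated shard)
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)  # skip length CRC, payload, payload CRC
            count += 1
    return count
```

Summing the counts across all validation shards should give 5000, the number of images in COCO val2017.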


Model checkpoint for AIMET optimization

  • Downloading of model checkpoints is handled by the evaluation script.
  • The checkpoint used for AIMET quantization can be downloaded from the Releases page.

Usage

python aimet_zoo_tensorflow/mobiledetedgetpu/evaluators/mobiledet_edgetpu_quanteval.py \
 --model-config <model configuration to test> \
 --dataset-path <path to tfrecord dataset> \
 --annotation-json-file <path to instances json file>/instances_val2017.json

Supported model configurations are:

  • mobiledet_w8a8

Quantization configuration

In the evaluation script included, we have manually configured the quantizer ops with the following assumptions:

  • Weight quantization: 8 bits, per-tensor symmetric quantization
  • Bias parameters are not quantized
  • Activation quantization: 8 bits, asymmetric quantization
  • Model inputs are not quantized
  • The TF quantization scheme was used for weights
  • The TF quantization scheme was used for activations
  • Weights are optimized by per-tensor AdaRound with the TF_enhanced scheme
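The weight and activation settings above can be illustrated with a minimal numpy sketch (not AIMET's actual implementation; the range calibration here is a plain min/max assumption):

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    """Per-tensor symmetric quantization (used for weights):
    a single scale for the whole tensor, signed grid centered at zero."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8 bits
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                           # dequantized ("fake-quantized") values

def quantize_asymmetric(x, bits=8):
    """Per-tensor asymmetric quantization (used for activations):
    unsigned grid with a zero-point offset so the range need not straddle zero."""
    qmax = 2 ** bits - 1                       # 255 for 8 bits
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / qmax
    zero_point = np.round(-lo / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax)
    return (q - zero_point) * scale
```

The round-trip error of either function is bounded by half a quantization step, which is what AdaRound then improves on for weights by learning whether each value rounds up or down.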