Please install and set up AIMET before proceeding further.
This model was tested with the torch_gpu variant of AIMET 1.23.

Install scikit-image as follows:

```
pip install scikit-image
```

Add the parent directory of the model zoo to your PYTHONPATH:

```
export PYTHONPATH=$PYTHONPATH:<path to parent of aimet_model_zoo_path>
```
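As an optional sanity check (a minimal sketch, assuming the torch_gpu variant of AIMET and the PYTHONPATH export above), the relevant packages should now be importable:

```python
# Optional sanity check: confirm AIMET and the model zoo are importable.
# Assumes the torch_gpu variant of AIMET 1.23 and the PYTHONPATH export above.
import torch
import aimet_torch
import aimet_zoo_torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("aimet_zoo_torch located at:", aimet_zoo_torch.__file__)
```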
The Cityscapes dataset can be downloaded from here:
To run evaluation with QuantSim in AIMET, use the following (a filled-in example follows the configuration list below):

```
python3 aimet_zoo_torch/ffnet/evaluators/ffnet_quanteval.py \
    --model-config <configuration to be tested> \
    --dataset-path <path to directory containing Cityscapes> \
    --batch-size <batch size as an integer value, defaults to 2>
```
Available model configurations are:
- segmentation_ffnet40S_dBBB_mobile
- segmentation_ffnet54S_dBBB_mobile
- segmentation_ffnet78S_dBBB_mobile
- segmentation_ffnet78S_BCC_mobile_pre_down
- segmentation_ffnet122NS_CCC_mobile_pre_down
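For illustration, a filled-in invocation might look like the sketch below; the chosen configuration, dataset location, and batch size are hypothetical placeholders, not values from the original instructions:

```python
# Hypothetical driver for the evaluator CLI shown above.
# The configuration, dataset path, and batch size are placeholders; adjust to your setup.
import subprocess

subprocess.run(
    [
        "python3", "aimet_zoo_torch/ffnet/evaluators/ffnet_quanteval.py",
        "--model-config", "segmentation_ffnet40S_dBBB_mobile",
        "--dataset-path", "/data/cityscapes",  # directory containing the Cityscapes dataset
        "--batch-size", "2",                   # default batch size per the usage above
    ],
    check=True,
)
```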
- The original prepared FFNet checkpoint can be downloaded from here:
- The Quantization Simulation (Quantsim) Configuration file can be downloaded from here: default_config_per_channel.json (Please see this page for more information on this file).
- Weight quantization: 8 bits, per channel symmetric quantization
- Bias parameters are not quantized
- Activation quantization: 8 bits, asymmetric quantization
- Model inputs are quantized
- TF-Enhanced was used as the quantization scheme
- Cross-layer equalization (CLE) has been applied on the optimized checkpoint
- For low-resolution models (with the pre_down suffix), quantization is disabled for the GaussianConv2D layer
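These settings map onto AIMET's QuantizationSimModel API roughly as sketched below. This is a minimal illustration under stated assumptions, not the evaluator's exact code: the stand-in model, input size, and calibration callback are placeholders, while the bitwidths, quantization scheme, and per-channel configuration file follow the list above.

```python
# Rough sketch of how the settings above translate to AIMET's QuantSim API.
# Not the evaluator's exact code: the stand-in model, input size, and
# calibration callback are placeholders (a CUDA device is assumed).
import torch
from aimet_common.defs import QuantScheme
from aimet_torch.quantsim import QuantizationSimModel

# Stand-in for the prepared FFNet checkpoint (19 Cityscapes classes).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 19, kernel_size=3, padding=1)
).cuda().eval()

dummy_input = torch.rand(1, 3, 512, 1024).cuda()  # assumed input resolution

sim = QuantizationSimModel(
    model,
    dummy_input=dummy_input,
    quant_scheme=QuantScheme.post_training_tf_enhanced,  # TF-Enhanced scheme
    default_param_bw=8,   # 8-bit weight quantization
    default_output_bw=8,  # 8-bit activation quantization
    config_file="default_config_per_channel.json",  # per-channel config from above
)

def pass_calibration_data(sim_model, _):
    # Placeholder calibration: run unlabeled data through the model so
    # QuantSim can compute activation encodings.
    sim_model.eval()
    with torch.no_grad():
        sim_model(dummy_input)

sim.compute_encodings(forward_pass_callback=pass_calibration_data,
                      forward_pass_callback_args=None)

# sim.model can then be evaluated on Cityscapes to obtain the INT8 mIoU.
```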
Below are the mIoU results of the PyTorch FFNet model for the Cityscapes dataset:
| Model Configuration | FP32 mIoU (%) | INT8 mIoU (%) |
|---|---|---|
| segmentation_ffnet78S_dBBB_mobile | 81.3 | 80.7 |
| segmentation_ffnet54S_dBBB_mobile | 80.8 | 80.1 |
| segmentation_ffnet40S_dBBB_mobile | 79.2 | 78.9 |
| segmentation_ffnet78S_BCC_mobile_pre_down | 80.6 | 80.4 |
| segmentation_ffnet122NS_CCC_mobile_pre_down | 79.3 | 79.0 |