[TensorRT] Batch Inference Comparison

Overview

Keywords: TensorRT, INT8, FP16, batch inference

Note: these test results are flawed. For a corrected benchmark, see: [TensorRT] trtexec dynamic-batch support and batch inference timing.

The INT8 quantization article claims that NVIDIA TensorRT's INT8 inference gets faster at larger batch sizes; here we measure that in practice.

  1. DDRNet-23 at FP16 precision, run through TensorRT's Python API. Batched inference shows no meaningful improvement:

with batch:1, inference time:0.0089 s
with batch:2, inference time:0.0078 s
with batch:3, inference time:0.0076 s
with batch:4, inference time:0.0074 s
with batch:5, inference time:0.0075 s
with batch:6, inference time:0.0072 s
with batch:7, inference time:0.0075 s
with batch:8, inference time:0.0073 s
with batch:9, inference time:0.0077 s
with batch:10, inference time:0.0080 s
with batch:11, inference time:0.0089 s
with batch:12, inference time:0.0090 s
with batch:13, inference time:0.0089 s
with batch:14, inference time:0.0105 s
with batch:15, inference time:0.0087 s
with batch:16, inference time:0.0083 s
with batch:17, inference time:0.0079 s
with batch:18, inference time:0.0080 s
with batch:19, inference time:0.0080 s
with batch:20, inference time:0.0079 s
with batch:21, inference time:0.0079 s
with batch:22, inference time:0.0079 s
with batch:23, inference time:0.0078 s
with batch:24, inference time:0.0078 s
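
The per-batch-size timings above can be collected with a harness like the one below. This is a minimal sketch: `infer` and `make_batch` are hypothetical stand-ins for the actual TensorRT execution-context call and input preparation, which are not shown in the original post.

```python
import time

def benchmark(infer, make_batch, max_batch=24, warmup=5, iters=50):
    # Measure average latency at each batch size from 1 to max_batch.
    results = {}
    for bs in range(1, max_batch + 1):
        batch = make_batch(bs)           # hypothetical input-prep helper
        for _ in range(warmup):          # warm-up runs, excluded from timing
            infer(batch)
        start = time.perf_counter()
        for _ in range(iters):
            infer(batch)
        results[bs] = (time.perf_counter() - start) / iters
        print(f"with batch:{bs}, inference time:{results[bs]:.4f} s")
    return results
```

Warm-up iterations matter here: the first TensorRT invocations typically include lazy initialization and would otherwise skew the average.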

  2. HRNet-OCR-W18 at INT8 precision:

with batch:1, inference time:0.0109 s
with batch:2, inference time:0.0088 s
with batch:3, inference time:0.0081 s
with batch:4, inference time:0.0078 s
with batch:5, inference time:0.0076 s
with batch:6, inference time:0.0074 s
with batch:7, inference time:0.0077 s
with batch:8, inference time:0.0075 s
with batch:9, inference time:0.0075 s
with batch:10, inference time:0.0083 s
with batch:11, inference time:0.0081 s
with batch:12, inference time:0.0080 s
with batch:13, inference time:0.0080 s
with batch:14, inference time:0.0082 s
with batch:15, inference time:0.0085 s
with batch:16, inference time:0.0080 s
with batch:17, inference time:0.0083 s
with batch:18, inference time:0.0082 s
with batch:19, inference time:0.0083 s
with batch:20, inference time:0.0082 s
with batch:21, inference time:0.0084 s
with batch:22, inference time:0.0089 s
with batch:23, inference time:0.0091 s
with batch:24, inference time:0.0089 s
with batch:25, inference time:0.0084 s
with batch:26, inference time:0.0079 s
with batch:27, inference time:0.0079 s
with batch:28, inference time:0.0081 s
with batch:29, inference time:0.0086 s
with batch:30, inference time:0.0086 s
with batch:31, inference time:0.0084 s

Summary:
In these measurements, neither INT8 nor FP16 shows a meaningful speedup from batching.

1. From https://blog.csdn.net/zhou_438/article/details/112823818, per-image inference time only starts to improve once the batch size exceeds 32.
2. The results here likewise show essentially no change between batch sizes 1 and 2.
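
As a quick arithmetic check on the conclusion (taking the reported FP16 DDRNet-23 times as per-image latencies, which is how the summary reads):

```python
# FP16 DDRNet-23 timings copied from the table above (batch size -> seconds)
fp16 = {1: 0.0089, 2: 0.0078, 24: 0.0078}

# Relative speedup of a large-batch run over batch-1
speedup = fp16[1] / fp16[24]
print(f"batch-24 vs batch-1: {speedup:.2f}x")  # ~1.14x, i.e. marginal
```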
