Achieving TensorRT-7.0 Plugin Freedom! (Or: Using the TensorRT Plugin Feature Without Stepping on the Pitfalls) (5)

The official batchedNMS plugin supports a topK of at most 4096; anything larger crashes. You can break through this limit by modifying the source code (the kernel dispatch table shown below), but bugs still remain:

void (*kernel[])(const int, const int, const int, const int, const float, const bool, const bool,
                 float*, T_SCORE*, int*, T_SCORE*, int*, bool)
    = {P(1), P(2), P(3), P(4), P(5), P(6), P(7), P(8),
       P(9), P(10), P(11), P(12), P(13), P(14), P(15), P(16)};
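For reference on where topK enters the picture, here is a minimal sketch (not code from the article) that creates the official batchedNMS plugin through the createBatchedNMSPlugin helper declared in NvInferPlugin.h; the numClasses/keepTopK/threshold values are purely illustrative.

#include <NvInferPlugin.h>

// Minimal sketch: build the official batchedNMS plugin with an explicit topK.
// With the stock plugin, topK must stay <= 4096; going beyond that requires
// the kernel-table patch shown above.
nvinfer1::IPluginV2* makeBatchedNMS()
{
    nvinfer1::plugin::NMSParameters params{};
    params.shareLocation     = true;
    params.isNormalized      = true;
    params.backgroundLabelId = -1;
    params.numClasses        = 80;    // illustrative value
    params.topK              = 4096;  // the official upper limit
    params.keepTopK          = 200;   // illustrative value
    params.scoreThreshold    = 0.3f;
    params.iouThreshold      = 0.5f;
    return createBatchedNMSPlugin(params); // declared in NvInferPlugin.h
}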

About Plugin Registration

Here is a brief rundown of the plugin registration flow.

When the NvInferRuntimeCommon.h header is included we get a getPluginRegistry function; the registry it returns holds every IPluginCreator that has been registered, and at use time we fetch the one we need through its getPluginCreator function.
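As a small illustration of that lookup path (a sketch, not code from the article), assuming the official batchedNMS creator has already been registered under the type name "BatchedNMS_TRT", version "1":

#include <NvInferRuntimeCommon.h>

// Minimal sketch: fetch a registered creator from the global registry and ask
// it to build a plugin. The creator is only found if it was registered first
// (e.g. via initLibNvInferPlugins or REGISTER_TENSORRT_PLUGIN, see below).
nvinfer1::IPluginV2* createFromRegistry()
{
    auto* creator = getPluginRegistry()->getPluginCreator("BatchedNMS_TRT", "1");
    if (creator == nullptr)
    {
        return nullptr; // nothing was registered under this type/version
    }
    nvinfer1::PluginFieldCollection fc{}; // real code would fill in the plugin fields
    return creator->createPlugin("batched_nms_layer", &fc);
}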

There are two ways to register a plugin. The first can be seen in the official plugin code:

extern "C" { bool initLibNvInferPlugins(void* logger, const char* libNamespace) { initializePlugin<nvinfer1::plugin::GridAnchorPluginCreator>(logger, libNamespace); initializePlugin<nvinfer1::plugin::NMSPluginCreator>(logger, libNamespace); initializePlugin<nvinfer1::plugin::ReorgPluginCreator>(logger, libNamespace); ... return true; }

Inside initLibNvInferPlugins, each initializePlugin call runs the addPluginCreator function:

template <typename CreatorType>
void initializePlugin(void* logger, const char* libNamespace)
{
    PluginCreatorRegistry::getInstance().addPluginCreator<CreatorType>(logger, libNamespace);
}

addPluginCreator in turn calls getPluginRegistry()->registerCreator to register the pluginCreator, which completes the registration:

void addPluginCreator(void* logger, const char* libNamespace)
{
    ...
    if (mRegistryList.find(pluginType) == mRegistryList.end())
    {
        bool status = getPluginRegistry()->registerCreator(*pluginCreator, libNamespace);
        if (status)
        {
            mRegistry.push(std::move(pluginCreator));
            mRegistryList.insert(pluginType);
            verboseMsg = "Plugin creator registration succeeded - " + pluginType;
        }
        else
        {
            errorMsg = "Could not register plugin creator: " + pluginType;
        }
    }
    else
    {
        verboseMsg = "Plugin creator already registered - " + pluginType;
    }
    ...
}

The other way is to register directly through the REGISTER_TENSORRT_PLUGIN macro:

//!
//! \brief Return the plugin registry
//!
// Including the NvInferRuntimeCommon.h header gives us getPluginRegistry
extern "C" TENSORRTAPI nvinfer1::IPluginRegistry* getPluginRegistry();

namespace nvinfer1
{

template <typename T>
class PluginRegistrar
{
public:
    PluginRegistrar() { getPluginRegistry()->registerCreator(instance, ""); }

private:
    T instance{};
};

#define REGISTER_TENSORRT_PLUGIN(name) \
    static nvinfer1::PluginRegistrar<name> pluginRegistrar##name {}

} // namespace nvinfer1

In other words, if REGISTER_TENSORRT_PLUGIN(BatchedNMSPluginCreator); is already present in the plugin's .h file, there is no need to write something like the official initLibNvInferPlugins() function and register every creator one by one.
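For a custom plugin this typically looks like the skeleton header below (a hypothetical sketch: MyPluginCreator and its nullptr returns are placeholders, and in real code createPlugin/deserializePlugin must return an actual IPluginV2 implementation).

#include <NvInferRuntimeCommon.h>
#include <string>

// Hypothetical skeleton showing where REGISTER_TENSORRT_PLUGIN goes. The
// interface below matches IPluginCreator in TensorRT 7 (no noexcept yet).
class MyPluginCreator : public nvinfer1::IPluginCreator
{
public:
    const char* getPluginName() const override { return "MyPlugin_TRT"; }
    const char* getPluginVersion() const override { return "1"; }
    const nvinfer1::PluginFieldCollection* getFieldNames() override { return &mFC; }

    nvinfer1::IPluginV2* createPlugin(const char* /*name*/,
                                      const nvinfer1::PluginFieldCollection* /*fc*/) override
    {
        return nullptr; // real code: parse fc and construct the plugin
    }

    nvinfer1::IPluginV2* deserializePlugin(const char* /*name*/, const void* /*data*/,
                                           size_t /*length*/) override
    {
        return nullptr; // real code: rebuild the plugin from its serialized bytes
    }

    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    nvinfer1::PluginFieldCollection mFC{};
    std::string mNamespace;
};

// Expands to a static PluginRegistrar<MyPluginCreator>, whose constructor calls
// getPluginRegistry()->registerCreator() when the library is loaded.
REGISTER_TENSORRT_PLUGIN(MyPluginCreator);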

References

https://github.com/NVIDIA/TensorRT/tree/release/7.0/plugin
https://github.com/triton-inference-server/server/issues/767
https://blog.csdn.net/u010552731/article/details/106520241

https://forums.developer.nvidia.com/t/tensorrt-5-1-6-custom-plugin-with-fp16-issue/84132/4
https://forums.developer.nvidia.com/t/tensorrt-cask-error-in-checkcaskexecerror-false-7-cask-convolution-execution/109735
https://github.com/NVIDIA/TensorRT/tree/release/7.0/samples/opensource/samplePlugin
https://forums.developer.nvidia.com/t/unable-to-run-two-tensorrt-models-in-a-cascade-manner/145274/2

DCNv2 (GitHub)

https://github.com/CharlesShang/DCNv2
https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch
