Notes on using Conan for source and dependency management: what to do when a required package cannot be found

Problem

In an earlier post (building a liveness-detection demo on the CelebA_Spoof dataset, using MNN for inference and Conan for package management), some readers tried it out and asked me why, after installing Conan and following the steps, the build failed with the message shown below.
(screenshot: Conan error reporting that the MNN package cannot be found)

In other words, none of the configured repositories contain an MNN package at the requested version. Conan's normal resolution logic is: during a build it first checks the local cache, and if a matching recipe and binary for the target platform exist there, they are used. Otherwise it queries the configured private remote (see here for how to set one up), and failing that, ConanCenter, Conan's official central repository. The error in the screenshot means the package was found in none of them: not in the local cache and not in the private remote. The underlying reason is that nobody has ever packaged MNN and uploaded it to the central repository, and I had not uploaded it to the private remote the first time I set it up either. Well-known libraries such as OpenCV, by contrast, are available because their popularity means someone has long since published them.
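The lookup order described above can be sketched as a toy resolver. This is an illustrative model only, not Conan's actual implementation; the remote names and package sets below are made up:

```python
# Toy model of Conan's package-resolution order (illustrative only, not
# Conan's real code): the local cache is checked first, then each
# configured remote in order. Remote names here are hypothetical.

def resolve(ref, local_cache, remotes):
    """Return where `ref` (e.g. 'mnn/2.4.0') would be found, or None."""
    if ref in local_cache:
        return "local cache"
    for name, packages in remotes:      # remotes are searched in order
        if ref in packages:
            return name
    return None                         # -> the "unable to find" error

cache = {"opencv/4.5.5"}
remotes = [
    ("my-private-repo", {"boost/1.81.0"}),
    ("conancenter", {"opencv/4.5.5", "boost/1.81.0"}),
]

print(resolve("opencv/4.5.5", cache, remotes))  # found in the local cache
print(resolve("mnn/2.4.0", cache, remotes))     # nobody uploaded it: None
```

This is exactly the situation in the screenshot: popular libraries resolve from the cache or a remote, while MNN resolves nowhere because no one ever published it.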

What to do

Package it yourself: wrap MNN as a Conan package so that it lands in the local cache, and optionally upload it to your own private remote. This is what `conan create` and a `conanfile.py` recipe are for; for a detailed walkthrough see the blog post 《conan入门(十九):封装第三方开源库cpp_redis示例》.
Let's use MNN 2.4.0 as the example (I already have 1.2.7 locally).
First we need a `conanfile.py`. You can generate a template with `conan new` and fill it in yourself, or use the file below:

from conan import ConanFile
from conan.tools.cmake import CMake, CMakeDeps, CMakeToolchain
from conan.tools.env import VirtualBuildEnv
import os

class MnnConan(ConanFile):
    name = "mnn"
    version = "2.4.0"
    # Optional metadata
    url = "https://github.com/alibaba/MNN"
    description = "a highly efficient and lightweight deep learning framework"
    topics = ("deep learning","ai","mnn")

    tool_requires = "cmake/[>=3.15.7]"
    package_type = "library"
    # Binary configuration
    settings = "os", "compiler", "build_type", "arch"
    options = {
        "use_system_lib": [True, False],
        "build_hard": [True, False],
        "shared": [True, False],
        "win_runtime_mt": [True, False],
        "orbid_multi_thread": [True, False],
        "openmp": [True, False],
        "use_thread_pool": [True, False],
        "build_train": [True, False],
        "build_demo": [True, False],
        "build_tools": [True, False],
        "build_quantools": [True, False],
        "evaluation": [True, False],
        "build_converter": [True, False],
        "support_tflite_quan": [True, False],
        "debug_memory": [True, False],
        "debug_tensor_size": [True, False],
        "gpu_trace": [True, False],
        "portable_build": [True, False],
        "sep_build": [True, False],
        "aapl_fmwk": [True, False],
        "with_plugin": [True, False],
        "build_mini": [True, False],
        "use_sse": [True, False],
        "build_codegen": [True, False],
        "enable_coverage": [True, False],
        "build_protobuffer": [True, False],
        "build_opencv": [True, False],
        "internal_features": [True, False],
        "metal": [True, False],
        "opencl": [True, False],
        "opengl": [True, False],
        "vulkan": [True, False],
        "arm82": [True, False],
        "onednn": [True, False],
        "avx512": [True, False],
        "cuda": [True, False],
        "tensorrt": [True, False],
        "coreml": [True, False],
        "build_benchmark": [True, False],
        "build_test": [True, False],
        "build_for_android_command": [True, False],
        "use_logcat": [True, False],
        "use_cpp11": [True, False],
        }
    default_options = {
        "use_system_lib": False,
        "build_hard": False,
        "shared": False, 
        "win_runtime_mt": False, 
        "orbid_multi_thread": False, 
        "openmp": False, 
        "use_thread_pool": True, 
        "build_train": False, 
        "build_demo": False, 
        "build_tools": True,
        "build_quantools": False,
        "evaluation": False,
        "build_converter": False,
        "support_tflite_quan": True,
        "debug_memory": False,
        "debug_tensor_size": False,
        "gpu_trace": False,
        "portable_build": False,
        "sep_build": False,
        "aapl_fmwk": False,
        "with_plugin": False,
        "build_mini": False,
        "use_sse": True,
        "build_codegen": False,
        "enable_coverage": False,
        "build_protobuffer": True,
        "build_opencv": False,
        "internal_features": False,
        "metal": False,
        "opencl": False,
        "opengl": False,
        "vulkan": False,
        "arm82": False,
        "onednn": False,
        "avx512": False,
        "cuda": False,
        "tensorrt": False,
        "coreml": False,
        "build_benchmark": False,
        "build_test": False,
        "build_for_android_command": False,
        "use_logcat": True,
        "use_cpp11": True,
        }
    # Sources are located in the same place as this recipe, copy them to the recipe
    exports_sources = "CMakeLists.txt",  "*/*"

    def generate(self):
        tc = CMakeToolchain(self)
        tc.variables["MNN_USE_SYSTEM_LIB"] = self.options.use_system_lib
        tc.variables["MNN_BUILD_HARD"] = self.options.build_hard
        tc.variables["MNN_BUILD_SHARED_LIBS"] = self.options.shared
        tc.variables["MNN_WIN_RUNTIME_MT"] = self.options.win_runtime_mt
        tc.variables["MNN_FORBID_MULTI_THREAD"] = self.options.orbid_multi_thread
        tc.variables["MNN_OPENMP"] = self.options.openmp
        tc.variables["MNN_USE_THREAD_POOL"] = self.options.use_thread_pool
        tc.variables["MNN_BUILD_TRAIN"] = self.options.build_train
        tc.variables["MNN_BUILD_DEMO"] = self.options.build_demo
        tc.variables["MNN_BUILD_TOOLS"] = self.options.build_tools
        tc.variables["MNN_BUILD_QUANTOOLS"] = self.options.build_quantools
        tc.variables["MNN_EVALUATION"] = self.options.evaluation
        tc.variables["MNN_BUILD_CONVERTER"] = self.options.build_converter
        tc.variables["MNN_SUPPORT_TFLITE_QUAN"] = self.options.support_tflite_quan
        tc.variables["MNN_DEBUG_MEMORY"] = self.options.debug_memory
        tc.variables["MNN_DEBUG_TENSOR_SIZE"] = self.options.debug_tensor_size
        tc.variables["MNN_GPU_TRACE"] = self.options.gpu_trace
        tc.variables["MNN_PORTABLE_BUILD"] = self.options.portable_build
        tc.variables["MNN_SEP_BUILD"] = self.options.sep_build
        tc.variables["MNN_AAPL_FMWK"] = self.options.aapl_fmwk
        tc.variables["MNN_WITH_PLUGIN"] = self.options.with_plugin
        tc.variables["MNN_BUILD_MINI"] = self.options.build_mini
        tc.variables["MNN_USE_SSE"] = self.options.use_sse
        tc.variables["MNN_BUILD_CODEGEN"] = self.options.build_codegen
        tc.variables["MNN_ENABLE_COVERAGE"] = self.options.enable_coverage
        tc.variables["MNN_BUILD_PROTOBUFFER"] = self.options.build_protobuffer
        tc.variables["MNN_BUILD_OPENCV"] = self.options.build_opencv
        tc.variables["MNN_INTERNAL"] = self.options.internal_features
        # backend options
        tc.variables["MNN_METAL"] = self.options.metal
        tc.variables["MNN_OPENCL"] = self.options.opencl
        tc.variables["MNN_OPENGL"] = self.options.opengl
        tc.variables["MNN_VULKAN"] = self.options.vulkan
        tc.variables["MNN_ARM82"] = self.options.arm82
        tc.variables["MNN_ONEDNN"] = self.options.onednn
        tc.variables["MNN_AVX512"] = self.options.avx512
        tc.variables["MNN_CUDA"] = self.options.cuda
        tc.variables["MNN_TENSORRT"] = self.options.tensorrt
        tc.variables["MNN_COREML"] = self.options.coreml
        # target options
        tc.variables["MNN_BUILD_BENCHMARK"] = self.options.build_benchmark
        tc.variables["MNN_BUILD_TEST"] = self.options.build_test
        tc.variables["MNN_BUILD_FOR_ANDROID_COMMAND"] = self.options.build_for_android_command
        tc.variables["MNN_USE_LOGCAT"] = self.options.use_logcat

        tc.variables["MNN_USE_CPP11"] = self.options.use_cpp11

        tc.generate()

        cd = CMakeDeps(self)
        cd.generate()

        env = VirtualBuildEnv(self)
        env.generate()

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        cmake = CMake(self)
        cmake.install()

    @property
    def _mnn_lib(self):
        _name = "MNN.lib" if self.settings.compiler == "msvc" else "libMNN.a"
        return os.path.join(self.package_folder, "lib", _name)

    def package_info(self):
        self.cpp_info.libs = ["MNN"]
        if not self.options.shared :
            if self.settings.compiler == "msvc":
                self.cpp_info.sharedlinkflags.extend(["/WHOLEARCHIVE:MNN"])
                self.cpp_info.exelinkflags = self.cpp_info.sharedlinkflags
            elif self.settings.compiler == "gcc":
                # about LINKER: ,see also https://cmake.org/cmake/help/latest/command/target_link_options.html#handling-compiler-driver-differences
                self.cpp_info.sharedlinkflags.extend(["LINKER:--whole-archive",self._mnn_lib,"LINKER:--no-whole-archive"])
                self.cpp_info.exelinkflags = self.cpp_info.sharedlinkflags
            elif self.settings.compiler == "clang":
                self.cpp_info.sharedlinkflags.extend(["LINKER:--whole-archive",self._mnn_lib,"LINKER:--no-whole-archive"])
                self.cpp_info.exelinkflags = self.cpp_info.sharedlinkflags

This file comes from 《conan入门(二十九):对阿里mnn进行Conan封装塈conans.CMake和conan.tools.cmake.CMake的区别》. It mainly maps the build options in MNN's CMakeLists.txt onto Conan options; in other words, it plays the same role for Conan that CMakeLists.txt plays for CMake.
Fetch the source from MNN's official repository, checking out the desired version:

git clone https://github.com/alibaba/MNN.git -b 2.4.0

Save the `conanfile.py` above into the root of the MNN source tree, then build. Here I use the msvc profile provided earlier to build the Windows version; if you need a cross-build for another platform, this profile is the only thing you have to swap out.

[settings]
os=Windows
compiler=msvc
# Visual Studio 2015 -> compiler.version=190
# Visual Studio 2017 -> compiler.version=191
# Visual Studio 2019 -> compiler.version=192
# Visual Studio 2022 -> compiler.version=193
compiler.version=192
# For Visual Studio 2015 you could use toolset v140, or v140_xp if you need Windows XP support
#compiler.toolset=v140_xp
# static: link the runtime library statically; dynamic: link it dynamically
compiler.runtime=static
compiler.cppstd=14
build_type=Release
arch=x86_64
[options]
boost/*:without_stacktrace=False
#[build_requires]
#[env]
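As noted above, targeting another platform only means swapping the host profile. For example, a Linux/gcc host profile might look like the following (the compiler version and libcxx values are illustrative; adjust them to your toolchain):

```ini
[settings]
os=Linux
compiler=gcc
compiler.version=11
compiler.libcxx=libstdc++11
compiler.cppstd=14
build_type=Release
arch=x86_64
```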



cd MNN
conan create . -pr:h msvc -pr:b default

Configuration starts and the recipe is exported to the local cache; Conan then builds the package using the toolchain described by the profile.
(screenshot: conan create output)
The built library ends up in the designated directory inside Conan's local cache:

(screenshot: the package in the local cache)
At this point the problem from the beginning of the article is already solved: building your own source now works, because Conan resolves the dependency from the local cache first. Still, so that the package survives an environment change, and so that teammates can use this version of the library too, it is best to upload it to the private remote:

conan upload mnn/2.4.0 -r conan

The `conan` in this command is the name of my private remote.

(screenshot: conan upload output)
With that, both your private remote and your local cache contain the MNN package that was previously missing, and building your own source now goes through.
(screenshot: the build resolving the cached package)
The cached version is found here and the build proceeds normally.
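For completeness, this is roughly how a consuming project would then pull the package in. This is a sketch: the exact `find_package` name and CMake target depend on how CMakeDeps maps the recipe above (which sets no explicit CMake properties, so the defaults derived from the package name `mnn` apply):

```ini
# conanfile.txt of the consuming project (sketch)
[requires]
mnn/2.4.0

[generators]
CMakeDeps
CMakeToolchain
```

In the consumer's CMakeLists.txt you would then call `find_package(mnn REQUIRED)` and link your targets against the generated library target.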
