
"Undefined symbols for architecture arm64" on Apple M1 Max when compiling Inference library #44712

Closed
AzureUni opened this issue Jul 28, 2022 · 5 comments
Labels
status/following-up 跟进中 type/build 编译/安装问题

AzureUni commented Jul 28, 2022

Issue Description

Referenced documentation: https://paddle-inference.readthedocs.io/en/latest/guides/install/compile/source_compile_under_MacOS.html

Command to generate the Makefile:

cmake .. -DPY_VERSION=3.9 -DPYTHON_INCLUDE_DIR=${PYTHON_INCLUDE_DIRS} \
    -DPYTHON_LIBRARY=${PYTHON_LIBRARY} -DWITH_GPU=OFF -DWITH_TESTING=OFF \
    -DWITH_AVX=OFF -DWITH_ARM=ON -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON

Command to compile:

make inference_lib_dist TARGET=ARMV8 -j8

Error Message:

Undefined symbols for architecture arm64:
"void phi::Copy<phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::Place, bool, phi::DenseTensor*)", referenced from:
void phi::RnnFunc<phi::LSTMCell, phi::Layer, phi::SingleLayer, phi::BidirLayer, float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const*, std::__1::vector<phi::DenseTensor const*, std::__1::allocator<phi::DenseTensor const*> > const&, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, float, bool, int, phi::DenseTensor*) in libphi_static_1.a(rnn_kernel.cc.o)
void phi::RnnFunc<phi::SimpleRNNCell<float, phi::funcs::ReluCPUFunctor, (phi::funcs::detail::ActivationType)2>, phi::Layer, phi::SingleLayer, phi::BidirLayer, float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const*, std::__1::vector<phi::DenseTensor const*, std::__1::allocator<phi::DenseTensor const*> > const&, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, float, bool, int, phi::DenseTensor*) in libphi_static_1.a(rnn_kernel.cc.o)
void phi::RnnFunc<phi::SimpleRNNCell<float, phi::funcs::TanhFunctor, (phi::funcs::detail::ActivationType)4>, phi::Layer, phi::SingleLayer, phi::BidirLayer, float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const*, std::__1::vector<phi::DenseTensor const*, std::__1::allocator<phi::DenseTensor const*> > const&, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, float, bool, int, phi::DenseTensor*) in libphi_static_1.a(rnn_kernel.cc.o)
void phi::RnnFunc<phi::GRUCell, phi::Layer, phi::SingleLayer, phi::BidirLayer, float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const*, std::__1::vector<phi::DenseTensor const*, std::__1::allocator<phi::DenseTensor const*> > const&, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, float, bool, int, phi::DenseTensor*) in libphi_static_1.a(rnn_kernel.cc.o)
void phi::RnnFunc<phi::LSTMCell, phi::Layer, phi::SingleLayer, phi::BidirLayer, double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const*, std::__1::vector<phi::DenseTensor const*, std::__1::allocator<phi::DenseTensor const*> > const&, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, float, bool, int, phi::DenseTensor*) in libphi_static_1.a(rnn_kernel.cc.o)
void phi::RnnFunc<phi::SimpleRNNCell<double, phi::funcs::ReluCPUFunctor, (phi::funcs::detail::ActivationType)2>, phi::Layer, phi::SingleLayer, phi::BidirLayer, double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const*, std::__1::vector<phi::DenseTensor const*, std::__1::allocator<phi::DenseTensor const*> > const&, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, float, bool, int, phi::DenseTensor*) in libphi_static_1.a(rnn_kernel.cc.o)
void phi::RnnFunc<phi::SimpleRNNCell<double, phi::funcs::TanhFunctor, (phi::funcs::detail::ActivationType)4>, phi::Layer, phi::SingleLayer, phi::BidirLayer, double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const*, std::__1::vector<phi::DenseTensor const*, std::__1::allocator<phi::DenseTensor const*> > const&, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, float, bool, int, phi::DenseTensor*) in libphi_static_1.a(rnn_kernel.cc.o)
...
"void phi::Copy<phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::Place, bool, phi::SparseCooTensor*)", referenced from:
void phi::sparse::ElementWiseAddCooGradCPUKernel<float, int, phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor*, phi::SparseCooTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCooGradCPUKernel<float, long long, phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor*, phi::SparseCooTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCooGradCPUKernel<float, signed char, phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor*, phi::SparseCooTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCooGradCPUKernel<float, unsigned char, phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor*, phi::SparseCooTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCooGradCPUKernel<float, short, phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor*, phi::SparseCooTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCooGradCPUKernel<double, int, phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor*, phi::SparseCooTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCooGradCPUKernel<double, long long, phi::CPUContext>(phi::CPUContext const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor const&, phi::SparseCooTensor*, phi::SparseCooTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
...
"void phi::Copy<phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::Place, bool, phi::SparseCsrTensor*)", referenced from:
void phi::sparse::ElementWiseAddCsrGradCPUKernel<float, int, phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor*, phi::SparseCsrTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCsrGradCPUKernel<float, long long, phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor*, phi::SparseCsrTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCsrGradCPUKernel<float, signed char, phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor*, phi::SparseCsrTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCsrGradCPUKernel<float, unsigned char, phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor*, phi::SparseCsrTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCsrGradCPUKernel<float, short, phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor*, phi::SparseCsrTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCsrGradCPUKernel<double, int, phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor*, phi::SparseCsrTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
void phi::sparse::ElementWiseAddCsrGradCPUKernel<double, long long, phi::CPUContext>(phi::CPUContext const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor const&, phi::SparseCsrTensor*, phi::SparseCsrTensor*) in libphi_static_1.a(elementwise_grad_kernel.cc.o)
...
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, double, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, double, int>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, double, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, double, long long>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, float, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, float, int>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, float, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, float, long long>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, int, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, int, int>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, int, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, int, long long>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, long long, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, long long, int>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::SegmentPoolFunctor<phi::CPUContext, long long, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentKernelLaunchHelper<phi::CPUContext, long long, long long>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(segment_pool_kernel.cc.o)
"phi::funcs::MatrixReduceSumFunctor<double, phi::CPUContext>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor*)", referenced from:
void phi::CholeskySolveGradKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(cholesky_solve_grad_kernel.cc.o)
void phi::TriangularSolveGradKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, bool, bool, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(triangular_solve_grad_kernel.cc.o)
"phi::funcs::MatrixReduceSumFunctor<float, phi::CPUContext>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor*)", referenced from:
void phi::CholeskySolveGradKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(cholesky_solve_grad_kernel.cc.o)
void phi::TriangularSolveGradKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, bool, bool, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(triangular_solve_grad_kernel.cc.o)
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, double, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentPoolGradKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, double, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentPoolGradKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, float, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentPoolGradKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, float, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentPoolGradKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
1 warning generated.
[ 97%] Linking CXX static library libir_analysis_pass.a
[ 97%] Built target ir_analysis_pass
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, int, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
1 warning generated.
void phi::SegmentPoolGradKernel<int, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
[ 97%] Linking CXX static library libir_params_sync_among_devices_pass.a
[ 97%] Built target ir_params_sync_among_devices_pass
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, int, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentPoolGradKernel<int, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, long long, int>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentPoolGradKernel<long long, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
"phi::funcs::SegmentPoolGradFunctor<phi::CPUContext, long long, long long>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, paddle::optional<phi::DenseTensor> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)", referenced from:
void phi::SegmentPoolGradKernel<long long, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, phi::DenseTensor*) in libphi_static_1.a(segment_pool_grad_kernel.cc.o)
"void phi::funcs::ModulatedDeformableIm2col<double, phi::CPUContext>(phi::CPUContext const&, double const*, double const*, double const*, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, int, double*)", referenced from:
void phi::DeformableConvKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, int, int, int, phi::DenseTensor*) in libphi_static_1.a(deformable_conv_kernel.cc.o)
void phi::DeformableConvGradKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, int, int, int, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(deformable_conv_grad_kernel.cc.o)
"void phi::funcs::ModulatedDeformableIm2col<float, phi::CPUContext>(phi::CPUContext const&, float const*, float const*, float const*, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, int, float*)", referenced from:
void phi::DeformableConvKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, int, int, int, phi::DenseTensor*) in libphi_static_1.a(deformable_conv_kernel.cc.o)
void phi::DeformableConvGradKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, paddle::optional<phi::DenseTensor> const&, phi::DenseTensor const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, std::__1::vector<int, std::__1::allocator<int> > const&, int, int, int, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor*) in libphi_static_1.a(deformable_conv_grad_kernel.cc.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [paddle/fluid/eager/auto_code_generator/eager_generator] Error 1
make[2]: *** [paddle/fluid/eager/auto_code_generator/CMakeFiles/eager_generator.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
1 warning generated.

The reason I did not follow the doc and use -DWITH_INFER=ON is that this option has no effect on either the develop or the release/2.3 branch: there is no corresponding entry in CMakeLists.txt, and in practice no C/C++ inference library is produced after compilation. I therefore compiled with -DON_INFER=ON combined with make's inference_lib_dist target.
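A quick way to check whether a CMake option is actually declared before passing `-DWITH_X=ON` is to grep for its `option()` declaration. The snippet below illustrates the pattern on a made-up toy CMakeLists.txt (the descriptions in it are invented); the same grep, run against Paddle's real top-level CMakeLists.txt, should show the situation described above.

```shell
# Toy CMakeLists.txt for illustration only; contents are invented.
set -e
tmp=$(mktemp -d)
cat > "$tmp/CMakeLists.txt" <<'EOF'
option(ON_INFER     "Compile the C++ inference library"    OFF)
option(WITH_TESTING "Compile PaddlePaddle with unit tests" ON)
EOF
# Count declarations of each option: a count of 0 means the -D flag
# would merely set an unused cache variable.
grep -c '^option(ON_INFER'   "$tmp/CMakeLists.txt"          # prints 1
grep -c '^option(WITH_INFER' "$tmp/CMakeLists.txt" || true  # prints 0
```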

I chose the develop branch because release/2.3 fails with an "illegal hardware instruction" error when built with -DON_INFER=ON. That problem was already raised in another issue but has not received a follow-up reply; the develop branch does not exhibit it.

版本&环境信息 Version & Environment Information


Paddle version: None
Paddle With CUDA: None

OS: macOS 12.4
Python version: 3.9.12

CUDA version: None
cuDNN version: None.None.None
Nvidia driver version: None


Branch: develop
Commit: fix logging debug level (#44684) 8aa286d
CPU: Apple M1 Max
GPU: Apple M1 Max

@AzureUni AzureUni added status/new-issue 新建 type/build 编译/安装问题 labels Jul 28, 2022
@AzureUni AzureUni changed the title Apple M1 Max编译预测库失败 Apple M1 Max编译预测库失败/Failed to compile inference lib on Apple M1 Max Jul 28, 2022
@AzureUni AzureUni changed the title Apple M1 Max编译预测库失败/Failed to compile inference lib on Apple M1 Max "Undefined symbols for architecture arm64" on Apple M1 Max when compiling Inference library Jul 29, 2022
@paddle-bot paddle-bot bot added status/following-up 跟进中 and removed status/new-issue 新建 labels Aug 9, 2022
@jzhang533
Contributor

try import this PR: #42325

@AzureUni
Author

AzureUni commented Aug 23, 2022

try import this PR: #42325

This helps: the paddle_python target now builds without problems, but the C/C++ inference library still fails.

After the Python build succeeds, the errors are:

[100%] Built target paddle_python
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "vtable for CryptoPP::SHA1", referenced from:
      paddle::framework::GetSha1(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) in libpaddle_inference.a(program_desc.cc.o)
      CryptoPP::SHA1::SHA1(CryptoPP::SHA1 const&) in libpaddle_inference.a(program_desc.cc.o)
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "vtable for CryptoPP::Filter", referenced from:
      paddle::framework::HexEncoding(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) in libpaddle_inference.a(program_desc.cc.o)
      CryptoPP::StringSource::~StringSource() in libpaddle_inference.a(program_desc.cc.o)
      CryptoPP::HexEncoder::HexEncoder(CryptoPP::BufferedTransformation*, bool, int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in libpaddle_inference.a(program_desc.cc.o)
      CryptoPP::Grouper::Grouper(CryptoPP::BufferedTransformation*) in libpaddle_inference.a(program_desc.cc.o)
      CryptoPP::StringSource::StringSource(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, CryptoPP::BufferedTransformation*) in libpaddle_inference.a(program_desc.cc.o)
      CryptoPP::StringSource::~StringSource() in libpaddle_inference.a(program_desc.cc.o)
      non-virtual thunk to CryptoPP::StringSource::~StringSource() in libpaddle_inference.a(program_desc.cc.o)
      ...
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "vtable for CryptoPP::Source", referenced from:
      CryptoPP::StringSource::StringSource(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, CryptoPP::BufferedTransformation*) in libpaddle_inference.a(program_desc.cc.o)
      CryptoPP::StringSource::StringSource(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, CryptoPP::BufferedTransformation*) in libpaddle_inference.a(aes_cipher.cc.o)
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "vtable for CryptoPP::Grouper", referenced from:
      CryptoPP::Grouper::Grouper(CryptoPP::BufferedTransformation*) in libpaddle_inference.a(program_desc.cc.o)
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "vtable for CryptoPP::GCM_Base::GCTR", referenced from:
      CryptoPP::GCM_Base::GCM_Base() in libpaddle_inference.a(aes_cipher.cc.o)
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "vtable for CryptoPP::GCM_Base", referenced from:
      paddle::framework::AESCipher::BuildAuthEncCipher(bool*, CryptoPP::member_ptr<CryptoPP::AuthenticatedSymmetricCipher>*, CryptoPP::member_ptr<CryptoPP::AuthenticatedEncryptionFilter>*) in libpaddle_inference.a(aes_cipher.cc.o)
      paddle::framework::AESCipher::BuildAuthDecCipher(bool*, CryptoPP::member_ptr<CryptoPP::AuthenticatedSymmetricCipher>*, CryptoPP::member_ptr<CryptoPP::AuthenticatedDecryptionFilter>*) in libpaddle_inference.a(aes_cipher.cc.o)
      CryptoPP::GCM_Base::GCM_Base() in libpaddle_inference.a(aes_cipher.cc.o)
      CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, true>::~GCM_Final() in libpaddle_inference.a(aes_cipher.cc.o)
      CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, false>::~GCM_Final() in libpaddle_inference.a(aes_cipher.cc.o)
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "vtable for CryptoPP::Rijndael::Base", referenced from:
      CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, true>::~GCM_Final() in libpaddle_inference.a(aes_cipher.cc.o)
      CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>::~BlockCipherFinal() in libpaddle_inference.a(aes_cipher.cc.o)
      CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>::~BlockCipherFinal() in libpaddle_inference.a(aes_cipher.cc.o)
      non-virtual thunk to CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>::~BlockCipherFinal() in libpaddle_inference.a(aes_cipher.cc.o)
      non-virtual thunk to CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>::~BlockCipherFinal() in libpaddle_inference.a(aes_cipher.cc.o)
      CryptoPP::Rijndael::Base::Base(CryptoPP::Rijndael::Base const&) in libpaddle_inference.a(aes_cipher.cc.o)
      CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, false>::~GCM_Final() in libpaddle_inference.a(aes_cipher.cc.o)
      ...
  NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
  "non-virtual thunk to CryptoPP::AuthenticatedSymmetricCipherBase::ProcessData(unsigned char*, unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, true> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, false> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::GCM_Base::OptimalDataAlignment() const", referenced from:
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, true> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, false> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::CTR_ModePolicy::SeekToIteration(unsigned long)", referenced from:
      vtable for CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::CTR_ModePolicy::OperateKeystream(CryptoPP::KeystreamOperation, unsigned char*, unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::CTR_ModePolicy::CipherResynchronize(unsigned char*, unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::ECB_OneWay::ProcessData(unsigned char*, unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ECB_OneWay> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)1, CryptoPP::Rijndael::Dec>, CryptoPP::ECB_OneWay> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::CBC_Decryption::ProcessData(unsigned char*, unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)1, CryptoPP::Rijndael::Dec>, CryptoPP::CBC_Decryption> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::CBC_Encryption::ProcessData(unsigned char*, unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::CBC_Encryption> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::ProcessData(unsigned char*, unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::Seek(unsigned long)", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::BufferedTransformation::GetWaitObjects(CryptoPP::WaitObjectContainer&, CryptoPP::CallStack const&)", referenced from:
      vtable for CryptoPP::StringSource in libpaddle_inference.a(program_desc.cc.o)
      vtable for CryptoPP::SourceTemplate<CryptoPP::StringStore> in libpaddle_inference.a(program_desc.cc.o)
      vtable for CryptoPP::StringSource in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::SourceTemplate<CryptoPP::StringStore> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AuthenticatedSymmetricCipherBase::TruncatedFinal(unsigned char*, unsigned long)", referenced from:
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, true> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, false> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AuthenticatedSymmetricCipherBase::Update(unsigned char const*, unsigned long)", referenced from:
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, true> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, false> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::IsRandomAccess() const", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::IsSelfInverting() const", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::OptimalBlockSize() const", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::OptimalDataAlignment() const", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::IsForwardTransformation() const", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::BufferedTransformation::GetMaxWaitObjectCount() const", referenced from:
      vtable for CryptoPP::StringSource in libpaddle_inference.a(program_desc.cc.o)
      vtable for CryptoPP::SourceTemplate<CryptoPP::StringStore> in libpaddle_inference.a(program_desc.cc.o)
      vtable for CryptoPP::StringSource in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::SourceTemplate<CryptoPP::StringStore> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::GCM_Base::OptimalDataAlignment() const", referenced from:
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, true> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::GCM_Final<CryptoPP::Rijndael, (CryptoPP::GCM_TablesOption)0, false> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::Rijndael::Dec::ProcessAndXorBlock(unsigned char const*, unsigned char const*, unsigned char*) const", referenced from:
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)1, CryptoPP::Rijndael::Dec> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::Rijndael::Dec::AdvancedProcessBlocks(unsigned char const*, unsigned char const*, unsigned char*, unsigned long, unsigned int) const", referenced from:
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)1, CryptoPP::Rijndael::Dec> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::Rijndael::Enc::ProcessAndXorBlock(unsigned char const*, unsigned char const*, unsigned char*) const", referenced from:
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::Rijndael::Enc::AdvancedProcessBlocks(unsigned char const*, unsigned char const*, unsigned char*, unsigned long, unsigned int) const", referenced from:
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::Rijndael::Base::AlgorithmProvider() const", referenced from:
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)1, CryptoPP::Rijndael::Dec> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::Rijndael::Base::OptimalDataAlignment() const", referenced from:
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc> in libpaddle_inference.a(aes_cipher.cc.o)
      vtable for CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)1, CryptoPP::Rijndael::Dec> in libpaddle_inference.a(aes_cipher.cc.o)
  "non-virtual thunk to CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >::GenerateBlock(unsigned char*, unsigned long)", referenced from:
      vtable for CryptoPP::CipherModeFinalTemplate_CipherHolder<CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>, CryptoPP::ConcretePolicyHolder<CryptoPP::Empty, CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::CTR_ModePolicy> >, CryptoPP::AdditiveCipherAbstractPolicy> > in libpaddle_inference.a(aes_cipher.cc.o)
  "_cblas_caxpy", referenced from:
      std::__1::enable_if<!(std::is_same<phi::dtype::complex<float>, phi::dtype::bfloat16>::value), void>::type paddle::operators::math::scatter::add_sparse_inputs<phi::dtype::complex<float>, phi::CPUContext>(std::__1::vector<phi::SelectedRows const*, std::__1::allocator<phi::SelectedRows const*> > const&, std::__1::unordered_map<long long, unsigned long, std::__1::hash<long long>, std::__1::equal_to<long long>, std::__1::allocator<std::__1::pair<long long const, unsigned long> > > const&, long long, phi::CPUContext const&, phi::dtype::complex<float>*) in libpaddle_inference.a(selected_rows_functor.cc.o)
  "_cblas_cgemm", referenced from:
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<float> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<phi::dtype::complex<float> >(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, phi::dtype::complex<float>, phi::dtype::complex<float> const*, phi::dtype::complex<float> const*, phi::dtype::complex<float>, phi::dtype::complex<float>*, int, long long, long long) const in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<float> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_grad_kernel.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<phi::dtype::complex<float> >(phi::dtype::complex<float> const*, phi::funcs::MatDescriptor const&, phi::dtype::complex<float> const*, phi::funcs::MatDescriptor const&, phi::dtype::complex<float>, phi::dtype::complex<float>*, phi::dtype::complex<float>) const in libpaddle_inference.a(matmul_grad_kernel.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<phi::dtype::complex<float> >(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, phi::dtype::complex<float>, phi::dtype::complex<float> const*, phi::dtype::complex<float> const*, phi::dtype::complex<float>, phi::dtype::complex<float>*, int, long long, long long) const in libpaddle_inference.a(matmul_grad_kernel.cc.o)
  "_cblas_cgemv", referenced from:
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<float> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<float> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_grad_kernel.cc.o)
  "_cblas_daxpy", referenced from:
      paddle::operators::LRNFunctor<phi::CPUContext, double>::operator()(paddle::framework::ExecutionContext const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, int, double, double, double, paddle::experimental::DataLayout) in libpaddle_inference.a(lrn_op.cc.o)
      paddle::operators::CenterLossKernel<phi::CPUContext, double>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(center_loss_op.cc.o)
      paddle::operators::math::ContextProjectGradFunctor<phi::CPUContext, double>::operator()(phi::CPUContext const&, phi::DenseTensor const&, bool, int, int, int, int, int, bool, bool, phi::DenseTensor*, phi::DenseTensor*) in libpaddle_inference.a(sequence_conv_op.cc.o)
      paddle::operators::AttentionLSTMKernel<double>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(attention_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<double>::SeqCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<double>::BatchCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      phi::SameDimsAddFunctor<phi::CPUContext, double, void>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(elementwise_add_kernel.cc.o)
      ...
  "_cblas_dcopy", referenced from:
      paddle::operators::CenterLossKernel<phi::CPUContext, double>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(center_loss_op.cc.o)
      paddle::operators::FuisonLSTMKernel<double>::BatchCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fusion_lstm_op.cc.o)
      paddle::operators::LookupTableKernel<double>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(lookup_table_op.cc.o)
      paddle::operators::AttentionLSTMKernel<double>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(attention_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<double>::SeqCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<double>::BatchCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      phi::SameDimsAddFunctor<phi::CPUContext, double, void>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(elementwise_add_kernel.cc.o)
      ...
  "_cblas_dgemm", referenced from:
      void phi::funcs::Blas<phi::CPUContext>::MatMul<double>(double const*, phi::funcs::MatDescriptor const&, double const*, phi::funcs::MatDescriptor const&, double, double*, double) const in libpaddle_inference.a(fsp_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<double>(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, double, double const*, double const*, double, double*, int, long long, long long) const in libpaddle_inference.a(fsp_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<double>(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, double, phi::DenseTensor*, double) const in libpaddle_inference.a(lstm_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<double>(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, double, phi::DenseTensor*, double) const in libpaddle_inference.a(lstmp_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<double>(double const*, phi::funcs::MatDescriptor const&, double const*, phi::funcs::MatDescriptor const&, double, double*, double) const in libpaddle_inference.a(matmul_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<double>(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, double, double const*, double const*, double, double*, int, long long, long long) const in libpaddle_inference.a(matmul_op.cc.o)
      paddle::operators::GRUUnitKernel<phi::CPUContext, double>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(gru_unit_op.cc.o)
      ...
  "_cblas_dgemv", referenced from:
      void phi::MvKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(mv_kernel.cc.o)
      phi::KernelImpl<void (*)(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*), &(void phi::MvKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*))>::VariadicCompute(phi::DeviceContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(mv_kernel.cc.o)
      void phi::KernelImpl<void (*)(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*), &(void phi::MvKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*))>::KernelCallHelper<phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::TypeTag<int> >::Compute<0, 0, 0, 0>(phi::KernelContext*) in libpaddle_inference.a(mv_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, double>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::MvGradKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*) in libpaddle_inference.a(mv_grad_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, double>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_grad_kernel.cc.o)
  "_cblas_dtrsm", referenced from:
      void phi::CholeskyGradKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, phi::DenseTensor*) in libpaddle_inference.a(cholesky_grad_kernel.cc.o)
      void phi::TriangularSolveKernel<double, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, bool, bool, phi::DenseTensor*) in libpaddle_inference.a(triangular_solve_kernel.cc.o)
  "_cblas_saxpy", referenced from:
      paddle::operators::LRNFunctor<phi::CPUContext, float>::operator()(paddle::framework::ExecutionContext const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*, int, int, int, int, int, float, float, float, paddle::experimental::DataLayout) in libpaddle_inference.a(lrn_op.cc.o)
      paddle::operators::CenterLossKernel<phi::CPUContext, float>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(center_loss_op.cc.o)
      paddle::operators::math::ContextProjectGradFunctor<phi::CPUContext, float>::operator()(phi::CPUContext const&, phi::DenseTensor const&, bool, int, int, int, int, int, bool, bool, phi::DenseTensor*, phi::DenseTensor*) in libpaddle_inference.a(sequence_conv_op.cc.o)
      paddle::operators::AttentionLSTMKernel<float>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(attention_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<float>::SeqCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<float>::BatchCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      phi::SameDimsAddFunctor<phi::CPUContext, float, void>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(elementwise_add_kernel.cc.o)
      ...
  "_cblas_scopy", referenced from:
      paddle::operators::CenterLossKernel<phi::CPUContext, float>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(center_loss_op.cc.o)
      paddle::operators::FuisonLSTMKernel<float>::BatchCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fusion_lstm_op.cc.o)
      paddle::operators::LookupTableKernel<float>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(lookup_table_op.cc.o)
      paddle::operators::AttentionLSTMKernel<float>::Compute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(attention_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<float>::SeqCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      paddle::operators::FusedEmbeddingFCLSTMKernel<float>::BatchCompute(paddle::framework::ExecutionContext const&) const in libpaddle_inference.a(fused_embedding_fc_lstm_op.cc.o)
      phi::SameDimsAddFunctor<phi::CPUContext, float, void>::operator()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(elementwise_add_kernel.cc.o)
      ...
  "_cblas_sgemm", referenced from:
      paddle::framework::ir::BuildFusion(paddle::framework::ir::Graph*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, paddle::framework::Scope*, bool)::$_1::operator()(paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*, paddle::framework::ir::Node*) const in libpaddle_inference.a(embedding_fc_lstm_fuse_pass.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<float>(float const*, phi::funcs::MatDescriptor const&, float const*, phi::funcs::MatDescriptor const&, float, float*, float) const in libpaddle_inference.a(fsp_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<float>(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*, int, long long, long long) const in libpaddle_inference.a(fsp_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<float>(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, float, phi::DenseTensor*, float) const in libpaddle_inference.a(lstm_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<float>(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, float, phi::DenseTensor*, float) const in libpaddle_inference.a(lstmp_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<float>(float const*, phi::funcs::MatDescriptor const&, float const*, phi::funcs::MatDescriptor const&, float, float*, float) const in libpaddle_inference.a(matmul_op.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<float>(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*, int, long long, long long) const in libpaddle_inference.a(matmul_op.cc.o)
      ...
  "_cblas_sgemv", referenced from:
      void phi::MvKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(mv_kernel.cc.o)
      phi::KernelImpl<void (*)(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*), &(void phi::MvKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*))>::VariadicCompute(phi::DeviceContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*) in libpaddle_inference.a(mv_kernel.cc.o)
      void phi::KernelImpl<void (*)(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*), &(void phi::MvKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*))>::KernelCallHelper<phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::TypeTag<int> >::Compute<0, 0, 0, 0>(phi::KernelContext*) in libpaddle_inference.a(mv_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, float>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::MvGradKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor const&, phi::DenseTensor*, phi::DenseTensor*) in libpaddle_inference.a(mv_grad_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, float>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_grad_kernel.cc.o)
  "_cblas_strsm", referenced from:
      void phi::CholeskyGradKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, phi::DenseTensor*) in libpaddle_inference.a(cholesky_grad_kernel.cc.o)
      void phi::TriangularSolveKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, bool, bool, bool, phi::DenseTensor*) in libpaddle_inference.a(triangular_solve_kernel.cc.o)
  "_cblas_zaxpy", referenced from:
      std::__1::enable_if<!(std::is_same<phi::dtype::complex<double>, phi::dtype::bfloat16>::value), void>::type paddle::operators::math::scatter::add_sparse_inputs<phi::dtype::complex<double>, phi::CPUContext>(std::__1::vector<phi::SelectedRows const*, std::__1::allocator<phi::SelectedRows const*> > const&, std::__1::unordered_map<long long, unsigned long, std::__1::hash<long long>, std::__1::equal_to<long long>, std::__1::allocator<std::__1::pair<long long const, unsigned long> > > const&, long long, phi::CPUContext const&, phi::dtype::complex<double>*) in libpaddle_inference.a(selected_rows_functor.cc.o)
  "_cblas_zgemm", referenced from:
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<double> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<phi::dtype::complex<double> >(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, phi::dtype::complex<double>, phi::dtype::complex<double> const*, phi::dtype::complex<double> const*, phi::dtype::complex<double>, phi::dtype::complex<double>*, int, long long, long long) const in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<double> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_grad_kernel.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::MatMul<phi::dtype::complex<double> >(phi::dtype::complex<double> const*, phi::funcs::MatDescriptor const&, phi::dtype::complex<double> const*, phi::funcs::MatDescriptor const&, phi::dtype::complex<double>, phi::dtype::complex<double>*, phi::dtype::complex<double>) const in libpaddle_inference.a(matmul_grad_kernel.cc.o)
      void phi::funcs::Blas<phi::CPUContext>::BatchedGEMM<phi::dtype::complex<double> >(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, phi::dtype::complex<double>, phi::dtype::complex<double> const*, phi::dtype::complex<double> const*, phi::dtype::complex<double>, phi::dtype::complex<double>*, int, long long, long long) const in libpaddle_inference.a(matmul_grad_kernel.cc.o)
  "_cblas_zgemv", referenced from:
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<double> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_kernel.cc.o)
      void phi::MatMulFunction<phi::CPUContext, phi::dtype::complex<double> >(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, std::__1::vector<long long, std::__1::allocator<long long> > const&, phi::DenseTensor*, bool, bool, bool) in libpaddle_inference.a(matmul_grad_kernel.cc.o)
  "_openblas_set_num_threads", referenced from:
      paddle::platform::SetNumThreads(int) in libpaddle_inference.a(cpu_helper.cc.o)
  "_utf8proc_NFD", referenced from:
      paddle::framework::NFD(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) in libpaddle_inference.a(string_array.cc.o)
  "_utf8proc_category", referenced from:
      paddle::operators::BasicTokenizer::Tokenize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> >, std::__1::allocator<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > > >*) const in libpaddle_inference.a(faster_tokenizer_op.cc.o)
      paddle::operators::IsPunctuation(wchar_t const&) in libpaddle_inference.a(faster_tokenizer_op.cc.o)
  "_utf8proc_islower", referenced from:
      phi::strings::GetCharcasesMap() in libpaddle_inference.a(unicode.cc.o)
  "_utf8proc_isupper", referenced from:
      phi::strings::GetCharcasesMap() in libpaddle_inference.a(unicode.cc.o)
  "_utf8proc_tolower", referenced from:
      paddle::operators::BasicTokenizer::do_lower_case(wchar_t) const in libpaddle_inference.a(faster_tokenizer_op.cc.o)
      paddle::operators::BasicTokenizer::Tokenize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> >, std::__1::allocator<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > > >*) const in libpaddle_inference.a(faster_tokenizer_op.cc.o)
      phi::strings::GetCharcasesMap() in libpaddle_inference.a(unicode.cc.o)
  "_utf8proc_toupper", referenced from:
      phi::strings::GetCharcasesMap() in libpaddle_inference.a(unicode.cc.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [paddle/fluid/inference/capi_exp/libpaddle_inference_c.dylib] Error 1
make[1]: *** [paddle/fluid/inference/capi_exp/CMakeFiles/paddle_inference_c_shared.dir/all] Error 2
make: *** [all] Error 2

The command I used to build is:

cmake .. -DPY_VERSION=3.9 -DPYTHON_INCLUDE_DIR=${PYTHON_INCLUDE_DIRS} \
-DPYTHON_LIBRARY=${PYTHON_LIBRARY} -DWITH_GPU=OFF -DWITH_TESTING=OFF \
-DWITH_AVX=OFF -DWITH_ARM=ON -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON
make TARGET=ARMV8 -j8

Branch: develop
Commit: 9e5f3a3

@amitchaudhary140

I am also facing the same issue. Please help.

@ariefwijaya

Any update on this?

paddle-bot bot commented May 14, 2024

Since you haven't replied for more than a year, we have closed this issue/pr.
If the problem is not solved or there is a follow-up one, please reopen it at any time and we will continue to follow up.
