Added inference engine dnn model support #820

Open

wants to merge 1 commit into master
Conversation

@cansik (Member) commented Nov 29, 2019

As discussed in bytedeco/javacv#1344, I have added support for Intel's Deep Learning Inference Engine. The compile.sh has been extended following the guide from the wiki:

https://github.com/opencv/opencv/wiki/Intel's-Deep-Learning-Inference-Engine-backend#build-opencv-from-source

There is still an error (with a workaround) on macOS, but this seems to be OS-specific and a problem with the new System Integrity Protection: bytedeco/javacv#1344 (comment)

@saudet (Member) commented Nov 29, 2019

It's not a problem with security or anything, it's just Mac being annoying.
We can apply this kind of workaround after the build, like this:
https://github.com/bytedeco/javacpp-presets/blob/master/mxnet/cppbuild.sh#L184
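For reference, the post-build fix-up pattern in that script looks roughly like the following. This is a hedged sketch: the library names and relative paths are illustrative of the OpenVINO layout discussed below, not the actual preset build script.

```shell
# Illustrative post-build fix-up on macOS, modeled on the linked
# mxnet cppbuild.sh step; file names and paths are hypothetical.
case $PLATFORM in
    macosx-*)
        # Add the library's own directory to its run-path search list,
        # so dependents found via @rpath can live next to it.
        install_name_tool -add_rpath @loader_path/. libMKLDNNPlugin.dylib
        # Rewrite the dependency's install name so that loading binaries
        # resolve it through their run-path search list instead of an
        # absolute path baked in at Intel's build time.
        install_name_tool -id @rpath/libmkl_tiny_tbb.dylib libmkl_tiny_tbb.dylib
        ;;
esac
```

The key idea is that the fix happens once, after the build, rather than requiring every consumer of the plugin to know Intel's installation layout.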

@saudet (Member) commented Nov 30, 2019

It looks like those flags don't do anything unless OpenVINO is already installed. I can merge this, but the distributed binaries are not going to contain any inference engine. Is that alright with you?

@cansik (Member, Author) commented Nov 30, 2019

OK, maybe I need a bit more help here. The library structure is the following:

  • /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libMKLDNNPlugin.dylib
  • /opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_mac/lib/libmkl_tiny_tbb.dylib

Now libMKLDNNPlugin wants to load @rpath/libmkl_tiny_tbb.dylib, which means that we have to specify this path later in the main executable (which would be OpenCV), right?

Following your example, I would have to specify the rpath relative to libMKLDNNPlugin, or should I pass it as an absolute path?

And what does @loader_path/. stand for in the following command?

install_name_tool -add_rpath @loader_path/. -id @rpath/libmkl_tiny_tbb.dylib ../../external/mkltiny_mac/lib/libmkl_tiny_tbb.dylib
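A note on @loader_path: dyld expands it at load time to the directory of the binary that contains the load command, so @loader_path/. adds that library's own directory to its run-path search list. One way to inspect the result with otool (paths illustrative, assuming the OpenVINO layout above):

```shell
# Show the LC_RPATH entries of the plugin; after the workaround this
# should include the @loader_path/. entry that was added.
otool -l libMKLDNNPlugin.dylib | grep -A2 LC_RPATH

# Show the dependency install names; libmkl_tiny_tbb appears here as
# @rpath/libmkl_tiny_tbb.dylib, which dyld resolves by substituting
# each LC_RPATH entry in turn until the file is found.
otool -L libMKLDNNPlugin.dylib
```

So with @loader_path/. in place, the dependency is searched for next to libMKLDNNPlugin.dylib itself, independent of where the loading executable lives.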

@saudet (Member) commented Nov 8, 2020

BTW, "nGraph has moved to OpenVINO", so once we get presets for OpenVINO in place, we can get rid of the old ones for nGraph. /cc @EmergentOrder

@cansik (Member, Author) commented Mar 9, 2021

@saudet Well, with the flag it worked when the inference engine was installed locally. That is at least something, but yes, of course, shipping the IE would be way better.

Do you think it would make more sense to create a preset for the inference engine itself? Have you already looked into that?

@saudet (Member) commented Mar 9, 2021

Haven't looked into it, but the high-level API looks simple enough and shouldn't be too hard to map:
https://github.com/openvinotoolkit/openvino/blob/master/inference-engine/samples/hello_classification/main.cpp
