Replies: 1 comment
-
The attachments below give more detail on the configuration; please let me know if any extra information is needed. (Attachment: "Make inference to the deployed ONNX model using ThunderClient in VSCode")
-
Hi folks!
Did any of you manage to build an inference service with the ONNX model provided in the KServe documentation and deploy it on your own K8s cluster?
I can deploy the InferenceService successfully on the cluster my organization provides, and everything is fine until I use the Thunder Client extension in VS Code to run an inference with a JSON-formatted input against the InferenceService endpoint (the Jupyter notebook code provided in the docs did not work for me).
The problem is that when I make the inference request, the response keeps loading...
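For comparison, here is a minimal sketch of the same request made from a Python script instead of Thunder Client, assuming the KServe v1 prediction protocol (`POST /v1/models/<name>:predict` with an `{"instances": [...]}` body). The host, hostname, model name, and input values are placeholders, not taken from the original setup; substitute your own InferenceService details and an input shape that matches the ONNX model.

```python
import requests

# Hypothetical values -- replace with your own InferenceService details.
INGRESS_HOST = "http://localhost:8080"                    # e.g. after `kubectl port-forward` to the ingress gateway
SERVICE_HOSTNAME = "<isvc-name>.<namespace>.example.com"  # from `kubectl get inferenceservice`
MODEL_NAME = "<isvc-name>"

# KServe v1 protocol body; the nested list is only a placeholder and must
# match the input shape the ONNX model expects.
payload = {"instances": [[0.0, 0.0, 0.0]]}

resp = requests.post(
    f"{INGRESS_HOST}/v1/models/{MODEL_NAME}:predict",
    json=payload,
    # The Host header is needed when the request is routed through the
    # ingress gateway rather than resolving the service hostname directly.
    headers={"Host": SERVICE_HOSTNAME},
    timeout=60,
)
print(resp.status_code, resp.json())
```

Running the same request from a script or curl can help narrow down whether the hang comes from the endpoint/Host-header routing or from the client itself.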