PyCUDA ERROR: The context stack was not empty upon module cleanup #299

Open
ishandutta0098 opened this issue Jul 6, 2021 · 1 comment

@ishandutta0098

I have created a Streamlit app as a demo of a project on Multilingual Text Classification using mBERT in PyTorch. When I run the app with the command python app.py it works fine, but when I run it with the command streamlit run app.py it throws a PyCUDA error.

Following is the code present in app.py:

import torch

from typing import Text
import streamlit as st
import pandas as pd
from textblob import TextBlob

from inference.inference_onnx import run_onnx_inference
from inference.inference_tensorRT import run_trt_inference
from googletrans import Translator

st.title("LinClass: Multilingual Text Classifier")

input_text = st.text_input('Text:')

####################
# Google Translate API
####################

translator = Translator()
input_text = translator.translate(
    input_text,
    dest="en"
)
input_text = input_text.text

####################
# Select Precision and Inference Method
####################

df = pd.DataFrame()
df["lang"] = ["en"]

precision = st.sidebar.selectbox(
    "Select Precision:",
    ("16 Bit", "32 Bit")
)

inference = st.sidebar.selectbox(
    "Inference Method:",
    ("ONNX", "TensorRT")
)

if st.button('Show Selected Configuration'):
    st.subheader("Selected Configuration:")
    st.write("Precision: ", precision) 
    st.write("Inference: ", inference)

st.subheader("Results")

def result(x):
    """
    Function to classify the comment toxicity based on the probability and given threshold
    
    params: x(float) - Probability of Toxicity
    """
    if x >= 0.4:
        st.write("Toxic")
        
    else:
        st.write("Non Toxic")
        
####################
# Implement Selected Configuration
####################
        
if precision == "16 Bit":
    if inference == "ONNX":
        df["comment_text"] = [input_text]

        predictions = run_onnx_inference(
            onnx_model_path="/workspace/data/multilingual-text-classifier/output models/mBERT_lightning_fp16_2GPU.onnx",
            stage="inference",
            df_test=df
        )
        predictions = torch.sigmoid(torch.tensor(predictions))
        st.write(input_text)
        st.write(predictions)
        result(predictions)

    if inference == "TensorRT":
        df["content"] = [input_text]

        predictions = run_trt_inference(
            trt_model_path="/workspace/data/multilingual-text-classifier/output models/mBERT_lightning_fp16_bs16.engine",
            stage="inference",
            df_test=df
        )
        
        predictions = predictions.astype("float32")
        predictions = torch.sigmoid(torch.tensor(predictions))
        st.write(input_text)
        st.write(predictions)
        result(predictions)

if precision == "32 Bit":
    if inference == "ONNX":
        df["comment_text"] = [input_text]

        predictions = run_onnx_inference(
            onnx_model_path="/workspace/data/multilingual-text-classifier/output models/mBERT_fp32.onnx",
            stage="inference",
            df_test=df
        )
        predictions = torch.sigmoid(torch.tensor(predictions))
        st.write(input_text)
        st.write(predictions)
        result(predictions)

    if inference == "TensorRT":
        df["content"] = [input_text]

        predictions = run_trt_inference(
            trt_model_path="/workspace/data/multilingual-text-classifier/output models/mBERT_fp32.engine",
            stage="inference",
            df_test=df
        )
        
        predictions = predictions.astype("float32")
        predictions = torch.sigmoid(torch.tensor(predictions))
        st.write(input_text)
        st.write(predictions)
        result(predictions)
        
####################
# Take Feedback
####################
        
st.subheader("Feedback:")
feedback = st.radio(
     "Are you satisfied with the results?",
     ('Yes', 'No'))

st.write("Thanks for the Feedback!")

Error

-------------------------------------------------------------------
PyCUDA ERROR: The context stack was not empty upon module cleanup.
-------------------------------------------------------------------
A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.
-------------------------------------------------------------------
Aborted (core dumped)
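For what it's worth, the error message itself points at the fix: every CUDA context pushed onto PyCUDA's per-thread context stack (for example by device.make_context() somewhere inside the TensorRT inference path) must be popped with Context.pop() before the interpreter shuts down, otherwise PyCUDA aborts during module cleanup. Below is a minimal sketch of that invariant. ContextStack and run_inference are hypothetical stand-ins, not PyCUDA API, so the pattern runs without a GPU; with real PyCUDA the push would be cuda.Device(0).make_context() and the pop would be ctx.pop() in a finally block.

```python
class ContextStack:
    """Toy stand-in for PyCUDA's per-thread CUDA context stack."""

    def __init__(self):
        self._stack = []

    def push(self, ctx):
        # Analogous to Device.make_context(), which pushes the new context.
        self._stack.append(ctx)

    def pop(self):
        # Analogous to Context.pop(), which the error message recommends.
        return self._stack.pop()

    def is_clean(self):
        # PyCUDA requires the stack to be empty at module cleanup;
        # a leftover context triggers the "context stack was not empty" abort.
        return not self._stack


def run_inference(stack, ctx):
    """Push a context for the duration of inference, popping it no matter what."""
    stack.push(ctx)
    try:
        return "predictions"  # real TensorRT execution would happen here
    finally:
        stack.pop()  # guarantees the stack is clean even if inference raises


stack = ContextStack()
result = run_inference(stack, "ctx-0")
print(stack.is_clean())  # the stack is empty again, so cleanup would succeed
```

In a Streamlit app the script body is re-executed on every interaction, so any helper that creates a context on each run and never pops it will leave contexts stacked up by the time the process exits; wrapping the push/pop in try/finally (or a context manager) keeps the stack balanced per run.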
@yjcaimeow

Any update on this error?
Have you solved it? @ishandutta0098
