cannot be cast to [Lcom.salesforce.op.stages.impl.feature.TextStats; #504

Open
vanlinhnguyen opened this issue Sep 1, 2020 · 5 comments

vanlinhnguyen commented Sep 1, 2020

Describe the bug
I tried to launch a minimal example (Titanic) from a JupyterHub with Spark 2.4.4 and got the following exception for string features:

Name: java.lang.ClassCastException
Message: [Lcom.salesforce.op.stages.impl.feature.TextStats; cannot be cast to [Lcom.salesforce.op.stages.impl.feature.TextStats;

The unit test in my local repo seems to work well, with the following dependencies:

// sbt-assembly excludes the packages marked "provided" below
val sparkVersion = "2.4.4"
val scalaTestVersion = "3.0.8"
libraryDependencies ++= Seq(
  "org.scalatest"        %% "scalatest"            % scalaTestVersion,
  "org.apache.spark"     %% "spark-core"           % sparkVersion % "provided",
  "org.apache.spark"     %% "spark-mllib"          % sparkVersion % "provided",
  "org.apache.spark"     %% "spark-sql"            % sparkVersion % "provided",
  "com.salesforce.transmogrifai" %% "transmogrifai-core" % "0.7.0"
)

To Reproduce

// Imports assumed for a standard TransmogrifAI 0.7.0 setup (not shown in the original snippet)
import com.salesforce.op._
import com.salesforce.op.features.{FeatureBuilder, FeatureLike}
import com.salesforce.op.features.types._
import com.salesforce.op.stages.impl.classification.BinaryClassificationModelSelector
import com.salesforce.op.stages.impl.classification.BinaryClassificationModelsToTry._
import org.apache.spark.sql.{DataFrame, SparkSession}

object SimpleLauncher {
    def run(inputDf: DataFrame, targetCol: String): Unit = {
        // getSparkSession is a local helper (not shown) that builds the session
        implicit val spark: SparkSession = getSparkSession(false, "TransmogrifAI Simple Launcher")
        println("Yarn application id: " + spark.sparkContext.getConf.getAppId)
        import spark.implicits._

        // Automated feature engineering
        val (target, features) = FeatureBuilder.fromDataFrame[RealNN](inputDf, response = targetCol)
        val featureVector: FeatureLike[OPVector] = features.transmogrify()

        // Automated feature selection
        val checkedFeatures: FeatureLike[OPVector] = target.sanityCheck(featureVector, checkSample = 1.0, removeBadFeatures = true)

        // Define the model we want to use (here a simple logistic regression) and get the resulting output
        val prediction: FeatureLike[Prediction] = BinaryClassificationModelSelector.withTrainValidationSplit(
            modelTypesToUse = Seq(OpLogisticRegression)
        ).setInput(target, checkedFeatures).getOutput()

        val model: OpWorkflowModel = new OpWorkflow().setInputDataset(inputDf).setResultFeatures(prediction).train()
        println("Model summary:\n" + model.summaryPretty())
    }
}

This works locally:

  test("Titanic simple") {
    import spark.implicits._

    // Read Titanic data as a DataFrame
    // Passenger is a case class matching the CSV schema (defined elsewhere in the test sources);
    // DataReaders comes from com.salesforce.op.readers
    val csvFilePath: String = "src/test/resources/data/PassengerDataAll.csv"
    val passengersData: DataFrame = DataReaders.Simple.csvCase[Passenger](path = Option(csvFilePath), key = _.id.toString)
      .readDataset().toDF()
    val truncatedData = passengersData.select("name", "age", "survived")
    truncatedData.show()
    truncatedData.printSchema()
    SimpleLauncher.run(truncatedData, "survived")
  }

while the same code doesn't work from JupyterHub:

val passengers = spark.read.schema(schema)   // schema is defined elsewhere
   .option("header", "true")
   .csv("path_to_csv")

SimpleLauncher.run(passengers, "survived")

Actual behavior (stack trace)

Name: java.lang.ClassCastException
Message: [Lcom.salesforce.op.stages.impl.feature.TextStats; cannot be cast to [Lcom.salesforce.op.stages.impl.feature.TextStats;
StackTrace:   at com.salesforce.op.stages.impl.feature.SmartTextVectorizer.fitFn(SmartTextVectorizer.scala:91)
  at com.salesforce.op.stages.base.sequence.SequenceEstimator.fit(SequenceEstimator.scala:99)
  at com.salesforce.op.stages.base.sequence.SequenceEstimator.fit(SequenceEstimator.scala:57)
  at com.salesforce.op.utils.stages.FitStagesUtil$$anonfun$20.apply(FitStagesUtil.scala:264)
  at com.salesforce.op.utils.stages.FitStagesUtil$$anonfun$20.apply(FitStagesUtil.scala:263)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
  at com.salesforce.op.utils.stages.FitStagesUtil$.com$salesforce$op$utils$stages$FitStagesUtil$$fitAndTransformLayer(FitStagesUtil.scala:263)
  at com.salesforce.op.utils.stages.FitStagesUtil$$anonfun$17.apply(FitStagesUtil.scala:226)
  at com.salesforce.op.utils.stages.FitStagesUtil$$anonfun$17.apply(FitStagesUtil.scala:224)
  at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
  at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
  at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186)
  at com.salesforce.op.utils.stages.FitStagesUtil$.fitAndTransformDAG(FitStagesUtil.scala:224)
  at com.salesforce.op.OpWorkflow.fitStages(OpWorkflow.scala:407)
  at com.salesforce.op.OpWorkflow.train(OpWorkflow.scala:354)
  at launchers.SimpleLauncher$.run(SimpleLauncher.scala:35)


tovbinm (Collaborator) commented Sep 1, 2020

Please make sure you have JVM 1.8.x with Scala 2.11.x in your Jupyter notebook.

FYI, here are the instructions on how to use TransmogrifAI from a Jupyter notebook - https://docs.transmogrif.ai/en/stable/examples/Running-from-Jupyter-Notebook.html
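
A quick way to check those versions from a notebook cell (plain Java/Scala/Spark APIs, nothing TransmogrifAI-specific):

println("Java:  " + System.getProperty("java.version"))   // expect 1.8.x
println("Scala: " + scala.util.Properties.versionString)  // expect 2.11.x
println("Spark: " + spark.version)                        // expect 2.4.x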

vanlinhnguyen (Author) commented

Thanks @tovbinm for your response. Actually, I don't think it's a compatibility problem: it works well when there are only numerical features, but whenever I add a string column to the input DataFrame I get the same exception. Do you have any other hint?
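
A minimal sketch of that contrast (column names follow the Titanic snippet above, and passengers is the DataFrame read in the JupyterHub cell):

// Numeric-only input trains fine
val numericOnly = passengers.select("age", "survived")
SimpleLauncher.run(numericOnly, "survived")

// Adding a string column triggers the TextStats ClassCastException
val withText = passengers.select("name", "age", "survived")
SimpleLauncher.run(withText, "survived")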

tovbinm (Collaborator) commented Sep 2, 2020

Not really. We already explicitly test text features with SmartTextVectorizer - https://github.com/salesforce/TransmogrifAI/blob/master/core/src/test/scala/com/salesforce/op/stages/impl/feature/SmartTextVectorizerTest.scala#L55

Perhaps @leahmcguire / @Jauntbox / @wsuchy would have some ideas?

leahmcguire (Collaborator) commented

This is not about the string type. One point is that we are built on Spark 2.4.5, not 2.4.4. This kind of thing is most likely to pop up with version incompatibilities.
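
A minimal sketch of aligning the build with that, reusing the dependency list from the issue description:

val sparkVersion = "2.4.5"  // was "2.4.4"; match the version TransmogrifAI 0.7.0 is built against
libraryDependencies ++= Seq(
  "org.apache.spark"     %% "spark-core"   % sparkVersion % "provided",
  "org.apache.spark"     %% "spark-mllib"  % sparkVersion % "provided",
  "org.apache.spark"     %% "spark-sql"    % sparkVersion % "provided",
  "com.salesforce.transmogrifai" %% "transmogrifai-core" % "0.7.0"
)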

vanlinhnguyen commented Sep 8, 2020

Yes, that might be one of the issues. The problem is that sometimes I manage to make it work by adding

// Imports assumed: standard Spark classes
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val conf = new SparkConf()
conf.setMaster("local[*]")
implicit val spark = SparkSession.builder.config(conf).getOrCreate()
import spark.implicits._

but when I change to a different dataset, the same problem comes back. Maybe, as @leahmcguire mentioned, it is indeed due to version incompatibilities. Is there any workaround for this? The one I have in mind is to one-hot encode all categorical variables (I suppose it amounts to the same thing if I don't want any text transformation, see this).
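
A minimal sketch of that workaround, declaring the string column explicitly as a categorical PickList so it gets pivoted (one-hot encoded) instead of going through SmartTextVectorizer; the Passenger field names and types are assumptions based on the Titanic snippet above:

import com.salesforce.op._
import com.salesforce.op.features.FeatureBuilder
import com.salesforce.op.features.types._

// Declare features explicitly instead of relying on FeatureBuilder.fromDataFrame type inference
val survived = FeatureBuilder.RealNN[Passenger].extract(_.survived.toRealNN).asResponse
val age      = FeatureBuilder.Real[Passenger].extract(_.age.toReal).asPredictor
val name     = FeatureBuilder.PickList[Passenger].extract(_.name.map(_.toString).toPickList).asPredictor

val pivotedName   = name.pivot()                       // one-hot encode the categorical feature
val featureVector = Seq(age, pivotedName).transmogrify()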
