
[SPARK-47645][BUILD][CORE][SQL][YARN] Make Spark build with -release instead of -target #45716

Closed
wants to merge 14 commits

Conversation

@LuciferYang (Contributor) commented Mar 26, 2024

What changes were proposed in this pull request?

This PR makes the following changes to allow Spark to build with `-release` instead of `-target`:

  1. Use `MethodHandle` instead of direct calls to `sun.security.action.GetBooleanAction` and `sun.util.calendar.ZoneInfo`, because they are not exported APIs.

  2. Use `Channels.newReader` instead of `StreamDecoder.forDecoder`, because `StreamDecoder.forDecoder` is likewise not an exported API:

```java
public static Reader newReader(ReadableByteChannel ch,
                               CharsetDecoder dec,
                               int minBufferCap)
{
    Objects.requireNonNull(ch, "ch");
    return StreamDecoder.forDecoder(ch, dec.reset(), minBufferCap);
}
```
  3. Adjusted the import of `java.io._` in `yarn/Client.scala` to fix the compilation error:

Error: ] /home/runner/work/spark/spark/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:20: object FileSystem is not a member of package java.io
  4. Replaced `-target` with `-release` in `pom.xml` and `SparkBuild.scala`, and removed the `-source` option, because using `-release` alone is sufficient (see the sketch after this list).

  5. Upgraded `scala-maven-plugin` from 4.7.1 to 4.8.1 to fix the error `[ERROR] -release cannot be less than -target` when executing `build/mvn clean install -DskipTests -Djava.version=21`.

Why are the changes needed?

Since Scala 2.13.9, the compile option `-target` has been deprecated; it is recommended to use `-release` instead:

- scala/scala#9982

Does this PR introduce any user-facing change?

No

How was this patch tested?

Pass GitHub Actions

Was this patch authored or co-authored using generative AI tooling?

No

@LuciferYang LuciferYang marked this pull request as draft March 26, 2024 06:02
# It uses Maven's 'install' intentionally, see https://github.com/apache/spark/pull/26414.
./build/mvn $MAVEN_CLI_OPTS -DskipTests -Pyarn -Pkubernetes -Pvolcano -Phive -Phive-thriftserver -Phadoop-cloud -Djava.version=${JAVA_VERSION/-ea} install
./build/mvn $MAVEN_CLI_OPTS -DskipTests -Pyarn -Pkubernetes -Pvolcano -Phive -Phive-thriftserver -Phadoop-cloud install
@LuciferYang (Contributor Author):

Mainly used to verify Maven's cross-compilation scenario: using Java 21 to compile with `-release 17`.

@github-actions github-actions bot added the CORE label Mar 28, 2024
@github-actions github-actions bot added the SQL label Mar 28, 2024
@@ -215,8 +224,10 @@ trait SparkDateTimeUtils {
val rebasedDays = rebaseGregorianToJulianDays(days)
val localMillis = Math.multiplyExact(rebasedDays, MILLIS_PER_DAY)
val timeZoneOffset = TimeZone.getDefault match {
case zoneInfo: ZoneInfo => zoneInfo.getOffsetsByWall(localMillis, null)
case timeZone: TimeZone => timeZone.getOffset(localMillis - timeZone.getRawOffset)
case zoneInfo: TimeZone if zoneInfo.getClass.getName == zoneInfoClassName =>
@LuciferYang (Contributor Author):

To fix:

Error: ] /home/runner/work/spark/spark/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkDateTimeUtils.scala:27: object util is not a member of package sun
Error: ] /home/runner/work/spark/spark/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkDateTimeUtils.scala:218: not found: type ZoneInfo
Error: ] /home/runner/work/spark/spark/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkDateTimeUtils.scala:218: value getOffsetsByWall is not a member of java.util.TimeZone

Maybe we can just use `val timeZoneOffset = TimeZone.getDefault.getOffset(localMillis)`, but I'm not sure which test case can verify the compatibility issue with 2.4.

@LuciferYang (Contributor Author):

Let me investigate whether there are other open APIs that can be used as a substitute here.

val bais = new ByteArrayInputStream(in, 0, length)
val byteChannel = Channels.newChannel(bais)
val decodingBufferSize = Math.min(length, 8192)
val decoder = Charset.forName(enc).newDecoder()

StreamDecoder.forDecoder(byteChannel, decoder, decodingBufferSize)
Channels.newReader(byteChannel, decoder, decodingBufferSize)
@LuciferYang (Contributor Author):

To fix:


Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/CreateJacksonParser.scala:27: object nio is not a member of package sun
Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/CreateJacksonParser.scala:61: not found: type StreamDecoder
Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/CreateJacksonParser.scala:67: not found: value StreamDecoder
Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/CreateJacksonParser.scala:27: Unused import

val bais = new ByteArrayInputStream(in, 0, length)
val byteChannel = Channels.newChannel(bais)
val decodingBufferSize = Math.min(length, 8192)
val decoder = Charset.forName(enc).newDecoder()

StreamDecoder.forDecoder(byteChannel, decoder, decodingBufferSize)
Channels.newReader(byteChannel, decoder, decodingBufferSize)
@LuciferYang (Contributor Author):

To fix:

Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/xml/CreateXmlParser.scala:27: object nio is not a member of package sun
Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/xml/CreateXmlParser.scala:78: not found: type StreamDecoder
Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/xml/CreateXmlParser.scala:84: not found: value StreamDecoder
Error: ] /home/runner/work/spark/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/xml/CreateXmlParser.scala:27: Unused import

@github-actions github-actions bot added the YARN label Mar 28, 2024
@@ -17,7 +17,7 @@

package org.apache.spark.deploy.yarn

import java.io.{FileSystem => _, _}
import java.io.{File, FileFilter, FileNotFoundException, FileOutputStream, InterruptedIOException, IOException, OutputStreamWriter}
@LuciferYang (Contributor Author):

To fix:

Error: ] /home/runner/work/spark/spark/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:20: object FileSystem is not a member of package java.io

@github-actions github-actions bot removed the INFRA label Mar 28, 2024
@LuciferYang LuciferYang changed the title from "Test scala-maven-plugin 4.9.1" to "Test compile scala with -release 17" Mar 28, 2024
val mh = lookup.unreflectConstructor(constructor)
val action = mh.invoke("sun.io.serialization.extendedDebugInfo")
.asInstanceOf[PrivilegedAction[Boolean]]
!AccessController.doPrivileged(action).booleanValue()
@LuciferYang (Contributor Author):

Use `MethodHandle` to ensure logical consistency. Considering that `sun.security.action` is not an exported package, we could also directly use `!JBoolean.getBoolean("sun.io.serialization.extendedDebugInfo")`.

@LuciferYang LuciferYang changed the title from "Test compile scala with -release 17" to "[WIP]Test compile scala with -release 17" Mar 28, 2024
@LuciferYang LuciferYang changed the title from "[WIP]Test compile scala with -release 17" to "[WIP] Make Scala code build successful with -release 17" Mar 28, 2024
@LuciferYang LuciferYang changed the title from "[WIP] Make Scala code build successful with -release 17" to "[WIP] Make Spark build successful with -release 17" Mar 29, 2024
@LuciferYang LuciferYang changed the title from "[WIP] Make Spark build successful with -release 17" to "[WIP] Make Spark build with -release instead of -target" Mar 29, 2024
@@ -175,8 +174,7 @@
<scala.version>2.13.13</scala.version>
<scala.binary.version>2.13</scala.binary.version>
<scalatest-maven-plugin.version>2.2.0</scalatest-maven-plugin.version>
<!-- don't upgrade scala-maven-plugin to version 4.7.2 or higher, see SPARK-45144 for details -->
<scala-maven-plugin.version>4.7.1</scala-maven-plugin.version>
<scala-maven-plugin.version>4.8.1</scala-maven-plugin.version>
@LuciferYang (Contributor Author):

To fix `Error: -release cannot be less than -target`, it seems necessary to upgrade to 4.8.1.

@LuciferYang LuciferYang changed the title from "[WIP] Make Spark build with -release instead of -target" to "[SPARK-47645][BUILD][CORE][SQL][YARN] Make Spark build with -release instead of -target" Mar 29, 2024
@LuciferYang (Contributor Author):

  1. If it's not a good time to do this work now, please let me know and I will close this PR.
  2. If the PR needs to be split, such as submitting the code change part first, then changing the compile parameters and upgrading the plugin, please let me know too.

@LuciferYang LuciferYang marked this pull request as ready for review March 29, 2024 06:57
@dongjoon-hyun (Member):

Could you make the CI happy, @LuciferYang ?

@LuciferYang (Contributor Author):

> Could you make the CI happy, @LuciferYang ?

Re-triggered the failed task; let's see if it can pass.

@dongjoon-hyun (Member) left a comment:

+1, LGTM.

cc @srowen , @JoshRosen , @HyukjinKwon , too.

@dongjoon-hyun (Member):

Merged to master. Thank you, @LuciferYang and all.

@LuciferYang (Contributor Author):

Thanks @dongjoon-hyun ~

@mihailom-db (Contributor) commented Apr 1, 2024

This PR seems to break the local Maven environment in IntelliJ. A couple of us are getting this response when trying to run anything using Maven, even after rebuilding.

java.lang.ExceptionInInitializerError
	at org.apache.spark.sql.catalyst.optimizer.ComputeCurrentTime$.apply(finishAnalysis.scala:111)
	at org.apache.spark.sql.catalyst.optimizer.ComputeCurrentTime$.apply(finishAnalysis.scala:108)
	at org.apache.spark.sql.catalyst.optimizer.Optimizer$FinishAnalysis$.$anonfun$apply$1(Optimizer.scala:306)
	at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:183)
	at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:179)
	at scala.collection.immutable.List.foldLeft(List.scala:79)
	at org.apache.spark.sql.catalyst.optimizer.Optimizer$FinishAnalysis$.apply(Optimizer.scala:306)
	at org.apache.spark.sql.catalyst.optimizer.Optimizer$FinishAnalysis$.apply(Optimizer.scala:286)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:222)
	at scala.collection.immutable.ArraySeq.foldLeft(ArraySeq.scala:222)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:219)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:211)
	at scala.collection.immutable.List.foreach(List.scala:334)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:211)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:182)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:89)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:182)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:166)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:233)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:571)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:233)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:918)
	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:232)
	at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:162)
	at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:158)
	at org.apache.spark.sql.execution.QueryExecution.assertOptimized(QueryExecution.scala:176)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:196)
	at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:193)
	at org.apache.spark.sql.execution.QueryExecution.simpleString(QueryExecution.scala:252)
	at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:298)
	at org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:266)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:138)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:241)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:116)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:918)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:72)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:196)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:120)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:571)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:119)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:109)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:442)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:442)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:34)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:330)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:326)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:34)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:34)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:418)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:109)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:96)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:94)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:224)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:95)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:918)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:92)
	at org.apache.spark.sql.Dataset.withPlan(Dataset.scala:4453)
	at org.apache.spark.sql.Dataset.createOrReplaceTempView(Dataset.scala:3978)
	at org.apache.spark.sql.connector.DatasourceV2SQLBase.$anonfun$$init$$1(DatasourceV2SQLBase.scala:47)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at org.scalatest.BeforeAndAfter.runTest(BeforeAndAfter.scala:210)
	at org.scalatest.BeforeAndAfter.runTest$(BeforeAndAfter.scala:203)
	at org.apache.spark.sql.CollationSuite.runTest(CollationSuite.scala:37)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:334)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1564)
	at org.scalatest.Suite.run(Suite.scala:1114)
	at org.scalatest.Suite.run$(Suite.scala:1096)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1564)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:273)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:272)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:69)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.sql.CollationSuite.org$scalatest$BeforeAndAfter$$super$run(CollationSuite.scala:37)
	at org.scalatest.BeforeAndAfter.run(BeforeAndAfter.scala:273)
	at org.scalatest.BeforeAndAfter.run$(BeforeAndAfter.scala:271)
	at org.apache.spark.sql.CollationSuite.run(CollationSuite.scala:37)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:47)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1321)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1315)
	at scala.collection.immutable.List.foreach(List.scala:334)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1315)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:992)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:970)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1481)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:970)
	at org.scalatest.tools.Runner$.run(Runner.scala:798)
	at org.scalatest.tools.Runner.run(Runner.scala)
	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2or3(ScalaTestRunner.java:43)
	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:26)
Caused by: java.lang.IllegalAccessException: symbolic reference class is not accessible: class sun.util.calendar.ZoneInfo, from interface org.apache.spark.sql.catalyst.util.SparkDateTimeUtils (unnamed module @5cee5251)
	at java.base/java.lang.invoke.MemberName.makeAccessException(MemberName.java:894)
	at java.base/java.lang.invoke.MethodHandles$Lookup.checkSymbolicClass(MethodHandles.java:3787)
	at java.base/java.lang.invoke.MethodHandles$Lookup.resolveOrFail(MethodHandles.java:3747)
	at java.base/java.lang.invoke.MethodHandles$Lookup.findVirtual(MethodHandles.java:2767)
	at org.apache.spark.sql.catalyst.util.SparkDateTimeUtils.$init$(SparkDateTimeUtils.scala:206)
	at org.apache.spark.sql.catalyst.util.DateTimeUtils$.<clinit>(DateTimeUtils.scala:41)
	... 102 more

I am looking into how to fix this locally, but haven't found a solution yet.

@LuciferYang (Contributor Author):

@mihailom-db Could you provide a specific build command? The Maven compilation in the GitHub Actions tests has passed.

@LuciferYang (Contributor Author) commented Apr 1, 2024

[image]

Could you try the operation above? I think this should not be a new issue.

@mihailom-db (Contributor):

This seems to have solved the problem. Thanks.

sweisdb pushed a commit to sweisdb/spark that referenced this pull request Apr 1, 2024
[SPARK-47645][BUILD][CORE][SQL][YARN] Make Spark build with `-release` instead of `-target`


Closes apache#45716 from LuciferYang/scala-maven-plugin-491.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>