There are some errors when running the test cases with Spark 2.3 #4316

Open
xubo245 opened this issue Apr 10, 2023 · 0 comments
xubo245 commented Apr 10, 2023

There are some errors when running the test cases with Spark 2.3:
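For reference, the output below comes from a full Maven test run. A minimal sketch of the likely invocation, assuming the Spark 2.3 profile name matches the carbondata-spark_2.3 module id shown in the resume hint near the end of the log (the exact profile name is an assumption):

    mvn clean verify -Pspark-2.3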

- Test restructured array<timestamp> as index column on SI with compaction
2023-04-10 03:13:52 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:13:53 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- Test restructured array<string> and string columns as index columns on SI with compaction
2023-04-10 03:13:56 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:13:56 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test array<string> on secondary index with compaction
2023-04-10 03:14:00 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:00 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test array<string> and string as index columns on secondary index with compaction
- test load data with array<string> on secondary index
- test SI global sort with si segment merge enabled for complex data types
- test SI global sort with si segment merge enabled for newly added complex column
- test SI global sort with si segment merge enabled for primitive data types
- test SI global sort with si segment merge complex data types by rebuild command
- test SI global sort with si segment merge primitive data types by rebuild command
- test si creation with struct and map type
- test si creation with array
2023-04-10 03:14:26 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:26 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test complex with null and empty data
- test array<date> on secondary index
- test array<timestamp> on secondary index
2023-04-10 03:14:31 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:31 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test array<varchar> and varchar as index columns on secondary index
2023-04-10 03:14:34 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:34 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test multiple SI with array and primitive type
2023-04-10 03:14:40 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
2023-04-10 03:14:40 ERROR CarbonInternalMetastore$:254 - Adding/Modifying tableProperties operation failed: Recursive load
- test SI complex with multiple array contains
TestCarbonInternalMetastore:
- test delete index silent
2023-04-10 03:14:43 ERROR CarbonInternalMetastore$:118 - Exception occurred while drop index table for : Some(test).unknown : Table or view 'unknown' not found in database 'test';
2023-04-10 03:14:43 ERROR CarbonInternalMetastore$:131 - Exception occurred while drop index table for : Some(test).index1 : Table or view 'index1' not found in database 'test';
- test delete index table silently when exception occur
org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'index1' not found in database 'test';
	at org.apache.spark.sql.hive.client.HiveClient$$anonfun$getTable$1.apply(HiveClient.scala:81)
	at org.apache.spark.sql.hive.client.HiveClient$$anonfun$getTable$1.apply(HiveClient.scala:81)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:81)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:83)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getRawTable$1.apply(HiveExternalCatalog.scala:118)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getRawTable$1.apply(HiveExternalCatalog.scala:118)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
	at org.apache.spark.sql.hive.HiveExternalCatalog.getRawTable(HiveExternalCatalog.scala:117)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:684)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:684)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
	at org.apache.spark.sql.hive.HiveExternalCatalog.getTable(HiveExternalCatalog.scala:683)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.lookupRelation(SessionCatalog.scala:674)
	at org.apache.spark.sql.hive.CarbonFileMetastore.lookupRelation(CarbonFileMetastore.scala:197)
	at org.apache.spark.sql.hive.CarbonFileMetastore.lookupRelation(CarbonFileMetastore.scala:191)
	at org.apache.spark.sql.secondaryindex.events.SIDropEventListener$$anonfun$onEvent$1.apply(SIDropEventListener.scala:69)
	at org.apache.spark.sql.secondaryindex.events.SIDropEventListener$$anonfun$onEvent$1.apply(SIDropEventListener.scala:65)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at org.apache.spark.sql.secondaryindex.events.SIDropEventListener.onEvent(SIDropEventListener.scala:65)
	at org.apache.carbondata.events.OperationListenerBus.fireEvent(OperationListenerBus.java:83)
	at org.apache.carbondata.events.package$.withEvents(package.scala:26)
	at org.apache.carbondata.events.package$.withEvents(package.scala:22)
	at org.apache.spark.sql.execution.command.table.CarbonDropTableCommand.processMetadata(CarbonDropTableCommand.scala:93)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:160)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:159)
	at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:118)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand.runWithAudit(package.scala:155)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:159)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$$anonfun$51.apply(Dataset.scala:3265)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3264)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
	at org.apache.spark.sql.test.SparkTestQueryExecutor.sql(SparkTestQueryExecutor.scala:37)
	at org.apache.spark.sql.test.util.QueryTest.sql(QueryTest.scala:123)
	at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.beforeEach(TestCarbonInternalMetastore.scala:49)
	at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:220)
	at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.runTest(TestCarbonInternalMetastore.scala:33)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite$class.run(Suite.scala:1147)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
	at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.org$scalatest$BeforeAndAfterAll$$super$run(TestCarbonInternalMetastore.scala:33)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
	at org.apache.carbondata.spark.testsuite.secondaryindex.TestCarbonInternalMetastore.run(TestCarbonInternalMetastore.scala:33)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1210)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1257)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1255)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1255)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
	at org.scalatest.Suite$class.run(Suite.scala:1144)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1340)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1334)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1334)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1011)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1010)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1500)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1010)
	at org.scalatest.tools.Runner$.main(Runner.scala:827)
	at org.scalatest.tools.Runner.main(Runner.scala)
- test show index when SI were created before the change CARBONDATA-3765
- test refresh index with different value of isIndexTableExists
- test refresh index with indexExists as false and empty index table
- test refresh index with indexExists as null
Run completed in 15 minutes, 36 seconds.
Total number of tests run: 283
Suites: completed 32, aborted 0
Tests: succeeded 282, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CarbonData :: Parent ........................ SUCCESS [  2.509 s]
[INFO] Apache CarbonData :: Common ........................ SUCCESS [ 15.990 s]
[INFO] Apache CarbonData :: Format ........................ SUCCESS [ 32.657 s]
[INFO] Apache CarbonData :: Core .......................... SUCCESS [01:32 min]
[INFO] Apache CarbonData :: Processing .................... SUCCESS [ 33.903 s]
[INFO] Apache CarbonData :: Hadoop ........................ SUCCESS [ 22.800 s]
[INFO] Apache CarbonData :: Materialized View Plan ........ SUCCESS [01:15 min]
[INFO] Apache CarbonData :: Hive .......................... SUCCESS [02:05 min]
[INFO] Apache CarbonData :: SDK ........................... SUCCESS [02:03 min]
[INFO] Apache CarbonData :: CLI ........................... SUCCESS [05:03 min]
[INFO] Apache CarbonData :: Lucene Index .................. SUCCESS [ 22.601 s]
[INFO] Apache CarbonData :: Bloom Index ................... SUCCESS [ 12.992 s]
[INFO] Apache CarbonData :: Geo ........................... SUCCESS [ 23.719 s]
[INFO] Apache CarbonData :: Streaming ..................... SUCCESS [ 33.608 s]
[INFO] Apache CarbonData :: Spark ......................... FAILURE [  01:27 h]
[INFO] Apache CarbonData :: Secondary Index ............... FAILURE [16:28 min]
[INFO] Apache CarbonData :: Index Examples ................ SUCCESS [ 11.280 s]
[INFO] Apache CarbonData :: Flink Proxy ................... SUCCESS [ 15.864 s]
[INFO] Apache CarbonData :: Flink ......................... SUCCESS [05:29 min]
[INFO] Apache CarbonData :: Flink Build ................... SUCCESS [  5.949 s]
[INFO] Apache CarbonData :: Presto ........................ SUCCESS [02:37 min]
[INFO] Apache CarbonData :: Examples ...................... SUCCESS [02:54 min]
[INFO] Apache CarbonData :: Flink Examples ................ SUCCESS [  8.313 s]
[INFO] Apache CarbonData :: Assembly ...................... FAILURE [ 14.763 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:54 h (Wall Clock)
[INFO] Finished at: 2023-04-10T03:14:53+08:00
[INFO] Final Memory: 245M/2221M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (test) on project carbondata-spark_2.3: There are test failures -> [Help 1]
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project carbondata-assembly: Error creating shaded jar: /Users/xubo/Desktop/xubo/git/carbondata1/integration/spark/target/classes (Is a directory) -> [Help 2]
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (test) on project carbondata-secondary-index: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :carbondata-spark_2.3
[INFO] Build failures were ignored.

Process finished with exit code 0

Spark module test output:


AlterTableColumnRenameTestCase:
- test only column rename operation
- CARBONDATA-4053 test rename column, column name in table properties changed correctly
- Rename more than one column at a time in one operation
- rename complex columns with invalid structure/duplicate-names/Map-type
- test alter rename struct of (primitive/struct/array) *** FAILED ***
  Results do not match for query:
  == Parsed Logical Plan ==
  'Project ['str33.a22]
  +- 'UnresolvedRelation `test_rename`

  == Analyzed Logical Plan ==
  a22: struct<b11:int>
  Project [str33#74649.a22 AS a22#74653]
  +- SubqueryAlias test_rename
     +- Relation[str1#74648,str33#74649,str3#74650,intfield#74651] CarbonDatasourceHadoopRelation

  == Optimized Logical Plan ==
  Project [str33#74649.a22 AS a22#74653]
  +- Relation[str1#74648,str33#74649,str3#74650,intfield#74651] CarbonDatasourceHadoopRelation

  == Physical Plan ==
  *(1) Project [str33#74649.a22 AS a22#74653]
  +- *(1) Scan CarbonDatasourceHadoopRelation default.test_rename[str33#74649] Batched: false, DirectScan: false, PushedFilters: [], ReadSchema: [str33.a22]

  == Results ==
  !== Correct Answer - 2 ==   == Spark Answer - 2 ==
  ![[2]]                      [[3]]
  ![[3]]                      [null] (QueryTest.scala:93)
- test alter rename array of (primitive/array/struct)
- test alter rename and change datatype for map of (primitive/array/struct)
- test alter rename and change datatype for struct integer
- test alter rename and change datatype for map integer
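For reference, below is a minimal Scala sketch of the rename-then-project pattern the failing plan above implies. The DDL and data are assumptions reconstructed from the plan (only test_rename, str33, a22 and b11 appear in the log); it is not the suite's actual code:

// Reconstructed sketch (assumed schema/data): rename a struct column,
// then project the renamed nested field.
sql("CREATE TABLE test_rename (str2 struct<a22:struct<b11:int>>) STORED AS carbondata")
sql("INSERT INTO test_rename SELECT named_struct('a22', named_struct('b11', 2))")
sql("INSERT INTO test_rename SELECT named_struct('a22', named_struct('b11', 3))")
// CarbonData column rename DDL: CHANGE old_name new_name type.
sql("ALTER TABLE test_rename CHANGE str2 str33 struct<a22:struct<b11:int>>")
// Expected rows [[2]] and [[3]]; on Spark 2.3 the run above returned [[3]] and [null].
sql("SELECT str33.a22 FROM test_rename").show()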




- test LocalDictionary with True
- test LocalDictionary with custom Threshold *** FAILED ***
  scala.this.Predef.Boolean2boolean(org.apache.carbondata.core.util.CarbonTestUtil.checkForLocalDictionary(org.apache.carbondata.core.util.CarbonTestUtil.getDimRawChunk(TestNonTransactionalCarbonTable.this.writerPath, scala.this.Predef.int2Integer(0)))) was false (TestNonTransactionalCarbonTable.scala:2447)
- test Local Dictionary with FallBack
- test local dictionary with External Table data load
- test inverted index column by API !!! IGNORED !!!
- test Local Dictionary with Default
- Test with long string columns with 1 MB pageSize
IntegerDataTypeTestCase:
- select empno from integertypetablejoin
VarcharDataTypesBasicTestCase:
- long string columns cannot be sort_columns
- long string columns can only be string columns
- cannot alter sort_columns dataType to long_string_columns
- check compaction after altering range column dataType to longStringColumn
- long string columns cannot contain duplicate columns
- long_string_columns: column does not exist in table
- long_string_columns: columns cannot exist in partitions columns
- long_string_columns: columns cannot exist in no_inverted_index columns
- test alter table properties for long string columns
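As context for the failed threshold test above, the local-dictionary knobs it exercises are ordinary table properties. A hypothetical SQL illustration follows; the failing test itself goes through the SDK writer path (writerPath), and the demo table name and threshold value here are made up:

// Hypothetical illustration of the local-dictionary properties behind the
// failed "custom Threshold" test (table name and threshold are assumed).
sql("CREATE TABLE local_dict_demo (name string) STORED AS carbondata " +
  "TBLPROPERTIES('LOCAL_DICTIONARY_ENABLE'='true', 'LOCAL_DICTIONARY_THRESHOLD'='10000')")
// The assertion that failed inspects the first dimension raw chunk of the
// written files for local-dictionary encoding:
//   CarbonTestUtil.checkForLocalDictionary(
//     CarbonTestUtil.getDimRawChunk(writerPath, 0))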


- test duplicate columns with select query
Run completed in 1 hour, 23 minutes, 47 seconds.
Total number of tests run: 3430
Suites: completed 302, aborted 0
Tests: succeeded 3428, failed 2, canceled 0, ignored 82, pending 0
*** 2 TESTS FAILED ***

Index module test output:

CarbonIndexFileMergeTestCaseWithSI:
- Verify correctness of index merge
- Verify command of index merge !!! IGNORED !!!
- Verify command of index merge without enabling property *** FAILED ***
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 2001.0 failed 1 times, most recent failure: Lost task 1.0 in stage 2001.0 (TID 79695, localhost, executor driver): java.lang.RuntimeException: Failed to merge index files in path: /Users/xubo/Desktop/xubo/git/carbondata1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1. Table status update with mergeIndex file has failed
	at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:122)
	at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:386)
	at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$1.<init>(CarbonMergeFilesRDD.scala:322)
	at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
	at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:109)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Table status update with mergeIndex file has failed
	at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.writeMergeIndexFileBasedOnSegmentFile(CarbonIndexFileMergeWriter.java:327)
	at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:114)
	... 12 more

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1661)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1649)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1648)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1648)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
  ...
  Cause: java.lang.RuntimeException: Failed to merge index files in path: /Users/xubo/Desktop/xubo/git/carbondata1/integration/spark/target/warehouse/nonindexmerge/Fact/Part0/Segment_1. Table status update with mergeIndex file has failed
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:122)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:386)
  at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$1.<init>(CarbonMergeFilesRDD.scala:322)
  at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
  at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
  ...
  Cause: java.io.IOException: Table status update with mergeIndex file has failed
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.writeMergeIndexFileBasedOnSegmentFile(CarbonIndexFileMergeWriter.java:327)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:114)
  at org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter.mergeCarbonIndexFilesOfSegment(CarbonIndexFileMergeWriter.java:386)
  at org.apache.spark.rdd.CarbonMergeFilesRDD$$anon$1.<init>(CarbonMergeFilesRDD.scala:322)
  at org.apache.spark.rdd.CarbonMergeFilesRDD.internalCompute(CarbonMergeFilesRDD.scala:287)
  at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:84)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  ...
- Verify index index merge with compaction
- Verify index index merge for compacted segments
- test refresh index with indexExists as null
Run completed in 15 minutes, 36 seconds.
Total number of tests run: 283
Suites: completed 32, aborted 0
Tests: succeeded 282, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
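For context on the failed merge test, here is a hedged Scala sketch of the scenario its name and the trace suggest: load with per-segment index-file merge disabled, then trigger the merge explicitly. The table name nonindexmerge comes from the path in the trace; the input path and the rest of the steps are assumptions, not the suite's actual code:

// Assumed reconstruction of the failing scenario (not the suite's code).
import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties

// Disable automatic index-file merge per segment during load.
CarbonProperties.getInstance()
  .addProperty(CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT, "false")
sql("LOAD DATA INPATH 'data.csv' INTO TABLE nonindexmerge")  // hypothetical input path
// Explicit merge-index command; the failure above occurs while this step
// updates the table status file for Segment_1.
sql("ALTER TABLE nonindexmerge COMPACT 'SEGMENT_INDEX'")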
