Re: Newbie Question: Challenges with Getting Started

2023-04-06 Thread Liang Chen
Hi,

It should be OK now; please try it again.

Regards


Sandeep N wrote on Mon, 18 Apr 2022 at 19:47:

> Hi all,
>
> I came across CarbonData and started trying it out. I am following this page:
> https://carbondata.apache.org/quick-start-guide.html.
>
> So far I have downloaded
> apache-carbondata-2.3.0-bin-spark3.1.1-hadoop2.7.2.jar. I am trying to use
> it with Spark 3.1.3 (I imagine the micro version difference should not
> matter).
>
> So far I have tried this with OpenJDK 8 and OpenJDK 11, and in both
> instances CREATE TABLE works; however, when I attempt to load data from a CSV
> it fails with the exception below. This is a different CSV from the one
> called out on that quick-start page.
>
> Here is how I am launching Carbondata
> *spark-shell --conf spark.sql.extensions=org.apache.spark.sql.CarbonExtensions --jars <path to the above jar>*
>
> I am getting failures on both OpenJDK 11 and OpenJDK 8. OpenJDK 8 fails
> with a segfault. I am running on a MacBook Pro. The OpenJDK 11 errors seem
> to indicate that it is not supported, while JDK 8 seems to crash and exit. Please
> see the errors below; can someone point out what I am doing wrong?
>
> Error on OpenJDK 8 when I try to load data from a CSV file
> ==
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x000104cbd7bb, pid=17765,
> tid=0xbd03
> #
> # JRE version: OpenJDK Runtime Environment (8.0_282) (build
> 1.8.0_282-bre_2021_01_20_16_06-b00)
> # Java VM: OpenJDK 64-Bit Server VM (25.282-b00 mixed mode bsd-amd64
> compressed oops)
> # Problematic frame:
> # V  [libjvm.dylib+0x5667bb]
>
> Error on OpenJDK 11 when I try to load data from a CSV file
> ==
> 22/04/17 23:57:06 ERROR CarbonFactDataHandlerColumnar: Error in producer
> java.lang.reflect.InaccessibleObjectException: Unable to make public void
> jdk.internal.ref.Cleaner.clean() accessible: module java.base does not
> "exports jdk.internal.ref" to unnamed module @5115e1e6
> at
>
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:340)
> at
>
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:280)
> at
> java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:198)
> at java.base/java.lang.reflect.Method.setAccessible(Method.java:192)
> at
>
> org.apache.carbondata.core.memory.UnsafeMemoryManager.destroyDirectByteBuffer(UnsafeMemoryManager.java:232)
> at org.apache.carbondata.core.datastore.page.LVByteBufferColumnPage.ensureMemory(LVByteBufferColumnPage.java:125)
> at org.apache.carbondata.core.datastore.page.LVByteBufferColumnPage.putBytes(LVByteBufferColumnPage.java:97)
> at org.apache.carbondata.core.datastore.page.LocalDictColumnPage.putBytes(LocalDictColumnPage.java:139)
> at
>
> org.apache.carbondata.core.datastore.page.ColumnPage.putData(ColumnPage.java:413)
> at
>
> org.apache.carbondata.processing.store.TablePage.convertToColumnarAndAddToPages(TablePage.java:241)
> at
> org.apache.carbondata.processing.store.TablePage.addRow(TablePage.java:201)
> at
>
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processDataRows(CarbonFactDataHandlerColumnar.java:397)
> at
>
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.access$500(CarbonFactDataHandlerColumnar.java:60)
> at
>
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:637)
> at
>
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:614)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
>
> Error on OpenJDK 11 when I try to insert a single record
> ==
> 22/04/18 09:46:10 ERROR CarbonFactDataHandlerColumnar: Error in producer
> java.lang.reflect.InaccessibleObjectException: Unable to make public void
> jdk.internal.ref.Cleaner.clean() accessible: module java.base does not
> "exports jdk.internal.ref" to unnamed module @cc4787f
> at
>
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:340)
> at
>
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:280)
> at
> java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:198)
> at java.base/java.lang.reflect.Method.setAccessible(Method.java:192)
> at
>
> org.apache.carbondata.core.memory.UnsafeMemoryManager.destroyDirectByteBuffer(UnsafeMemoryManager.java:232)
> at org.apache.carbondata.core.datastore.page.LVByteBufferColumnPage.ensureMemory(LVByteBufferColumnPage.java:125)
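
The OpenJDK 11 failures above are a JDK 9+ module-encapsulation problem: java.base does not export jdk.internal.ref to the unnamed module, so the reflective call to Cleaner.clean() in UnsafeMemoryManager is rejected. A minimal sketch of the usual JVM-level workaround is below; it is not a verified CarbonData configuration, the jar path is a placeholder, and --add-opens is simply the generic mechanism for opening an internal package to unnamed modules:

# sketch only: open jdk.internal.ref to unnamed modules on driver and executors
spark-shell \
  --conf spark.sql.extensions=org.apache.spark.sql.CarbonExtensions \
  --conf "spark.driver.extraJavaOptions=--add-opens=java.base/jdk.internal.ref=ALL-UNNAMED" \
  --conf "spark.executor.extraJavaOptions=--add-opens=java.base/jdk.internal.ref=ALL-UNNAMED" \
  --jars <path to the above jar>

Staying on JDK 8 sidesteps the module system entirely, although the separate SIGSEGV reported above would still need its own investigation.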

Re: Error while creating table

2023-04-06 Thread Liang Chen
The user group is u...@carbondata.apache.org

Regards

Xinyu Zeng wrote on Mon, 25 Apr 2022 at 11:13:

> Hi,
>
> Since there is no user group, I am using this mailing list to ask
> questions. Please let me know if there is another platform for user
> discussion.
>
> I am new to CarbonData and am following the quick start guide. On
> Ubuntu 20.04, I installed spark-3.1.1-bin-hadoop2.7.tgz and
> apache-carbondata-2.3.0-bin-spark3.1.1-hadoop2.7.2.jar. By using
> the Spark SQL CLI, I got an error message while following the quick start
> guide (pasted at the end of this email). Could someone give me some help?
> Thanks!
>
> Shawn
>
> java.lang.IncompatibleClassChangeError: class
> org.apache.spark.sql.hive.CarbonRelation has interface
> org.apache.spark.sql.catalyst.plans.logical.LeafNode as super class
> at java.base/java.lang.ClassLoader.defineClass1(Native Method)
> at
> java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
> at
> java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
> at java.base/java.net.URLClassLoader.defineClass(URLClassLoader.java:555)
> at java.base/java.net.URLClassLoader$1.run(URLClassLoader.java:458)
> at java.base/java.net.URLClassLoader$1.run(URLClassLoader.java:452)
> at java.base/java.security.AccessController.doPrivileged(Native Method)
> at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:451)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:589)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
> at
> org.apache.spark.sql.hive.CarbonMetaStoreFactory$.createCarbonMetaStore(CarbonMetaStore.scala:189)
> at org.apache.spark.sql.CarbonEnv.init(CarbonEnv.scala:137)
> at org.apache.spark.sql.CarbonEnv$.getInstance(CarbonEnv.scala:176)
> at
> org.apache.spark.sql.parser.CarbonExtensionSqlParser.parsePlan(CarbonExtensionSqlParser.scala:44)
> at
> org.apache.spark.sql.SparkSession.$anonfun$sql$2(SparkSession.scala:616)
> at
> org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
> at
> org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:616)
> at
> org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
> at
> org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
> at
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:384)
> at
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:504)
> at
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:498)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:498)
> at
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:287)
> at
> org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> at
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> at
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
> at
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
> at
> org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
> at
> org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
> at
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
> at
> org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
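
The IncompatibleClassChangeError above means CarbonRelation was compiled against a different definition of org.apache.spark.sql.catalyst.plans.logical.LeafNode than the one found at runtime, which usually points to a Spark or Scala binary mismatch between the CarbonData jar and the Spark installation on the classpath. A quick sanity check, sketched below with placeholder paths and assuming the artifacts named earlier in this thread, is to confirm the runtime really is Spark 3.1.x on Scala 2.12 (what the spark3.1.1 jar targets) before starting the SQL CLI:

# sketch only: SPARK_HOME and the jar location are placeholders
$SPARK_HOME/bin/spark-submit --version    # expect Spark 3.1.1, Scala 2.12

$SPARK_HOME/bin/spark-sql \
  --conf spark.sql.extensions=org.apache.spark.sql.CarbonExtensions \
  --jars /path/to/apache-carbondata-2.3.0-bin-spark3.1.1-hadoop2.7.2.jar

If the versions already match, it is worth checking for a second, older Spark or CarbonData jar elsewhere on the classpath (for example left over in $SPARK_HOME/jars), since a stray copy can produce this kind of class-change error.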

Discussion: the ASF Jenkins CI is not working well, any suggestions? We need to change to a new free CI

2023-04-06 Thread Liang Chen
Dear all,

The Jenkins CI of ASF is not working well. Any suggestions? We need to change to a new free CI.