[jira] [Commented] (SPARK-21928) ClassNotFoundException for custom Kryo registrator class during serde in netty threads

2023-11-20 Thread Shivam Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788104#comment-17788104
 ] 

Shivam Sharma commented on SPARK-21928:
---

I am seeing this intermittent failure on Spark 2.4.3. Here is the full stack 
trace:
{code:java}
Exception in thread "main" java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:65)
	at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 75 in stage 1.0 failed 4 times, most recent failure: Lost task 75.3 in stage 1.0 (TID 171, phx6-kwq.prod.xyz.internal, executor 71): java.io.IOException: org.apache.spark.SparkException: Failed to register classes with Kryo
	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1333)
	at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:208)
	at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:66)
	at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:66)
	at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:96)
	at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:89)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Failed to register classes with Kryo
	at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:140)
	at org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:324)
	at org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:309)
	at org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:218)
	at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:305)
	at org.apache.spark.broadcast.TorrentBroadcast.$anonfun$readBroadcastBlock$3(TorrentBroadcast.scala:235)
	at scala.Option.getOrElse(Option.scala:138)
	at org.apache.spark.broadcast.TorrentBroadcast.$anonfun$readBroadcastBlock$1(TorrentBroadcast.scala:211)
	at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1326)
	... 14 more
Caused by: java.lang.ClassNotFoundException: com.xyz.datashack.SparkKryoRegistrar
	at java.lang.ClassLoader.findClass(ClassLoader.java:530)
	at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
	at org.apache.spark.util.ChildFirstURLClassLoader.loadClass(ChildFirstURLClassLoader.java:48)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.serializer.KryoSerializer.$anonfun$newKryo$6(KryoSerializer.scala:135)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at scala.collection.TraversableLike.map(TraversableLike.scala:237)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
	at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:135)
	... 22 more
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:1889)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1877)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1876)
{code}
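For context, the registrator class that cannot be found in the trace is loaded by name on each executor, so the jar containing it must be on the executor classpath (e.g. shipped via {{--jars}}). A minimal sketch of how such a registrator is wired up; the class and package names here are hypothetical, not taken from the issue:

```scala
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical registrator: Spark instantiates this class by name on every
// executor, so it must be visible to the executor's classloader, or the
// ClassNotFoundException above is exactly what results.
class MyKryoRegistrar extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[Array[Byte]])
  }
}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "com.example.MyKryoRegistrar")
```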

[jira] [Commented] (SPARK-29251) intermittent serialization failures in spark

2023-11-20 Thread Shivam Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788101#comment-17788101
 ] 

Shivam Sharma commented on SPARK-29251:
---

I am also hitting the same issue. Did you find a fix?
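As the stack traces on this issue show, the failure surfaces inside {{Class.forName}} when {{KryoSerializer.newKryo}} re-resolves the registered class names, so which classloader the current thread sees decides whether the lookup succeeds. A small illustrative sketch of that lookup pattern (the class name is hypothetical):

```scala
// Sketch only: resolve a class name against the thread's context classloader,
// falling back to this class's own loader. If user jars are not visible to
// the loader the resolving thread uses, the result is a ClassNotFoundException
// even though the class resolves fine on other threads.
def resolve(name: String): Either[ClassNotFoundException, Class[_]] = {
  val loader = Option(Thread.currentThread().getContextClassLoader)
    .getOrElse(getClass.getClassLoader)
  try Right(Class.forName(name, false, loader))
  catch { case e: ClassNotFoundException => Left(e) }
}

resolve("com.example.MyModel") // hypothetical name; Left(...) if not on classpath
```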

> intermittent serialization failures in spark
> 
>
> Key: SPARK-29251
> URL: https://issues.apache.org/jira/browse/SPARK-29251
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.4.3
> Environment: We are running Spark 2.4.3 on an AWS EMR cluster. Our 
> cluster consists of one driver instance, 3 core instances, and 3 to 12 task 
> instances; the task group autoscales as needed.
>Reporter: Jerry Vinokurov
>Priority: Major
>  Labels: bulk-closed
>
> We are running an EMR cluster on AWS that processes somewhere around 100 
> batch jobs a day. These jobs are running various SQL commands to transform 
> data and the data volume ranges between a dozen or a few hundred MB to a few 
> GB on the high end for some jobs, and even around ~1 TB for one particularly 
> large one. We use the Kryo serializer and preregister our classes like so:
>  
> {code:java}
> object KryoRegistrar {
>   val classesToRegister: Array[Class[_]] = Array(
>     classOf[MyModel],
>    [etc]
>   )
> }
> // elsewhere
> val sparkConf = new SparkConf()
>       .registerKryoClasses(KryoRegistrar.classesToRegister)
> {code}
>  
>  
> Intermittently throughout the cluster's operation we have observed jobs 
> terminating with the following stack trace:
>  
>  
> {noformat}
> org.apache.spark.SparkException: Failed to register classes with Kryo
>   at 
> org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:140)
>   at 
> org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:324)
>   at 
> org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:309)
>   at 
> org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:218)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:288)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:127)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>   at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1489)
>   at 
> org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.buildReader(CSVFileFormat.scala:103)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormat$class.buildReaderWithPartitionValues(FileFormat.scala:129)
>   at 
> org.apache.spark.sql.execution.datasources.TextBasedFileFormat.buildReaderWithPartitionValues(FileFormat.scala:165)
>   at 
> org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:309)
>   at 
> org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:305)
>   at 
> org.apache.spark.sql.execution.FileSourceScanExec.inputRDDs(DataSourceScanExec.scala:327)
>   at 
> org.apache.spark.sql.execution.FilterExec.inputRDDs(basicPhysicalOperators.scala:121)
>   at 
> org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:41)
>   at 
> org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:627)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>   at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:156)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>   at 
> org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:92)
>   at 
> org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:128)
>   at 
> org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
>   at 
> org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
>   at 
> org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
> {noformat}

[jira] [Comment Edited] (HIVE-24066) Hive query on parquet data should identify if column is not present in file schema and show NULL value instead of Exception

2021-05-06 Thread Shivam Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17340225#comment-17340225
 ] 

Shivam Sharma edited comment on HIVE-24066 at 5/6/21, 1:59 PM:
---

I am also facing the same issue. Can we get an update here?

Is there a workaround other than creating a new table?


was (Author: shivamsharma):
I am also facing the same issue. Can we get update here?

> Hive query on parquet data should identify if column is not present in file 
> schema and show NULL value instead of Exception
> ---
>
> Key: HIVE-24066
> URL: https://issues.apache.org/jira/browse/HIVE-24066
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.5, 3.1.2
>Reporter: Jainik Vora
>Priority: Major
> Attachments: day_01.snappy.parquet
>
>
> I created a hive table containing columns with struct data type 
>   
> {code:java}
> CREATE EXTERNAL TABLE test_dwh.sample_parquet_table (
>   `context` struct<
> `app`: struct<
> `build`: string,
> `name`: string,
> `namespace`: string,
> `version`: string
> >,
> `device`: struct<
> `adtrackingenabled`: boolean,
> `advertisingid`: string,
> `id`: string,
> `manufacturer`: string,
> `model`: string,
> `type`: string
> >,
> `locale`: string,
> `library`: struct<
> `name`: string,
> `version`: string
> >,
> `os`: struct<
> `name`: string,
> `version`: string
> >,
> `screen`: struct<
> `height`: bigint,
> `width`: bigint
> >,
> `network`: struct<
> `carrier`: string,
> `cellular`: boolean,
> `wifi`: boolean
>  >,
> `timezone`: string,
> `userAgent`: string
> >
> ) PARTITIONED BY (day string)
> STORED as PARQUET
> LOCATION 's3://xyz/events'{code}
>  
>  All columns are nullable hence the parquet files read by the table don't 
> always contain all columns. If any file in a partition doesn't have 
> "context.os" struct and if "context.os.name" is queried, Hive throws an 
> exception as below. Same for "context.screen" as well.
>   
> {code:java}
> 2020-10-23T00:44:10,496 ERROR [db58bfe6-d0ca-4233-845a-8a10916c3ff1 
> main([])]: CliDriver (SessionState.java:printError(1126)) - Failed with 
> exception java.io.IOException:java.lang.RuntimeException: Primitive type 
> osshould not doesn't match typeos[name]
> 2020-10-23T00:44:10,496 ERROR [db58bfe6-d0ca-4233-845a-8a10916c3ff1 
> main([])]: CliDriver (SessionState.java:printError(1126)) - Failed with 
> exception java.io.IOException:java.lang.RuntimeException: Primitive type 
> osshould not doesn't match typeos[name]java.io.IOException: 
> java.lang.RuntimeException: Primitive type osshould not doesn't match 
> typeos[name] 
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:521)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:428)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:147)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2208)
>   at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:253)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
>   at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: java.lang.RuntimeException: Primitive type osshould not doesn't 
> match typeos[name] 
>   at 
> org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.projectLeafTypes(DataWritableReadSupport.java:330)
>  
>   at 
> org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.projectLeafTypes(DataWritableReadSupport.java:322)
>  
> {code}

[jira] [Commented] (HIVE-24066) Hive query on parquet data should identify if column is not present in file schema and show NULL value instead of Exception

2021-05-06 Thread Shivam Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17340225#comment-17340225
 ] 

Shivam Sharma commented on HIVE-24066:
--

I am also facing the same issue. Can we get an update here?

> Hive query on parquet data should identify if column is not present in file 
> schema and show NULL value instead of Exception
> ---
>
> Key: HIVE-24066
> URL: https://issues.apache.org/jira/browse/HIVE-24066
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.5, 3.1.2
>Reporter: Jainik Vora
>Priority: Major
> Attachments: day_01.snappy.parquet
>
>
> I created a hive table containing columns with struct data type 
>   
> {code:java}
> CREATE EXTERNAL TABLE test_dwh.sample_parquet_table (
>   `context` struct<
> `app`: struct<
> `build`: string,
> `name`: string,
> `namespace`: string,
> `version`: string
> >,
> `device`: struct<
> `adtrackingenabled`: boolean,
> `advertisingid`: string,
> `id`: string,
> `manufacturer`: string,
> `model`: string,
> `type`: string
> >,
> `locale`: string,
> `library`: struct<
> `name`: string,
> `version`: string
> >,
> `os`: struct<
> `name`: string,
> `version`: string
> >,
> `screen`: struct<
> `height`: bigint,
> `width`: bigint
> >,
> `network`: struct<
> `carrier`: string,
> `cellular`: boolean,
> `wifi`: boolean
>  >,
> `timezone`: string,
> `userAgent`: string
> >
> ) PARTITIONED BY (day string)
> STORED as PARQUET
> LOCATION 's3://xyz/events'{code}
>  
>  All columns are nullable hence the parquet files read by the table don't 
> always contain all columns. If any file in a partition doesn't have 
> "context.os" struct and if "context.os.name" is queried, Hive throws an 
> exception as below. Same for "context.screen" as well.
>   
> {code:java}
> 2020-10-23T00:44:10,496 ERROR [db58bfe6-d0ca-4233-845a-8a10916c3ff1 
> main([])]: CliDriver (SessionState.java:printError(1126)) - Failed with 
> exception java.io.IOException:java.lang.RuntimeException: Primitive type 
> osshould not doesn't match typeos[name]
> 2020-10-23T00:44:10,496 ERROR [db58bfe6-d0ca-4233-845a-8a10916c3ff1 
> main([])]: CliDriver (SessionState.java:printError(1126)) - Failed with 
> exception java.io.IOException:java.lang.RuntimeException: Primitive type 
> osshould not doesn't match typeos[name]java.io.IOException: 
> java.lang.RuntimeException: Primitive type osshould not doesn't match 
> typeos[name] 
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:521)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:428)
>   at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:147)
>   at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2208)
>   at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:253)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
>   at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: java.lang.RuntimeException: Primitive type osshould not doesn't 
> match typeos[name] 
>   at 
> org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.projectLeafTypes(DataWritableReadSupport.java:330)
>  
>   at 
> org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.projectLeafTypes(DataWritableReadSupport.java:322)
>  
>   at 
> org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.getProjectedSchema(DataWritableReadSupport.java:249)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.init(DataWritableReadSupport.java:379)
>  
> {code}

[jira] [Resolved] (CASSANDRA-14785) http://cassandra.apache.org/ leads to Hadoop site

2018-09-22 Thread Shivam Sharma (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivam Sharma resolved CASSANDRA-14785.
---
Resolution: Invalid

Browser cache issue.

> http://cassandra.apache.org/ leads to Hadoop site
> -
>
> Key: CASSANDRA-14785
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14785
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shivam Sharma
>Priority: Major
> Attachments: image-2018-09-23-07-11-02-900.png
>
>
> http://cassandra.apache.org/ link is leading to apache hadoop website.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14785) http://cassandra.apache.org/ leads to Hadoop site

2018-09-22 Thread Shivam Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16624891#comment-16624891
 ] 

Shivam Sharma edited comment on CASSANDRA-14785 at 9/23/18 1:41 AM:


[~djoshi3] Something weird happened on my machine. curl worked fine, but the 
browser kept serving the wrong site until I cleared its cache.

First I tried Chrome, which opened the Hadoop home page; then Firefox, which 
showed the Mesos home page. After I cleared both browsers' caches, the site 
loaded correctly.

!image-2018-09-23-07-11-02-900.png!

 

I will close this issue since it is specific to my machine.


was (Author: shivamsharma):
[~djoshi3] Something weird happened on my machine. I tried CURL and it worked 
fine. But in the browser, it was creating the issue before deleting cache and 
all.

First I tried opening in chrome which gave the same response it was opening 
Hadoop home page. Then I tried on firefox which was giving Mesos home page. 
Then I deleted the cache of both browser then it worked fine in the browser 
also. 

!image-2018-09-23-07-11-02-900.png!

 

> http://cassandra.apache.org/ leads to Hadoop site
> -
>
> Key: CASSANDRA-14785
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14785
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shivam Sharma
>Priority: Major
> Attachments: image-2018-09-23-07-11-02-900.png
>
>
> http://cassandra.apache.org/ link is leading to apache hadoop website.






[jira] [Commented] (CASSANDRA-14785) http://cassandra.apache.org/ leads to Hadoop site

2018-09-22 Thread Shivam Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16624891#comment-16624891
 ] 

Shivam Sharma commented on CASSANDRA-14785:
---

[~djoshi3] Something weird happened on my machine. curl worked fine, but the 
browser kept serving the wrong site until I cleared its cache.

First I tried Chrome, which opened the Hadoop home page; then Firefox, which 
showed the Mesos home page. After I cleared both browsers' caches, the site 
loaded correctly.

!image-2018-09-23-07-11-02-900.png!

 

> http://cassandra.apache.org/ leads to Hadoop site
> -
>
> Key: CASSANDRA-14785
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14785
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shivam Sharma
>Priority: Major
> Attachments: image-2018-09-23-07-11-02-900.png
>
>
> http://cassandra.apache.org/ link is leading to apache hadoop website.






[jira] [Updated] (CASSANDRA-14785) http://cassandra.apache.org/ leads to Hadoop site

2018-09-22 Thread Shivam Sharma (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivam Sharma updated CASSANDRA-14785:
--
Attachment: image-2018-09-23-07-11-02-900.png

> http://cassandra.apache.org/ leads to Hadoop site
> -
>
> Key: CASSANDRA-14785
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14785
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shivam Sharma
>Priority: Major
> Attachments: image-2018-09-23-07-11-02-900.png
>
>
> http://cassandra.apache.org/ link is leading to apache hadoop website.






[jira] [Created] (CASSANDRA-14785) http://cassandra.apache.org/ leads to Hadoop site

2018-09-22 Thread Shivam Sharma (JIRA)
Shivam Sharma created CASSANDRA-14785:
-

 Summary: http://cassandra.apache.org/ leads to Hadoop site
 Key: CASSANDRA-14785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14785
 Project: Cassandra
  Issue Type: Bug
Reporter: Shivam Sharma


http://cassandra.apache.org/ link is leading to apache hadoop website.






[jira] [Updated] (SPARK-19546) Every mail to u...@spark.apache.org is getting blocked

2017-02-10 Thread Shivam Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivam Sharma updated SPARK-19546:
--
Priority: Major  (was: Minor)

> Every mail to u...@spark.apache.org is getting blocked
> --
>
> Key: SPARK-19546
> URL: https://issues.apache.org/jira/browse/SPARK-19546
> Project: Spark
>  Issue Type: IT Help
>  Components: Project Infra
>Affects Versions: 2.1.0
>Reporter: Shivam Sharma
>
> Each time I send mail to u...@spark.apache.org, I get an email from 
> yahoo-inc saying that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-19546) Every mail to u...@spark.apache.org is getting blocked

2017-02-10 Thread Shivam Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivam Sharma updated SPARK-19546:
--
Description: 
Each time I send mail to u...@spark.apache.org, I get an email from 
yahoo-inc saying that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".


  was:
Each time I am sending mail to  u...@spark.apache.org I am getting email from 
yahoo-inc that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".

P

Summary: Every mail to u...@spark.apache.org is getting blocked  (was: 
Every mail to u...@spark.apache.org is blocked)

> Every mail to u...@spark.apache.org is getting blocked
> --
>
> Key: SPARK-19546
> URL: https://issues.apache.org/jira/browse/SPARK-19546
> Project: Spark
>  Issue Type: IT Help
>  Components: Project Infra
>Affects Versions: 2.1.0
>Reporter: Shivam Sharma
>Priority: Minor
>
> Each time I send mail to u...@spark.apache.org, I get an email from 
> yahoo-inc saying that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".






[jira] [Created] (SPARK-19546) Every mail to u...@spark.apache.org is blocked

2017-02-10 Thread Shivam Sharma (JIRA)
Shivam Sharma created SPARK-19546:
-

 Summary: Every mail to u...@spark.apache.org is blocked
 Key: SPARK-19546
 URL: https://issues.apache.org/jira/browse/SPARK-19546
 Project: Spark
  Issue Type: IT Help
  Components: Project Infra
Affects Versions: 2.1.0
Reporter: Shivam Sharma
Priority: Minor


Each time I am sending mail to  u...@spark.apache.org I am getting email from 
yahoo-inc that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".

P


