All,
I am new to Spark 2.2.1. I have a single-node cluster and have also enabled
the Thrift server so my Tableau application can connect to my persisted table.
I suspect that the Spark cluster metastore is different from the Thrift server
metastore. If this assumption is valid, what do I need
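A quick way to check whether both sides see the same catalog, as a minimal spark-shell sketch (Spark 2.2 defaults assumed; the Thrift server only shares tables if it reads the same hive-site.xml and metastore):

```scala
// Tables in the catalog this Spark session uses; if the Thrift server points
// at the same Hive metastore, beeline should list the same tables.
spark.catalog.listTables().show()
// Warehouse directory backing managed tables:
println(spark.conf.get("spark.sql.warehouse.dir"))
```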
On Wed, Sep 28, 2016 at 9:11 AM, Anirudh Muhnot <muh...@icloud.com> wrote:
Hello everyone, I'm Anirudh. I'm fairly new to Spark, as I've only done an
online specialisation from UC Berkeley. I know how to code in Python but have
little to no idea about Scala. I want to contribute to Spark; where do I start
and how? I'm reading the pull requests on GitHub but I'm barely able
> Hello folks,
>
> This is my first msg to the list. New to Spark, and trying to run the
> SparkPi example shown in the Cloudera documentation. We have Cloudera
> 5.5.1 running on a small cluster at our lab, with Spark 1.5.
>
> My trial invocation is given below. The
0-cdh5.5.1.jar
>10
>
> Log Type: stdout
>
> Log Upload Time: Sat Feb 13 11:00:08 + 2016
>
> Log Length: 23
>
> Pi is roughly 3.140224
>
>
> Hope that helps!
>
>
> On Sat, Feb 13, 2016 at 3:14 AM, Taylor, Ronald C <ronald.tay...@pnnl.gov>
My trial invocation is given below. The output that I get *says* that I
and hive config, that would help locate the root cause of
the problem.
Best,
Sun.
fightf...@163.com
From: Ashok Kumar
Date: 2015-12-01 18:54
To: user@spark.apache.org
Subject: New to Spark
Hi,
I am new to Spark.
I am trying to use spark-sql with SPARK CREATED and HIVE CREATED tables.
I have
Have you tried the following command?
REFRESH TABLE
Cheers
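As a minimal sketch of that suggestion in a Spark 1.x shell (the table name is hypothetical):

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc) // sc is the spark-shell SparkContext
// Refresh Spark's cached metadata after the table changed outside Spark:
hiveContext.sql("REFRESH TABLE my_hive_table")
hiveContext.sql("SELECT COUNT(1) FROM my_hive_table").show()
```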
Hi,
I am new to Spark.
I am trying to use spark-sql with SPARK CREATED and HIVE CREATED tables.
I have successfully configured Spark to use the Hive metastore.
In spark-sql I can see the DDL for Hive tables. However, when I do select
count(1) from HIVE_TABLE it always returns zero rows.
If I
Hi,
After attending the Spark Summit Europe 2015, I have started a Spark meetup
group for the German State of NordRhein-Westfalen.
It would be great if you could add it to the list of meetups on the Apache
Spark page.
http://www.meetup.com/spark-users-NRW/
I am very new to Spark.
I have a very basic question. I have an array of values:
listofECtokens: Array[String] = Array(EC-17A5206955089011B,
EC-17A5206955089011A)
I want to filter an RDD for all of these token values. I tried the following
way:
val ECtokens = for (token <- listofECtok
.contains(item)) found = true
}
found
}).collect()
Output:
res8: Array[String] = Array(This contains EC-17A5206955089011B)
Thanks
Best Regards
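The same filter as a self-contained spark-shell sketch, written with exists rather than a mutable flag (the sample lines are made up; only the first contains a token):

```scala
val listofECtokens: Array[String] =
  Array("EC-17A5206955089011B", "EC-17A5206955089011A")
val rdd = sc.parallelize(Seq("This contains EC-17A5206955089011B", "no token here"))
// Keep lines containing at least one of the tokens:
val matches = rdd.filter(line => listofECtokens.exists(tok => line.contains(tok)))
                 .collect()
// matches: Array(This contains EC-17A5206955089011B)
```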
On Wed, Sep 9, 2015 at 3:25 PM, prachicsa <prachi...@gmail.com> wrote:
mind empty partitions. Otherwise you might have to
>> mess around to extract the exact number of keys if it's not readily
>> available.
>>
>> Aside: what is the requirement to have each partition only contain the
>> data related to one key?
>>
>> On Fri, Sep 4, 2015 at 11:06 AM, mmike87 <mwri...@snl.com> wrote:
Hello, I am new to Apache Spark and this is my company's first Spark project.
Essentially, we are calculating models dealing with Mining data using Spark.
I am holding all the source data in a persisted RDD that we will refresh
periodically. When a "scenario" is passed to the Spark
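For the one-key-per-partition idea discussed in this thread, a hedged sketch using partitionBy (sourceRdd and keyOf are hypothetical names; note that a HashPartitioner can still co-locate two keys on a collision, and some partitions may come out empty):

```scala
import org.apache.spark.HashPartitioner

// Key each record by its scenario key, then repartition so all records for
// a given key land in the same partition.
val byKey = sourceRdd.map(rec => (keyOf(rec), rec))
val numKeys = byKey.keys.distinct().count().toInt
val partitioned = byKey.partitionBy(new HashPartitioner(numKeys)).persist()
```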
For Avro in particular, I have been working on a library for Spark
SQL. It's very early code, but you can find it here:
https://github.com/databricks/spark-avro
Bug reports welcome!
Michael
On Wed, Nov 19, 2014 at 1:02 PM, Sam Flint sam.fl...@magnetic.com
wrote:
Hi,
I am new to Spark. I have begun to read to understand Spark's RDD
files as well as SparkSQL. My question is more on how
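A minimal usage sketch for the package Michael mentions, via the external data source API that later Spark 1.x versions expose (the path is hypothetical, and the package must be on the classpath, e.g. via --packages):

```scala
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
// Read Avro files into a DataFrame through the spark-avro data source:
val df = sqlContext.read.format("com.databricks.spark.avro").load("/path/to/data.avro")
df.printSchema()
```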