Hi,
I found an example of simple applications like wordcount running on Tez:
https://github.com/apache/tez/tree/master/tez-examples/src/main/java/org/apache/tez/examples.
However, how can I run these on Tez+LLAP? Any suggestions?
BR,
Patcharee
On 24 October 2016 at 10:43, Patcharee Thongtra
<patcharee.thong...@uni.no> wrote:
Hi,
I tried to query an ORC file with beeline and a Java program using JDBC
("select * from orcfileTable limit 1"). Both failed with: Caused by:
java.lang.OutOfMemoryError: Java heap space. The HiveServer2 heap size is
1024m. I guess I need to increase the HiveServer2 heap size? However, I
wonder why I
Yes, I did not install R. Stupid me.
Thanks for your guidance!
BR,
Patcharee
On 04/13/2016 08:23 PM, Eric Charles wrote:
Can you post the full stacktrace you have (look also at the log file)?
Did you install R on your machine?
SPARK_HOME is optional.
On 13/04/16 15:39, Patcharee Thongtra wrote:
Hi,
I am running a Jupyter notebook - pyspark. I noticed from the history
server UI that some tasks spend a lot of time on either
- executor running time
- getting result
But some tasks finished both steps very quickly. All tasks, however, have
very similar input sizes.
What can be the
spark for testing first.
BR,
Patcharee
On 04/13/2016 02:52 PM, Patcharee Thongtra wrote:
Hi,
I have been struggling with the R interpreter / SparkR interpreter. Is
the command below right for building Zeppelin with the R interpreter /
SparkR interpreter?
mvn clean package -Pspark-1.6 -Phadoop-2.6
Hi,
In Python, how can I use an InputFormat / custom RecordReader?
Thanks,
Patcharee
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
Hi,
Is there a counter for data-local reads? I understood it to be the locality
level counter, but it seems not.
Thanks,
Patcharee
Hi Zhan Zhang,
Could my problem (the ORC predicate is not generated from the WHERE clause
even though spark.sql.orc.filterPushdown=true) be related to any of the
factors below?
- orc file version (File Version: 0.12 with HIVE_8732)
- hive version (using Hive 1.2.1.2.3.0.0-2557)
- orc table is
Hi,
How can I create a partitioned ORC table with sorted field(s)? I tried
to use the SORTED BY keyword, but got a parse exception:
CREATE TABLE peoplesort (name string, age int) partition by (bddate int)
SORTED BY (age) stored as orc
Is it possible to have some sorted columns? From hive ddl
. But if the
driver side didn’t set the predicate at all, then somewhere else is
broken.
Can you please file a JIRA with a simple reproduce step, and let me
know the JIRA number?
Thanks.
Zhan Zhang
On Oct 13, 2015, at 1:01 AM, Patcharee Thongtra
<patcharee.thong...@uni.no>
Hi,
I am trying to build Spark 1.3 from source. After I executed
mvn -DskipTests clean package
I tried to use the shell but got this error:
[root@sandbox spark]# ./bin/spark-shell
Exception in thread main java.lang.IllegalStateException: No
assemblies found in
Hi,
I have built Spark version 1.3 and tried to use it in my Spark Scala
application. When I tried to compile and build the application with SBT, I
got the error:
bad symbolic reference. A signature in SparkContext.class refers to term
conf in value org.apache.hadoop which is not available
It
load_u.pig
it just worked.
Patcharee
On 03/09/2015 08:42 PM, Daniel Dai wrote:
Sounds like a commons-codec version conflict. Tez is using commons-codec
1.4. Do you see another version of commons-codec.jar in your Hadoop lib or
CLASSPATH?
Thanks,
Daniel
On 3/9/15, 5:28 AM, Patcharee
Hi,
I tried to run pig on TEZ
pig -Dpig.additional.jars=... -useHCatalog -x TEZ load_u.pig
But I got an exception
Error cause TezChild exit.:java.lang.NoSuchMethodError:
org.apache.commons.codec.binary.Base64.decodeBase64(Ljava/lang/String;)[B
It seems the problem is from
Hi,
In my Spark application I queried a Hive table and tried to take only
one record, but got java.lang.RuntimeException: Couldn't find function Some
val rddCoOrd = sql("SELECT date, x, y FROM coordinate where order
by date limit 1")
val resultCoOrd = rddCoOrd.take(1)(0)
Any ideas? I
Hi,
I have a Hive table with a column whose name was changed. Pig is not
able to load data from this column; it is all empty.
Any ideas how to fix it?
BR,
Patcharee
After I changed org.apache.hcatalog.pig.HCatStorer() to
org.apache.hive.hcatalog.pig.HCatStorer(), it worked.
Patcharee
On 01/14/2015 02:57 PM, Patcharee Thongtra wrote:
Hi,
I am having a weird problem. I created a table in orc format:
Create table
Hi,
I am having a weird problem. I created a table in orc format:
Create table
create external table cossin (x int, y int, cos float, sin float)
PARTITIONED BY(zone int) stored as orc location
Hi,
I have a table with float columns. I tried to query with a condition on a
float column (called 'long'), but it failed (nothing was returned).
hive> select * from test_float where long == -41.338276;
select * from test_float where long == -41.338276
Status: Finished successfully
OK
Time
It works. Thanks!
Patcharee
On 01/13/2015 10:15 AM, Devopam Mittra wrote:
please try the following and report observation:
WHERE long = CAST(-41.338276 AS FLOAT)
regards
Devopam
On Tue, Jan 13, 2015 at 2:25 PM, Patcharee Thongtra
<patcharee.thong...@uni.no>
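Devopam's CAST fix works because the HiveQL literal -41.338276 is parsed as a 64-bit double, while the column stores a 32-bit float; the two values differ in the low bits, so a plain equality match returns nothing. A minimal Python sketch of the mismatch (the struct round-trip stands in for Hive's 32-bit float storage):

```python
import struct

literal = -41.338276  # parsed as a 64-bit double in HiveQL
# Round-tripping through a 32-bit float mimics what the FLOAT column stores.
stored = struct.unpack('f', struct.pack('f', literal))[0]
print(stored == literal)  # False: the float32 value is not bit-identical
```

CAST(-41.338276 AS FLOAT) performs the same narrowing on the literal, so both sides compare as the same 32-bit value.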
Hi,
I am new to Pig. I am using Pig version 0.12. I found unexpected
behaviour from a left join on multiple columns, as listed below:
--
...
...
dump r_four_dim1;
describe r_four_dim1;
dump result_height;
describe result_height;
Hi,
I am not able to update my jar; it seems it has been cached somewhere.
I run hadoop -jar myjar.jar arg0 arg1
How can I fix this?
Patcharee
, Patcharee Thongtra
<patcharee.thong...@uni.no> wrote:
Hi,
I am not able to update my jar; it seems it has been cached somewhere.
I run hadoop -jar myjar.jar arg0 arg1
How can I fix this?
Patcharee
--
Thanks and Regards,
VIVEK KOUL
Hi,
I have two applications running in Tomcat 6. I made the first app the
default web app by placing it as ROOT.war in webapps/. How can I access
the second app? Whenever I browse http://localhost:8080/the_second_app/,
Tomcat thinks I want to access the /the_second_app path within the first app.
Hi,
I use version 6.0.0.37. The second app was already deployed.
Patcharee
On 08/11/2014 02:17 PM, Daniel Mikusa wrote:
On Mon, Aug 11, 2014 at 8:00 AM, Patcharee Thongtra
patcharee.thong...@uni.no wrote:
Hi,
I have two applications running in Tomcat 6.
What version specifically?
I
Hi,
Actually it is my fault. The second app could not start up because a
lib was missing. Now it is up and I can access it.
Thanks.
Patcharee
On 08/11/2014 02:21 PM, Ognjen Blagojevic wrote:
Patcharee,
On 11.8.2014 14:00, Patcharee Thongtra wrote:
I have two applications running in Tomcat
On 08/04/2014 11:26 AM, André Warnier wrote:
Patcharee Thongtra wrote:
Hi,
Is it possible to have Tomcat do some custom actions after a specific
page/file is accessed/downloaded? If so, how to?
Any suggestions are appreciated.
What kind of custom actions, for what kind of pages/files
Hi,
Is it possible to have Tomcat do some custom actions after a specific
page/file is accessed/downloaded? If so, how to?
Any suggestions are appreciated.
Patcharee
Hi,
I have the following data
(2009-09-09,2,1,{(70)},{(80)},{(90)})
(2010-10-10,2,12,{(71),(75)},{(81),(85)},{(91),(95)})
(2012-12-12,2,9,{(76),(77),(78)},{(86),(87),(88)},{(96),(97),(98)})
which is in the format
{date: chararray, zone: int, z: int, uTmp: {(varvalue: int)}, vTmp:
{(varvalue:
Hi,
I got a very strange exception.
80693 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066:
Unable to open iterator for alias ordered. Backend error :
java.lang.String cannot be cast to java.lang.Integer
14/05/30 11:53:22 ERROR grunt.Grunt: ERROR 1066: Unable to open iterator
for
).
Ultimately, you can read the z column as a chararray and process it with a UDF.
This will give you a chance to log the faulty record.
Tell me if any of this removed the problem.
Piotr
On 30 May 2014 12:02, Patcharee Thongtra <patcharee.thong...@uni.no>
wrote:
Hi,
I got very strange exception.
80693 [main
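Piotr's suggestion above — read z as a chararray and convert it yourself so a faulty record can be logged instead of crashing the job — can be sketched in plain Python (the function name parse_z is hypothetical; in Pig this logic would live in a UDF, e.g. a Jython one):

```python
def parse_z(value):
    """Convert the z field to int, logging and skipping faulty records."""
    try:
        return int(value)
    except (TypeError, ValueError):
        # A String that cannot be cast to Integer ends up here instead of
        # killing the job, and the offending record gets logged.
        print("faulty record: %r" % (value,))
        return None

print(parse_z("12"))   # 12
print(parse_z("9a"))   # logs the record, returns None
```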
Hi,
I am using HCatLoader to load data from a table (existing in hive).
A = load 'rwf_data' USING org.apache.hcatalog.pig.HCatLoader();
describe A;
I got Error 1115: Table not found : ...
It is weird. Any suggestions on this? Thanks
Patcharee
Hi,
Is it possible to store results into a file with a chosen filename,
instead of part-r-0? How can I do that?
Patcharee
Hi,
How can I pass command line parameters to my custom LOAD function?
Patcharee
Hi,
My file name contains ':' and I got the error "copyFromLocal: unexpected
URISyntaxException" when I tried to copy this file to Hadoop. See below.
[patcharee@compute-1-0 ~]$ hadoop fs -copyFromLocal
wrfout_d01_2001-01-01_00:00:00 netcdf_data/
copyFromLocal: unexpected URISyntaxException
I am
Hi,
I tried to put escape chars around it, but it did not work.
Patcharee
On 04/28/2014 11:45 AM, Nitin Pawar wrote:
try putting escape chars around it
On Mon, Apr 28, 2014 at 2:52 PM, Patcharee Thongtra
<patcharee.thong...@uni.no> wrote:
Hi,
My
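The escaping fails because HDFS interprets paths as URIs, and the ':' characters in the WRF output name are taken as a URI scheme separator. One common workaround (a suggestion, not the only fix) is to rename the file before uploading:

```python
# Replace the URI-reserved ':' characters before handing the name to
# `hadoop fs -copyFromLocal`.
name = "wrfout_d01_2001-01-01_00:00:00"
safe = name.replace(":", "_")
print(safe)  # wrfout_d01_2001-01-01_00_00_00
```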
Hi,
From the schema
C: {group: (int,int,int),{(varvalue: {t: (varname: chararray,shape:
float)})}}
I would like to get
{int,int,int,(varname,shape)}, where there are multiple varnames and a
shape value for each varname.
How can I write the Pig script to generate that?
Patcharee
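The reshaping being asked for — prepend the group tuple to every (varname, shape) tuple inside the nested bags — can be sketched in plain Python with a hypothetical record shaped like the schema above (in Pig this would be a FOREACH with FLATTEN over both bag levels):

```python
# One record: a group tuple plus an outer bag of inner bags of (varname, shape).
record = ((1, 2, 3), [[("u", 0.5), ("v", 0.7)]])
group, outer_bag = record
rows = [group + pair for inner in outer_bag for pair in inner]
print(rows)  # [(1, 2, 3, 'u', 0.5), (1, 2, 3, 'v', 0.7)]
```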
Hi,
I wrote a custom InputFormat. When I ran the Pig script Load function
using this InputFormat, the number of InputSplits was 16, but there were
only 2 map tasks handling these splits. Apparently the no. of map tasks
= the no. of input files.
Does the number of map tasks not correspond to the number of splits?
-2.4.0/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileSplit.java.
Look for the Writable method overrides in particular to understand how
to use custom fields.
On Thu, Apr 10, 2014 at 9:54 PM, Patcharee Thongtra
Hi,
I wrote a custom InputFormat. When I ran the Pig script Load function
using this InputFormat, the number of InputSplits was 1, but there was only
1 map task handling these splits.
Does the number of map tasks not correspond to the number of splits?
I think the job will be done quicker if
Hi,
I implemented a custom Load function. How can I pass user settings to
this function?
Any help is appreciated,
Patcharee
Hi,
I wrote a custom InputFormat and InputSplit to handle NetCDF files. I use
them with a custom Pig Load function. When I submitted a job by running a
Pig script, I got the error below. From the error log, the network location
name is
Hi,
I am trying to run the Pig tests. When I executed mvn test, I got the error
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable
to open iterator for alias data
at org.apache.pig.PigServer.openIterator(PigServer.java:880)
at
Hi
I found that 'libwsman_cim_plugin.so.1' is missing from the package
'libopenwsman1 in maverick of architecture i386'.
It must be under /usr/lib/openwsman/plugins/.
Can you please fix this?
Regards,
Patcharee
--
Ubuntu-motu mailing list
I found that 'libwsman_cim_plugin.so.1' is missing from the package
'libopenwsman1 in maverick of architecture i386'.
It must be under /usr/lib/openwsman/plugins/.
Just so you know, I filed a bug about this:
https://bugs.launchpad.net/ubuntu/+source/openwsman/+bug/760835
There's a