simple application on tez + llap

2017-02-24 Thread Patcharee Thongtra

Hi,

I found examples of simple applications like wordcount running on Tez:
https://github.com/apache/tez/tree/master/tez-examples/src/main/java/org/apache/tez/examples.
However, how can I run these on Tez + LLAP? Any suggestions?


BR,

Patcharee
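
A minimal sketch of running one of the linked examples on plain Tez (the jar name, version and paths are assumptions; the LLAP part of the question is not covered here, since LLAP daemons are typically used through Hive):

# run the Tez orderedwordcount example (jar name/location are assumptions)
hadoop fs -mkdir -p /tmp/wc-in
hadoop fs -put words.txt /tmp/wc-in/
hadoop jar tez-examples-0.8.4.jar orderedwordcount /tmp/wc-in /tmp/wc-out
hadoop fs -cat '/tmp/wc-out/*'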



Re: hiveserver2 java heap space

2016-10-24 Thread Patcharee Thongtra

It works on Hive cli

Patcharee

On 10/24/2016 11:51 AM, Mich Talebzadeh wrote:

does this work ok through Hive cli?

Dr Mich Talebzadeh

LinkedIn 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw


http://talebzadehmich.wordpress.com





On 24 October 2016 at 10:43, Patcharee Thongtra 
<patcharee.thong...@uni.no <mailto:patcharee.thong...@uni.no>> wrote:


Hi,

I tried to query an ORC file from beeline and from a Java program using JDBC
("select * from orcfileTable limit 1"). Both failed with: Caused
by: java.lang.OutOfMemoryError: Java heap space. The HiveServer2 heap
size is 1024m. I guess I need to increase the HiveServer2 heap
size? However, I wonder why I got this error, because I query just
ONE row. Any ideas?

Thanks,

Patcharee







hiveserver2 java heap space

2016-10-24 Thread Patcharee Thongtra

Hi,

I tried to query an ORC file from beeline and from a Java program using JDBC 
("select * from orcfileTable limit 1"). Both failed with: Caused by: 
java.lang.OutOfMemoryError: Java heap space. The HiveServer2 heap size is 
1024m. I guess I need to increase the HiveServer2 heap size? However, I 
wonder why I got this error, because I query just ONE row. Any ideas?


Thanks,

Patcharee
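
A minimal sketch of one way to raise the HiveServer2 heap, assuming a hive-env.sh based setup (the value is only an example; HiveServer2 must be restarted afterwards):

# hive-env.sh on the HiveServer2 host
export HADOOP_HEAPSIZE=4096   # heap in MB for Hive services started from this host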




Re: build r-interpreter

2016-04-14 Thread Patcharee Thongtra

Yes, I did not install R. Stupid me.
Thanks for your guide!

BR,
Patcharee

On 04/13/2016 08:23 PM, Eric Charles wrote:

Can you post the full stacktrace you have (look also at the log file)?
Did you install R on your machine?

SPARK_HOME is optional.


On 13/04/16 15:39, Patcharee Thongtra wrote:

Hi,

When I ran R notebook example, I got these errors in the logs:

- Caused by: org.apache.zeppelin.interpreter.InterpreterException:
sparkr is not responding

- Caused by: org.apache.thrift.transport.TTransportException

I have not configured SPARK_HOME so far; I intended to use the embedded
Spark for testing first.

BR,
Patcharee


On 04/13/2016 02:52 PM, Patcharee Thongtra wrote:

Hi,

I have been struggling with the R interpreter / SparkR interpreter. Is
the command below the right one to build Zeppelin with the R / SparkR
interpreter?

mvn clean package -Pspark-1.6 -Phadoop-2.6 -Pyarn -Ppyspark -Psparkr

BR,
Patcharee










executor running time vs getting result from jupyter notebook

2016-04-14 Thread Patcharee Thongtra

Hi,

I am running a Jupyter notebook with pyspark. I noticed from the history 
server UI that some tasks spend a lot of time on either

- executor running time
- getting result

But some tasks finish both steps very quickly. All tasks, however, have 
very similar input sizes.


What factors can affect the time spent on these steps?

BR,
Patcharee




Re: build r-interpreter

2016-04-13 Thread Patcharee Thongtra

Hi,

When I ran R notebook example, I got these errors in the logs:

- Caused by: org.apache.zeppelin.interpreter.InterpreterException: 
sparkr is not responding


- Caused by: org.apache.thrift.transport.TTransportException

I have not configured SPARK_HOME so far; I intended to use the embedded 
Spark for testing first.


BR,
Patcharee


On 04/13/2016 02:52 PM, Patcharee Thongtra wrote:

Hi,

I have been struggling with the R interpreter / SparkR interpreter. Is 
the command below the right one to build Zeppelin with the R / SparkR 
interpreter?


mvn clean package -Pspark-1.6 -Phadoop-2.6 -Pyarn -Ppyspark -Psparkr

BR,
Patcharee








custom inputformat recordreader

2015-11-26 Thread Patcharee Thongtra

Hi,

In Python, how can I use a custom InputFormat / RecordReader?

Thanks,
Patcharee
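
In case it helps, a minimal PySpark sketch of plugging a Hadoop InputFormat (and therefore its RecordReader) into an RDD; the loader class name is hypothetical, and the key/value classes must be Writables the default converters can handle:

from pyspark import SparkContext

sc = SparkContext(appName="custom-inputformat-demo")
rdd = sc.newAPIHadoopFile(
    "hdfs:///user/patcharee/netcdf_data",              # input path
    inputFormatClass="com.example.NetcdfInputFormat",  # hypothetical custom InputFormat
    keyClass="org.apache.hadoop.io.Text",
    valueClass="org.apache.hadoop.io.Text")
print(rdd.take(1))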





data local read counter

2015-11-25 Thread Patcharee Thongtra

Hi,

Is there a counter for data-local reads? I thought it would be the locality 
level counter, but it seems it is not.


Thanks,
Patcharee




Re: sql query orc slow

2015-10-13 Thread Patcharee Thongtra

Hi Zhan Zhang,

Can my problem (the ORC predicate is not generated from the WHERE clause 
even though spark.sql.orc.filterPushdown=true) be related to any of the 
factors below?


- orc file version (File Version: 0.12 with HIVE_8732)
- hive version (using Hive 1.2.1.2.3.0.0-2557)
- orc table is not sorted / indexed
- the split strategy hive.exec.orc.split.strategy

BR,
Patcharee


On 10/09/2015 08:01 PM, Zhan Zhang wrote:
That is weird. Unfortunately, there is no debug info available on this 
part. Can you please open a JIRA to add some debug information on the 
driver side?


Thanks.

Zhan Zhang

On Oct 9, 2015, at 10:22 AM, patcharee wrote:


I set hiveContext.setConf("spark.sql.orc.filterPushdown", "true"). 
But the log shows "No ORC pushdown predicate" for my query with a WHERE 
clause.


15/10/09 19:16:01 DEBUG OrcInputFormat: No ORC pushdown predicate

I do not understand what is wrong with this.

BR,
Patcharee

On 09. okt. 2015 19:10, Zhan Zhang wrote:
In your case, you manually set an AND pushdown, and the predicate is 
right based on your setting: leaf-0 = (EQUALS x 320)


The right way is to enable the predicate pushdown as follows:
sqlContext.setConf("spark.sql.orc.filterPushdown", "true")

Thanks.

Zhan Zhang







On Oct 9, 2015, at 9:58 AM, patcharee  wrote:


Hi Zhan Zhang

Actually my query has a WHERE clause: "select date, month, year, hh, 
(u*0.9122461 - v*-0.40964267), (v*0.9122461 + u*-0.40964267), z 
from 4D where x = 320 and y = 117 and zone == 2 and year=2009 and z 
>= 2 and z <= 8". Columns "x" and "y" are not partition columns; the 
others are partition columns. I expected the system to use 
predicate pushdown. I turned on debug logging and found that the pushdown 
predicate was not generated ("DEBUG OrcInputFormat: No ORC pushdown 
predicate").


Then I tried to set the search argument explicitly (on the column 
"x", which is not a partition column):


   val xs = 
SearchArgumentFactory.newBuilder().startAnd().equals("x", 
320).end().build()

   hiveContext.setConf("hive.io.file.readcolumn.names", "x")
   hiveContext.setConf("sarg.pushdown", xs.toKryo())

This time the pushdown predicate was generated in the log, but the results 
were wrong (no results at all):


15/10/09 18:36:06 INFO OrcInputFormat: ORC pushdown predicate: 
leaf-0 = (EQUALS x 320)

expr = leaf-0

Any ideas what is wrong with this? Why is the ORC pushdown predicate 
not applied by the system?


BR,
Patcharee

On 09. okt. 2015 18:31, Zhan Zhang wrote:

Hi Patcharee,

From the query, it looks like only column pruning will be 
applied. Partition pruning and predicate pushdown do not have an 
effect. Do you see a big IO difference between the two methods?


The potential reason for the speed difference I can think of may be 
the different versions of OrcInputFormat. The hive path may use 
NewOrcInputFormat, but the spark path uses OrcInputFormat.


Thanks.

Zhan Zhang

On Oct 8, 2015, at 11:55 PM, patcharee  
wrote:


Yes, the predicate pushdown is enabled, but it still takes longer 
than the first method.


BR,
Patcharee

On 08. okt. 2015 18:43, Zhan Zhang wrote:

Hi Patcharee,

Did you enable the predicate pushdown in the second method?

Thanks.

Zhan Zhang

On Oct 8, 2015, at 1:43 AM, patcharee 
 wrote:



Hi,

I am using Spark SQL 1.5 to query a Hive table stored as 
partitioned ORC files. We have about 6000 files in total, 
and each file is about 245 MB.


What is the difference between these two query methods below:

1. Using query on hive table directly

hiveContext.sql("select col1, col2 from table1")

2. Reading from orc file, register temp table and query from 
the temp table


val c = 
hiveContext.read.format("orc").load("/apps/hive/warehouse/table1")

c.registerTempTable("regTable")
hiveContext.sql("select col1, col2 from regTable")

When the number of files is large (querying all of the 
6000 files), the second case is much slower than the first 
one. Any ideas why?


BR,






orc table with sorted field

2015-10-13 Thread Patcharee Thongtra

Hi,

How can I create a partitioned ORC table with sorted field(s)? I tried 
to use the SORTED BY keyword, but it failed with a parse exception:


CREATE TABLE peoplesort (name string, age int) partition by (bddate int) 
SORTED BY (age) stored as orc


Is it possible to have some sorted columns? From the Hive DDL page, it seems 
only bucketed tables can be sorted.


Any suggestions please

BR,
Patcharee
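
For reference, a sketch of the bucketed variant the Hive DDL page describes (the bucket count and the CLUSTERED BY column are illustrative); SORTED BY is only accepted together with CLUSTERED BY ... INTO n BUCKETS:

CREATE TABLE peoplesort (name string, age int)
PARTITIONED BY (bddate int)
CLUSTERED BY (name) SORTED BY (age ASC) INTO 8 BUCKETS
STORED AS ORC;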



Re: sql query orc slow

2015-10-13 Thread Patcharee Thongtra

Hi Zhan Zhang,

Here is the issue https://issues.apache.org/jira/browse/SPARK-11087

BR,
Patcharee

On 10/13/2015 06:47 PM, Zhan Zhang wrote:

Hi Patcharee,

I am not sure which side is wrong, driver or executor. If it is 
executor side, the reason you mentioned may be possible. But if the 
driver side didn’t set the predicate at all, then somewhere else is 
broken.


Can you please file a JIRA with a simple reproduce step, and let me 
know the JIRA number?


Thanks.

Zhan Zhang

On Oct 13, 2015, at 1:01 AM, Patcharee Thongtra 
<patcharee.thong...@uni.no <mailto:patcharee.thong...@uni.no>> wrote:



Hi Zhan Zhang,

Can my problem (the ORC predicate is not generated from the WHERE 
clause even though spark.sql.orc.filterPushdown=true) be related 
to any of the factors below?


- orc file version (File Version: 0.12 with HIVE_8732)
- hive version (using Hive 1.2.1.2.3.0.0-2557)
- orc table is not sorted / indexed
- the split strategy hive.exec.orc.split.strategy

BR,
Patcharee


On 10/09/2015 08:01 PM, Zhan Zhang wrote:
That is weird. Unfortunately, there is no debug info available on 
this part. Can you please open a JIRA to add some debug information 
on the driver side?


Thanks.

Zhan Zhang

On Oct 9, 2015, at 10:22 AM, patcharee <patcharee.thong...@uni.no> 
wrote:


I set hiveContext.setConf("spark.sql.orc.filterPushdown", "true"). 
But the log shows "No ORC pushdown predicate" for my query with a WHERE 
clause.


15/10/09 19:16:01 DEBUG OrcInputFormat: No ORC pushdown predicate

I do not understand what is wrong with this.

BR,
Patcharee

On 09. okt. 2015 19:10, Zhan Zhang wrote:
In your case, you manually set an AND pushdown, and the predicate 
is right based on your setting: leaf-0 = (EQUALS x 320)


The right way is to enable the predicate pushdown as follows:
sqlContext.setConf("spark.sql.orc.filterPushdown", "true")

Thanks.

Zhan Zhang







On Oct 9, 2015, at 9:58 AM, patcharee <patcharee.thong...@uni.no> 
wrote:



Hi Zhan Zhang

Actually my query has a WHERE clause: "select date, month, year, hh, 
(u*0.9122461 - v*-0.40964267), (v*0.9122461 + u*-0.40964267), z 
from 4D where x = 320 and y = 117 and zone == 2 and year=2009 and 
z >= 2 and z <= 8". Columns "x" and "y" are not partition columns; the 
others are partition columns. I expected the system to use 
predicate pushdown. I turned on debug logging and found that the pushdown 
predicate was not generated ("DEBUG OrcInputFormat: No ORC 
pushdown predicate").


Then I tried to set the search argument explicitly (on the column 
"x", which is not a partition column):


   val xs = 
SearchArgumentFactory.newBuilder().startAnd().equals("x", 
320).end().build()

   hiveContext.setConf("hive.io.file.readcolumn.names", "x")
   hiveContext.setConf("sarg.pushdown", xs.toKryo())

This time the pushdown predicate was generated in the log, but the results 
were wrong (no results at all):


15/10/09 18:36:06 INFO OrcInputFormat: ORC pushdown predicate: 
leaf-0 = (EQUALS x 320)

expr = leaf-0

Any ideas what is wrong with this? Why is the ORC pushdown predicate 
not applied by the system?


BR,
Patcharee

On 09. okt. 2015 18:31, Zhan Zhang wrote:

Hi Patcharee,

From the query, it looks like only column pruning will be 
applied. Partition pruning and predicate pushdown do not have an 
effect. Do you see a big IO difference between the two methods?


The potential reason for the speed difference I can think of may 
be the different versions of OrcInputFormat. The hive path may 
use NewOrcInputFormat, but the spark path uses OrcInputFormat.


Thanks.

Zhan Zhang

On Oct 8, 2015, at 11:55 PM, patcharee 
<patcharee.thong...@uni.no> wrote:


Yes, the predicate pushdown is enabled, but it still takes longer 
than the first method.


BR,
Patcharee

On 08. okt. 2015 18:43, Zhan Zhang wrote:

Hi Patcharee,

Did you enable the predicate pushdown in the second method?

Thanks.

Zhan Zhang

On Oct 8, 2015, at 1:43 AM, patcharee 
<patcharee.thong...@uni.no> wrote:



Hi,

I am using Spark SQL 1.5 to query a Hive table stored as 
partitioned ORC files. We have about 6000 files in total, 
and each file is about 245 MB.


What is the difference between these two query methods below:

1. Using query on hive table directly

hiveContext.sql("select col1, col2 from table1")

2. Reading from orc file, register temp table and query from 
the temp table


val c = 
hiveContext.read.format("orc").load("/apps/hive/warehouse/table1")

c.registerTempTable("regTable")
hiveContext.sql("select col1, col2 from regTable")

When the number of files is large (querying all of the 
6000 files), the second case is much slower than the first 
one. Any ideas why?


BR,






No assemblies found in assembly/target/scala-2.10

2015-03-13 Thread Patcharee Thongtra

Hi,

I am trying to build Spark 1.3 from source. After I executed

mvn -DskipTests clean package

I tried to use the shell but got this error:

[root@sandbox spark]# ./bin/spark-shell
Exception in thread "main" java.lang.IllegalStateException: No 
assemblies found in '/root/spark/assembly/target/scala-2.10'.
at 
org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:228)
at 
org.apache.spark.launcher.AbstractCommandBuilder.findAssembly(AbstractCommandBuilder.java:352)
at 
org.apache.spark.launcher.AbstractCommandBuilder.buildClassPath(AbstractCommandBuilder.java:185)
at 
org.apache.spark.launcher.AbstractCommandBuilder.buildJavaCommand(AbstractCommandBuilder.java:111)
at 
org.apache.spark.launcher.SparkSubmitCommandBuilder.buildSparkSubmitCommand(SparkSubmitCommandBuilder.java:177)
at 
org.apache.spark.launcher.SparkSubmitCommandBuilder.buildCommand(SparkSubmitCommandBuilder.java:102)

at org.apache.spark.launcher.Main.main(Main.java:74)

Any ideas?

Patcharee


bad symbolic reference. A signature in SparkContext.class refers to term conf in value org.apache.hadoop which is not available

2015-03-11 Thread Patcharee Thongtra

Hi,

I have built Spark version 1.3 and tried to use it in my Spark Scala 
application. When I tried to compile and build the application with SBT, I 
got the error:
bad symbolic reference. A signature in SparkContext.class refers to term 
conf in value org.apache.hadoop which is not available


It seems the Hadoop library is missing, but shouldn't it be resolved 
automatically by SBT?


This application builds fine against Spark version 1.2.

Here is my build.sbt

name := "wind25t-v013"
version := "0.1"
scalaVersion := "2.10.4"
unmanagedBase := baseDirectory.value / "lib"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.3.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.3.0"
libraryDependencies += "org.apache.spark" % "spark-hive_2.10" % "1.3.0"

What should I do to fix it?

BR,
Patcharee
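
One workaround that is sometimes suggested for this kind of error is to add a matching hadoop-client dependency explicitly; a sketch (the version is an assumption and should match the cluster):

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.4.0"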







Re: pig on tez NoSuchMethodError decodeBase64

2015-03-10 Thread Patcharee Thongtra

Hi,

I thought Tez would work with commons-codec 1.4, but I found something 
weird.


There is only commons-codec 1.4 and newer in my cluster, but I still got the 
error. When I tried to run with commons-codec 1.5 explicitly defined,


pig -Dpig.additional.jars=...,commons-codec-1.5.jar -useHCatalog -x TEZ 
load_u.pig


it just worked.

Patcharee


On 03/09/2015 08:42 PM, Daniel Dai wrote:

Sounds like a commons-codec version conflict. Tez is using commons-codec
1.4. Do you see another version of commons-codec.jar in your Hadoop lib or
CLASSPATH?

Thanks,
Daniel


On 3/9/15, 5:28 AM, Patcharee Thongtra patcharee.thong...@uni.no wrote:


Hi,

I tried to run pig on TEZ

pig -Dpig.additional.jars=... -useHCatalog -x TEZ load_u.pig

But I got an exception

Error cause TezChild exit.:java.lang.NoSuchMethodError:
org.apache.commons.codec.binary.Base64.decodeBase64(Ljava/lang/String;)[B

It seems the problem is from commons-codec-*.jar, but I am not sure how
to fix it. Any ideas?

Best,
Patcharee





pig on tez NoSuchMethodError decodeBase64

2015-03-09 Thread Patcharee Thongtra

Hi,

I tried to run pig on TEZ

pig -Dpig.additional.jars=... -useHCatalog -x TEZ load_u.pig

But I got an exception

Error cause TezChild exit.:java.lang.NoSuchMethodError: 
org.apache.commons.codec.binary.Base64.decodeBase64(Ljava/lang/String;)[B


It seems the problem is from commons-codec-*.jar, but I am not sure how 
to fix it. Any ideas?


Best,
Patcharee



java.lang.RuntimeException: Couldn't find function Some

2015-03-09 Thread Patcharee Thongtra

Hi,

In my Spark application I queried a Hive table and tried to take only 
one record, but got java.lang.RuntimeException: Couldn't find function Some



val rddCoOrd = sql("SELECT date, x, y FROM coordinate where  order 
by date limit 1")


val resultCoOrd = rddCoOrd.take(1)(0)

Any ideas? I tested the same code in the spark shell and it worked there.

Best,
Patcharee










Load columns changed name

2015-01-15 Thread Patcharee Thongtra

Hi,

I have a Hive table with a column whose name was changed. Pig is not 
able to load data from this column; it is all empty.


Any ideas how to fix it?

BR,
Patcharee



Re: cannot store value into partition column

2015-01-14 Thread Patcharee Thongtra
After I changed org.apache.hcatalog.pig.HCatStorer() to 
org.apache.hive.hcatalog.pig.HCatStorer(), it worked.


Patcharee

On 01/14/2015 02:57 PM, Patcharee Thongtra wrote:

Hi,

I am having a weird problem. I created a table in orc format:


Create table

create external table cossin (x int, y int, cos float, sin float) 
PARTITIONED BY(zone int) stored as orc location 
'/apps/hive/warehouse/wrf_tables/cossin' tblproperties 
("orc.compress"="ZLIB");


I run a pig script below to import data into this table 'cossin'.


Pig script

...
r_three_dim = FOREACH result_three_dim GENERATE
 $ZONE as zone: int,
 result::x as x: int, result::y as y: int,
 result::cos as cos: float, result::sin as sin: float;

x = FILTER r_three_dim by x < 5 and y < 5;
dump x;
describe x;

store x into 'cossin' using org.apache.hcatalog.pig.HCatStorer();


Dump x

(2,3,3,0.9883806,-0.15199915)
(2,3,4,0.98836243,-0.15211758)
(2,4,1,0.98830783,-0.15247186)
(2,4,2,0.9882811,-0.15264522)
(2,4,3,0.9882628,-0.15276346)
(2,4,4,0.98824626,-0.15287022)
x: {zone: int,x: int,y: int,cos: float,sin: float}

But when I checked the table 'cossin', zone is NULL instead of 2.

Any ideas?

BR,
Patcharee





cannot store value into partition column

2015-01-14 Thread Patcharee Thongtra

Hi,

I am having a weird problem. I created a table in orc format:


Create table

create external table cossin (x int, y int, cos float, sin float) 
PARTITIONED BY(zone int) stored as orc location 
'/apps/hive/warehouse/wrf_tables/cossin' tblproperties 
("orc.compress"="ZLIB");


I run a pig script below to import data into this table 'cossin'.


Pig script

...
r_three_dim = FOREACH result_three_dim GENERATE
 $ZONE as zone: int,
 result::x as x: int, result::y as y: int,
 result::cos as cos: float, result::sin as sin: float;

x = FILTER r_three_dim by x < 5 and y < 5;
dump x;
describe x;

store x into 'cossin' using org.apache.hcatalog.pig.HCatStorer();


Dump x

(2,3,3,0.9883806,-0.15199915)
(2,3,4,0.98836243,-0.15211758)
(2,4,1,0.98830783,-0.15247186)
(2,4,2,0.9882811,-0.15264522)
(2,4,3,0.9882628,-0.15276346)
(2,4,4,0.98824626,-0.15287022)
x: {zone: int,x: int,y: int,cos: float,sin: float}

But when I checked the table 'cossin', zone is NULL instead of 2.

Any ideas?

BR,
Patcharee



compare float column

2015-01-13 Thread Patcharee Thongtra

Hi,

I have a table with float columns. I tried to query based on the 
condition on a float column (called 'long'), but it failed (nothing 
returned).


hive> select * from test_float where long == -41.338276;
select * from test_float where long == -41.338276
Status: Finished successfully
OK
Time taken: 14.262 seconds

hive> select long from test_float;
select long from test_float
Status: Finished successfully
OK
-41.338276
Time taken: 6.843 seconds, Fetched: 1 row(s)


Any ideas? I am using hive version 0.13.

BR,
Patcharee






Re: compare float column

2015-01-13 Thread Patcharee Thongtra

It works. Thanks!

Patcharee

On 01/13/2015 10:15 AM, Devopam Mittra wrote:

please try the following and report observation:

WHERE long = CAST(-41.338276 AS FLOAT)


regards
Devopam


On Tue, Jan 13, 2015 at 2:25 PM, Patcharee Thongtra 
patcharee.thong...@uni.no mailto:patcharee.thong...@uni.no wrote:


Hi,

I have a table with float columns. I tried to query based on the
condition on a float column (called 'long'), but it failed
(nothing returned).

hive> select * from test_float where long == -41.338276;
select * from test_float where long == -41.338276
Status: Finished successfully
OK
Time taken: 14.262 seconds

hive> select long from test_float;
select long from test_float
Status: Finished successfully
OK
-41.338276
Time taken: 6.843 seconds, Fetched: 1 row(s)


Any ideas? I am using hive version 0.13.

BR,
Patcharee







--
Devopam Mittra
Life and Relations are not binary




left join on multiple columns

2015-01-08 Thread Patcharee Thongtra
Hi,



I am new to pig. I am using pig version 0.12. I found an unexpected 
behaviour from left join on multiple columns as listed below



--

...

...

dump r_four_dim1;

describe r_four_dim1;



dump result_height;

describe result_height;



join_height = join r_four_dim1 by (date, month, year, hh, x, y, z) LEFT 
OUTER, result_height by (date, month, year, hh, x, y, z);

dump join_height;

describe join_height;



--

Result

--

(1,1,2009,0,559,447,1,-4.964739)

r_four_dim1: {date: int,month: int,year: int,hh: int,x: int,y: int,z: 
int,u: float}

(1,1,2009,0,559,447,1,109.71929)

result_height: {date: int,month: int,year: int,hh: int,x: int,y: int,z: 
int,height: float}

(1,1,2009,0,559,447,1,-4.964739)

join_height: {r_four_dim1::date: int,r_four_dim1::month: 
int,r_four_dim1::year: int,r_four_dim1::hh: int,r_four_dim1::x: 
int,r_four_dim1::y: int,r_four_dim1::z: int,r_four_dim1::u: 
float,result_height::date: int,result_height::month: 
int,result_height::year: int,result_height::hh: int,result_height::x: 
int,result_height::y: int,result_height::z: int,result_height::height: 
float}



--



Left join did not work as expected. In addition, when I tried to join 
only on year (year: int) as below



join_height = join r_four_dim1 by year LEFT OUTER, result_height by year;

dump join_height;

describe join_height;



I got the ClassCastException



ERROR 2999: Unexpected internal error. java.lang.String cannot be cast 
to java.lang.Integer



java.lang.ClassCastException: java.lang.String cannot be cast to 
java.lang.Integer

at 
org.apache.pig.backend.hadoop.HDataType.getWritableComparableTypes(HDataType.java:115)

at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:111)

at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)

at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)

at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)

at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)

at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)

at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)

at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)



Any suggestions?



BR,

Patcharee
  

Not able to update jar

2014-10-10 Thread Patcharee Thongtra

Hi,

I am not able to update my jar, it seems it has been cached somewhere

I run hadoop -jar myjar.jar arg0 arg1

How can I fix this?

Patcharee


Re: Not able to update jar

2014-10-10 Thread Patcharee Thongtra
Yes, I meant hadoop jar myjar.jar package.classname arg0 arg1, but the 
problem is that the latest version of myjar.jar is not being executed.


Patcharee

On 10/10/2014 01:50 PM, vivek wrote:


I think the syntax is hadoop jar myjar.jar package.classname arg0 arg1

On Fri, Oct 10, 2014 at 7:42 AM, Patcharee Thongtra 
patcharee.thong...@uni.no mailto:patcharee.thong...@uni.no wrote:


Hi,

I am not able to update my jar, it seems it has been cached somewhere

I run hadoop -jar myjar.jar arg0 arg1

How can I fix this?

Patcharee




--







Thanks and Regards,

VIVEK KOUL




access non-default webapp

2014-08-11 Thread Patcharee Thongtra

Hi,

I have two applications running in Tomcat 6. I made the first app the 
default web app by placing it as ROOT.war in webapps/. How can I access 
the second app? Whenever I browse http://localhost:8080/the_second_app/, 
Tomcat thinks I want to access the /the_second_app path within the first app.


Patcharee




Re: access non-default webapp

2014-08-11 Thread Patcharee Thongtra


Hi,

I use version 6.0.0.37. The second app was already deployed.

Patcharee

On 08/11/2014 02:17 PM, Daniel Mikusa wrote:

On Mon, Aug 11, 2014 at 8:00 AM, Patcharee Thongtra 
patcharee.thong...@uni.no wrote:


Hi,

I have two applications running in Tomcat 6.


What version specifically?



I made the first app the default web app by placing it as ROOT.war in
webapps/. How can I access the second app? Whenever I browse
http://localhost:8080/the_second_app/, Tomcat thinks I want to access
the /the_second_app path within the first app.


Did the second one deploy successfully?  Sounds like it didn't.  Maybe try
just deploying the second app and see what happens.

Dan



Patcharee









Re: access non-default webapp

2014-08-11 Thread Patcharee Thongtra

Hi,

Actually it is my fault. The second app could not start up because a 
library was missing. Now it is up and I can access it.


Thanks.
Patcharee

On 08/11/2014 02:21 PM, Ognjen Blagojevic wrote:

Patcharee,

On 11.8.2014 14:00, Patcharee Thongtra wrote:

I have two applications running in Tomcat 6. I made the first app the
default web app by placing it as ROOT.war in webapps/. How can I access
the second app? Whenever I browse http://localhost:8080/the_second_app/,
Tomcat thinks I want to access the /the_second_app path within the first app.


What makes you think that?

Did you properly deploy the context the_second_app? How do you know you 
did?


Did you try to access http://localhost:8080/the_second_app/? What 
happens?


-Ognjen








Re: custom actions after accessed

2014-08-04 Thread Patcharee Thongtra

On 08/04/2014 11:26 AM, André Warnier wrote:

Patcharee Thongtra wrote:

Hi,

Is it possible to have Tomcat do some custom actions after a specific 
page/file is accessed/downloaded? If so, how to?

Any suggestions are appreciated.



What kind of custom actions, for what kind of pages/files ?

What prevents you from doing such custom actions in your own 
webapp/servlet, or in a servlet filter, after you have returned the 
response to the client ?



Actually I set my web app up for directory listing, and I would like to keep 
logs after users finish downloading files. I do not know how to do that in 
my web app. Any ideas?


Then I found that Tomcat is aware of a completed download (it is logged in the 
access log), so I thought maybe I can have Tomcat activate my servlet after 
the download, and the servlet logs the download activity.


Patcharee




custom actions after accessed

2014-08-03 Thread Patcharee Thongtra

Hi,

Is it possible to have Tomcat do some custom actions after a specific 
page/file is accessed/downloaded? If so, how to?

Any suggestions are appreciated.

Patcharee




extract tuple from bag in an order

2014-06-05 Thread Patcharee Thongtra

Hi,

I have the following data

(2009-09-09,2,1,{(70)},{(80)},{(90)})
(2010-10-10,2,12,{(71),(75)},{(81),(85)},{(91),(95)})
(2012-12-12,2,9,{(76),(77),(78)},{(86),(87),(88)},{(96),(97),(98)})

which is in the format

{date: chararray, zone: int, z: int, uTmp: {(varvalue: int)}, vTmp: 
{(varvalue: int)}, thTmp: {(varvalue: int)} }


How can I get:

(2009-09-09,2,1,70,80,90)
(2010-10-10,2,12,71,81,91)
(2010-10-10,2,12,75,85,95)
(2012-12-12,2,9,76,86,96)
(2012-12-12,2,9,77,87,97)
(2012-12-12,2,9,78,88,98)

Any suggestion is appreciated.

Patcharee






java.lang.String cannot be cast to java.lang.Integer

2014-05-30 Thread Patcharee Thongtra

Hi,

I got a very strange exception.

80693 [main] ERROR org.apache.pig.tools.grunt.Grunt  - ERROR 1066: 
Unable to open iterator for alias ordered. Backend error : 
java.lang.String cannot be cast to java.lang.Integer
14/05/30 11:53:22 ERROR grunt.Grunt: ERROR 1066: Unable to open iterator 
for alias ordered. Backend error : java.lang.String cannot be cast to 
java.lang.Integer



In my Pig script below, z is an integer, but Pig complains that it is a String.

query = load 'fino32' USING org.apache.hcatalog.pig.HCatLoader() as (
date: chararray,
u: float,
v: float,
t: float,
zone: int,
z: int);

ordered = ORDER query BY z;

dump ordered;


Any suggestion is appreciated.

Patcharee




Re: java.lang.String cannot be cast to java.lang.Integer

2014-05-30 Thread Patcharee Thongtra

Hi,

Column z is an integer and not null, so I do not understand why I got a cast 
exception.


Patcharee

On 05/30/2014 01:23 PM, Piotr Dendek wrote:

Hi,

Handle null values,
check if the content of column z is definitely integer values,
ensure that no whitespace characters are included (e.g. "11 ").

Ultimately, you can read the z column as chararray and process it with a UDF.
This will give you a chance to log the faulty record.

Tell me if any of this removed the problem.

Piotr
On 30 May 2014 at 12:02, Patcharee Thongtra patcharee.thong...@uni.no
wrote:


Hi,

I got very strange exception.

80693 [main] ERROR org.apache.pig.tools.grunt.Grunt  - ERROR 1066: Unable
to open iterator for alias ordered. Backend error : java.lang.String cannot
be cast to java.lang.Integer
14/05/30 11:53:22 ERROR grunt.Grunt: ERROR 1066: Unable to open iterator
for alias ordered. Backend error : java.lang.String cannot be cast to
java.lang.Integer


In my pig script below, z is integer but pig complains as it is String.

query = load 'fino32' USING org.apache.hcatalog.pig.HCatLoader() as (
date: chararray,
u: float,
v: float,
t: float,
zone: int,
z: int);

ordered = ORDER query BY z;

dump ordered;


Any suggestion is appreciated.

Patcharee







HCatLoader Table not found

2014-05-16 Thread Patcharee Thongtra

Hi,

I am using HCatLoader to load data from a table (which exists in Hive).

A = load 'rwf_data' USING org.apache.hcatalog.pig.HCatLoader();
describe A;

I got Error 1115: Table not found : ...

It is weird. Any suggestions on this? Thanks

Patcharee


store to defined filename

2014-05-14 Thread Patcharee Thongtra

Hi,

Is it possible to store results into a file with a chosen filename, 
instead of part-r-0? How can I do that?


Patcharee


pass command line parameters to custom LOAD

2014-05-06 Thread Patcharee Thongtra

Hi,

How can I pass command line parameters to my custom LOAD function?

Patcharee
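
A sketch of the usual pattern, with a hypothetical loader class: a LoadFunc can take constructor string arguments in the USING clause, and Pig parameter substitution passes values in from the command line:

-- run as: pig -param VAR=U -param YEAR=2009 myscript.pig
A = LOAD 'netcdf_data' USING com.example.NetcdfLoader('$VAR', '$YEAR');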


copyFromLocal: unexpected URISyntaxException

2014-04-28 Thread Patcharee Thongtra

Hi,

My file name contains ':' and I got the error "copyFromLocal: unexpected 
URISyntaxException" when I tried to copy this file to Hadoop. See below.


[patcharee@compute-1-0 ~]$ hadoop fs -copyFromLocal 
wrfout_d01_2001-01-01_00:00:00 netcdf_data/

copyFromLocal: unexpected URISyntaxException

I am using Hadoop 2.2.0.

Any suggestions?

Patcharee
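
A possible workaround, assuming the ':' in the name is what HDFS rejects: rename the file before uploading, for example:

f="wrfout_d01_2001-01-01_00:00:00"
cp "$f" "${f//:/_}"                              # or mv, if the original local name is not needed
hadoop fs -copyFromLocal "${f//:/_}" netcdf_data/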



Re: copyFromLocal: unexpected URISyntaxException

2014-04-28 Thread Patcharee Thongtra

Hi,

I tried to put escape chars around it, but it does not work.

Patcharee

On 04/28/2014 11:45 AM, Nitin Pawar wrote:

try putting escape chars around it


On Mon, Apr 28, 2014 at 2:52 PM, Patcharee Thongtra 
patcharee.thong...@uni.no mailto:patcharee.thong...@uni.no wrote:


Hi,

My file name contains : and I got error copyFromLocal:
unexpected URISyntaxException when I try to copy this file to
Hadoop. See below.

[patcharee@compute-1-0 ~]$ hadoop fs -copyFromLocal
wrfout_d01_2001-01-01_00:00:00 netcdf_data/
copyFromLocal: unexpected URISyntaxException

I am using Hadoop 2.2.0.

Any suggestions?

Patcharee




--
Nitin Pawar




How do I flatten bag after group?

2014-04-23 Thread Patcharee Thongtra

Hi,

From the schema

C: {group: (int,int,int),{(varvalue: {t: (varname: chararray,shape: 
float)})}}


I would like to get

{int,int,int,(varname,shape)}, where there are multiple varnames, with a 
shape value for each varname.


How can I write the pig script to generate that?

Patcharee
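
A sketch of one way, assuming the bag field is named varvalue as in the posted schema; FLATTEN on the group tuple and on the bag un-nests both:

D = FOREACH C GENERATE FLATTEN(group) AS (a:int, b:int, c:int), FLATTEN(varvalue);
describe D;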


Number of map task

2014-04-22 Thread Patcharee Thongtra

Hi,

I wrote a custom InputFormat. When I ran a Pig script whose Load function 
uses this InputFormat, the number of InputSplits was 16, but there were 
only 2 map tasks handling these splits. Apparently the number of map tasks 
= the number of input files.


Does the number of map tasks not correspond to the number of splits?

I think the job would be done more quickly if there were more map tasks.

Patcharee
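
For what it is worth, Pig combines small input splits into fewer map tasks by default; a sketch of the properties that control this (the byte value is illustrative):

SET pig.noSplitCombination true;          -- one map task per InputSplit
-- or keep combining but cap the combined split size (in bytes):
SET pig.maxCombinedSplitSize 134217728;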


Re: InputFormat and InputSplit - Network location name contains /:

2014-04-11 Thread Patcharee Thongtra

Hi Harsh,

Many thanks! I got rid of the problem by updating the InputSplit's 
getLocations() to return hosts.


Patcharee
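
A minimal Java sketch of the fix described above (the class and host names are hypothetical): getLocations() should return host names only, never a path containing '/':

import java.io.IOException;
import org.apache.hadoop.mapreduce.InputSplit;

public class NetcdfInputSplit extends InputSplit {      // the real class also implements Writable
    private final String[] hosts = { "compute-1-0.local" };  // datanode host names (hypothetical)

    @Override
    public String[] getLocations() throws IOException {
        return hosts;             // host names only, not hdfs://.../path
    }

    @Override
    public long getLength() throws IOException {
        return 0L;                // the real implementation returns the split length in bytes
    }
}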

On 04/11/2014 06:16 AM, Harsh J wrote:

Do not use the InputSplit's getLocations() API to supply your file
path, it is not intended for such things, if thats what you've done in
your current InputFormat implementation.

If you're looking to store a single file path, use the FileSplit
class, or if not as simple as that, do use it as a base reference to
build you Path based InputSplit derivative. Its sources are at
https://github.com/apache/hadoop-common/blob/release-2.4.0/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileSplit.java.
Look for the Writable method overrides in particular to understand how
to use custom fields.

On Thu, Apr 10, 2014 at 9:54 PM, Patcharee Thongtra
patcharee.thong...@uni.no wrote:

Hi,

I wrote a custom InputFormat and InputSplit to handle netcdf files. I use them
with a custom Pig Load function. When I submitted a job by running a pig
script, I got the error below. From the error log, the network location name
is hdfs://service-1-0.local:8020/user/patcharee/netcdf_data/wrfout_d02 -
my input file - which contains "/", and hadoop does not allow that.

It could be something missing in my custom InputFormat and InputSplit. Any
ideas? Any help is appreciated,

Patcharee


2014-04-10 17:09:01,854 INFO [CommitterEvent Processor #0]
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
the event EventType: JOB_SETUP

2014-04-10 17:09:01,918 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1387474594811_0071Job Transitioned from SETUP to RUNNING

2014-04-10 17:09:01,982 INFO [AsyncDispatcher event handler]
org.apache.hadoop.yarn.util.RackResolver: Resolved
hdfs://service-1-0.local:8020/user/patcharee/netcdf_data/wrfout_d02 to
/default-rack

2014-04-10 17:09:01,984 FATAL [AsyncDispatcher event handler]
org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.IllegalArgumentException: Network location name contains /:
hdfs://service-1-0.local:8020/user/patcharee/netcdf_data/wrfout_d02
 at org.apache.hadoop.net.NodeBase.set(NodeBase.java:87)
 at org.apache.hadoop.net.NodeBase.init(NodeBase.java:65)
 at
org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:111)
 at
org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:95)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.init(TaskAttemptImpl.java:548)
 at
org.apache.hadoop.mapred.MapTaskAttemptImpl.init(MapTaskAttemptImpl.java:47)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.MapTaskImpl.createAttempt(MapTaskImpl.java:62)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.addAttempt(TaskImpl.java:594)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.addAndScheduleAttempt(TaskImpl.java:581)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.access$1300(TaskImpl.java:100)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl$InitialScheduleTransition.transition(TaskImpl.java:871)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl$InitialScheduleTransition.transition(TaskImpl.java:866)
 at
org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
 at
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
 at
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
 at
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:632)
 at
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:99)
 at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:1237)
 at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:1231)
 at
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
 at
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)
 at java.lang.Thread.run(Thread.java:662)
2014-04-10 17:09:01,986 INFO [AsyncDispatcher event handler]
org.apache.hadoop.







Number of map task

2014-04-11 Thread Patcharee Thongtra

Hi,

I wrote a custom InputFormat. When I ran a Pig script whose Load function 
uses this InputFormat, the number of InputSplits was > 1, but there was only 
1 map task handling these splits.


Does the number of map tasks not correspond to the number of splits?

I think the job would be done more quickly if there were more map tasks.

Patcharee


Pass user configurations/arguments to UDF

2014-04-10 Thread Patcharee Thongtra

Hi,

I implemented a custom load function. How to pass some user settings to 
this function?


Any help is appreciated,

Patcharee


InputFormat and InputSplit - Network location name contains /:

2014-04-10 Thread Patcharee Thongtra

Hi,

I wrote a custom InputFormat and InputSplit to handle netcdf files. I use them 
with a custom Pig Load function. When I submitted a job by running a pig 
script, I got the error below. From the error log, the network location 
name is 
hdfs://service-1-0.local:8020/user/patcharee/netcdf_data/wrfout_d02 - 
my input file - which contains "/", and hadoop does not allow that.


It could be something missing in my custom InputFormat and InputSplit. 
Any ideas? Any help is appreciated,


Patcharee


2014-04-10 17:09:01,854 INFO [CommitterEvent Processor #0] 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: 
Processing the event EventType: JOB_SETUP


2014-04-10 17:09:01,918 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: 
job_1387474594811_0071Job Transitioned from SETUP to RUNNING


2014-04-10 17:09:01,982 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.yarn.util.RackResolver: Resolved 
hdfs://service-1-0.local:8020/user/patcharee/netcdf_data/wrfout_d02 to 
/default-rack


2014-04-10 17:09:01,984 FATAL [AsyncDispatcher event handler] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.IllegalArgumentException: Network location name contains /: 
hdfs://service-1-0.local:8020/user/patcharee/netcdf_data/wrfout_d02

at org.apache.hadoop.net.NodeBase.set(NodeBase.java:87)
at org.apache.hadoop.net.NodeBase.init(NodeBase.java:65)
at 
org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:111)
at 
org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:95)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.init(TaskAttemptImpl.java:548)
at 
org.apache.hadoop.mapred.MapTaskAttemptImpl.init(MapTaskAttemptImpl.java:47)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.MapTaskImpl.createAttempt(MapTaskImpl.java:62)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.addAttempt(TaskImpl.java:594)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.addAndScheduleAttempt(TaskImpl.java:581)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.access$1300(TaskImpl.java:100)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl$InitialScheduleTransition.transition(TaskImpl.java:871)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl$InitialScheduleTransition.transition(TaskImpl.java:866)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:632)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:99)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:1237)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:1231)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)

at java.lang.Thread.run(Thread.java:662)
2014-04-10 17:09:01,986 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.


Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses

2014-04-03 Thread Patcharee Thongtra

Hi,

I am trying to run a Pig unit test. When I execute mvn test, I get this error:

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable 
to open iterator for alias data

at org.apache.pig.PigServer.openIterator(PigServer.java:880)
at 
com.mortardata.pig.TestExampleLoader.testLoader(TestExampleLoader.java:55)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)


Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store 
alias data

at org.apache.pig.PigServer.storeEx(PigServer.java:982)
at org.apache.pig.PigServer.store(PigServer.java:942)
at org.apache.pig.PigServer.openIterator(PigServer.java:855)
... 31 more
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 
2043: Unexpected error during execution.

at org.apache.pig.PigServer.launchPlan(PigServer.java:1333)
at 
org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)

at org.apache.pig.PigServer.storeEx(PigServer.java:978)
... 33 more
Caused by: java.io.IOException: Cannot initialize Cluster. Please check 
your configuration for mapreduce.framework.name and the correspond 
server addresses.

at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.init(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.init(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:449)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:158)

at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)

I am using Pig 0.12.0 with Hadoop 2.2.0. These are the dependencies in my pom.xml:

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.2.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.2.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.pig</groupId>
    <artifactId>pig</artifactId>
    <version>0.12.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.pig</groupId>
    <artifactId>pigunit</artifactId>
    <version>0.12.0</version>
  </dependency>
  <dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.0</version>
    <scope>test</scope>
  </dependency>


Any suggestions please!
Patcharee
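
A possible fix to try (versions and the classifier are assumptions): this error often means the MapReduce 2 job client is not on the classpath, and for Hadoop 2 the Hadoop-2-compatible Pig artifact (classifier h2) is needed:

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
    <version>2.2.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.pig</groupId>
    <artifactId>pig</artifactId>
    <version>0.12.0</version>
    <classifier>h2</classifier>
  </dependency>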




libopenwsman1 Broken

2011-04-14 Thread Patcharee Thongtra

Hi

I found that 'libwsman_cim_plugin.so.1' is missing from the package 
'libopenwsman1' in maverick for the i386 architecture.
It must be under /usr/lib/openwsman/plugins/.

Can you please fix this?

Regards,
Patcharee


RE: libopenwsman1 Broken

2011-04-14 Thread Patcharee Thongtra


  I found that 'libwsman_cim_plugin.so.1' is missing from the package
  'libopenwsman1 in maverick of architecture i386'.
  It must be under /usr/lib/openwsman/plugins/.
 
 Just so you know, I filed a bug about this:
 https://bugs.launchpad.net/ubuntu/+source/openwsman/+bug/760835
 
 There's a branch attached to that bug, it's waiting for sponsorship to
 get in Natty.
 

Thank you for the info.