Re: One partitions segment files in different log-dirs

2019-01-07 Thread margusja
Hi

Thank you for the answer.

The answer I was looking for was how partition segment files are distributed 
over the kafka-logs directories. 
For example, I have one broker with two log directories, kafka-logs1 and 
kafka-logs2, each 100 MB in size, and the partition segment size is 90 MB. 
When one segment becomes full, can Kafka start the next segment file in the 
other kafka-logs directory?
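
For context, a minimal sketch of the broker setup I mean (the paths and values 
are only illustrative):

  # server.properties
  # ~90 MB segments, two 100 MB log directories
  log.dirs=/kafka-logs1,/kafka-logs2
  log.segment.bytes=94371840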

Br, Margus

On 2019/01/05 17:20:06, Jonathan Santilli  wrote: 
> Hello Margus,
> 
> I am not sure if I got your question correctly, but, assuming you have a
> topic called "*kafka-log*" with two partitions, each of them (kafka-log-1
> and kafka-log-2) will contain its own segments.
> Kafka Brokers will distribute/replicate (according to the Brokers' config)
> the topic's partitions among the available Brokers (once again, it depends
> on the configuration you have in place).
> 
> The segments within a topic partition belong to that particular partition
> and are not shared between partitions; that is, one particular segment
> sticks to the partition it belongs to and is not shared/split with other
> partitions.
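> 
> For illustration, the on-disk layout then looks roughly like this (the
> segment file names are only examples):
> 
> kafka-logs1/kafka-log-1/00000000000000000000.log
> kafka-logs1/kafka-log-1/00000000000000012345.log
> kafka-logs2/kafka-log-2/00000000000000000000.log
> 
> i.e. every segment of kafka-log-1 lives under kafka-log-1's own directory.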
> 
> Hope this helps or maybe you can provide more details about your doubt.
> 
> Cheers!
> --
> Jonathan
> 
> 
> On Fri, Jan 4, 2019 at 4:29 PM  wrote:
> 
> > Hi
> >
> > For example, if I have /kafka-log1 and /kafka-log2,
> >
> > can Kafka distribute one partition's segment files between different log
> > directories?
> >
> > Br,
> > Margus Roo
> >
> 
> 
> -- 
> Santilli Jonathan
> 


Re: Index population over table contains 2.3 x 10^10 records

2018-03-22 Thread Margusja
Great hint! Looks like it helped! 

What a great power of community!

Br, Margus

> On 22 Mar 2018, at 18:24, Josh Elser <els...@apache.org> wrote:
> 
> Hard to say at a glance, but this issue is happening down in the MapReduce 
> framework, not in Phoenix itself.
> 
> It looks similar to problems I've seen many years ago around 
> mapreduce.task.io.sort.mb. You can try reducing that value. It also may be 
> related to a bug in your Hadoop version.
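> 
> As a sketch, the value can be lowered on the IndexTool command line (the
> table arguments below are placeholders, and this assumes the tool accepts
> the standard -D generic options):
> 
>   hadoop jar phoenix-<version>-client.jar \
>     org.apache.phoenix.mapreduce.index.IndexTool \
>     -Dmapreduce.task.io.sort.mb=256 \
>     --data-table MY_TABLE --index-table MY_INDEX \
>     --output-path /tmp/MY_INDEX_HFILES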
> 
> Good luck!
> 
> On 3/22/18 4:37 AM, Margusja wrote:
>> Hi
>> Needed to recreate indexes over a main table containing more than 2.3 x 10^10 
>> records.
>> I used ASYNC and org.apache.phoenix.mapreduce.index.IndexTool.
>> One index succeeded but the other gives this stack trace:
>> 2018-03-20 13:23:16,723 FATAL [IPC Server handler 0 on 43926] 
>> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
>> attempt_1521544097253_0004_m_08_0 - exited : 
>> java.lang.ArrayIndexOutOfBoundsException at 
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1453)
>>  at 
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1349)
>>  at java.io.DataOutputStream.writeInt(DataOutputStream.java:197) at 
>> org.apache.hadoop.hbase.io.ImmutableBytesWritable.write(ImmutableBytesWritable.java:159)
>>  at 
>> org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
>>  at 
>> org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
>>  at 
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1149) 
>> at 
>> org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:715) 
>> at 
>> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>>  at 
>> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>>  at 
>> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:114)
>>  at 
>> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:48)
>>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) at 
>> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at 
>> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170) at 
>> java.security.AccessController.doPrivileged(Native Method) at 
>> javax.security.auth.Subject.doAs(Subject.java:422) at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
>> Is there any best practice for dealing with situations like this?
>> Br, Margus



Index population over table contains 2.3 x 10^10 records

2018-03-22 Thread Margusja
Hi 

Needed to recreate indexes over a main table containing more than 2.3 x 10^10 
records.
I used ASYNC and org.apache.phoenix.mapreduce.index.IndexTool.
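
For reference, roughly what was run (table, index and path names here are 
placeholders):

  -- create the index definition without populating it
  CREATE INDEX MY_INDEX ON MY_TABLE (COL1) ASYNC;

  -- then populate it with the MapReduce job
  hbase org.apache.phoenix.mapreduce.index.IndexTool \
    --data-table MY_TABLE --index-table MY_INDEX \
    --output-path /tmp/MY_INDEX_HFILES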


One index succeeded but the other gives this stack trace:

2018-03-20 13:23:16,723 FATAL [IPC Server handler 0 on 43926] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
attempt_1521544097253_0004_m_08_0 - exited : 
java.lang.ArrayIndexOutOfBoundsException
 at 
org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1453)
 at 
org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1349)
 at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
 at 
org.apache.hadoop.hbase.io.ImmutableBytesWritable.write(ImmutableBytesWritable.java:159)
 at 
org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
 at 
org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
 at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1149)
 at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:715)
 at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
 at 
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:114)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:48)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)


Is there any best practice for dealing with situations like this?

Br, Margus

Re: Hiveserver2 - Beeline How to enable progress bar (Tez)

2018-02-02 Thread Margusja
Does it depend on what kind of engine you are using - Tez versus MR?

Br margus

> On 1 Feb 2018, at 17:57, Shankar Mane  wrote:
> 
> Any update? 
> 
> I am not getting the progress bar with the Beeline client. I tried both 
> embedded and remote connections.
> 
> It's working properly with Hive CLI.
> 
> 
> On 30-Jan-2018 6:01 PM, "Shankar Mane"  > wrote:
> May I know how to enable the Tez progress bar in Beeline? I am using Tez as 
> the execution engine.
> 
> I have followed some steps, but they did not help. The steps are:
> set hive.async.log.enable=false
> set hive.server2.in.place.progress=true
> 
> 
> Env Details:
> RHEL 
> hive 2.3.2
> hadoop 2.9.0
> Tez 0.9.1
> Apache Distribution
> 
> Also, while connecting to HiveServer2, I am getting the error below:
> 
> 
> 2018-01-30T16:08:46,204  INFO [main] http.HttpServer: Started 
> HttpServer[hiveserver2] on port 10002
> Exception in thread 
> "org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor@2489e84a" 
> java.lang.NoSuchMethodError: 
> com.google.common.base.Stopwatch.elapsed(Ljava/util/concurrent/TimeUnit;)J
> at 
> org.apache.hadoop.hive.common.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:185)
> at java.lang.Thread.run(Thread.java:748)
> 
> 
> 



Re: Spark 2.1.1 (Scala 2.11.8) write to Phoenix 4.7 (HBase 1.1.2)

2018-01-30 Thread Margusja
Also, I see that Logging has been moved to internal/Logging. But is there a 
package I can use for my environment?

Margus


> On 30 Jan 2018, at 17:00, Margusja <mar...@roo.ee> wrote:
> 
> Hi
> 
> Followed the page (https://phoenix.apache.org/phoenix_spark.html) and am 
> trying to save to Phoenix.
> 
> With spark-1.6.3 it is successful, but with spark-2.1.1 it is not.
> The first error I am getting with spark-2.1.1 is:
> 
> Error:scalac: missing or invalid dependency detected while loading class file 
> 'ProductRDDFunctions.class'.
> Could not access type Logging in package org.apache.spark,
> because it (or its dependencies) are missing. Check your build definition for
> missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see 
> the problematic classpath.)
> A full rebuild may help if 'ProductRDDFunctions.class' was compiled against 
> an incompatible version of org.apache.spark.
> 
> I can see that Logging was removed after 1.6.3 and does not exist in 2.1.1.
> 
> What are my options?
> 
> Br
> Margus



Spark 2.1.1 (Scala 2.11.8) write to Phoenix 4.7 (HBase 1.1.2)

2018-01-30 Thread Margusja
Hi

Followed the page (https://phoenix.apache.org/phoenix_spark.html) and am trying 
to save to Phoenix.

With spark-1.6.3 it is successful, but with spark-2.1.1 it is not.
The first error I am getting with spark-2.1.1 is:

Error:scalac: missing or invalid dependency detected while loading class file 
'ProductRDDFunctions.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the 
problematic classpath.)
A full rebuild may help if 'ProductRDDFunctions.class' was compiled against an 
incompatible version of org.apache.spark.

I can see that Logging was removed after 1.6.3 and does not exist in 2.1.1.

What are my options?
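
One option I am looking at is to go through the DataFrame write path instead of 
the RDD helpers (a sketch only; whether the 4.7 phoenix-spark artifact works 
against Spark 2.x at all is the open question, and the table name, columns and 
zkUrl below are placeholders):

import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("phoenix-write").getOrCreate()

// DataFrame whose columns match the Phoenix table (hypothetical schema)
val df = spark.createDataFrame(Seq((1L, "foo"), (2L, "bar"))).toDF("ID", "COL1")

df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)
  .option("table", "OUTPUT_TABLE")
  .option("zkUrl", "zk-host:2181")
  .save()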

Br
Margus


Get broadcast (set in one method) in another method

2018-01-25 Thread Margusja
Hi

Maybe I am overthinking this. I'd like to set a broadcast variable in object A's 
method y and get it in object A's method x.

For example:

object A {

  def main(args: Array[String]) {
    y()
    x()
  }

  def x(): Unit = {
    val a = bcA.value // how do I get at the broadcast that y() created?
    ...
  }

  def y(): String = {
    val bcA = sc.broadcast(a) // bcA is local to y() here
    ...
    return "String value"
  }

}
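
One way that should work is to keep the broadcast handle at object scope, so 
both methods can see it (a sketch; the types, names and SparkContext setup are 
placeholders):

import org.apache.spark.SparkContext
import org.apache.spark.broadcast.Broadcast

object A {
  // object-level field, visible to both x() and y()
  var bcA: Broadcast[String] = _

  def y(sc: SparkContext, a: String): String = {
    bcA = sc.broadcast(a)
    "String value"
  }

  def x(): Unit = {
    val a = bcA.value // works because bcA is now a field of object A
    println(a)
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext() // assumes master/app name come from spark-submit
    y(sc, "some value")
    x()
  }
}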




---
Br
Margus

Re: run spark job in yarn cluster mode as specified user

2018-01-22 Thread Margusja
Hi

org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor requires the 
user to exist on each node and the right permissions to be set on the necessary 
directories.

Br
Margus


> On 22 Jan 2018, at 13:41, sd wang  wrote:
> 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor



Re: run spark job in yarn cluster mode as specified user

2018-01-21 Thread Margusja
Hi

One way to get this is to use the YARN configuration parameter 
yarn.nodemanager.container-executor.class.
By default it is 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.

org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor gives you 
containers that run as the user who submitted the job.
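
In yarn-site.xml that looks roughly like this (the group value is a placeholder; 
LinuxContainerExecutor also needs a properly configured container-executor.cfg 
and the directory permissions on every node):

<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>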

Br
Margus



> On 22 Jan 2018, at 09:28, sd wang  wrote:
> 
> Hi Advisers,
> When I submit a spark job in yarn cluster mode, the job is executed by the 
> "yarn" user. Are there any parameters that can change the user? I tried 
> setting HADOOP_USER_NAME but it did not work. I'm using spark 2.2. 
> Thanks for any help!



Re: ambari 2.6.0 hang up after Registering your hosts

2017-11-09 Thread Margusja
Maybe the Ambari database somehow got broken?

Margus Roo


> On 10 Nov 2017, at 05:11, lk_hadoop  wrote:
> 
> 



Re: spark job paused(active stages finished)

2017-11-08 Thread Margusja
You have to deal with failed tasks. For example, use try/catch in your code.
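
For example, something like this inside the task code (a sketch; the parsing is 
only a stand-in for whatever actually fails in your job):

def safeParse(line: String): Option[Int] =
  try {
    Some(line.trim.toInt) // stand-in for the work that fails
  } catch {
    case e: NumberFormatException =>
      // log and skip the bad record instead of letting the task fail and retry
      None
  }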

Br Margus Roo


> On 9 Nov 2017, at 05:37, bing...@iflytek.com wrote:
> 
> Dear,All
> I have a simple spark job, as below, all tasks in the stage 2(sth failed, 
> retry) already finished. But the next stage never run.
> 
> 
>
> driver thread dump:  attachment( thread.dump)
> driver last log:
> 
> 
> The driver does not receive the report for the 16 retried tasks. Thank you for any ideas.
> 
> 
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org 
> 


Re: Developing plugin using 0.7.0 and RANGER-1479

2017-11-07 Thread Margusja
Hi

I resolved it for the moment by compiling 
https://github.com/apache/ranger/blob/master/agents-common/src/main/java/org/apache/ranger/authorization/hadoop/config/RangerConfiguration.java
separately and replacing the class file in the dependency jar 
ranger-plugins-common-0.7.1.jar.
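
In case it helps someone, the replacement was roughly this (classpath and 
version are placeholders):

  # compile the patched class against the plugin dependencies
  javac -cp "ranger-plugins-common-0.7.1.jar:<other deps>" \
      org/apache/ranger/authorization/hadoop/config/RangerConfiguration.java

  # overwrite the class file inside the dependency jar
  jar uf ranger-plugins-common-0.7.1.jar \
      org/apache/ranger/authorization/hadoop/config/RangerConfiguration.class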

We are developing a Ranger plugin for an application that works with data on 
Hadoop. All Hadoop components are covered nicely via Ranger, but our custom GUI 
needed audit and security. So it was logical not to build a new audit framework 
but to use the existing one - Ranger. 
We used the examples from git, and the development process was clear and easy.

Br Margus

> On 7 Nov 2017, at 09:00, Margusja <mar...@roo.ee> wrote:
> 
> Hi
> 
> We hit https://issues.apache.org/jira/browse/RANGER-1479.
> At the moment we use Ranger-0.7.0 from Hortonworks distro (HDP-2.6.1.0).
> 
> What are my options at the moment?
> 1 - download the latest src, compile it, and use it to build our custom plugin?
> 2 - Is it possible to put these XML files somewhere else to help 
> RangerConfiguration find them?
> 3 - ?
> 
> Br Margus



Developing plugin using 0.7.0 and RANGER-1479

2017-11-06 Thread Margusja
Hi

We hit https://issues.apache.org/jira/browse/RANGER-1479.
At the moment we use Ranger-0.7.0 from Hortonworks distro (HDP-2.6.1.0).

What are my options at the moment?
1 - download the latest src, compile it, and use it to build our custom plugin?
2 - Is it possible to put these XML files somewhere else to help 
RangerConfiguration find them?
3 - ?

Br Margus

Re: Authenticate from SQL

2014-11-03 Thread Margusja

Hi

In one old project where usernames and passwords are in an RDB, we need to 
authenticate users against the RDB before they can go via REST to HBase.

So the first thing was Knox.

Best regards, Margus Roo
skype: margusja
phone: +372 51 48 780
web: http://margus.roo.ee

On 04/11/14 04:02, Bharath Vissapragada wrote:

Hi,

What do you mean by auth in SQL? It supports SPNEGO in case you are
interested.

On Mon, Nov 3, 2014 at 12:16 PM, Margusja mar...@roo.ee wrote:


Hi

I am looking for solutions where users are authenticated against SQL (for 
example Oracle) before they can use HBase REST.
Are there any best practices or ready-made solutions for HBase?

--
Best regards, Margus Roo
skype: margusja
phone: +372 51 48 780
web: http://margus.roo.ee








Authenticate from SQL

2014-11-02 Thread Margusja

Hi

I am looking for solutions where users are authenticated against SQL (for 
example Oracle) before they can use HBase REST.

Are there any best practices or ready-made solutions for HBase?

--
Best regards, Margus Roo
skype: margusja
phone: +372 51 48 780
web: http://margus.roo.ee



[no subject]

2014-10-22 Thread Margusja

unsubscribe

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



The filesystem under path '/' has n CORRUPT files

2014-10-14 Thread Margusja

Hi

I am playing with hadoop-2 filesystem. I have two namenodes with HA and 
six datanodes.

I tried different configurations, killed namenodes, and so on...
Now I have a situation where most of my data is there but some corrupted 
blocks exist.
hdfs fsck / gives me loads of "Under replicated blocks". Will they 
recover? My replication factor is 3.

Filesystem Status: HEALTHY
Via Web UI I see many missing blocks message.

hdfs fsck / -list-corruptfileblocks gives me many corrupted blocks.
In example - blk_1073745897  /user/hue/cdr/2014/12/10/table10.csv
[hdfs@bigdata1 dfs]$ hdfs fsck /user/hue/cdr/2014/12/10/table10.csv 
-files -locations -blocks

Connecting to namenode via http://namenode1:50070
FSCK started by hdfs (auth:SIMPLE) from /192.168.81.108 for path 
/user/hue/cdr/2014/12/10/table10.csv at Tue Oct 14 16:51:55 EEST 2014

Path '/user/hue/cdr/2014/12/10/table10.csv' does not exist

As I understand, there is nothing to do with it.
I tried to delete it:
[hdfs@bigdata1 dfs]$ hdfs dfs -rm /user/hue/cdr/2014/12/10/table10.csv
rm: `/user/hue/cdr/2014/12/10/table10.csv': No such file or directory

So what should I do?
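
Would something like this be the right approach, if the affected files are 
expendable (a sketch)?

  # remove files that only have corrupt/missing blocks (data loss for those files)
  hdfs fsck / -delete

  # then re-check
  hdfs fsck / -list-corruptfileblocks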

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



ApplicationMaster link on cluster web page does not work

2014-08-28 Thread Margusja

Hi

<configuration>

<!-- Site specific YARN configuration properties -->
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.application.classpath</name>
  <value>
/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*,/home/hduser/mahout-1.0-snapshot/math/target/*
  </value>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>0.0.0.0:8030</value>
</property>

<property>
  <name>yarn.web-proxy.address</name>
  <value>vm38.dbweb.ee:8089</value>
</property>

</configuration>

When I click a job's ApplicationMaster link 
(http://vm38.dbweb.ee:8089/proxy/application_1409246753441_0001/) on the 
cluster's All Applications list, I get:



   HTTP ERROR 404

Problem accessing /proxy/application_1409246753441_0001/mapreduce. Reason:

NOT_FOUND


Powered by Jetty://

I use yarn.web-proxy because without it, when I click the 
ApplicationMaster link, I get loads of tcp connections in 
ESTABLISHED state.

In the yarn web-proxy log there is nothing interesting.

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



Re: ApplicationMaster link on cluster web page does not work

2014-08-28 Thread Margusja
 
:::90.190.106.33:55712  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:42313  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:45879  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:33412 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:51992 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:37513  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:39260 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:46269  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:46087 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:47321 
:::90.190.106.33:8088   ESTABLISHED 15723/java

...
...
...

wtf?

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 28/08/14 20:40, Margusja wrote:

Hi

<configuration>

<!-- Site specific YARN configuration properties -->
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.application.classpath</name>
  <value>
/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*,/home/hduser/mahout-1.0-snapshot/math/target/*
  </value>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>0.0.0.0:8030</value>
</property>

<property>
  <name>yarn.web-proxy.address</name>
  <value>vm38.dbweb.ee:8089</value>
</property>

</configuration>

When I click a job's ApplicationMaster link 
(http://vm38.dbweb.ee:8089/proxy/application_1409246753441_0001/) on the 
cluster's All Applications list, I get:



   HTTP ERROR 404

Problem accessing /proxy/application_1409246753441_0001/mapreduce. 
Reason:


NOT_FOUND


Powered by Jetty://

I use yarn.web-proxy because without it, when I click the 
ApplicationMaster link, I get loads of tcp connections in 
ESTABLISHED state.

In the yarn web-proxy log there is nothing interesting.





Re: ApplicationMaster link on cluster web page does not work

2014-08-28 Thread Margusja
I moved the resourcemanager to another server and it works. I guess I have 
some network misrouting there :)


Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 28/08/14 21:39, Margusja wrote:

More information.

After I started the resourcemanager:
[root@vm38 ~]# /etc/init.d/hadoop-yarn-resourcemanager start
Starting Hadoop resourcemanager:   [  OK ]

and opened the cluster web interface, there are some tcp connections to 8088:

[root@vm38 ~]# netstat -np | grep 8088
tcp0  0 :::90.190.106.33:8088 
:::84.50.21.39:61120ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::84.50.21.39:64412ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::84.50.21.39:50139ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::84.50.21.39:64407ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::84.50.21.39:58817ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::84.50.21.39:52250ESTABLISHED 15723/java


This is OK.

Then I started a new map reduce job and got the tracking url 
http://server:8088/proxy/application_1409250808355_0001/


Now there are loads of connections and the tracking url does not respond:
tcp0  0 :::90.190.106.33:46910 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:35984 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:37559  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:44154 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:45417 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:57294 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:47949  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:55330 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp  432  0 :::90.190.106.33:8088 
:::90.190.106.33:45467  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:58405 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:55580 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:51992  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:44578 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:38686  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:39242 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:55916 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:48064 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:35148 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:42638 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:50836 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:44789  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:55051 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:39207  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:53781 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:54180 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:56344 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:52707 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:52274 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:45417  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:46627  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:44583  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:40928 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:47014 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:60939 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:8088 
:::90.190.106.33:52274  ESTABLISHED 15723/java
tcp0  0 :::90.190.106.33:38391 
:::90.190.106.33:8088   ESTABLISHED 15723/java
tcp0  0 ::

Re: Worker dies (bolt)

2014-06-03 Thread Margusja

Hei

I have made a new test and discovered that in my environment a very 
simple bolt will die too after around 2500 cycles.


Bolt's code:

package storm;

import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

import java.util.Map;
import java.util.UUID;

public class DummyBolt extends BaseBasicBolt
{
    int count = 0;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
    }

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector)
    {
        String line = tuple.getString(0);

        count++;
        System.out.println("Dummy count: " + count);
        collector.emit(new Values(line));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer)
    {
        declarer.declare(new Fields("line"));
    }

    @Override
    public void cleanup()
    {
    }
}

After around 2500 cycles there is no more output from the execute method.
What I do after this:
[root@dlvm2 sysconfig]# ls /var/lib/storm/workers/*/pids
[root@dlvm2 sysconfig]# kill -9 4179
After that a new worker comes up, it works again for around 2500 cycles, 
then stops, and I have to kill the pid again.


Any ideas?

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 02/06/14 13:36, Margusja wrote:

Hi

I am using apache-storm-0.9.1-incubating.
I have simple topology: Spout reads from kafka topic and Bolt writes 
lines from spout to HBase.


recently we did a test - we send 300 000 000 messages over kafka-rest 
- kafka-queue - storm topology - hbase.


I noticed that around one hour and around 2500 messages worker died. 
PID is there and process is up but bolt's execute method hangs.


Bolts code is:
package storm;
  2
  3 import java.util.Map;
  4 import java.util.UUID;
  5
  6 import org.apache.hadoop.conf.Configuration;
  7 import org.apache.hadoop.hbase.HBaseConfiguration;
  8 import org.apache.hadoop.hbase.client.HTableInterface;
  9 import org.apache.hadoop.hbase.client.HTablePool;
 10 import org.apache.hadoop.hbase.client.Put;
 11 import org.apache.hadoop.hbase.util.Bytes;
 12
 13 import backtype.storm.task.TopologyContext;
 14 import backtype.storm.topology.BasicOutputCollector;
 15 import backtype.storm.topology.OutputFieldsDeclarer;
 16 import backtype.storm.topology.base.BaseBasicBolt;
 17 import backtype.storm.tuple.Fields;
 18 import backtype.storm.tuple.Tuple;
 19 import backtype.storm.tuple.Values;

public class HBaseWriterBolt extends BaseBasicBolt
 22 {
 23
 24 HTableInterface usersTable;
 25 HTablePool pool;
 26 int count = 0;
 27
 28  @Override
 29  public void prepare(Map stormConf, TopologyContext 
context) {

 30  Configuration conf = HBaseConfiguration.create();
 31 conf.set(hbase.defaults.for.version,0.96.0.2.0.6.0-76-hadoop2);
 32 conf.set(hbase.defaults.for.version.skip,true);
 33  conf.set(hbase.zookeeper.quorum, 
vm24,vm37,vm38);

 34 conf.set(hbase.zookeeper.property.clientPort, 2181);
 35  conf.set(hbase.rootdir, 
hdfs://vm38:8020/user/hbase/data);
 36  //conf.set(zookeeper.znode.parent, 
/hbase-unsecure);

 37
 38  pool = new HTablePool(conf, 1);
 39  usersTable = pool.getTable(kafkademo1);
 40  }
41
 42 @Override
 43 public void execute(Tuple tuple, BasicOutputCollector 
collector)

 44 {
 45 String line  = tuple.getString(0);
 46
 47 Put p = new 
Put(Bytes.toBytes(UUID.randomUUID().toString()));
 48 p.add(Bytes.toBytes(info), 
Bytes.toBytes(line), Bytes.toBytes(line));

 49
 50 try {
 51 usersTable.put(p);
 52 count ++;
 53 System.out.println(Count: + count);
 54 }
 55 catch (Exception e){
 56 e.printStackTrace();
 57 }
 58 collector.emit(new Values(line));
 59
 60 }
 61
 62 @Override
 63 public void declareOutputFields(OutputFieldsDeclarer 
declarer)

 64 {
 65 declarer.declare(new Fields(line));
 66 }
 67
 68 @Override
 69 public void cleanup()
 70 {
 71 try {
 72 usersTable.close();
 73

Re: Worker dies (bolt)

2014-06-03 Thread Margusja

Some new information.
I set debug to true, and in the active worker log I can see the following.
If the worker is OK:
2014-06-03 11:04:55 b.s.d.task [INFO] Emitting: hbasewriter __ack_ack 
[7197822474056634252 -608920652033678418]
2014-06-03 11:04:55 b.s.d.executor [INFO] Processing received message 
source: hbasewriter:3, stream: __ack_ack, id: {}, [7197822474056634252 
-608920652033678418]
2014-06-03 11:04:55 b.s.d.task [INFO] Emitting direct: 1; __acker 
__ack_ack [7197822474056634252]
2014-06-03 11:04:55 b.s.d.executor [INFO] Processing received message 
source: KafkaConsumerSpout:1, stream: default, id: 
{4344988213623161794=-5214435544383558411},  my message...


And after the worker dies there are only rows about the spout, like:
2014-06-03 11:06:30 b.s.d.task [INFO] Emitting: KafkaConsumerSpout 
__ack_init [3399515592775976300 5357635772515085965 1]

2014-06-03 11:06:30 b.s.d.task [INFO] Emitting: KafkaConsumerSpout default

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 03/06/14 09:58, Margusja wrote:

Hei

I have made a new test and discovered that in my environment a very 
simple bolt will die too after around 2500 cycle.


Bolt's code:

  1 package storm;
  2
  3 import backtype.storm.task.TopologyContext;
  4 import backtype.storm.topology.BasicOutputCollector;
  5 import backtype.storm.topology.OutputFieldsDeclarer;
  6 import backtype.storm.topology.base.BaseBasicBolt;
  7 import backtype.storm.tuple.Fields;
  8 import backtype.storm.tuple.Tuple;
  9 import backtype.storm.tuple.Values;
 10
 11 import java.util.Map;
 12 import java.util.UUID;
 13
 14 public class DummyBolt extends BaseBasicBolt
 15 {
 16 int count = 0;
 17
 18  @Override
 19  public void prepare(Map stormConf, TopologyContext 
context) {

 20  }
 21
 22 @Override
 23 public void execute(Tuple tuple, BasicOutputCollector 
collector)

 24 {
 25 String line  = tuple.getString(0);
 26
 27 count ++;
 28 System.out.println(Dummy count: + count);
 29 collector.emit(new Values(line));
 30
 31 }
 32
 33 @Override
 34 public void declareOutputFields(OutputFieldsDeclarer 
declarer)

 35 {
 36 declarer.declare(new Fields(line));
 37 }
 38
 39 @Override
 40 public void cleanup()
 41 {
 42 }
 43
 44 }

after around 2500 cycles there is no output from execute methods.
What I do after this.
[root@dlvm2 sysconfig]# ls /var/lib/storm/workers/*/pids
[root@dlvm2 sysconfig]# kill -9 4179
after it  new worker is coming up and it works again around 2500 
cycles and stops and I have to kill pid again.


Any ideas?

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 02/06/14 13:36, Margusja wrote:

Hi

I am using apache-storm-0.9.1-incubating.
I have simple topology: Spout reads from kafka topic and Bolt writes 
lines from spout to HBase.


recently we did a test - we send 300 000 000 messages over kafka-rest 
- kafka-queue - storm topology - hbase.


I noticed that around one hour and around 2500 messages worker died. 
PID is there and process is up but bolt's execute method hangs.


Bolts code is:
package storm;
  2
  3 import java.util.Map;
  4 import java.util.UUID;
  5
  6 import org.apache.hadoop.conf.Configuration;
  7 import org.apache.hadoop.hbase.HBaseConfiguration;
  8 import org.apache.hadoop.hbase.client.HTableInterface;
  9 import org.apache.hadoop.hbase.client.HTablePool;
 10 import org.apache.hadoop.hbase.client.Put;
 11 import org.apache.hadoop.hbase.util.Bytes;
 12
 13 import backtype.storm.task.TopologyContext;
 14 import backtype.storm.topology.BasicOutputCollector;
 15 import backtype.storm.topology.OutputFieldsDeclarer;
 16 import backtype.storm.topology.base.BaseBasicBolt;
 17 import backtype.storm.tuple.Fields;
 18 import backtype.storm.tuple.Tuple;
 19 import backtype.storm.tuple.Values;

public class HBaseWriterBolt extends BaseBasicBolt
 22 {
 23
 24 HTableInterface usersTable;
 25 HTablePool pool;
 26 int count = 0;
 27
 28  @Override
 29  public void prepare(Map stormConf, TopologyContext 
context) {

 30  Configuration conf = HBaseConfiguration.create();
 31 conf.set(hbase.defaults.for.version,0.96.0.2.0.6.0-76-hadoop2);
 32 conf.set(hbase.defaults.for.version.skip,true);
 33  conf.set(hbase.zookeeper.quorum, 
vm24,vm37,vm38);

 34 conf.set(hbase.zookeeper.property.clientPort, 2181);
 35  conf.set(hbase.rootdir, 
hdfs://vm38:8020/user/hbase/data);
 36  //conf.set(zookeeper.znode.parent, 
/hbase-unsecure);

 37
 38  pool = new HTablePool(conf, 1);
 39

Re: Worker dies (bolt)

2014-06-03 Thread Margusja

OK, I got more info.
Looks like the problem is related to the spout.

I changed the spout:

public void open(Map conf, TopologyContext context, SpoutOutputCollector collector)
{
    this.collector = collector;

    Properties props = new Properties();
    props.put("zookeeper.connect", "vm24:2181,vm37:2181,vm38:2181");
    props.put("group.id", "testgroup");
    props.put("zookeeper.session.timeout.ms", "500");
    props.put("zookeeper.sync.time.ms", "250");
    props.put("auto.commit.interval.ms", "1000");
    consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    this.topic = "kafkademo1";
}

to

public void open(Map conf, TopologyContext context, SpoutOutputCollector collector)
{
    this.collector = collector;

    Properties props = new Properties();
    props.put("zookeeper.connect", "vm24:2181,vm37:2181,vm38:2181");
    props.put("group.id", "testgroup");
    //props.put("zookeeper.session.timeout.ms", "500");
    //props.put("zookeeper.sync.time.ms", "250");
    //props.put("auto.commit.interval.ms", "1000");
    consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    this.topic = "kafkademo1";
}

and
public void nextTuple()
{
    Map<String, Integer> topicCount = new HashMap<String, Integer>();
    // Define single thread for topic
    topicCount.put(topic, new Integer(1));
    Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams = consumer.createMessageStreams(topicCount);
    List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
    for (final KafkaStream stream : streams) {
        ConsumerIterator<byte[], byte[]> consumerIte = stream.iterator();
        while (consumerIte.hasNext())
        {
            // System.out.println("Message from the Topic ...");
            String line = new String(consumerIte.next().message());
            this.collector.emit(new Values(line), line);
        }
    }
    if (consumer != null)
        consumer.shutdown();
}

to

public void nextTuple()
{
    Map<String, Integer> topicCount = new HashMap<String, Integer>();
    // Define single thread for topic
    topicCount.put(topic, new Integer(1));
    Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams = consumer.createMessageStreams(topicCount);
    List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
    for (final KafkaStream stream : streams) {
        ConsumerIterator<byte[], byte[]> consumerIte = stream.iterator();
        while (consumerIte.hasNext())
        {
            // System.out.println("Message from the Topic ...");
            String line = new String(consumerIte.next().message());
            //this.collector.emit(new Values(line), line);
            this.collector.emit(new Values(line));
        }
    }
    if (consumer != null)
        consumer.shutdown();
}


And now it is running.

Strange, because when the worker died I still saw log rows from the spout. But I 
think it is somehow related to the internal stuff in storm.


Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 03/06/14 11:09, Margusja wrote:

Some new information.
Set debug true and from active worker log I can see:
if worker is ok:
2014-06-03 11:04:55 b.s.d.task [INFO] Emitting: hbasewriter __ack_ack 
[7197822474056634252 -608920652033678418]
2014-06-03 11:04:55 b.s.d.executor [INFO] Processing received message 
source: hbasewriter:3, stream: __ack_ack, id: {}, [7197822474056634252 
-608920652033678418]
2014-06-03 11:04:55 b.s.d.task [INFO] Emitting direct: 1; __acker 
__ack_ack [7197822474056634252]
2014-06-03 11:04:55 b.s.d.executor [INFO] Processing received message 
source: KafkaConsumerSpout:1, stream: default, id: 
{4344988213623161794=-5214435544383558411},  my message...


and after worker dies there are only rows about spout like:
2014-06-03 11:06:30 b.s.d.task [INFO] Emitting: KafkaConsumerSpout 
__ack_init [3399515592775976300 5357635772515085965 1]
2014-06-03 11:06:30 b.s.d.task [INFO] Emitting: KafkaConsumerSpout 
default


Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http

Worker dies (bolt)

2014-06-02 Thread Margusja

Hi

I am using apache-storm-0.9.1-incubating.
I have a simple topology: a spout reads from a kafka topic and a bolt writes 
lines from the spout to HBase.


Recently we did a test - we sent 300 000 000 messages over kafka-rest -> 
kafka queue -> storm topology -> hbase.


I noticed that after around one hour and around 2500 messages the worker died. 
The PID is there and the process is up, but the bolt's execute method hangs.


Bolts code is:
package storm;
  2
  3 import java.util.Map;
  4 import java.util.UUID;
  5
  6 import org.apache.hadoop.conf.Configuration;
  7 import org.apache.hadoop.hbase.HBaseConfiguration;
  8 import org.apache.hadoop.hbase.client.HTableInterface;
  9 import org.apache.hadoop.hbase.client.HTablePool;
 10 import org.apache.hadoop.hbase.client.Put;
 11 import org.apache.hadoop.hbase.util.Bytes;
 12
 13 import backtype.storm.task.TopologyContext;
 14 import backtype.storm.topology.BasicOutputCollector;
 15 import backtype.storm.topology.OutputFieldsDeclarer;
 16 import backtype.storm.topology.base.BaseBasicBolt;
 17 import backtype.storm.tuple.Fields;
 18 import backtype.storm.tuple.Tuple;
 19 import backtype.storm.tuple.Values;

public class HBaseWriterBolt extends BaseBasicBolt
 22 {
 23
 24 HTableInterface usersTable;
 25 HTablePool pool;
 26 int count = 0;
 27
 28  @Override
 29  public void prepare(Map stormConf, TopologyContext context) {
 30  Configuration conf = HBaseConfiguration.create();
 31 conf.set(hbase.defaults.for.version,0.96.0.2.0.6.0-76-hadoop2);
 32 conf.set(hbase.defaults.for.version.skip,true);
 33  conf.set(hbase.zookeeper.quorum, vm24,vm37,vm38);
 34  conf.set(hbase.zookeeper.property.clientPort, 
2181);
 35  conf.set(hbase.rootdir, 
hdfs://vm38:8020/user/hbase/data);
 36  //conf.set(zookeeper.znode.parent, 
/hbase-unsecure);

 37
 38  pool = new HTablePool(conf, 1);
 39  usersTable = pool.getTable(kafkademo1);
 40  }
41
 42 @Override
 43 public void execute(Tuple tuple, BasicOutputCollector 
collector)

 44 {
 45 String line  = tuple.getString(0);
 46
 47 Put p = new 
Put(Bytes.toBytes(UUID.randomUUID().toString()));
 48 p.add(Bytes.toBytes(info), Bytes.toBytes(line), 
Bytes.toBytes(line));

 49
 50 try {
 51 usersTable.put(p);
 52 count ++;
 53 System.out.println(Count: + count);
 54 }
 55 catch (Exception e){
 56 e.printStackTrace();
 57 }
 58 collector.emit(new Values(line));
 59
 60 }
 61
 62 @Override
 63 public void declareOutputFields(OutputFieldsDeclarer declarer)
 64 {
 65 declarer.declare(new Fields(line));
 66 }
 67
 68 @Override
 69 public void cleanup()
 70 {
 71 try {
 72 usersTable.close();
 73 }
 74 catch (Exception e){
 75 e.printStackTrace();
 76 }
 77 }
 78
 79 }

The line System.out.println("Count: " + count); in the execute method was added 
for debugging purposes, to see in the log that the bolt is running.


Into the spout's nextTuple() method
I added a debug line: System.out.println("Message from the Topic ...");

After some time, around 50 minutes, in the log file I can see that the spout is 
working but the bolt has died.


Any ideas?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



Exception failure: java.lang.ClassNotFoundException: org.apache.spark.streaming.kafka.KafkaReceiver

2014-05-30 Thread Margusja
(Method.java:606)
at 
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)

at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at 
org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:145)
at 
java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)

at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at 
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
at 
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:62)
at 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:193)
at 
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
at 
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)

at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at 
org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:744)
14/05/30 11:53:56 INFO TaskSetManager: Starting task 8.0:0 as TID 75 on 
executor 0: dlvm1 (PROCESS_LOCAL)
14/05/30 11:53:56 INFO TaskSetManager: Serialized task 8.0:0 as 2975 
bytes in 1 ms
14/05/30 11:53:56 INFO TaskSetManager: Finished TID 74 in 62 ms on dlvm1 
(progress: 1/1)
14/05/30 11:53:56 INFO TaskSchedulerImpl: Removed TaskSet 9.0, whose 
tasks have all completed, from pool

14/05/30 11:53:56 INFO DAGScheduler: Completed ResultTask(9, 1)
14/05/30 11:53:56 INFO DAGScheduler: Stage 9 (take at DStream.scala:586) 
finished in 0.083 s
14/05/30 11:53:56 INFO SparkContext: Job finished: take at 
DStream.scala:586, took 0.101449564 s

---
Time: 140144003 ms
---
...
...
...

I know that KafkaReceiver is in my package:
[root@dlvm1 margusja_kafka]# cd libs/
[root@dlvm1 libs]# jar xvf spark-
spark-assembly_2.10-0.9.1-hadoop2.2.0.jar spark-streaming_2.10-0.9.1.jar 
spark-streaming-kafka_2.10-0.9.1.jar
[root@dlvm1 libs]# jar xvf spark-streaming-kafka_2.10-0.9.1.jar | grep 
KafkaReceiver

 inflated: org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$1.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$MessageHandler$$anonfun$run$2.class

 inflated: org/apache/spark/streaming/kafka/KafkaReceiver.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$onStart$3.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$onStart$1.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$MessageHandler.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$tryZookeeperConsumerGroupCleanup$1.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$onStart$5.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$onStart$2.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$MessageHandler$$anonfun$run$1.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$onStart$4.class
 inflated: 
org/apache/spark/streaming/kafka/KafkaReceiver$$anonfun$onStart$5$$anonfun$apply$1.class

[root@dlvm1 libs]#

Any ideas?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



Re: Announcing Spark 1.0.0

2014-05-30 Thread Margusja

Now I can download. Thanks.

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 30/05/14 13:48, Patrick Wendell wrote:

It is updated - try holding Shift + refresh in your browser, you are
probably caching the page.

On Fri, May 30, 2014 at 3:46 AM, prabeesh k prabsma...@gmail.com wrote:

Please update the http://spark.apache.org/docs/latest/  link


On Fri, May 30, 2014 at 4:03 PM, Margusja mar...@roo.ee wrote:

Is it possible to download a pre-built package?

http://mirror.symnds.com/software/Apache/incubator/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz
- gives me 404

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)


On 30/05/14 13:18, Christopher Nguyen wrote:

Awesome work, Pat et al.!

--
Christopher T. Nguyen
Co-founder  CEO, Adatao http://adatao.com
linkedin.com/in/ctnguyen http://linkedin.com/in/ctnguyen




On Fri, May 30, 2014 at 3:12 AM, Patrick Wendell pwend...@gmail.com
mailto:pwend...@gmail.com wrote:

 I'm thrilled to announce the availability of Spark 1.0.0! Spark 1.0.0
 is a milestone release as the first in the 1.0 line of releases,
 providing API stability for Spark's core interfaces.

 Spark 1.0.0 is Spark's largest release ever, with contributions from
 117 developers. I'd like to thank everyone involved in this release -
 it was truly a community effort with fixes, features, and
 optimizations contributed from dozens of organizations.

 This release expands Spark's standard libraries, introducing a new
SQL
 package (SparkSQL) which lets users integrate SQL queries into
 existing Spark workflows. MLlib, Spark's machine learning library, is
 expanded with sparse vector support and several new algorithms. The
 GraphX and Streaming libraries also introduce new features and
 optimizations. Spark's core engine adds support for secured YARN
 clusters, a unified tool for submitting Spark applications, and
 several performance and stability improvements. Finally, Spark adds
 support for Java 8 lambda syntax and improves coverage of the Java
and
 Python API's.

 Those features only scratch the surface - check out the release
 notes here:
 http://spark.apache.org/releases/spark-release-1-0-0.html

 Note that since release artifacts were posted recently, certain
 mirrors may not have working downloads for a few hours.

 - Patrick






CSharp librari and Producer Closing socket for because of error (kafka.network.Processor),java.nio.BufferUnderflowException

2014-05-13 Thread Margusja

Hi

I have kafka broker running (kafka_2.9.1-0.8.1.1)
All is working.

One project requires the producer to be written in C#.
I am not a .NET programmer, but I managed to write simple producer code 
using 
https://github.com/kafka-dev/kafka/blob/master/clients/csharp/README.md


the code
...
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
class Program
{
static void Main(string[] args)
{
string payload1 = "kafka 1.";
byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
Message msg1 = new Message(payloadData1);

string payload2 = "kafka 2.";
byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
Message msg2 = new Message(payloadData2);

Producer producer = new Producer("broker", 9092);
producer.Send("kafkademo3", 0, msg1);
}
}
}
...

On the broker side I am getting this error if I execute the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because 
of error (kafka.network.Processor)

java.nio.BufferUnderflowException
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
at java.nio.ByteBuffer.get(ByteBuffer.java:694)
at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
at 
kafka.network.RequestChannel$Request.init(RequestChannel.scala:53)

at kafka.network.Processor.read(SocketServer.scala:353)
at kafka.network.Processor.run(SocketServer.scala:245)
at java.lang.Thread.run(Thread.java:744)



[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 
because of error (kafka.network.Processor)

java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at kafka.utils.Utils$.read(Utils.scala:375)
at 
kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)

at kafka.network.Processor.read(SocketServer.scala:347)
at kafka.network.Processor.run(SocketServer.scala:245)
at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version 
(kafka_2.9.1-0.8.1.1) so I downloaded kafka-0.7.1-incubating.

Now I was able to send messages using CSharp code.

So is there a workaround so that I can use the latest kafka version with C#? 
Or what is the latest kafka version supporting a C# producer?


And one more question: in the C# lib, how can I give the producer a list of 
brokers to get fault tolerance in case one broker is down?


--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



Re: Kafka producer in CSharp

2014-05-13 Thread Margusja

Thank you for the response. I think HTTP is OK.
I have two more questions in the case of HTTP.
1. Can I have fault tolerance in case I have two or more brokers?
2. Can I get an ack that the message is in the queue?

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 12/05/14 23:28, Joe Stein wrote:

The wire protocol has changed drastically since then.

I don't know of any C# clients (there are none on the client library page
nor have I heard of any being used in production but maybe there are some).


For clients that use DotNet I often suggest that they use some HTTP
producer/consumer
https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-HTTPREST

/***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


On Mon, May 12, 2014 at 12:34 PM, Margusja mar...@roo.ee wrote:


Hi

I have kafka broker running (kafka_2.9.1-0.8.1.1)
All is working.

One project requires producer is written in CSharp
I am not dot net programmer but I managed to write simple producer code
using https://github.com/kafka-dev/kafka/blob/master/clients/
csharp/README.md

the code
...
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
 class Program
 {
 static void Main(string[] args)
 {
 string payload1 = kafka 1.;
 byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
 Message msg1 = new Message(payloadData1);

 string payload2 = kafka 2.;
 byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
 Message msg2 = new Message(payloadData2);

 Producer producer = new Producer(broker, 9092);
 producer.Send(kafkademo3, 0 ,  msg1 );
 }
 }
}
...

In broker side I am getting the error if I executing the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because
of error (kafka.network.Processor)
java.nio.BufferUnderflowException
 at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
 at java.nio.ByteBuffer.get(ByteBuffer.java:694)
 at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
 at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
 at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
 at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
 at kafka.network.RequestChannel$Request.init(RequestChannel.
scala:53)
 at kafka.network.Processor.read(SocketServer.scala:353)
 at kafka.network.Processor.run(SocketServer.scala:245)
 at java.lang.Thread.run(Thread.java:744)



[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 because
of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
 at sun.nio.ch.IOUtil.read(IOUtil.java:197)
 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
 at kafka.utils.Utils$.read(Utils.scala:375)
 at kafka.network.BoundedByteBufferReceive.readFrom(
BoundedByteBufferReceive.scala:54)
 at kafka.network.Processor.read(SocketServer.scala:347)
 at kafka.network.Processor.run(SocketServer.scala:245)
 at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version
(kafka_2.9.1-0.8.1.1) so I downloaded kafka-0.7.1-incubating.
Now I was able to send messages using CSharp code.

So is there workaround how I can use latest kafka version and CSharp ? Or
What is the latest kafka version supporting CSharp producer?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)






Re: Kafka producer in CSharp

2014-05-13 Thread Margusja

OK, I got some info myself.

I can have fault tolerance - I can start kafka-http-endpoint with a 
list of brokers.

I can have an ack - start it using --sync.

But what is the best practice in case the kafka-http-endpoint goes down?

Start multiple kafka-http-endpoints and, on the client side, just check 
that a kafka-http-endpoint is up? And if it is not up, use another one?


Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 13/05/14 10:49, Margusja wrote:

Thank you for the response. I think HTTP is ok.
I have two more questions in the case of HTTP.
1. Can I have fault tolerance in case I have two or more brokers?
2. Can I get an ack that the message is in the queue?

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 12/05/14 23:28, Joe Stein wrote:

The wire protocol has changed drastically since then.

I don't know of any C# clients (there are none on the client library 
page
nor have I heard of any being used in production but maybe there are 
some).



For clients that use DotNet I often suggest that they use some HTTP
producer/consumer
https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-HTTPREST 
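
For a feel of what producing over such an HTTP bridge looks like from the JVM side, a 
minimal sketch (the endpoint host and path are made up; they depend entirely on which 
HTTP bridge you put in front of the brokers):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpProducerSketch {
    public static void main(String[] args) throws Exception {
        // hypothetical REST bridge endpoint; adjust to whatever bridge you deploy
        URL url = new URL("http://rest-host:8080/topics/kafkademo3");
        byte[] body = "kafka 1.".getBytes("UTF-8");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        OutputStream out = conn.getOutputStream();
        out.write(body);
        out.close();

        // a 2xx response from the bridge is the client-side "ack" in this setup
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}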



/***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


On Mon, May 12, 2014 at 12:34 PM, Margusja mar...@roo.ee wrote:


Hi

I have kafka broker running (kafka_2.9.1-0.8.1.1)
All is working.

One project requires that the producer is written in CSharp.
I am not a dot net programmer but I managed to write simple producer code
using https://github.com/kafka-dev/kafka/blob/master/clients/
csharp/README.md

the code
...
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
 class Program
 {
 static void Main(string[] args)
 {
 string payload1 = "kafka 1.";
 byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
 Message msg1 = new Message(payloadData1);

 string payload2 = "kafka 2.";
 byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
 Message msg2 = new Message(payloadData2);

 Producer producer = new Producer("broker", 9092);
 producer.Send("kafkademo3", 0, msg1);
 }
 }
}
...

In broker side I am getting the error if I executing the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because
of error (kafka.network.Processor)
java.nio.BufferUnderflowException
 at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
 at java.nio.ByteBuffer.get(ByteBuffer.java:694)
 at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
 at 
kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
 at 
kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
 at 
kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)

 at kafka.network.RequestChannel$Request.init(RequestChannel.
scala:53)
 at kafka.network.Processor.read(SocketServer.scala:353)
 at kafka.network.Processor.run(SocketServer.scala:245)
 at java.lang.Thread.run(Thread.java:744)



[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 
because

of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
 at sun.nio.ch.IOUtil.read(IOUtil.java:197)
 at 
sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)

 at kafka.utils.Utils$.read(Utils.scala:375)
 at kafka.network.BoundedByteBufferReceive.readFrom(
BoundedByteBufferReceive.scala:54)
 at kafka.network.Processor.read(SocketServer.scala:347)
 at kafka.network.Processor.run(SocketServer.scala:245)
 at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version
(kafka_2.9.1-0.8.1.1) so I downloaded kafka-0.7.1-incubating.
Now I was able to send messages using CSharp code.

So is there a workaround so I can use the latest kafka version with CSharp? Or

what is the latest kafka version supporting a CSharp producer?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)








Kafka producer in CSharp

2014-05-12 Thread Margusja

Hi

I have kafka broker running (kafka_2.9.1-0.8.1.1)
All is working.

One project requires that the producer is written in CSharp.
I am not a dot net programmer but I managed to write simple producer code 
using 
https://github.com/kafka-dev/kafka/blob/master/clients/csharp/README.md


the code
...
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
class Program
{
static void Main(string[] args)
{
string payload1 = "kafka 1.";
byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
Message msg1 = new Message(payloadData1);

string payload2 = "kafka 2.";
byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
Message msg2 = new Message(payloadData2);

Producer producer = new Producer("broker", 9092);
producer.Send("kafkademo3", 0, msg1);
}
}
}
...

In broker side I am getting the error if I executing the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because 
of error (kafka.network.Processor)

java.nio.BufferUnderflowException
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
at java.nio.ByteBuffer.get(ByteBuffer.java:694)
at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
at 
kafka.network.RequestChannel$Request.init(RequestChannel.scala:53)

at kafka.network.Processor.read(SocketServer.scala:353)
at kafka.network.Processor.run(SocketServer.scala:245)
at java.lang.Thread.run(Thread.java:744)



[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 
because of error (kafka.network.Processor)

java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at kafka.utils.Utils$.read(Utils.scala:375)
at 
kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)

at kafka.network.Processor.read(SocketServer.scala:347)
at kafka.network.Processor.run(SocketServer.scala:245)
at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version 
(kafka_2.9.1-0.8.1.1) so I downloaded kafka-0.7.1-incubating.

Now I was able to send messages using CSharp code.

So is there a workaround so I can use the latest kafka version with CSharp? 
Or what is the latest kafka version supporting a CSharp producer?


--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



Re: ZooKeeper available but no active master location found

2014-04-10 Thread Margusja

Yes there is:
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.92.1</version>

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 10/04/14 00:57, Ted Yu wrote:

Have you modified pom.xml of twitbase ?
If not, this is the dependency you get:
 <dependency>
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase</artifactId>
   <version>0.92.1</version>

0.92.1 and 0.96.0 are not compatible.

Cheers


On Wed, Apr 9, 2014 at 10:58 AM, Margusja mar...@roo.ee wrote:


Hi

I downloaded and installed hortonworks sandbox 2.0 for virtualbox.
HBase version is: 0.96.0.2.0.6.0-76-hadoop2, re6d7a56f72914d01e55c0478d74e5
cfd3778f231
[hbase@sandbox twitbase-master]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1   localhost.localdomain localhost
10.0.2.15   sandbox.hortonworks.com sandbox

[hbase@sandbox twitbase-master]$ hostname
sandbox.hortonworks.com

[root@sandbox ~]# netstat -lnp | grep 2181
tcp0  0 0.0.0.0:2181 0.0.0.0:*   LISTEN
  19359/java

[root@sandbox ~]# netstat -lnp | grep 6
tcp0  0 10.0.2.15:6 0.0.0.0:*   LISTEN
28549/java

[hbase@sandbox twitbase-master]$ hbase shell
14/04/05 05:56:44 INFO Configuration.deprecation: hadoop.native.lib is
deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.96.0.2.0.6.0-76-hadoop2, re6d7a56f72914d01e55c0478d74e5cfd3778f231,
Thu Oct 17 18:15:20 PDT 2013

hbase(main):001:0 list
TABLE
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/
lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/
slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
ambarismoketest
mytable
simple_hcat_load_table
users
weblogs
5 row(s) in 4.6040 seconds

= [ambarismoketest, mytable, simple_hcat_load_table, users,
weblogs]
hbase(main):002:0

So far is good.

I'd like to play with the code from https://github.com/hbaseinaction/twitbase

I downloaded it, built the package with mvn package, and got twitbase-1.0.0.jar.

When I try to execute the code I get:
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client environment:host.name=
sandbox.hortonworks.com
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:java.version=1.6.0_30
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:java.vendor=Sun Microsystems Inc.
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:java.home=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:java.class.path=target/twitbase-1.0.0.jar
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=/usr/lib/jvm/java-1.6.0-
openjdk-1.6.0.0.x86_64/jre/lib/amd64/server:/usr/lib/jvm/
java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64:/usr/lib/
jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/../lib/amd64:/
usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=/tmp
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=NA
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client environment:os.name
=Linux
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:os.arch=amd64
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:os.version=2.6.32-431.11.2.el6.x86_64
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client environment:user.name
=hbase
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:user.home=/home/hbase
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Client
environment:user.dir=/home/hbase/twitbase-master
14/04/05 05:59:50 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=10.0.2.15:2181 sessionTimeout=18 watcher=hconnection
14/04/05 05:59:50 INFO zookeeper.ClientCnxn: Opening socket connection to
server /10.0.2.15:2181
14/04/05 05:59:50 INFO zookeeper.RecoverableZooKeeper: The identifier of
this process is 30...@sandbox.hortonworks.com
14/04/05 05:59:50 INFO client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section 'Client'
could not be found. If you are not using SASL, you may ignore this. On the
other hand, if you expected SASL to work, please fix your JAAS
configuration.
14/04/05 05:59:51 INFO zookeeper.ClientCnxn: Socket connection established
to sandbox.hortonworks.com/10.0.2.15:2181, initiating session
14/04/05

This server is in the failed servers list

2014-04-10 Thread Margusja
:2181
14/04/05 11:03:03 INFO zookeeper.ClientCnxn: Opening socket connection 
to server sandbox.hortonworks.com/10.0.2.15:2181. Will not attempt to 
authenticate using SASL (unknown error)
14/04/05 11:03:03 INFO zookeeper.ClientCnxn: Socket connection 
established to sandbox.hortonworks.com/10.0.2.15:2181, initiating session
14/04/05 11:03:03 INFO zookeeper.ClientCnxn: Session establishment 
complete on server sandbox.hortonworks.com/10.0.2.15:2181, sessionid = 
0x1453145e9500056, negotiated timeout = 4
14/04/05 11:03:04 INFO 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 1 
of 35 failed; retrying after sleep of 100, 
exception=com.google.protobuf.ServiceException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.NoSuchMethodError: 
org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
14/04/05 11:03:04 INFO 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 2 
of 35 failed; retrying after sleep of 201, 
exception=com.google.protobuf.ServiceException: 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server 
is in the failed servers list: sandbox.hortonworks.com/10.0.2.15:6
14/04/05 11:03:04 INFO 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 3 
of 35 failed; retrying after sleep of 300, 
exception=com.google.protobuf.ServiceException: 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server 
is in the failed servers list: sandbox.hortonworks.com/10.0.2.15:6
14/04/05 11:03:05 INFO 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 4 
of 35 failed; retrying after sleep of 500, 
exception=com.google.protobuf.ServiceException: 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server 
is in the failed servers list: sandbox.hortonworks.com/10.0.2.15:6
14/04/05 11:03:05 INFO 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 5 
of 35 failed; retrying after sleep of 1001, 
exception=com.google.protobuf.ServiceException: 
org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server 
is in the failed servers list: sandbox.hortonworks.com/10.0.2.15:6
14/04/05 11:03:06 INFO 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 6 
of 35 failed; retrying after sleep of 2014, 
exception=com.google.protobuf.ServiceException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.NoSuchMethodError: 
org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
14/04/05 11:03:08 INFO 
client.HConnectionManager$HConnectionImplementation: getMaster attempt 7 
of 35 failed; retrying after sleep of 4027, 
exception=com.google.protobuf.ServiceException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.NoSuchMethodError: 
org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;



[hbase@sandbox hbase_connect]$ jps
4355 HMaster
5335 Jps
4711 HRegionServer
4715 ThriftServer
4717 RESTServer

tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN
tcp 0 0 10.0.2.15:6 0.0.0.0:* LISTEN 4355/java

[root@sandbox ~]# cat /etc/hosts
127.0.0.1   localhost.localdomain localhost
10.0.2.15   sandbox.hortonworks.com sandbox

Any hints?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)




Re: This server is in the failed servers list

2014-04-10 Thread Margusja
/10.0.2.15:2181. Will not attempt to 
authenticate using SASL (unknown error)
2014-04-05 12:09:34,663 INFO 
[main-SendThread(sandbox.hortonworks.com:2181)] zookeeper.ClientCnxn 
(ClientCnxn.java:primeConnection(849)) - Socket connection established 
to sandbox.hortonworks.com/10.0.2.15:2181, initiating session
2014-04-05 12:09:34,741 INFO 
[main-SendThread(sandbox.hortonworks.com:2181)] zookeeper.ClientCnxn 
(ClientCnxn.java:onConnected(1207)) - Session establishment complete on 
server sandbox.hortonworks.com/10.0.2.15:2181, sessionid = 
0x1453145e9500058, negotiated timeout = 4
2014-04-05 12:09:36,539 INFO  [main] Configuration.deprecation 
(Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available

Table = ambarismoketest
Table = mytable
Table = simple_hcat_load_table
Table = users
Table = weblogs

Maybe my mistake helps somebody :)

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 10/04/14 11:12, Margusja wrote:

Hi
I have java code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class Hbase_connect {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.rootdir", "hdfs://sandbox.hortonworks.com:8020/apps/hbase/data");
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor[] tabdesc = admin.listTables();
        for (int i = 0; i < tabdesc.length; i++) {
            System.out.println("Table = " + new String(tabdesc[i].getName()));
        }
    }
}

^C[hbase@sandbox hbase_connect]$ ls -lah libs/
total 80M
drwxr-xr-x 3 hbase hadoop 4.0K Apr  5 10:42 .
drwxr-xr-x 3 hbase hadoop 4.0K Apr  5 11:02 ..
-rw-r--r-- 1 hbase hadoop 2.5K Oct  6 23:39 hadoop-client-2.2.0.jar
-rw-r--r-- 1 hbase hadoop 4.1M Jul 24  2013 hadoop-core-1.2.1.jar
drwxr-xr-x 4 hbase hadoop 4.0K Apr  5 09:40 hbase-0.96.2-hadoop2
-rw-r--r-- 1 hbase hadoop  76M Apr  3 16:18 
hbase-0.96.2-hadoop2-bin.tar.gz


[hbase@sandbox hbase_connect]$ java -cp 
./:./libs/*:./libs/hbase-0.96.2-hadoop2/lib/* Hbase_connect
14/04/05 11:03:03 INFO zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 
GMT
14/04/05 11:03:03 INFO zookeeper.ZooKeeper: Client 
environment:host.name=sandbox.hortonworks.com
14/04/05 11:03:03 INFO zookeeper.ZooKeeper: Client 
environment:java.version=1.6.0_30
14/04/05 11:03:03 INFO zookeeper.ZooKeeper: Client 
environment:java.vendor=Sun Microsystems Inc.
14/04/05 11:03:03 INFO zookeeper.ZooKeeper: Client 
environment:java.home=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
14/04/05 11:03:03 INFO zookeeper.ZooKeeper: Client 
environment:java.class.path=./:./libs/hadoop-client-2.2.0.jar:./libs/hadoop-core-1.2.1.jar:./libs/hbase-0.96.2-hadoop2/lib/management-api-3.0.0-b012.jar:./libs/hbase-0.96.2-hadoop2/lib/jackson-core-asl-1.8.8.jar:./libs/hbase-0.96.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar:./libs/hbase-0.96.2-hadoop2/lib/hbase-server-0.96.2-hadoop2.jar:./libs/hbase-0.96.2-hadoop2/lib/jsp-2.1-6.1.14.jar:./libs/hbase-0.96.2-hadoop2/lib/log4j-1.2.17.jar:./libs/hbase-0.96.2-hadoop2/lib/hadoop-mapreduce-client-core-2.2.0.jar:./libs/hbase-0.96.2-hadoop2/lib/commons-codec-1.7.jar:./libs/hbase-0.96.2-hadoop2/lib/hadoop-mapreduce-client-jobclient-2.2.0.jar:./libs/hbase-0.96.2-hadoop2/lib/hbase-common-0.96.2-hadoop2.jar:./libs/hbase-0.96.2-hadoop2/lib/jersey-server-1.8.jar:./libs/hbase-0.96.2-hadoop2/lib/hbase-it-0.96.2-hadoop2.jar:./libs/hbase-0.96.2-hadoop2/lib/commons-el-1.0.jar:./libs/hbase-0.96.2-hadoop2/lib/commons-collections-3.2.1.jar:./libs/hbase-0.96.2-hadoop2/lib/hadoop-mapreduce-client-common-2.2.0.jar:./libs/hbase-0.96.2-hadoop2/lib/jersey-grizzly2-1.9.jar:./libs/hbase-0.96.2-hadoop2/lib/protobuf-java-2.5.0.jar:./libs/hbase-0.96.2-hadoop2/lib/hadoop-client-2.2.0.jar:./libs/hbase-0.96.2-hadoop2/lib/hadoop-common-2.2.0.jar:./libs/hbase-0.96.2-hadoop2/lib/hadoop-yarn-api-2.2.0.jar:./libs/hbase-0.96.2-hadoop2/lib/hadoop-mapreduce-client-app-2.2.0.jar:./libs/hbase-0.96.2-hadoop2/lib/hamcrest-core-1.3.jar:./libs/hbase-0.96.2-hadoop2/lib/commons-beanutils-core-1.8.0.jar:./libs/hbase-0.96.2-hadoop2/lib/hbase-client-0.96.2-hadoop2.jar:./libs/hbase-0.96.2-hadoop2/lib/slf4j-api-1.6.4.jar:./libs/hbase-0.96.2-hadoop2/lib/commons-compress-1.4.1.jar:./libs/hbase-0.96.2-hadoop2/lib

ZooKeeper available but no active master location found

2014-04-09 Thread Margusja
(HConnectionManager.java:634)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.init(HBaseAdmin.java:106)

at HBaseIA.TwitBase.InitTables.main(InitTables.java:20)

InitTables.java 
https://github.com/hbaseinaction/twitbase/blob/master/src/main/java/HBaseIA/TwitBase/InitTables.java

The line that throws the error is: HBaseAdmin admin = new HBaseAdmin(conf);

Log line: ZooKeeper available but no active master location found is 
from org/apache/hadoop/hbase/client/HConnectionManager.java


[hbase@sandbox twitbase-master]$ jps
30473 Jps
28549 HMaster
28896 RESTServer
28916 HRegionServer
28902 ThriftServer
[hbase@sandbox twitbase-master]$

From what I have read, there might be something wrong with binding and DNS resolving.

Any hints?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



NodeHealthReport local-dirs turned bad

2014-03-19 Thread Margusja

Hi

I have one node in unhealthy status:




Total Vmem allocated for Containers: 4.20 GB
Vmem enforcement enabled: false
Total Pmem allocated for Container: 2 GB
Pmem enforcement enabled: false
NodeHealthyStatus: false
LastNodeHealthTime: Wed Mar 19 13:31:24 EET 2014
NodeHealthReport: 1/1 local-dirs turned bad: /hadoop/yarn/local; 1/1 log-dirs turned bad: /hadoop/yarn/log
Node Manager Version: 2.2.0.2.0.6.0-101 from b07b2906c36defd389c8b5bd22bebc1bead8115b by jenkins source checksum 82bd166aa0ada92b44f8a154836b92 on 2014-01-09T05:24Z
Hadoop Version: 2.2.0.2.0.6.0-101 from b07b2906c36defd389c8b5bd22bebc1bead8115b by jenkins source checksum 704f1e463ebc4fb89353011407e965 on 2014-01-09T05:18Z




I tried:
Deleted /hadoop/* and did namenode -format again
Restarted nodemanager but still in unhealthy mode.

Is there any guideline what I should do?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)



Re: NodeHealthReport local-dirs turned bad

2014-03-19 Thread Margusja
Thanks, got it to work. In my init script I used the wrong user. It was a permissions 
problem, like Rohith said.


Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)

On 19/03/14 14:08, Rohith Sharma K S wrote:

Hi

There is no relation to NameNode format

Is the NodeManager started with the default configuration? If not, is any NodeManager 
health script configured?

The suspects can be:
 1. /hadoop does not have the right permissions, or
 2. the disk is full

Thanks & Regards
Rohith Sharma K S


-Original Message-
From: Margusja [mailto:mar...@roo.ee]
Sent: 19 March 2014 17:04
To: user@hadoop.apache.org
Subject: NodeHealthReport local-dirs turned bad

Hi

I have one node in unhealthy status:




Total Vmem allocated for Containers: 4.20 GB
Vmem enforcement enabled: false
Total Pmem allocated for Container: 2 GB
Pmem enforcement enabled: false
NodeHealthyStatus: false
LastNodeHealthTime: Wed Mar 19 13:31:24 EET 2014
NodeHealthReport: 1/1 local-dirs turned bad: /hadoop/yarn/local; 1/1 log-dirs turned bad: /hadoop/yarn/log
Node Manager Version: 2.2.0.2.0.6.0-101 from b07b2906c36defd389c8b5bd22bebc1bead8115b by jenkins source checksum 82bd166aa0ada92b44f8a154836b92 on 2014-01-09T05:24Z
Hadoop Version: 2.2.0.2.0.6.0-101 from b07b2906c36defd389c8b5bd22bebc1bead8115b by jenkins source checksum 704f1e463ebc4fb89353011407e965 on 2014-01-09T05:18Z



I tried:
Deleted /hadoop/* and did namenode -format again. Restarted nodemanager but 
still in unhealthy mode.

Is there any guideline what I should do?

--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)





Command line vector to sequence file

2014-03-18 Thread Margusja

Hi

I am looking for a simple way, on the command line, to convert vectors to a sequence file.

For example, I have a data.txt file that contains vectors:
1,1
2,1
1,2
2,2
3,3
8,8
8,9
9,8
9,9

So is there a command line possibility to convert that into a sequence file?

I tried mahout seqdirectory, but after it, hdfs dfs -text 
output2/part-m-0 gives me something like:

/data.txt1,1
2,1
1,2
2,2
3,3
8,8
8,9
9,8
9,9

and that is not sequence file format as I understand.

I know there is a Java API, but I am looking for a command line way.


--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-



Re: Command line vector to sequence file

2014-03-18 Thread Margusja

Thank you, I am going to try it.

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 18/03/14 10:58, Kevin Moulart wrote:

Hi,

I did the same search a few weeks back and found that there is nothing in
the current API to do that from command line.

However I did write a java program that transforms a csv into a
SequenceFile which can be used to train a naive bayes (amongst other
things).

Here are the sources :
https://gist.github.com/kmoulart/9616125

You'll find all you need to make a jar with dependencies running and with a
proper command line (using JCommander).
Both the sequential version and the MapReduce one are in the given files.
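
To give a feel for what the sequential version boils down to, here is a minimal sketch 
(not the gist's exact code; it assumes a Text key and Mahout's VectorWritable value, and 
the file names are just examples):

import java.io.BufferedReader;
import java.io.FileReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.VectorWritable;

public class CsvToSeq {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, new Path("vectors.seq"), Text.class, VectorWritable.class);

        BufferedReader in = new BufferedReader(new FileReader("data.txt"));
        String line;
        int row = 0;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split(",");
            double[] values = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                values[i] = Double.parseDouble(parts[i].trim());
            }
            // key = the row number, value = the dense vector parsed from that CSV row
            writer.append(new Text(String.valueOf(row++)),
                    new VectorWritable(new DenseVector(values)));
        }
        in.close();
        writer.close();
    }
}

The resulting file can be inspected with mahout seqdumper.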

If you're lazy, I'll put the whole maven project on my github later today.

Hope it helps you

Kévin Moulart


2014-03-18 9:41 GMT+01:00 Margusja mar...@roo.ee:


Hi

I am looking for a simple way, on the command line, to convert vectors to a sequence file.
For example, I have a data.txt file that contains vectors:
1,1
2,1
1,2
2,2
3,3
8,8
8,9
9,8
9,9

So is there a command line possibility to convert that into a sequence file?

I tried mahout seqdirectory, but after it, hdfs dfs -text
output2/part-m-0 gives me something like:
/data.txt1,1
2,1
1,2
2,2
3,3
8,8
8,9
9,8
9,9

and that is not sequence file format as I understand.

I know there is a Java API, but I am looking for a command line way.


--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-






java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

2014-03-17 Thread Margusja
/mahout/examples/target/dependency/commons-digester-1.8.jar:/home/speech/mahout/examples/target/dependency/commons-el-1.0.jar:/home/speech/mahout/examples/target/dependency/commons-httpclient-3.0.1.jar:/home/speech/mahout/examples/target/dependency/commons-io-2.4.jar:/home/speech/mahout/examples/target/dependency/commons-lang-2.4.jar:/home/speech/mahout/examples/target/dependency/commons-lang3-3.1.jar:/home/speech/mahout/examples/target/dependency/commons-logging-1.1.3.jar:/home/speech/mahout/examples/target/dependency/commons-math-2.1.jar:/home/speech/mahout/examples/target/dependency/commons-math3-3.2.jar:/home/speech/mahout/examples/target/dependency/commons-net-1.4.1.jar:/home/speech/mahout/examples/target/dependency/easymock-3.2.jar:/home/speech/mahout/examples/target/dependency/guava-16.0.jar:/home/speech/mahout/examples/target/dependency/hadoop-core-1.2.1.jar:/home/speech/mahout/examples/target/dependency/hamcrest-core-1.3.jar:/home/speech/mahout/examples/target/dependency/icu4j-49.1.jar:/home/speech/mahout/examples/target/dependency/jackson-core-asl-1.9.12.jar:/home/speech/mahout/examples/target/dependency/jackson-jaxrs-1.7.1.jar:/home/speech/mahout/examples/target/dependency/jackson-mapper-asl-1.9.12.jar:/home/speech/mahout/examples/target/dependency/jackson-xc-1.7.1.jar:/home/speech/mahout/examples/target/dependency/jakarta-regexp-1.4.jar:/home/speech/mahout/examples/target/dependency/jaxb-api-2.2.2.jar:/home/speech/mahout/examples/target/dependency/jaxb-impl-2.2.3-1.jar:/home/speech/mahout/examples/target/dependency/jcl-over-slf4j-1.7.5.jar:/home/speech/mahout/examples/target/dependency/jersey-core-1.8.jar:/home/speech/mahout/examples/target/dependency/jersey-json-1.8.jar:/home/speech/mahout/examples/target/dependency/jersey-server-1.8.jar:/home/speech/mahout/examples/target/dependency/jettison-1.1.jar:/home/speech/mahout/examples/target/dependency/junit-4.11.jar:/home/speech/mahout/examples/target/dependency/log4j-1.2.17.jar:/home/speech/mahout/examples/target/dependency/lucene-analyzers-common-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-benchmark-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-core-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-facet-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-highlighter-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-memory-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-queries-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-queryparser-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-sandbox-4.6.1.jar:/home/speech/mahout/examples/target/dependency/lucene-spatial-4.6.1.jar:/home/speech/mahout/examples/target/dependency/mahout-core-1.0-SNAPSHOT.jar:/home/speech/mahout/examples/target/dependency/mahout-core-1.0-SNAPSHOT-tests.jar:/home/speech/mahout/examples/target/dependency/mahout-integration-1.0-SNAPSHOT.jar:/home/speech/mahout/examples/target/dependency/mahout-math-1.0-SNAPSHOT.jar:/home/speech/mahout/examples/target/dependency/mahout-math-1.0-SNAPSHOT-tests.jar:/home/speech/mahout/examples/target/dependency/nekohtml-1.9.17.jar:/home/speech/mahout/examples/target/dependency/objenesis-1.3.jar:/home/speech/mahout/examples/target/dependency/randomizedtesting-runner-2.0.15.jar:/home/speech/mahout/examples/target/dependency/slf4j-api-1.7.5.jar:/home/speech/mahout/examples/target/dependency/slf4j-log4j12-1.7.5.jar:/home/speech/mahout/examples/target/dependency/solr-commons-csv-3.5.0.jar:/home/speech/mahout/examp
les/target/dependency/spatial4j-0.3.jar:/home/speech/mahout/examples/target/dependency/stax-api-1.0.1.jar:/home/speech/mahout/examples/target/dependency/stax-api-1.0-2.jar:/home/speech/mahout/examples/target/dependency/t-digest-2.0.2.jar:/home/speech/mahout/examples/target/dependency/xercesImpl-2.9.1.jar:/home/speech/mahout/examples/target/dependency/xmlpull-1.1.3.1.jar:/home/speech/mahout/examples/target/dependency/xpp3_min-1.1.4c.jar:/home/speech/mahout/examples/target/dependency/xstream-1.4.4.jar

Mahout-1.0 from https://github.com/apache/mahout compiled as:

mvn -DskipTests clean install

[speech@h14 ~]$ hadoop version
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r 
b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using /usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar
[speech@h14 ~]$

Any hint what I am doing wrong?



--
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-17 Thread Margusja
, use mapreduce.job.working.dir
14/03/17 12:07:23 INFO mapreduce.JobSubmitter: Submitting tokens for 
job: job_local1589554356_0001
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/staging/speech1589554356/.staging/job_local1589554356_0001/job.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/17 12:07:23 WARN conf.Configuration: 
file:/tmp/hadoop-speech/mapred/local/localRunner/speech/job_local1589554356_0001/job_local1589554356_0001.xml:an 
attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/17 12:07:23 INFO mapreduce.Job: The url to track the job: 
http://localhost:8080/

14/03/17 12:07:23 INFO mapreduce.Job: Running job: job_local1589554356_0001
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter set in 
config null
14/03/17 12:07:23 INFO mapred.LocalJobRunner: OutputCommitter is 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter

14/03/17 12:07:23 INFO mapred.LocalJobRunner: Waiting for map tasks
14/03/17 12:07:23 INFO mapred.LocalJobRunner: Starting task: 
attempt_local1589554356_0001_m_00_0
14/03/17 12:07:23 INFO mapred.Task:  Using ResourceCalculatorProcessTree 
: [ ]
14/03/17 12:07:23 INFO mapred.MapTask: Processing split: 
Paths:/user/speech/demo/text1.txt:0+628,/user/speech/demo/text10.txt:0+1327,/user/speech/demo/text2.txt:0+5165,/user/speech/demo/text3.txt:0+3736,/user/speech/demo/text4.txt:0+4338,/user/speech/demo/text5.txt:0+3338,/user/speech/demo/text6.txt:0+5836,/user/speech/demo/text7.txt:0+2936,/user/speech/demo/text8.txt:0+905,/user/speech/demo/text9.txt:0+1566

14/03/17 12:07:23 INFO mapred.LocalJobRunner: Map task executor complete.
14/03/17 12:07:23 WARN mapred.LocalJobRunner: job_local1589554356_0001
java.lang.Exception: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.lang.RuntimeException: 
java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:164)
at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.init(CombineFileRecordReader.java:126)
at 
org.apache.mahout.text.MultipleTextFileInputFormat.createRecordReader(MultipleTextFileInputFormat.java:43)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.init(MapTask.java:491)

at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:734)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:534)
at 
org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.initNextRecordReader(CombineFileRecordReader.java:155)

... 12 more
Caused by: java.lang.IncompatibleClassChangeError: Found interface 
org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
at 
org.apache.mahout.text.WholeFileRecordReader.init(WholeFileRecordReader.java:59)

... 17 more
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
running in uber mode : false

14/03/17 12:07:24 INFO mapreduce.Job:  map 0% reduce 0%
14/03/17 12:07:24 INFO mapreduce.Job: Job job_local1589554356_0001 
failed with state FAILED due to: NA

14/03/17 12:07:24 INFO mapreduce.Job: Counters: 0
14/03/17 12:07:24 INFO driver.MahoutDriver: Program took 3343 ms 
(Minutes: 0.055714)


Obviously I am doing something wrong :)

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-17 Thread Margusja

Okay, sorry for the mess.

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean install -Dhadoop2.version=2.2.0 - did the trick


Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 17/03/14 12:16, Margusja wrote:

Hi, thanks for your reply.

What I did:
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean install -Dhadoop2.profile=hadoop2 - is hadoop2 the right string? I found 
it in the pom profile section, so I used it.


...
it compiled:
[INFO] 


[INFO] Reactor Summary:
[INFO]
[INFO] Mahout Build Tools  SUCCESS [  
1.751 s]
[INFO] Apache Mahout . SUCCESS [  
0.484 s]
[INFO] Mahout Math ... SUCCESS [ 
12.946 s]
[INFO] Mahout Core ... SUCCESS [ 
14.192 s]
[INFO] Mahout Integration  SUCCESS [  
1.857 s]
[INFO] Mahout Examples ... SUCCESS [ 
10.762 s]
[INFO] Mahout Release Package  SUCCESS [  
0.012 s]
[INFO] Mahout Math/Scala wrappers  SUCCESS [ 
25.431 s]
[INFO] Mahout Spark bindings . SUCCESS [ 
40.376 s]
[INFO] 


[INFO] BUILD SUCCESS
[INFO] 


[INFO] Total time: 01:48 min
[INFO] Finished at: 2014-03-17T12:06:31+02:00
[INFO] Final Memory: 79M/2947M
[INFO] 



How can I check whether the hadoop2 libs are in use?

but unfortunately again:
[speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i 
/user/speech/demo -o demo-seqfiles

MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/bin/hadoop and 
HADOOP_CONF_DIR=/etc/hadoop/conf
MAHOUT-JOB: 
/home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
{--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
--fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
--input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
--output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/17 12:07:22 INFO Configuration.deprecation: 
mapred.compress.map.output is deprecated. Instead, use 
mapreduce.map.output.compress
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
deprecated. Instead, use dfs.metrics.session-id
14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
processName=JobTracker, sessionId=
14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
process : 10
14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
node allocation with : CompletedNodes: 4, size left: 29775

14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
deprecated. Instead, use mapreduce.job.user.name
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.compress is deprecated. Instead, use 
mapreduce.output.fileoutputformat.compress
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
deprecated. Instead, use mapreduce.job.jar
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks 
is deprecated. Instead, use mapreduce.job.reduces
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.value.class is deprecated. Instead, use 
mapreduce.job.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.value.class is deprecated. Instead, use 
mapreduce.map.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class 
is deprecated. Instead, use mapreduce.job.map.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.inputformat.class is deprecated. Instead, use 
mapreduce.job.inputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.max.split.size is deprecated. Instead, use

Re: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

2014-03-08 Thread Margusja

Hi, is there any information about the problem I submitted?

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 05/03/14 10:30, Margusja wrote:

Hi

Here are my actions and the problematic result again:

[hduser@vm38 ~]$ git clone https://github.com/apache/mahout.git
remote: Reusing existing pack: 76099, done.
remote: Counting objects: 39, done.
remote: Compressing objects: 100% (32/32), done.
remote: Total 76138 (delta 2), reused 0 (delta 0)
Receiving objects: 100% (76138/76138), 49.04 MiB | 275 KiB/s, done.
Resolving deltas: 100% (34449/34449), done.
[hduser@vm38 ~]$ cd mahout
[hduser@vm38 ~]$ mvn clean package -DskipTests=true 
-Dhadoop2.version=2.2.0

...
...
...
[INFO] Reactor Summary:
[INFO]
[INFO] Mahout Build Tools  SUCCESS 
[15.529s]
[INFO] Apache Mahout . SUCCESS 
[1.657s]
[INFO] Mahout Math ... SUCCESS 
[1:00.891s]
[INFO] Mahout Core ... SUCCESS 
[2:44.617s]
[INFO] Mahout Integration  SUCCESS 
[38.195s]
[INFO] Mahout Examples ... SUCCESS 
[45.458s]
[INFO] Mahout Release Package  SUCCESS 
[0.012s]
[INFO] Mahout Math/Scala wrappers  SUCCESS 
[53.519s]
[INFO] 


[INFO] BUILD SUCCESS
[INFO] 


[INFO] Total time: 6:27.763s
[INFO] Finished at: Wed Mar 05 10:22:51 EET 2014
[INFO] Final Memory: 57M/442M
[INFO] 


[hduser@vm38 mahout]$
[hduser@vm38 mahout]$ cd ../
[hduser@vm38 ~]$ /usr/lib/hadoop/bin/hadoop jar 
mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar 
org.apache.mahout.classifier.df.mapreduce.BuildForest -d 
input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5 
-p -t 100 -o nsl-forest
14/03/05 10:26:39 INFO mapreduce.BuildForest: Partial Mapred 
implementation

14/03/05 10:26:39 INFO mapreduce.BuildForest: Building the forest...
14/03/05 10:26:39 INFO client.RMProxy: Connecting to ResourceManager 
at /0.0.0.0:8032
14/03/05 10:26:51 INFO input.FileInputFormat: Total input paths to 
process : 1

14/03/05 10:26:51 INFO mapreduce.JobSubmitter: number of splits:1
14/03/05 10:26:51 INFO Configuration.deprecation: user.name is 
deprecated. Instead, use mapreduce.job.user.name
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.jar is 
deprecated. Instead, use mapreduce.job.jar
14/03/05 10:26:51 INFO Configuration.deprecation: 
mapred.cache.files.filesizes is deprecated. Instead, use 
mapreduce.job.cache.files.filesizes
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.cache.files 
is deprecated. Instead, use mapreduce.job.cache.files
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.reduce.tasks 
is deprecated. Instead, use mapreduce.job.reduces
14/03/05 10:26:51 INFO Configuration.deprecation: 
mapred.output.value.class is deprecated. Instead, use 
mapreduce.job.output.value.class
14/03/05 10:26:51 INFO Configuration.deprecation: mapreduce.map.class 
is deprecated. Instead, use mapreduce.job.map.class
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
14/03/05 10:26:51 INFO Configuration.deprecation: 
mapreduce.inputformat.class is deprecated. Instead, use 
mapreduce.job.inputformat.class
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.output.dir is 
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/05 10:26:51 INFO Configuration.deprecation: 
mapreduce.outputformat.class is deprecated. Instead, use 
mapreduce.job.outputformat.class
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
14/03/05 10:26:51 INFO Configuration.deprecation: 
mapred.cache.files.timestamps is deprecated. Instead, use 
mapreduce.job.cache.files.timestamps
14/03/05 10:26:51 INFO Configuration.deprecation: 
mapred.output.key.class is deprecated. Instead, use 
mapreduce.job.output.key.class
14/03/05 10:26:51 INFO Configuration.deprecation: mapred.working.dir 
is deprecated. Instead, use mapreduce.job.working.dir
14/03/05 10:26:52 INFO mapreduce.JobSubmitter: Submitting tokens for 
job: job_1393936067845_0018
14/03/05 10:26

Re: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

2014-03-05 Thread Margusja
 mapreduce.Job: Job job_1393936067845_0018 
completed successfully

14/03/05 10:27:49 INFO mapreduce.Job: Counters: 27
File System Counters
FILE: Number of bytes read=2994
FILE: Number of bytes written=80677
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=880103
HDFS: Number of bytes written=2446794
HDFS: Number of read operations=5
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=36022
Total time spent by all reduces in occupied slots (ms)=0
Map-Reduce Framework
Map input records=9994
Map output records=100
Input split bytes=123
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=402
CPU time spent (ms)=32020
Physical memory (bytes) snapshot=200962048
Virtual memory (bytes) snapshot=997363712
Total committed heap usage (bytes)=111673344
File Input Format Counters
Bytes Read=879980
File Output Format Counters
Bytes Written=2446794
Exception in thread main java.lang.IncompatibleClassChangeError: Found 
interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at 
org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.processOutput(PartialBuilder.java:113)
at 
org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:89)
at 
org.apache.mahout.classifier.df.mapreduce.Builder.build(Builder.java:294)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.buildForest(BuildForest.java:228)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.run(BuildForest.java:188)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.main(BuildForest.java:252)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
[hduser@vm38 ~]$

Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 05/03/14 01:18, Gokhan Capan wrote:

mvn clean package -DskipTests=true -Dhadoop2.version=2.2.0




Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

2014-03-04 Thread Margusja
Bytes Written=2483042
Exception in thread main java.lang.IncompatibleClassChangeError: Found 
interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at 
org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.processOutput(PartialBuilder.java:113)
at 
org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:89)
at 
org.apache.mahout.classifier.df.mapreduce.Builder.build(Builder.java:294)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.buildForest(BuildForest.java:228)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.run(BuildForest.java:188)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.main(BuildForest.java:252)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I even downloaded source from https://github.com/apache/mahout.git and 
build it like:

mvn -DskipTests -Dhadoop2.version=2.2.0 clean install
then used command line:
/usr/lib/hadoop-yarn/bin/yarn jar 
mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar 
org.apache.mahout.classifier.df.mapreduce.BuildForest -d 
input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5 
-p -t 100 -o nsl-forest


and got the same error like above.

Is there something wrong on my side, or can hadoop-2.2.0 and mahout not 
play with each other anymore?


The typical example:
/usr/lib/hadoop-yarn/bin/yarn jar 
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.2.0.2.0.6.0-101.jar pi 
2 5

works.

--
Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-



Re: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

2014-03-04 Thread Margusja

Hi, thanks for the reply.

Here is my output:

[hduser@vm38 ~]$ /usr/lib/hadoop/bin/hadoop version
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r 
b07b2906c36defd389c8b5bd22bebc1bead8115b

Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using 
/usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar


[hduser@vm38 ~]$ /usr/lib/hadoop/bin/hadoop jar 
mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar 
org.apache.mahout.classifier.df.mapreduce.BuildForest -d 
input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5 
-p -t 100 -o nsl-forest


...
14/03/04 16:22:51 INFO mapreduce.Job:  map 0% reduce 0%
14/03/04 16:23:12 INFO mapreduce.Job:  map 100% reduce 0%
14/03/04 16:23:43 INFO mapreduce.Job: Job job_1393936067845_0013 
completed successfully

14/03/04 16:23:44 INFO mapreduce.Job: Counters: 27
File System Counters
FILE: Number of bytes read=2994
FILE: Number of bytes written=80677
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=880103
HDFS: Number of bytes written=2436546
HDFS: Number of read operations=5
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=45253
Total time spent by all reduces in occupied slots (ms)=0
Map-Reduce Framework
Map input records=9994
Map output records=100
Input split bytes=123
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=456
CPU time spent (ms)=36010
Physical memory (bytes) snapshot=180752384
Virtual memory (bytes) snapshot=994275328
Total committed heap usage (bytes)=101187584
File Input Format Counters
Bytes Read=879980
File Output Format Counters
Bytes Written=2436546
Exception in thread main java.lang.IncompatibleClassChangeError: Found 
interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at 
org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.processOutput(PartialBuilder.java:113)
at 
org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:89)
at 
org.apache.mahout.classifier.df.mapreduce.Builder.build(Builder.java:294)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.buildForest(BuildForest.java:228)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.run(BuildForest.java:188)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.mahout.classifier.df.mapreduce.BuildForest.main(BuildForest.java:252)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)



Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 04/03/14 16:11, Sergey Svinarchuk wrote:

Sorry, I didn't see that you are trying to use mahout-1.0-SNAPSHOT.
You used /usr/lib/hadoop-yarn/bin/yarn but need to use
/usr/lib/hadoop/bin/hadoop, and then your example will succeed.


On Tue, Mar 4, 2014 at 3:45 PM, Sergey Svinarchuk 
ssvinarc...@hortonworks.com wrote:


Mahout 0.9 does not support hadoop 2 dependencies.
You can use mahout-1.0-SNAPSHOT or add to your mahout the patch from
https://issues.apache.org/jira/browse/MAHOUT-1329 for hadoop 2
support.


On Tue, Mar 4, 2014 at 3:38 PM, Margusja mar...@roo.ee wrote:


Hi

following command:
/usr/lib/hadoop-yarn/bin/yarn jar 
mahout-distribution-0.9/mahout-examples-0.9.jar
org.apache.mahout.classifier.df.mapreduce.BuildForest -d
input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5
-p -t 100 -o nsl-forest

When I used hadoop 1.x then it worked.
Now I use hadoop-2.2.0 it gives me:
14/03/04 15:25:58 INFO mapreduce.BuildForest

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-04 Thread Margusja

Thank you for the reply, I got it working.

[hduser@vm38 ~]$ /usr/lib/hadoop-yarn/bin/yarn version
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r 
b07b2906c36defd389c8b5bd22bebc1bead8115b

Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using 
/usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar

[hduser@vm38 ~]$

I think the main problem was that I had the yarn binary in two places and I was
using the wrong one, which didn't pick up my yarn-site.xml.
Every time I looked into .staging/job.../job.xml there were values from
<source>yarn-default.xml</source> even though I had set them in yarn-site.xml.


Typical mess up :)

Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 04/03/14 05:14, Rohith Sharma K S wrote:

Hi

   The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that
Hadoop is compiled against protoc 2.5.0, but a lower version of protobuf is present
in the classpath.

1. Check the MRAppMaster classpath to see which version of protobuf is there.
It is expected to be version 2.5.0.



Thanks & Regards
Rohith Sharma K S



-Original Message-
From: Margusja [mailto:mar...@roo.ee]
Sent: 03 March 2014 22:45
To: user@hadoop.apache.org
Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto 
overrides final method getUnknownFields

Hi

2.2.0 and 2.3.0 gave me the same container log.

A little bit more details.
I'll try to use external java client who submits job.
some lines from maven pom.xml file:
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
  </dependency>

lines from external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to process : 1
2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job:
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job:
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running in uber 
mode : false
2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed with 
state FAILED due to: Application application_1393848686226_0018 failed 2 times 
due to AM Container for
appattempt_1393848686226_0018_02 exited with  exitCode: 1 due to:
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
  at org.apache.hadoop.util.Shell.run(Shell.java:379)
  at
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
  at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
  at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
  at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
  at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:744)
...

Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total 
time for transactions(ms): 69 Number of transactions batched in
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates:
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock:
/user/hduser/input/data666.noheader.data.
BP-802201089-90.190.106.33-1393506052071
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask
90.190.106.33:50010 to delete

class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-03 Thread Margusja

Hi

I even don't know what information to provide but my container log is:

2014-03-03 17:36:05,311 FATAL [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.VerifyError: class 
org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final 
method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
at java.lang.Class.getConstructor0(Class.java:2803)
at java.lang.Class.getConstructor(Class.java:1718)
at 
org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
at 
org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
at 
org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
at 
org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)


Where to start digging?

--
Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-



Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-03 Thread Margusja
 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: 
USER=hduser   OPERATION=Container Finished - Failed 
TARGET=ContainerImplRESULT=FAILURE   DESCRIPTION=Container 
failed with state: EXITED_WITH_FAILURE 
APPID=application_1393848686226_0019 
CONTAINERID=container_1393848686226_0019_02_01
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1393848686226_0019_02_01 transitioned from 
EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Removing container_1393848686226_0019_02_01 from application 
application_1393848686226_0019
2014-03-03 19:13:19,499 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending 
out status for container: container_id { app_attempt_id { application_id 
{ id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 } 
state: C_COMPLETE diagnostics: Exception from container-launch: 
\norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat 
org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat 
org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat 
java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
java.lang.Thread.run(Thread.java:744)\n\n\n exit_status: 1
2014-03-03 19:13:20,161 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed 
completed container container_1393848686226_0019_02_01
2014-03-03 19:13:20,542 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Starting resource-monitoring for container_1393848686226_0019_02_01
2014-03-03 19:13:20,543 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: 
Stopping resource-monitoring for container_1393848686226_0019_02_01
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from RUNNING to 
APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
Deleting absolute path : 
/tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: 
Application application_1393848686226_0019 transitioned from 
APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: 
Scheduling Log Deletion for application: application_1393848686226_0019, 
with delay of 10800 seconds

...


Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 03/03/14 19:05, Ted Yu wrote:

Can you tell us the hadoop release you're using ?

Seems there is inconsistency in protobuf library.


On Mon, Mar 3, 2014 at 8:01 AM, Margusja <mar...@roo.ee> wrote:


Hi

I even don't know what information to provide but my container log is:

2014-03-03 17:36:05,311 FATAL [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting
MRAppMaster
java.lang.VerifyError: class
org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto
overrides final method
getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at
java.security.SecureClassLoader.defineClass

unsubscribe

2013-07-12 Thread Margusja




Task failure in slave node

2013-07-11 Thread Margusja
: 10698
13/07/11 15:42:20 INFO mapreduce.BuildForest: Forest mean max Depth: 16
13/07/11 15:42:20 INFO mapreduce.BuildForest: Storing the forest in: 
bal_ee_2009_out/forest.seq


Both (n1 and n2) are used, and from the web console I can see that there are
no errors.


Is there any explanation for why I am getting errors when I run the command
from the master?



--
Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
skype: margusja
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-



Re: Task failure in slave node

2013-07-11 Thread Margusja

Thank you, it resolved the problem.
Funny, I don't remember copying the mahout libs to n1's hadoop, but there
they are.


Tervitades, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
skype: margusja
-BEGIN PUBLIC KEY-
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-END PUBLIC KEY-

On 7/11/13 4:41 PM, Azuryy Yu wrote:


sorry for typo,

mahout, not mahou.  sent from mobile

On Jul 11, 2013 9:40 PM, Azuryy Yu <azury...@gmail.com> wrote:


hi,

put all mahou jars under hadoop_home/lib, then restart cluster.

On Jul 11, 2013 8:45 PM, Margusja <mar...@roo.ee> wrote:

Hi

I have tow nodes:
n1 (master, salve) and n2 (slave)

after set up I ran wordcount example and it worked fine:
[hduser@n1 ~]$ hadoop jar
/usr/local/hadoop/hadoop-examples-1.0.4.jar wordcount
/user/hduser/gutenberg /user/hduser/gutenberg-output
13/07/11 15:30:44 INFO input.FileInputFormat: Total input
paths to process : 7
13/07/11 15:30:44 INFO util.NativeCodeLoader: Loaded the
native-hadoop library
13/07/11 15:30:44 WARN snappy.LoadSnappy: Snappy native
library not loaded
13/07/11 15:30:44 INFO mapred.JobClient: Running job:
job_201307111355_0015
13/07/11 15:30:45 INFO mapred.JobClient:  map 0% reduce 0%
13/07/11 15:31:03 INFO mapred.JobClient:  map 42% reduce 0%
13/07/11 15:31:06 INFO mapred.JobClient:  map 57% reduce 0%
13/07/11 15:31:09 INFO mapred.JobClient:  map 71% reduce 0%
13/07/11 15:31:15 INFO mapred.JobClient:  map 100% reduce 0%
13/07/11 15:31:18 INFO mapred.JobClient:  map 100% reduce 23%
13/07/11 15:31:27 INFO mapred.JobClient:  map 100% reduce 100%
13/07/11 15:31:32 INFO mapred.JobClient: Job complete:
job_201307111355_0015
13/07/11 15:31:32 INFO mapred.JobClient: Counters: 30
13/07/11 15:31:32 INFO mapred.JobClient:   Job Counters
13/07/11 15:31:32 INFO mapred.JobClient: Launched reduce
tasks=1
13/07/11 15:31:32 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=67576
13/07/11 15:31:32 INFO mapred.JobClient: Total time spent
by all reduces waiting after reserving slots (ms)=0
13/07/11 15:31:32 INFO mapred.JobClient: Total time spent
by all maps waiting after reserving slots (ms)=0
13/07/11 15:31:32 INFO mapred.JobClient: Rack-local map
tasks=3
13/07/11 15:31:32 INFO mapred.JobClient: Launched map tasks=7
13/07/11 15:31:32 INFO mapred.JobClient: Data-local map
tasks=4
13/07/11 15:31:32 INFO mapred.JobClient:
SLOTS_MILLIS_REDUCES=21992
13/07/11 15:31:32 INFO mapred.JobClient:   File Output Format
Counters
13/07/11 15:31:32 INFO mapred.JobClient: Bytes Written=1412505
13/07/11 15:31:32 INFO mapred.JobClient: FileSystemCounters
13/07/11 15:31:32 INFO mapred.JobClient: FILE_BYTES_READ=5414195
13/07/11 15:31:32 INFO mapred.JobClient: HDFS_BYTES_READ=6950820
13/07/11 15:31:32 INFO mapred.JobClient:
FILE_BYTES_WRITTEN=8744993
13/07/11 15:31:32 INFO mapred.JobClient:
HDFS_BYTES_WRITTEN=1412505
13/07/11 15:31:32 INFO mapred.JobClient:   File Input Format
Counters
13/07/11 15:31:32 INFO mapred.JobClient: Bytes Read=6950001
13/07/11 15:31:32 INFO mapred.JobClient:   Map-Reduce Framework
13/07/11 15:31:32 INFO mapred.JobClient: Map output
materialized bytes=3157469
13/07/11 15:31:32 INFO mapred.JobClient: Map input
records=137146
13/07/11 15:31:32 INFO mapred.JobClient: Reduce shuffle
bytes=2904836
13/07/11 15:31:32 INFO mapred.JobClient: Spilled
Records=594764
13/07/11 15:31:32 INFO mapred.JobClient: Map output
bytes=11435849
13/07/11 15:31:32 INFO mapred.JobClient: Total committed
heap usage (bytes)=1128136704
13/07/11 15:31:32 INFO mapred.JobClient: CPU time spent
(ms)=18230
13/07/11 15:31:32 INFO mapred.JobClient: Combine input
records=1174991
13/07/11 15:31:32 INFO mapred.JobClient: SPLIT_RAW_BYTES=819
13/07/11 15:31:32 INFO mapred.JobClient: Reduce input
records=218990
13/07/11 15:31:32 INFO mapred.JobClient: Reduce input
groups=128513
13/07/11 15:31:32 INFO mapred.JobClient: Combine output
records=218990
13/07/11 15:31:32 INFO mapred.JobClient: Physical memory
(bytes) snapshot=1179656192
13/07/11 15:31:32 INFO mapred.JobClient: Reduce output
records=128513
13/07/11

[Components] Node activates less conditional outgoing nodes than required

2010-04-28 Thread Margusja
Hi,

I have the workflow: http://ftp.margusja.pri.ee/demowf.png
I presume that if I can create and execute the workflow then I should be able
to resume it.
I have simple code, almost copy-pasted from the documentation.

<?php
set_include_path( "/var/www/arendus/mehis/big/library/ezcomponents/" . PATH_SEPARATOR . get_include_path() );
require_once "Base/src/base.php"; // dependent on installation method, see below
function __autoload( $className ){
    ezcBase::autoload( $className );
}

class age implements ezcWorkflowServiceObject{
    public $points;
    public $variableName;
    public $inNode;
    public $outNode;
    public $choice;
    public function __construct( $points ) {
        $this->points = $points;
        $this->variableName = __CLASS__;
        $this->choice = array(
            array( 'start' => 0,  'end' => 17,  'score' => 1000 ),
            array( 'start' => 18, 'end' => 24,  'score' => 20 ),
            array( 'start' => 66, 'end' => 150, 'score' => 1000 )
        );
    }
    public function execute( ezcWorkflowExecution $execution ){
        print $this->points;
        return true;
    }
    public function __toString() {
        return $this->variableName;
    }
}

// Set up database connection.
$db = ezcDbFactory::create( 'mysql://dbusert:dbpa...@localhost/mehis_ezwf' );

// Set up database-based workflow executer.
$execution = new ezcWorkflowDatabaseExecution( $db, 34 );

// Resume workflow execution.
$execution->resume(
    array( 'age' => 10 )
);
?>
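For completeness, a hedged sketch (not from the original post) of the start side that would produce a suspended execution ID such as the 34 used above; the workflow name 'mtest' is taken from the digraph below and is an assumption, and the definition-storage calls follow the usual eZ Workflow pattern:

<?php
// Assumes the same ezcBase autoload setup as in the listing above.
$db = ezcDbFactory::create( 'mysql://dbusert:dbpa...@localhost/mehis_ezwf' );

// Load the stored workflow definition (the name 'mtest' is an assumption).
$definition = new ezcWorkflowDatabaseDefinitionStorage( $db );
$workflow   = $definition->loadByName( 'mtest' );

// Start it; execution suspends at the first input node and start() returns
// the execution ID that is later passed to ezcWorkflowDatabaseExecution.
$execution = new ezcWorkflowDatabaseExecution( $db );
$execution->workflow = $workflow;
$id = $execution->start();
print "Suspended execution ID: $id\n";
?>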

If I try to resume my workflow I'll get:
1000
Fatal error: Uncaught exception 'ezcWorkflowExecutionException' with
message 'Node activates less conditional outgoing nodes than required.' in
/var/www/arendus/mehis/big/library/ezcomponents/Workflow/src/interfaces/node_conditional_branch.php:174
Stack trace:
#0 /var/www/arendus/mehis/big/library/ezcomponents/Workflow/src/interfaces/execution.php(494): ezcWorkflowNodeConditionalBranch->execute(Object(ezcWorkflowDatabaseExecution))
#1 /var/www/arendus/mehis/big/library/ezcomponents/Workflow/src/interfaces/execution.php(367): ezcWorkflowExecution->execute()
#2 /var/www/arendus/mehis/big/exec.php(42): ezcWorkflowExecution->resume(Array)
#3 {main}
  thrown in /var/www/arendus/mehis/big/library/ezcomponents/Workflow/src/interfaces/node_conditional_branch.php
on line 179

Any hint?
I see the main problem is: "Node activates less conditional outgoing
nodes than required".
But how? I have a correct workflow and the relations between the nodes.
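One common cause of this exception is that, for some value of the variable, none of the conditional out-nodes of an exclusive choice matches - either because the conditions leave a gap (here, ages 25-65 are not covered by the three branches) or because the variable the conditions look at is not set when the branch executes. A minimal sketch of a catch-all branch, assuming $branch is the exclusive choice and $merge the simple merge from the (not shown) workflow definition:

<?php
// Sketch only: add a branch that matches exactly the values the other
// conditions do not cover, so the exclusive choice can always activate
// one outgoing node. Uses only condition classes from the Workflow component.
$covered = new ezcWorkflowConditionOr( array(
    new ezcWorkflowConditionAnd( array(
        new ezcWorkflowConditionIsGreaterThan( -1 ),
        new ezcWorkflowConditionIsLessThan( 18 ) ) ),   // 0 .. 17
    new ezcWorkflowConditionAnd( array(
        new ezcWorkflowConditionIsGreaterThan( 17 ),
        new ezcWorkflowConditionIsLessThan( 25 ) ) ),   // 18 .. 24
    new ezcWorkflowConditionAnd( array(
        new ezcWorkflowConditionIsGreaterThan( 65 ),
        new ezcWorkflowConditionIsLessThan( 151 ) ) )   // 66 .. 150
) );

// Fallback action node for every age not covered above (25 .. 65).
$nodeOther = new ezcWorkflowNodeAction(
    array( 'class' => 'age', 'arguments' => array( 0 ) ) );
$nodeOther->addOutNode( $merge ); // $merge: the simple merge before the income input

$branch->addConditionalOutNode(
    new ezcWorkflowConditionVariable( 'age',
        new ezcWorkflowConditionNot( $covered ) ),
    $nodeOther
);
?>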

digraph mtest {
  node1 [label="Start", color="#2e3436"]
  node3 [label="Input", color="#2e3436"]
  node4 [label="Exclusive Choice", color="#2e3436"]
  node5 [label="Class age not found.", color="#2e3436"]
  node6 [label="Simple Merge", color="#2e3436"]
  node7 [label="Input", color="#2e3436"]
  node8 [label="Exclusive Choice", color="#2e3436"]
  node9 [label="Class income not found.", color="#2e3436"]
  node10 [label="Simple Merge", color="#2e3436"]
  node2 [label="End", color="#2e3436"]
  node11 [label="Class income not found.", color="#2e3436"]
  node12 [label="Class income not found.", color="#2e3436"]
  node13 [label="Class income not found.", color="#2e3436"]
  node14 [label="Class income not found.", color="#2e3436"]
  node15 [label="Class age not found.", color="#2e3436"]
  node16 [label="Class age not found.", color="#2e3436"]
  node1 -> node3
  node3 -> node4
  node4 -> node5 [label="age (  0  = 17 )"]
  node4 -> node15 [label="age (  18  = 24 )"]
  node4 -> node16 [label="age (  66  = 150 )"]
  node5 -> node6
  node6 -> node7
  node7 -> node8
  node8 -> node9 [label="income (  0  = 3000 )"]
  node8 -> node11 [label="income (  3001  = 7000 )"]
  node8 -> node12 [label="income (  7001  = 11000 )"]
  node8 -> node13 [label="income (  11001  = 25000 )"]
  node8 -> node14 [label="income (  25000  = 99 )"]
  node9 -> node10
  node10 -> node2
  node11 -> node10
  node12 -> node10
  node13 -> node10
  node14 -> node10
  node15 -> node6
  node16 -> node6
}


-- 
Regards Margusja
Phone: +372 51 48 780
MSN: margu...@kodila.ee
skype: margusja
web: http://margusja.pri.ee

-- 
Components mailing list
Components@lists.ez.no
http://lists.ez.no/mailman/listinfo/components


[Components] Uncaught exception 'ezcWorkflowInvalidInputException' with message 'children == '

2010-03-24 Thread margusja
Hi,

I have a simple workflow

 $input = new ezcWorkflowNodeInput(
    array( 'children' => new ezcWorkflowConditionAnd(
        array(
            new ezcWorkflowConditionIsGreaterThan( 0 ),
            new ezcWorkflowConditionIsLessThan( 31 )
        )
    ) ) );

$workflow->startNode->addOutNode( $input );

$branch = new ezcWorkflowNodeExclusiveChoice;
$branch->addInNode( $input );

$node10p = new ezcWorkflowNodeAction( array( 'class' => 'Children',
    'arguments' => array( 1 => '10p' ) ) );
$node20p = new ezcWorkflowNodeAction( array( 'class' => 'Children',
    'arguments' => array( 2 => '20p' ) ) );
$node30p = new ezcWorkflowNodeAction( array( 'class' => 'Children',
    'arguments' => array( 3 => '30p' ) ) );

$condition1 = new ezcWorkflowConditionVariable( 'children', new
ezcWorkflowConditionIsEqual( 11 ) );
$branch->addConditionalOutNode( $condition1, $node10p );

...


I can start my workflow and I'll get a workflow ID.

Then I try to resume it:
$execution = new ezcWorkflowDatabaseExecution( $db, ID );
$execution->resume( array( 'children' => 31 ) );
Then I get:
Fatal error: Uncaught exception 'ezcWorkflowInvalidInputException' with
message 'children == ' in
/usr/share/pear/ezc/Workflow/interfaces/execution.php:359
Stack trace:
#0 /var/www/arendus/margusja/ecwf/demowf_continue.php(41): ezcWorkflowExecution->resume(Array)
#1 {main}
  thrown in /usr/share/pear/ezc/Workflow/interfaces/execution.php on line 359

As far as I understand, in the class ezcWorkflowConditionIsEqual (which extends
ezcWorkflowConditionComparison) the method evaluate doesn't give a proper
response.
I checked:

public function evaluate( $value )
{
    return $value == $this->value;
}

There is no value in $this->value.

Any hint?
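A minimal sketch (not from the original post) for checking the comparison condition in isolation, outside the execution engine; if these calls behave as expected, the condition object itself is fine and the empty $this->value is more likely lost when the definition is stored and reloaded:

<?php
// Assumes the same ezcBase autoload setup used elsewhere in this thread.
$isEqual = new ezcWorkflowConditionIsEqual( 11 );
var_dump( $isEqual->evaluate( 11 ) );   // expected: bool(true)
var_dump( $isEqual->evaluate( 31 ) );   // expected: bool(false)

// ezcWorkflowConditionVariable receives the whole variable array and hands
// the named variable's value to the wrapped condition.
$children = new ezcWorkflowConditionVariable( 'children', $isEqual );
var_dump( $children->evaluate( array( 'children' => 11 ) ) );  // expected: bool(true)
?>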

-- 
Tervitades, Margusja
+3725148780
http://margusja.pri.ee
skype: margusja
msn: margu...@kodila.ee

-- 
Components mailing list
Components@lists.ez.no
http://lists.ez.no/mailman/listinfo/components


[suPHP] terminate called after throwing an instance of 'suPHP::LookupException'

2009-05-19 Thread Margusja
Hello, the good old "terminate called after throwing an instance of
'suPHP::LookupException'" problem is back.

Some months ago I had this problem and now I have it again.
The old problem description is below; at the moment I have almost the same
situation. Now I have OS Fedora Core 10 and php:
[20:08:55 r...@h11 lepinguabi.ee]# php-cgi -v
PHP 5.2.6 (cgi-fcgi) (built: Sep 13 2008 11:12:16)
Copyright (c) 1997-2008 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
with Zend Extension Manager v1.2.2, Copyright (c) 2003-2007, by Zend 
Technologies
with Zend Optimizer v3.3.0, Copyright (c) 1998-2007, by Zend 
Technologies


---
Hello,

I found the problem.
The problem was in the permissions of the directory /home/virtual/tuuleke.ee/vhosts/.
On the old system (Fedora Core 7 and
mod_suphp-0.6.3-1.fc9.x86_64) it works fine with these permissions:
[12:54:56 root at h11 ~]# ls -lah /var/www/tuuleke.ee/
total 28K
drwxrwxr-x   4 11933 10386 4.0K 2008-08-25 20:07 .
drwxr-xr-x 280 root  root   12K 2009-03-02 15:47 ..
drwxrwxr-x   2 11933 10386 4.0K 2008-08-25 20:07 users
drwxrwxr-x   3 11933 10386 4.0K 2008-09-03 21:21 vhosts

But not anymore on Fedora Core 9 with mod_suphp-0.6.3-3.fc10.x86_64 - I
even tried the Fedora Core 10 package :) On the current system it works with
this directory permission:
[12:57:06 root at h11 tuuleke.ee]# ls -lah
total 40K
drwxrwxr-x   4 root  root  4.0K 2008-08-25 20:07 .
drwxr-xr-x 277 root  root   12K 2008-07-14 22:37 ..
drwxrwxr-x   2 11933 10386 4.0K 2008-08-25 20:07 users
drwxrwxr-x   3 root  root  4.0K 2008-09-03 21:21 vhosts

OK, that is not a problem, but can someone describe what has changed?

---
Margus Margusja Roo
+3725148780
skype: margusja
msn: margusja at kodila.ee
homepage: http://margusja.pri.ee



Margusja wrote:
  Hello.
 
  I still use mod_suphp-0.6.2-1.fc7 on Fedora Core 7. All is fine.
 
  Set up new test server and moved all data to new server. Upgraded Fedora
  Core 7 to Fedora Core 9 and got mod_suphp-0.6.3-1.fc9.x86_64.
 
  One example virtualhost config.
  <VirtualHost *:80>
  ServerName  www.tuuleke.ee
  ServerAlias tuuleke.ee

  DocumentRoot /home/virtual/tuuleke.ee/vhosts/www/htdocs
  CustomLog /home/virtual/tuuleke.ee/vhosts/www/logs/www.tuuleke.ee.access combined
  ErrorLog /home/virtual/tuuleke.ee/vhosts/www/logs/www.tuuleke.ee.errors
  <Directory /home/virtual/tuuleke.ee/vhosts/www/htdocs>
  Options MultiViews FollowSymLinks
  AllowOverride FileInfo AuthConfig Limit Indexes
  </Directory>
  Alias /stats /home/www/logs/stats

  ScriptAlias /cgi-bin/ /home/virtual/tuuleke.ee/vhosts/www/cgi-bin/
  <Directory /home/virtual/tuuleke.ee/vhosts/www/cgi-bin>
  Options ExecCGI
  AllowOverride FileInfo AuthConfig Limit Indexes
  </Directory>


  <Directory /home/virtual/tuuleke.ee/vhosts/www/cgi-bin>
  Options ExecCGI
  </Directory>



  RewriteEngine On
  RewriteRule   ^/~([^./]+)(.*) /ispman/domains/tuuleke.ee/users/$1_tuuleke_ee/public_html$2

  suPHP_Engine on
  suPHP_AddHandler php5-script

  CBandSpeed 128 3 1
  CBandRemoteSpeed 20kb/s 1 1
  CBandScoreboard /home/virtual/tuuleke.ee/vhosts/www/scoreboard
  CBandLimit 100M
  CBandPeriod 4W

  </VirtualHost>
 
  /etc/suphp.conf:
 
  [global]
  ;Path to logfile
  logfile=/var/log/suphp.log
 
  ;Loglevel
  loglevel=info
 
  ;User Apache is running as
  webserver_user=apache
 
  ;Path all scripts have to be in
  docroot=/
 
  ;Path to chroot() to before executing script
  ;chroot=/mychroot
 
  ; Security options
  allow_file_group_writeable=true
  allow_file_others_writeable=false
  allow_directory_group_writeable=true
  allow_directory_others_writeable=false
 
  ;Check wheter script is within DOCUMENT_ROOT
  check_vhost_docroot=false
 
  ;Send minor error messages to browser
  errors_to_browser=true
 
  ;PATH environment variable
  env_path=/bin:/usr/bin
 
  ;Umask to set, specify in octal notation
  umask=0077
 
  ; Minimum UID
  min_uid=500
 
  ; Minimum GID
  min_gid=500
 
  ; Use correct permissions for mod_userdir sites
  handle_userdir=true
 
  [handlers]
  ;Handler for php-scripts
  php5-script=php:/usr/bin/php-cgi
 
  ;Handler for CGI-scripts
  x-suphp-cgi=execute:!self
 
  /etc/httpd/conf.d/mod_suphp.conf:
  # This is the Apache server configuration file providing suPHP support..
  # It contains the configuration directives to instruct the server how to
  # serve php pages while switching to the user context before rendering.
 
  LoadModule suphp_module modules/mod_suphp.so
 
 
  ### Uncomment to activate mod_suphp
  #suPHP_AddHandler php5-script
 
 
  # This option tells mod_suphp if a PHP

Re: Timeout: No Response from client-server

2009-05-08 Thread Margusja
If I use tcpdump on the client-server:

10:26:48 root[load: 0@client-server ~# tcpdump -i eth0 port 161
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes


10:32:14.174663 IP marvin.37634  client-server.snmp:  C=sw 
GetNextRequest(25) 
10:32:14.175457 IP client-server.snmp  monitoring-server.37634:  C=sw 
GetResponse(104)  system.sysDescr.0=[|snmp]
10:32:15.177018 IP monitoring-server.37634  client-server.snmp:  C=sw 
GetNextRequest(25) 
10:32:15.177793 IP client-server.snmp  monitoring-server.37634:  C=sw 
GetResponse(104)  system.sysDescr.0=[|snmp]
10:32:16.179959 IP monitoring-server.37634  client-server.snmp:  C=sw 
GetNextRequest(25) 
10:32:16.180662 IP client-server.snmp  monitoring-server.37634:  C=sw 
GetResponse(104)  system.sysDescr.0=[|snmp]
10:32:17.184903 IP monitoring-server.37634  client-server.snmp:  C=sw 
GetNextRequest(25) 
10:32:17.185615 IP client-server.snmp  monitoring-server.37634:  C=sw 
GetResponse(104)  system.sysDescr.0=[|snmp]
10:32:18.188970 IP monitoring-server.37634  client-server.snmp:  C=sw 
GetNextRequest(25) 
10:32:18.189696 IP client-server.snmp  monitoring-server.37634:  C=sw 
GetResponse(104)  system.sysDescr.0=[|snmp]
10:32:19.191773 IP monitoring-server.37634  client-server.snmp:  C=sw 
GetNextRequest(25) 
10:32:19.192529 IP client-server.snmp  monitoring-server.37634:  C=sw 
GetResponse(104)  system.sysDescr.0=[|snmp]

12 packets captured
13 packets received by filter
0 packets dropped by kernel
10:32:27 root[load: 0@client-server ~#


Best regards, Margus Margusja Roo
+3725148780
skype: margusja
msn: margu...@kodila.ee
homepage: http://margusja.pri.ee



Margusja wrote:
 Hi

 I have two server-rooms (DZ-A and DZ-B)

 in DZ-A there is a server called client-server and in a DZ-B there is a 
 monitoring server called monitoring-server.

 if i do in monitoring-server i got:
 ~# snmpwalk -c sw client-server -v1
 Timeout: No Response from client-server
 ~#
 I have a connection between monitoring-server and client-server:
 15:23:00 root[load: 0@monitoring-server ~# ping client-server
 PING client-server (xxx.xxx.xxx.xxx) 56(84) bytes of data.
 64 bytes from client-server (xxx.xxx.xxx.xxx): icmp_seq=1 ttl=56 
 time=1.23 ms
 ...

 --- client-server ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 2005ms
 rtt min/avg/max/mdev = 0.842/1.346/1.962/0.464 ms
 15:26:18 root[load: 0@monitorin-server ~#

 15:26:18 root[load: 0@monitoring-server ~# nmap -sU -v -p 161 
 client-server

 Starting Nmap 4.68 ( http://nmap.org ) at 2009-05-07 15:27 EEST
 Initiating Ping Scan at 15:27
 Scanning 194.106.120.83 [2 ports]
 Completed Ping Scan at 15:27, 0.03s elapsed (1 total hosts)
 Initiating UDP Scan at 15:27
 Scanning client-server (xxx.xxx.xxx.xxx) [1 port]
 Completed UDP Scan at 15:27, 0.23s elapsed (1 total ports)
 Host client-server (xxx.xxx.xxx.xxx) appears to be up ... good.
 Interesting ports on client-server (xxx.xxx.xxx.xxx):
 PORTSTATE SERVICE
 161/udp open|filtered snmp

 Read data files from: /usr/share/nmap
 Nmap done: 1 IP address (1 host up) scanned in 0.322 seconds
   Raw packets sent: 4 (124B) | Rcvd: 1 (46B)
 15:27:34 root[load: 0@monitoring-server ~#


 15:14:16 root[load: 0@client-server ~# ps aux | grep snmpd
 root 24039  0.0  0.8   8992  4248 ?SMay06   0:05 
 /usr/sbin/snmpd -Lsd -p /var/run/snmpd.pid
 root 26410  0.0  0.0   1712   452 pts/3S+   15:14   0:00 grep snmpd
 15:14:34 root[load: 0@beast ~# netstat -ln | grep 161
 udp0  0 0.0.0.0:161 
 0.0.0.0:*  15:29:31 root[load: 0@beast ~#

 15:29:31 root[load: 0@client-server ~# ping monitoring-server
 PING monitoring-server (xxx.xxx.xxx.xxx) 56(84) bytes of data.

 --- monitoring-server ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 2005ms
 rtt min/avg/max/mdev = 0.934/1.265/1.803/0.384 ms
 15:30:12 root[load: 0@client-server ~#

 so looks like there are no connection and firewall problems

 client-server snmpd.local.conf:
 com2sec local client-server sw
 com2sec local xxx.xxx.xxx.xxx (-- client-server IP) sw
 com2sec local monitoring-server sw


 group MyRWGroup any local

 view allincluded  .1   80

 access MyROGroup   any   noauthexact  allnone   none
 access MyRWGroup   any   noauthexact  allallnone


 SNMPD is running: root 24039  0.0  0.8   8992  4248 ?S
 May06   0:05 /usr/sbin/snmpd -Lsd -p /var/run/snmpd.pid
 and in both servers: NET-SNMP version:  5.4.2.1


 ---
 Any ideas?

   


Re: Timeout: No Response from client-server

2009-05-08 Thread Margusja
13:50:48 root[load: 1@monitoring-server ~# snmpgetnext -v1 -c sw  -d 
client-server system

Sending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
: 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
0032: 02 01 01 05  00   .


Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
: 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
0032: 02 01 01 05  00   .


Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
: 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
0032: 02 01 01 05  00   .


Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
: 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
0032: 02 01 01 05  00   .


Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
: 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
0032: 02 01 01 05  00   .


Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
: 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
0032: 02 01 01 05  00   .

Timeout: No Response from client-server.
13:51:03 root[load: 0@monitoring-server ~#

Best regards, Margus Margusja Roo
+3725148780
skype: margusja
msn: margu...@kodila.ee
homepage: http://margusja.pri.ee



Dave Shield wrote:
 What is the output of running

  snmpgetnext -v1 -c sw  -d  client-server  system

 ?

 Dave

   



Re: Timeout: No Response from client-server

2009-05-08 Thread Margusja
Found a solution.

My big mistake: the firewall was blocking traffic from the client-server to the
master-server.

Best regards, Margus Margusja Roo
+3725148780
skype: margusja
msn: margu...@kodila.ee
homepage: http://margusja.pri.ee



Margusja wrote:
 13:50:48 root[load: 1@monitoring-server ~# snmpgetnext -v1 -c sw  -d 
 client-server system

 Sending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
 : 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
 0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
 0032: 02 01 01 05  00   .


 Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
 : 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
 0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
 0032: 02 01 01 05  00   .


 Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
 : 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
 0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
 0032: 02 01 01 05  00   .


 Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
 : 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
 0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
 0032: 02 01 01 05  00   .


 Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
 : 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
 0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
 0032: 02 01 01 05  00   .


 Resending 37 bytes to UDP: [0.0.0.0]-[194.106.120.83]:161
 : 30 23 02 01  00 04 02 73  77 A1 1A 02  04 7C E2 6D0#.sw|.m
 0016: 3F 02 01 00  02 01 00 30  0C 30 0A 06  06 2B 06 01?..0.0...+..
 0032: 02 01 01 05  00   .

 Timeout: No Response from client-server.
 13:51:03 root[load: 0@monitoring-server ~#

 Best regards, Margus Margusja Roo
 +3725148780
 skype: margusja
 msn: margu...@kodila.ee
 homepage: http://margusja.pri.ee



 Dave Shield wrote:
   
 What is the output of running

  snmpgetnext -v1 -c sw  -d  client-server  system

 ?

 Dave

   
 


   



Timeout: No Response from client-server

2009-05-07 Thread Margusja
Hi

I have two server-rooms (DZ-A and DZ-B)

in DZ-A there is a server called client-server and in a DZ-B there is a 
monitoring server called monitoring-server.

if I do this on the monitoring-server I get:
~# snmpwalk -c sw client-server -v1
Timeout: No Response from client-server
~#
I have a connection between monitoring-server and client-server:
15:23:00 root[load: 0@monitoring-server ~# ping client-server
PING client-server (xxx.xxx.xxx.xxx) 56(84) bytes of data.
64 bytes from client-server (xxx.xxx.xxx.xxx): icmp_seq=1 ttl=56 
time=1.23 ms
...

--- client-server ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.842/1.346/1.962/0.464 ms
15:26:18 root[load: 0@monitorin-server ~#

15:26:18 root[load: 0@monitoring-server ~# nmap -sU -v -p 161 
client-server

Starting Nmap 4.68 ( http://nmap.org ) at 2009-05-07 15:27 EEST
Initiating Ping Scan at 15:27
Scanning 194.106.120.83 [2 ports]
Completed Ping Scan at 15:27, 0.03s elapsed (1 total hosts)
Initiating UDP Scan at 15:27
Scanning client-server (xxx.xxx.xxx.xxx) [1 port]
Completed UDP Scan at 15:27, 0.23s elapsed (1 total ports)
Host client-server (xxx.xxx.xxx.xxx) appears to be up ... good.
Interesting ports on client-server (xxx.xxx.xxx.xxx):
PORTSTATE SERVICE
161/udp open|filtered snmp

Read data files from: /usr/share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.322 seconds
  Raw packets sent: 4 (124B) | Rcvd: 1 (46B)
15:27:34 root[load: 0@monitoring-server ~#


15:14:16 root[load: 0@client-server ~# ps aux | grep snmpd
root 24039  0.0  0.8   8992  4248 ?SMay06   0:05 
/usr/sbin/snmpd -Lsd -p /var/run/snmpd.pid
root 26410  0.0  0.0   1712   452 pts/3S+   15:14   0:00 grep snmpd
15:14:34 root[load: 0@beast ~# netstat -ln | grep 161
udp0  0 0.0.0.0:161 
0.0.0.0:*  15:29:31 root[load: 0@beast ~#

15:29:31 root[load: 0@client-server ~# ping monitoring-server
PING monitoring-server (xxx.xxx.xxx.xxx) 56(84) bytes of data.

--- monitoring-server ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.934/1.265/1.803/0.384 ms
15:30:12 root[load: 0@client-server ~#

so it looks like there are no connection or firewall problems

client-server snmpd.local.conf:
com2sec local client-server sw
com2sec local xxx.xxx.xxx.xxx (-- client-server IP) sw
com2sec local monitoring-server sw


group MyRWGroup any local

view allincluded  .1   80

access MyROGroup   any   noauthexact  allnone   none
access MyRWGroup   any   noauthexact  allallnone


SNMPD is running: root 24039  0.0  0.8   8992  4248 ?S
May06   0:05 /usr/sbin/snmpd -Lsd -p /var/run/snmpd.pid
and in both servers: NET-SNMP version:  5.4.2.1


---
Any ideas?

-- 

Best regards, Margus Margusja Roo
+3725148780
skype: margusja
msn: margu...@kodila.ee
homepage: http://margusja.pri.ee




[Fedora-xen] Problem to using virt-install

2008-07-11 Thread Margusja

Hello, can somebody give me a hint on how to resolve my problem?

[EMAIL PROTECTED] ~]# uname -a
Linux bacula 2.6.21.7-3.fc8xen #1 SMP Thu Mar 20 14:57:53 EDT 2008 i686 
i686 i386 GNU/Linux


kernel-xen.i686  2.6.21.7-3.fc8 
installed  
kernel-xen-devel.i6862.6.21.7-3.fc8 
installed  
xen.i386 3.1.2-2.fc8
installed  
xen-devel.i386   3.1.2-2.fc8
installed  
xen-libs.i3863.1.2-2.fc8installed
libvirt.i386 0.4.4-1.fc8
installed  
libvirt-python.i386  0.4.4-1.fc8
installed  
python-virtinst.noarch   0.300.2-4.fc8  
installed  
virt-manager.i3860.5.3-2.fc8installed


[EMAIL PROTECTED] ~]# virt-install -f /var/xen/xen1 -r 512
libvir: Remote error : Connection refused
libvir: warning : Failed to find the network: Is the daemon running ?
libvir: Remote error : Connection refused
What is the name of your virtual machine? xen1
libvir: Xen Daemon error : GET operation failed: xend_get: error from 
xen daemon:

Would you like to enable graphics support? (yes or no) no
What is the install location? 
ftp://ftp.linux.ee/pub/fedora/linux/releases/8/Fedora/i386/os/


Starting install...
libvir: Xen Daemon error : GET operation failed: xend_get: error from 
xen daemon:
Retrieving file .treeinfo 100% |=|  430 B
00:00
Retrieving file vmlinuz.. 100% |=| 2.1 MB
00:04
Retrieving file initrd.im 100% |=| 6.4 MB
00:17
libvir: Xen Daemon error : GET operation failed: xend_get: error from 
xen daemon:
libvir: Xen Daemon error : GET operation failed: xend_get: error from 
xen daemon:
virDomainLookupByID() failed GET operation failed: xend_get: error from 
xen daemon:
Domain installation may not have been successful. 
If it was, you can restart your domain by running 'virsh start xen1'; 
otherwise, please restart your installation.
Fri, 11 Jul 2008 10:30:21 ERRORvirDomainLookupByID() failed GET 
operation failed: xend_get: error from xen daemon

Traceback (most recent call last):
  File "/usr/sbin/virt-install", line 502, in <module>
    main()
  File "/usr/sbin/virt-install", line 462, in main
    dom = guest.start_install(conscb,progresscb)
  File "/usr/lib/python2.5/site-packages/virtinst/Guest.py", line 813, in start_install
    return self._do_install(consolecb, meter)
  File "/usr/lib/python2.5/site-packages/virtinst/Guest.py", line 829, in _do_install
    self._create_devices(meter)
  File "/usr/lib/python2.5/site-packages/virtinst/Guest.py", line 727, in _create_devices
    nic.setup(self.conn)
  File "/usr/lib/python2.5/site-packages/virtinst/Guest.py", line 281, in setup
    vm = conn.lookupByID(id)
  File "/usr/lib/python2.5/site-packages/libvirt.py", line 920, in lookupByID
    if ret is None:raise libvirtError('virDomainLookupByID() failed', conn=self)
libvirtError: virDomainLookupByID() failed GET operation failed: xend_get: error from xen daemon


[EMAIL PROTECTED] ~]# xm list
NameID   Mem VCPUs  State   
Time(s)
Domain-0 0   489 1 r-
738.2


--
---
Margusja
+3725148780
skype: margusja
msn: [EMAIL PROTECTED]
homepage: http://margusja.pri.ee

--
Fedora-xen mailing list
Fedora-xen@redhat.com
https://www.redhat.com/mailman/listinfo/fedora-xen


SMS reciving problem

2007-04-02 Thread Margusja

I use kannel:
Kannel bearerbox version `1.4.1'. Build `Oct 16 2006 10:48:53', compiler 
`4.1.1 20060525 (Red Hat 4.1.1-1)'. System Linux, release 
2.6.19-1.2288.2.4.fc5, version #1 SMP Sun Mar 4 15:57:52 EST 2007, 
machine x86_64. Hostname h8.dbweb.ee, IP 217.159.233.174. Libxml version 
2.6.23. Using SQLite 2.8.17. Using native malloc.


My kannel is connected to the operator's SMSCs with two VPN IPSec tunnels.

First tunnel:
My kannel eth1: *.*.*.55 -> VPN LAN: *.*.*.53 My VPN GW: 13.180.29.56 ->
Tunnel -> Operator IPV GW: *.*.*.15 -> Operator SMSC: *.*.*.119. I use it
to send SMSes out.


Second tunnel:
My kannel eth1: *.*.*.55 -> VPN LAN: *.*.*.53 My VPN GW: 13.180.29.56 ->
Tunnel -> Operator IPV GW: *.*.*.15 -> Operator SMSC: *.*.*.21. It's the SMSC
that sends mobile SMSes to my kannel.



Operator uses CMG UCP/EMI type SMSC.

Here is my kannel conf:


group = core
admin-port = 13000
admin-password = **
status-password = **
admin-deny-ip = *.*.*.*
admin-allow-ip = *.*.*.*
smsbox-port = 1301
box-allow-ip = *.*.*.*
wdp-interface-name = *
log-file = /var/log/kannel/bearerbox.log
access-log = /var/log/kannel/access.log
store-file = /var/log/kannel/store.log
log-level = 0

group = smsbox
bearerbox-host = localhost
sendsms-port = 13013
global-sender = 12014
log-file  = /var/log/kannel/smsbox.log
log-level = 0

group = sms-service
keyword = test
get-url = "http://localhost/smsservice.php?sender=%p&text=%r"

group = smsc
smsc-id = EMT_MT
smsc = emi
port = 1
host = *.*.*.119
smsc-username = 
smsc-password = 

group = sendsms-user
username = 
password = 
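For reference, the get-url in the sms-service group above points at a plain HTTP script; a hypothetical minimal smsservice.php (not the author's actual script) could look like the sketch below. Kannel substitutes %p with the sender number and %r with the words after the keyword, and whatever the script prints is sent back to the sender as the reply SMS:

<?php
// Hypothetical handler for the sms-service group above; %p and %r arrive
// as ordinary GET parameters named in the get-url.
$sender = isset( $_GET['sender'] ) ? $_GET['sender'] : '';
$text   = isset( $_GET['text'] )   ? $_GET['text']   : '';

// Log the incoming MO message so it is visible even if no reply is wanted.
error_log( sprintf( 'MO from %s: %s', $sender, $text ) );

// The body of the HTTP response becomes the reply SMS.
header( 'Content-Type: text/plain' );
echo 'Got your message: ' . $text;
?>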

My http status:
SMS: received 0 (0 queued), sent 1 (0 queued), store size 0

SMS: inbound 0.00 msg/sec, outbound 0.00 msg/sec

DLR: 0 queued, using internal storage

Box connections:
smsbox:(none), IP 127.0.0.1 (0 queued), (on-line 0d 5h 57m 13s)

SMSC connections:
EMT_MTEMI2:*.*.*.119:1:12014 (online 85s, rcvd 0, sent 1, 
failed 0, queued 0 msgs)


And what is my problem?
I can send SMS out. But if I send an SMS to my short number it doesn't reach
my kannel, and I don't know what's wrong. If I change smsbox-port back
to 13001 and run fakesmsc then everything works fine. It is 1301 because the
operator demands it.


OK, if I send an SMS to my short number the http status shows:
SMS: received 0 (0 queued), sent 1 (0 queued), store size 0

SMS: inbound 0.00 msg/sec, outbound 0.00 msg/sec

DLR: 0 queued, using internal storage

Box connections:
smsbox:(none), IP 127.0.0.1 (0 queued), (on-line 0d 6h 9m 13s)
smsbox:(none), IP *.*.*.21(Operator SMSC IP) (0 queued), (on-line 
0d 0h 0m 22s)


SMSC connections:
EMT_MTEMI2:217.71.32.119:1:12014 (online 85s, rcvd 0, sent 
1, failed 0, queued 0 msgs)


in bearerbox.log:
2007-03-30 23:17:31 [18836] [5] INFO: Client connected from *.*.*.21
2007-03-30 23:17:31 [18836] [5] DEBUG: Started thread 24 
(gw/bb_boxc.c:function)
2007-03-30 23:17:31 [18836] [24] DEBUG: Thread 24 
(gw/bb_boxc.c:function) maps to pid 18836.
2007-123.618723 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [SYN, 
ACK] Seq=0 Ack=0 Win=5840 Len=0 MSS=1460 WS=5

123.624779 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [ACK] Seq=1
Ack=107 Win=183 Len=0-30 23:17:31 [18836] [24] DEBUG: Started thread 25 
(gw/bb_boxc.c:boxc_sender)
2007-03-30 23:17:31 [18836] [25] DEBUG: Thread 25 
(gw/bb_boxc.c:boxc_sender) maps to pid 18836.


At the same time tethereal shows:
123.618723 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [SYN, ACK] 
Seq=0 Ack=0 Win=5840 Len=0 MSS=1460 WS=5
123.624779 213.180.29.55 - 217.71.32.21 TCP ci3-software-1  12785 
[ACK] Seq=1 Ack=107 Win=183 Len=0


And after about one minute in bearbox:
2007-03-30 23:19:02 [18836] [24] INFO: Connection closed by the box 
*.*.*.21
2007-03-30 23:19:02 [18836] [25] DEBUG: send_msg: sending msg to box: 
*.*.*.21
2007-03-30 23:19:02 [18836] [25] DEBUG: Thread 25 
(gw/bb_boxc.c:boxc_sender) terminates.
2007-03-30 23:19:02 [18836] [24] DEBUG: Thread 24 
(gw/bb_boxc.c:function) terminates.


tcpflow shows nothing.

And at the same time tethereal shows:
215.071237 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [PSH, ACK] 
Seq=1 Ack=108 Win=183 [TCP CHECKSUM INCORRECT] Len=16[Unreassembled 
Packet [incorrect TCP checksum]]
215.071360 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [FIN, ACK] 
Seq=17 Ack=108 Win=183 Len=0


tcpflow shows:
*.*.*.055.01301-*.*.*.021.16760: 

And it is always the same whenever an SMS tries to come into my kannel.
I don't know where to look for the error. The operator says that everything is
OK on their side.


Best regards, Margusja

---
Margusja
+37251780
skype: margusja
msn: [EMAIL PROTECTED]
homepage: http://margusja.pri.ee



---
Margusja
+37251780
skype: margusja
msn: [EMAIL PROTECTED]
homepage: http://margusja.pri.ee



Re: SMS reciving problem

2007-04-02 Thread Margusja

ci3-software-1 = 1301

---
Margusja
+37251780
skype: margusja
msn: [EMAIL PROTECTED]
homepage: http://margusja.pri.ee


Margusja wrote:

I use kannel:
Kannel bearerbox version `1.4.1'. Build `Oct 16 2006 10:48:53', compiler 
`4.1.1 20060525 (Red Hat 4.1.1-1)'. System Linux, release 
2.6.19-1.2288.2.4.fc5, version #1 SMP Sun Mar 4 15:57:52 EST 2007, 
machine x86_64. Hostname h8.dbweb.ee, IP 217.159.233.174. Libxml version 
2.6.23. Using SQLite 2.8.17. Using native malloc.


My kannel is connected to operator SMSC-es with two VPN IPSec tunnels.

First tunnel:
My kannel eth1: *.*.*.55 - VPN LAN: *.*.*.53 My VPN GW: 13.180.29.56 - 
Tunnel - Operator IPV GW: *.*.*.15 - Operator SMSC: *.*.*.119. I use it 
to send SMSes out.


Second tunnel:
My kannel eth1: *.*.*.55 - VPN LAN: *.*.*.53 My VPN GW: 13.180.29.56 - 
Tunnel - Operator IPV GW: *.*.*.15 - Operator SMSC: *.*.*.21 It's SMSC 
that sends mobile SMSes to my kannel.



Operator uses CMG UCP/EMI type SMSC.

Here is my kannel konf:


group = core
admin-port = 13000
admin-password = **
status-password = **
admin-deny-ip = *.*.*.*
admin-allow-ip = *.*.*.*
smsbox-port = 1301
box-allow-ip = *.*.*.*
wdp-interface-name = *
log-file = /var/log/kannel/bearerbox.log
access-log = /var/log/kannel/access.log
store-file = /var/log/kannel/store.log
log-level = 0

group = smsbox
bearerbox-host = localhost
sendsms-port = 13013
global-sender = 12014
log-file  = /var/log/kannel/smsbox.log
log-level = 0

group = sms-service
keyword = test
get-url = http://localhost/smsservice.php?sender=%ptext=%r;

group = smsc
smsc-id = EMT_MT
smsc = emi
port = 1
host = *.*.*.119
smsc-username = 
smsc-password = 

group = sendsms-user
username = 
password = 

My http status:
SMS: received 0 (0 queued), sent 1 (0 queued), store size 0

SMS: inbound 0.00 msg/sec, outbound 0.00 msg/sec

DLR: 0 queued, using internal storage

Box connections:
smsbox:(none), IP 127.0.0.1 (0 queued), (on-line 0d 5h 57m 13s)

SMSC connections:
EMT_MTEMI2:*.*.*.119:1:12014 (online 85s, rcvd 0, sent 1, 
failed 0, queued 0 msgs)


And what is my problem?
I can send sms out. But if I send SMS to my short number it's not coming 
to my kannel and I don't know what's wrong. If I change smsbox-port back 
to 13001 and run fakesmsc then all works fine. 1301 is because operator 
demands it.


Ok if I send sms to my shortnumber http status shows:
SMS: received 0 (0 queued), sent 1 (0 queued), store size 0

SMS: inbound 0.00 msg/sec, outbound 0.00 msg/sec

DLR: 0 queued, using internal storage

Box connections:
smsbox:(none), IP 127.0.0.1 (0 queued), (on-line 0d 6h 9m 13s)
smsbox:(none), IP *.*.*.21(Operator SMSC IP) (0 queued), (on-line 0d 
0h 0m 22s)


SMSC connections:
EMT_MTEMI2:217.71.32.119:1:12014 (online 85s, rcvd 0, sent 
1, failed 0, queued 0 msgs)


in bearerbox.log:
2007-03-30 23:17:31 [18836] [5] INFO: Client connected from *.*.*.21
2007-03-30 23:17:31 [18836] [5] DEBUG: Started thread 24 
(gw/bb_boxc.c:function)
2007-03-30 23:17:31 [18836] [24] DEBUG: Thread 24 
(gw/bb_boxc.c:function) maps to pid 18836.
2007-123.618723 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [SYN, 
ACK] Seq=0 Ack=0 Win=5840 Len=0 MSS=1460 WS=5

123.624779 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [ACK] Seq=1
Ack=107 Win=183 Len=0-30 23:17:31 [18836] [24] DEBUG: Started thread 25 
(gw/bb_boxc.c:boxc_sender)
2007-03-30 23:17:31 [18836] [25] DEBUG: Thread 25 
(gw/bb_boxc.c:boxc_sender) maps to pid 18836.


Same time thethereal shows:
123.618723 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [SYN, ACK] 
Seq=0 Ack=0 Win=5840 Len=0 MSS=1460 WS=5
123.624779 213.180.29.55 - 217.71.32.21 TCP ci3-software-1  12785 
[ACK] Seq=1 Ack=107 Win=183 Len=0


And after about one minute in bearbox:
2007-03-30 23:19:02 [18836] [24] INFO: Connection closed by the box 
*.*.*.21
2007-03-30 23:19:02 [18836] [25] DEBUG: send_msg: sending msg to box: 
*.*.*.21
2007-03-30 23:19:02 [18836] [25] DEBUG: Thread 25 
(gw/bb_boxc.c:boxc_sender) terminates.
2007-03-30 23:19:02 [18836] [24] DEBUG: Thread 24 
(gw/bb_boxc.c:function) terminates.


tcpflow is show nothing.

And the same time tethereal shows:
215.071237 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [PSH, ACK] 
Seq=1 Ack=108 Win=183 [TCP CHECKSUM INCORRECT] Len=16[Unreassembled 
Packet [incorrect TCP checksum]]
215.071360 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [FIN, ACK] 
Seq=17 Ack=108 Win=183 Len=0


tcpflow shows:
*.*.*.055.01301-*.*.*.021.16760: 

And it is always the same if sms tries to go into my kannel.
I don't know where to search the error. Operator tells that all is ok in 
their side.


Best regards, Margusja

---
Margusja
+37251780
skype: margusja
msn: [EMAIL PROTECTED]
homepage: http://margusja.pri.ee



---
Margusja
+37251780
skype: margusja
msn: [EMAIL PROTECTED]
homepage: http://margusja.pri.ee






sms reciving problem

2007-03-30 Thread Margusja

I use kannel:
Kannel bearerbox version `1.4.1'. Build `Oct 16 2006 10:48:53', compiler 
`4.1.1 20060525 (Red Hat 4.1.1-1)'. System Linux, release 
2.6.19-1.2288.2.4.fc5, version #1 SMP Sun Mar 4 15:57:52 EST 2007, 
machine x86_64. Hostname h8.dbweb.ee, IP 217.159.233.174. Libxml version 
2.6.23. Using SQLite 2.8.17. Using native malloc.


My kannel is connected to operator SMSC-es with two VPN IPSec tunnels.

First tunnel:
My kannel eth1: *.*.*.55 - VPN LAN: *.*.*.53 My VPN GW: 13.180.29.56 - 
Tunnel - Operator IPV GW: *.*.*.15 - Operator SMSC: *.*.*.119. I use it 
to send SMSes out.


Second tunnel:
My kannel eth1: *.*.*.55 - VPN LAN: *.*.*.53 My VPN GW: 13.180.29.56 - 
Tunnel - Operator IPV GW: *.*.*.15 - Operator SMSC: *.*.*.21 It's SMSC 
that sends mobile SMSes to my kannel.



Operator uses CMG UCP/EMI type SMSC.

Here is my kannel konf:


group = core
admin-port = 13000
admin-password = **
status-password = **
admin-deny-ip = *.*.*.*
admin-allow-ip = *.*.*.*
smsbox-port = 1301
box-allow-ip = *.*.*.*
wdp-interface-name = *
log-file = /var/log/kannel/bearerbox.log
access-log = /var/log/kannel/access.log
store-file = /var/log/kannel/store.log
log-level = 0

group = smsbox
bearerbox-host = localhost
sendsms-port = 13013
global-sender = 12014
log-file  = /var/log/kannel/smsbox.log
log-level = 0

group = sms-service
keyword = test
get-url = http://localhost/smsservice.php?sender=%ptext=%r;

group = smsc
smsc-id = EMT_MT
smsc = emi
port = 1
host = *.*.*.119
smsc-username = 
smsc-password = 

group = sendsms-user
username = 
password = 

My http status:
SMS: received 0 (0 queued), sent 1 (0 queued), store size 0

SMS: inbound 0.00 msg/sec, outbound 0.00 msg/sec

DLR: 0 queued, using internal storage

Box connections:
smsbox:(none), IP 127.0.0.1 (0 queued), (on-line 0d 5h 57m 13s)

SMSC connections:
EMT_MTEMI2:*.*.*.119:1:12014 (online 85s, rcvd 0, sent 1, 
failed 0, queued 0 msgs)


And what is my problem?
I can send SMS out, but if I send an SMS to my short number it never 
reaches my Kannel, and I don't know what's wrong. If I change smsbox-port 
back to 13001 and run fakesmsc, everything works fine. Port 1301 is used 
because the operator demands it.
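
For reference, one direction that is sometimes used with CMG/EMI SMSCs: let 
the EMI driver itself listen for the operator's inbound connection through 
receive-port, so the MO session is handled as an SMSC connection instead of 
landing on the smsbox port. The following is only a hedged sketch, assuming 
the operator really opens a UCP/EMI session towards this host; the smsc-id, 
host, ports and credentials are placeholders, not values taken from the 
setup above:

# hedged sketch only: every value below is a placeholder
group = smsc
smsc-id = EMT_MO
smsc = emi
# operator SMSC that delivers the mobile-originated messages
host = *.*.*.21
# placeholder port for the outbound (MT) side of the session
port = 5000
# local port the operator's SMSC connects to for MO delivery
receive-port = 1301
connect-allow-ip = *.*.*.21
smsc-username = 
smsc-password = 

In that case smsbox-port would have to move back to another port such as 
13001, since two listeners cannot share port 1301. Whether any of this 
matches the operator's setup depends on how they initiate the MO session, 
so treat it as something to verify with them rather than a confirmed fix.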


OK, if I send an SMS to my short number, the HTTP status shows:
SMS: received 0 (0 queued), sent 1 (0 queued), store size 0

SMS: inbound 0.00 msg/sec, outbound 0.00 msg/sec

DLR: 0 queued, using internal storage

Box connections:
smsbox:(none), IP 127.0.0.1 (0 queued), (on-line 0d 6h 9m 13s)
smsbox:(none), IP *.*.*.21(Operator SMSC IP) (0 queued), (on-line 
0d 0h 0m 22s)


SMSC connections:
EMT_MTEMI2:217.71.32.119:1:12014 (online 85s, rcvd 0, sent 
1, failed 0, queued 0 msgs)


in bearerbox.log:
2007-03-30 23:17:31 [18836] [5] INFO: Client connected from *.*.*.21
2007-03-30 23:17:31 [18836] [5] DEBUG: Started thread 24 
(gw/bb_boxc.c:function)
2007-03-30 23:17:31 [18836] [24] DEBUG: Thread 24 
(gw/bb_boxc.c:function) maps to pid 18836.
2007-03-30 23:17:31 [18836] [24] DEBUG: Started thread 25 
(gw/bb_boxc.c:boxc_sender)
2007-03-30 23:17:31 [18836] [25] DEBUG: Thread 25 
(gw/bb_boxc.c:boxc_sender) maps to pid 18836.


At the same time, tethereal shows:
123.618723 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [SYN, ACK] 
Seq=0 Ack=0 Win=5840 Len=0 MSS=1460 WS=5
123.624779 213.180.29.55 - 217.71.32.21 TCP ci3-software-1  12785 
[ACK] Seq=1 Ack=107 Win=183 Len=0


And after about one minute, in bearerbox:
2007-03-30 23:19:02 [18836] [24] INFO: Connection closed by the box 
*.*.*.21
2007-03-30 23:19:02 [18836] [25] DEBUG: send_msg: sending msg to box: 
*.*.*.21
2007-03-30 23:19:02 [18836] [25] DEBUG: Thread 25 
(gw/bb_boxc.c:boxc_sender) terminates.
2007-03-30 23:19:02 [18836] [24] DEBUG: Thread 24 
(gw/bb_boxc.c:function) terminates.


tcpflow shows nothing.

And at the same time, tethereal shows:
215.071237 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [PSH, ACK] 
Seq=1 Ack=108 Win=183 [TCP CHECKSUM INCORRECT] Len=16[Unreassembled 
Packet [incorrect TCP checksum]]
215.071360 *.*.*.55 - *.*.*.21 TCP ci3-software-1  12785 [FIN, ACK] 
Seq=17 Ack=108 Win=183 Len=0


tcpflow shows:
*.*.*.055.01301-*.*.*.021.16760: 

And it is always the same whenever an SMS tries to come into my Kannel. 
I don't know where to look for the error. The operator says that everything 
is OK on their side.


Best regards, Margusja

---
Margusja
+37251780
skype: margusja
msn: [EMAIL PROTECTED]
homepage: http://margusja.pri.ee



[GENERAL] query is very slow with _t() function

2005-05-03 Thread Margusja
  | integer |
multy_clf | integer[]   |
task_dur_act  | numeric |
task_dur_pln  | numeric |
task_resp | integer | not null
task_dur_minutes  | integer |
sys_assigned_total| integer | default 0
sys_assigned_accepted | integer | default 0
sys_assigned_rejected | integer | default 0
channel   | integer |
modify_on | timestamp without time zone | default now()
modify_by | character varying(50)   |
Indexes:
   taskid_id_key unique, btree (id)
   taskid_id_ukey unique, btree (id)
   taskid_modify_on_key btree (modify_on)
table sys_txt structure is:
  Table public.sys_txt
Column  |  Type  |Modifiers
-++-
id  | integer| not null default 
nextval('public.sys_txt_id_seq'::text)
lang_id | integer|
code_id | integer|
txt | character varying(255) |
Indexes:
   sys_txt__id_id_key btree (id)
table sys_txt_code structure is:
  Table public.sys_txt_code
 Column  |  Type  |  Modifiers
--++--
id   | integer| not null default 
nextval('public.sys_txt_code_id_seq'::text)
code | character varying(100) |
descr| character varying(255) |
group_id | integer[]  |
code_new | character varying(100) |
Indexes:
   sys_txt_code__code_ukey unique, btree (code)
   sys_txt_code_ukey unique, btree (id)
Reg, Margusja