Re: Issue

2018-05-22 Thread sujith.j Sjk
yes

On Tue, May 22, 2018 at 3:04 PM, Abdur-Rahmaan Janhangeer <
arj.pyt...@gmail.com> wrote:

> greetings,
>
> did you send a log file attached?
>
> Abdur-Rahmaan Janhangeer
> https://github.com/Abdur-rahmaanJ
>
> On Tue, 22 May 2018, 10:28 sujith.j Sjk, <sujith@gmail.com> wrote:
>
>> > Hi,
>> >
>> > I am facing the below issue when starting Python.
>> >
>> >
>> >
>> --
>> https://mail.python.org/mailman/listinfo/python-list
>>
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Issue

2018-05-22 Thread sujith.j Sjk
> Hi,
>
> I am facing the below issue when starting Python.
>
>
>
-- 
https://mail.python.org/mailman/listinfo/python-list


[Discuss] Some problem Unit test

2017-01-23 Thread sjk
Hi, all

The Surefire plugin executes `default-test` by default when running mvn test. There are two 
executions in Flink's Surefire plugin configuration: default-test and 
integration-tests.

I have three problems about the unit tests:
1. As default-test and integration-tests are mutually exclusive, when I execute 
"mvn clean test -f flink-libraries/flink-table/pom.xml -U" locally, it will not 
execute all of the ITCase.* unit tests; for example, 
org.apache.flink.table.api.scala.stream.sql.SqlITCase will not be executed. I 
think it's a bug.
   Where is integration-tests used?
2. *Test.* will also not be executed. There are lots of *Test.* unit tests; do they 
need to be run?
3. Suite.* is generally used in Scala; flink-ml uses scalatest-maven-plugin instead 
of the Surefire plugin.

I think we should do something about the unit tests:
1. Choose one unit test plugin: Surefire or scalatest-maven-plugin.
2. Include the given unit test wildcard classes, such as **/*ITCase.*, **/*Test.*, 
**/*Suite.*.
   All such unit tests should be executed. Clean up the non-unit-test classes whose 
names end with "Test".


After I tried to modify the Surefire plugin configuration, lots of errors occurred:


<execution>
  <id>default-test</id>
  <phase>test</phase>
  <goals>
    <goal>test</goal>
  </goals>
  <configuration>
    <skip>${skip.default.test}</skip>
    <includes>
      <include>**/*ITCase.*</include>
      <include>**/*Suite.*</include>
      <include>**/*Test.*</include>
    </includes>
  </configuration>
</execution>

cc Stephan Ewen


Best regards
-Jinkui Shi




[DISCUSS] schedule for execution from different list of ExecutionJobVertex

2016-12-30 Thread sjk
Hi all,

On [FLINK-1425][1], executeMode support was added by Ufuk Celebi.
I want to know why the two loops use different list objects: tasks.values() and 
getVerticesTopologically().
Both tasks and getVerticesTopologically() are filled in the attachJobGraph function:

public void attachJobGraph(List<JobVertex> topologiallySorted) throws JobException {
    ...
    ExecutionJobVertex previousTask = this.tasks.putIfAbsent(jobVertex.getID(), ejv);
    ...
    this.verticesInCreationOrder.add(ejv);
    ...
}

At the moment just before the ExecutionJobVertex starts running, do tasks.values() and 
getVerticesTopologically() contain the same elements?


[1] 
https://github.com/apache/flink/commit/ad31f611150b4b95147dca26932b7ad11bb4b920#diff-db400d27f89469eca0a85a5e9b564bc7L326
 


Thanks

Best regards
from Jinkui Shi

Re: [DISCUSS] Hold copies in HeapStateBackend

2016-11-23 Thread sjk
Hi Fabian Hueske, sorry, my mistake about the whole PR #2792.

> On Nov 23, 2016, at 17:10, Fabian Hueske <fhue...@gmail.com> wrote:
> 
> Hi,
> 
> Why do you think that this means "much code changes"?
> I think it would actually be a pretty lightweight change in
> HeapReducingState.
> 
> The proposal is to copy the *first* value that goes into a ReducingState.
> The copy would be done by a TypeSerializer and hence be a deep copy.
> This will allow to reuse the copy in each invocation of the ReduceFunction
> instead of creating a new result object of the same type that was initially
> copied.
> 
> I think the savings of reusing the object in each invocation of the
> ReduceFunction and not creating a new object should amortize the one-time
> object copy.
> 
> Fabian
> 
> 2016-11-23 3:04 GMT+01:00 sjk <shijinkui...@163.com>:
> 
>> Hi Fabian,
>> 
>> That is a lot of code changes. Can you show us the key changed code for the
>> object copy?
>> An object reference may hold deeper references; it can be a bomb.
>> Can we renew an object with its data, or directly use Kryo for object
>> serialization?
>> I don't prefer object copy.
>> 
>> 
>>> On Nov 22, 2016, at 20:33, Fabian Hueske <fhue...@gmail.com> wrote:
>>> 
>>> Does anybody have objections against copying the first record that goes
>>> into the ReduceState?
>>> 
>>> 2016-11-22 12:49 GMT+01:00 Aljoscha Krettek <aljos...@apache.org>:
>>> 
>>>> That's right, yes.
>>>> 
>>>> On Mon, 21 Nov 2016 at 19:14 Fabian Hueske <fhue...@gmail.com> wrote:
>>>> 
>>>>> Right, but that would be a much bigger change than "just" copying the
>>>>> *first* record that goes into the ReduceState, or am I missing
>> something?
>>>>> 
>>>>> 
>>>>> 2016-11-21 18:41 GMT+01:00 Aljoscha Krettek <aljos...@apache.org>:
>>>>> 
>>>>>> To bring over my comment from the Github PR that started this
>>>> discussion:
>>>>>> 
>>>>>> @wuchong <https://github.com/wuchong>, yes this is a problem with the
>>>>>> HeapStateBackend. The RocksDB backend does not suffer from this
>>>> problem.
>>>>> I
>>>>>> think in the long run we should migrate the HeapStateBackend to always
>>>>> keep
>>>>>> data in serialised form, then we also won't have this problem anymore.
>>>>>> 
>>>>>> So I'm very much in favour of keeping data serialised. Copying data
>>>> would
>>>>>> only ever be a stopgap solution.
>>>>>> 
>>>>>> On Mon, 21 Nov 2016 at 15:56 Fabian Hueske <fhue...@gmail.com> wrote:
>>>>>> 
>>>>>>> Another approach that would solve the problem for our use case
>>>> (object
>>>>>>> re-usage for incremental window ReduceFunctions) would be to copy the
>>>>>> first
>>>>>>> object that is put into the state.
>>>>>>> This would be a change on the ReduceState, not on the overall state
>>>>>>> backend, which should be feasible, no?
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 2016-11-21 15:43 GMT+01:00 Stephan Ewen <se...@apache.org>:
>>>>>>> 
>>>>>>>> -1 for copying objects.
>>>>>>>> 
>>>>>>>> Storing a serialized data where possible is good, but copying all
>>>>>> objects
>>>>>>>> by default is not a good idea, in my opinion.
>>>>>>>> A lot of scenarios use data types that are hellishly expensive to
>>>>> copy.
>>>>>>>> Even the current copy on chain handover is a problem.
>>>>>>>> 
>>>>>>>> Let's not introduce even more copies.
>>>>>>>> 
>>>>>>>> On Mon, Nov 21, 2016 at 3:16 PM, Maciek Próchniak <m...@touk.pl>
>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Hi,
>>>>>>>>> 
>>>>>>>>> it will come with performance overhead when updating the state,
>>>>> but I
>>>>>>>>> think it'll be possible to perform asynchronous snapshots using
>>>>>>>>> HeapStateBac
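For reference, the first-value copy Fabian describes could look roughly like this (a sketch only, written against Flink's TypeSerializer and ReduceFunction interfaces; the wrapper class below is made up for illustration and is not the actual HeapReducingState code):

    import org.apache.flink.api.common.functions.ReduceFunction
    import org.apache.flink.api.common.typeutils.TypeSerializer

    // Sketch of the proposed behaviour: deep-copy only the *first* value,
    // so the ReduceFunction may reuse/mutate the stored accumulator afterwards.
    class CopyOnFirstAdd[V](serializer: TypeSerializer[V], reduce: ReduceFunction[V]) {
      private var stored: V = _
      def add(value: V): Unit = {
        stored = if (stored == null) serializer.copy(value)   // one-time deep copy
                 else reduce.reduce(stored, value)             // later values reuse the copy
      }
      def get: V = stored
    }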

[DISCUSS] deprecated function need more detail

2016-11-22 Thread sjk
Hi, all

Let's have a look at the Checkpointed interface below. It is declared deprecated but 
has no detail about why, since when, and what replaces it. That's big trouble 
for the users.
 
@Deprecated
@PublicEvolving
public interface Checkpointed<T extends Serializable> extends CheckpointedRestoring<T> {


I think we should have more detail: when it will be fully removed, what replaces it, and 
why it is deprecated.

For Java code, add the detailed deprecation reason in the code annotations/Javadoc.
For Scala code, replace the Java annotation @Deprecated with the Scala annotation 
@deprecated, such as
@deprecated(message = "the reason", since = "when it is fully given up")

Add this rule to the customized Checkstyle plugin configuration for Maven and SBT.
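
For example, a method-level deprecation carrying the reason and the version could look like this (a sketch; the object, method, and replacement names are made up for illustration):

    object Example {
      /** Old snapshot hook, kept only for compatibility. */
      @deprecated(message = "Use snapshotTo(out: java.io.OutputStream) instead; this variant cannot handle partitioned state.",
                  since = "1.2.0")
      def snapshotState(checkpointId: Long, timestamp: Long): Array[Byte] = Array.empty[Byte]

      /** Illustrative replacement API. */
      def snapshotTo(out: java.io.OutputStream): Unit = ()
    }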

Best regard
-Jinkui Shi

Re: [DISCUSS] Hold copies in HeapStateBackend

2016-11-22 Thread sjk
Hi Fabian,

That is a lot of code changes. Can you show us the key changed code for the object copy?
An object reference may hold deeper references; it can be a bomb.
Can we renew an object with its data, or directly use Kryo for object serialization?
I don't prefer object copy.
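
For what it's worth, a Kryo-based deep copy could look roughly like this (a minimal sketch; the DeepCopy helper is made up, and a real version needs per-thread Kryo instances and registration decisions):

    import com.esotericsoftware.kryo.Kryo

    // Made-up helper: deep-copy a value with Kryo instead of keeping the original reference.
    object DeepCopy {
      private val kryo = new Kryo()                 // note: a Kryo instance is not thread-safe
      def apply[T](value: T): T = kryo.copy(value)  // Kryo.copy performs a deep copy
    }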


> On Nov 22, 2016, at 20:33, Fabian Hueske  wrote:
> 
> Does anybody have objections against copying the first record that goes
> into the ReduceState?
> 
> 2016-11-22 12:49 GMT+01:00 Aljoscha Krettek :
> 
>> That's right, yes.
>> 
>> On Mon, 21 Nov 2016 at 19:14 Fabian Hueske  wrote:
>> 
>>> Right, but that would be a much bigger change than "just" copying the
>>> *first* record that goes into the ReduceState, or am I missing something?
>>> 
>>> 
>>> 2016-11-21 18:41 GMT+01:00 Aljoscha Krettek :
>>> 
 To bring over my comment from the Github PR that started this
>> discussion:
 
 @wuchong , yes this is a problem with the
 HeapStateBackend. The RocksDB backend does not suffer from this
>> problem.
>>> I
 think in the long run we should migrate the HeapStateBackend to always
>>> keep
 data in serialised form, then we also won't have this problem anymore.
 
 So I'm very much in favour of keeping data serialised. Copying data
>> would
 only ever be a stopgap solution.
 
 On Mon, 21 Nov 2016 at 15:56 Fabian Hueske  wrote:
 
> Another approach that would solve the problem for our use case
>> (object
> re-usage for incremental window ReduceFunctions) would be to copy the
 first
> object that is put into the state.
> This would be a change on the ReduceState, not on the overall state
> backend, which should be feasible, no?
> 
> 
> 
> 2016-11-21 15:43 GMT+01:00 Stephan Ewen :
> 
>> -1 for copying objects.
>> 
>> Storing a serialized data where possible is good, but copying all
 objects
>> by default is not a good idea, in my opinion.
>> A lot of scenarios use data types that are hellishly expensive to
>>> copy.
>> Even the current copy on chain handover is a problem.
>> 
>> Let's not introduce even more copies.
>> 
>> On Mon, Nov 21, 2016 at 3:16 PM, Maciek Próchniak 
>>> wrote:
>> 
>>> Hi,
>>> 
>>> it will come with performance overhead when updating the state,
>>> but I
>>> think it'll be possible to perform asynchronous snapshots using
>>> HeapStateBackend (probably some changes to underlying data
>>> structures
>> would
>>> be needed) - which would bring more predictable performance.
>>> 
>>> thanks,
>>> maciek
>>> 
>>> 
>>> On 21/11/2016 13:48, Aljoscha Krettek wrote:
>>> 
 Hi,
 I would be in favour of this since it brings things in line with
>>> the
 RocksDB backend. This will, however, come with quite the
>>> performance
 overhead, depending on how fast the TypeSerializer can copy.
 
 Cheers,
 Aljoscha
 
 On Mon, 21 Nov 2016 at 11:30 Fabian Hueske 
 wrote:
 
 Hi everybody,
> 
> when implementing a ReduceFunction for incremental aggregation
>> of
> SQL /
> Table API window aggregates we noticed that the
>> HeapStateBackend
 does
>> not
> store copies but holds references to the original objects. In
>>> case
> of a
> SlidingWindow, the same object is referenced from different
>>> window
>> panes.
> Therefore, it is not possible to modify these objects (in order
>>> to
>> avoid
> object instantiations, see discussion [1]).
> 
> Other state backends serialize their data such that the
>> behavior
>>> is
> not
> consistent across backends.
> If we want to have light-weight tests, we have to create new
 objects
> in
> the
> ReduceFunction causing unnecessary overhead.
> 
> I would propose to copy objects when storing them in a
>> HeapStateBackend.
> This would ensure that objects returned from state to the user
 behave
> identical for different state backends.
> 
> We created a related JIRA [2] that asks to copy records that go
 into
> an
> incremental ReduceFunction. The scope is more narrow and would
 solve
>> our
> problem, but would leave the inconsistent behavior of state
 backends
> in
> place.
> 
> What do others think?
> 
> Cheers, Fabian
> 
> [1]
>>> https://github.com/apache/flink/pull/2792#discussion_r88653721
> [2] https://issues.apache.org/jira/browse/FLINK-5105
> 
> 
>>> 
>> 
> 
 
>>> 
>> 




Re: [Discuss] State Backend use external HBase storage

2016-11-17 Thread sjk
Hi Chen Qin,
I found this issue. Has it been kicked off? What's the current progress?
https://issues.apache.org/jira/browse/FLINK-4266 


> On Nov 16, 2016, at 19:35, Till Rohrmann  wrote:
> 
> Hi Jinkui,
> 
> the file system state backend and the RocksDB state backend can be
> configured (and usually should be) such that they store their checkpoint
> data on a reliable storage system such as HDFS. Then you also have the
> reliability guarantees.
> 
> Of course, one can start adding more state backends to Flink. At some point
> in time there was the idea to write a Cassandra backed state backend [1],
> for example. Similarly, one could think about a HBase backed state backend.
> 
> [1]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Cassandra-statebackend-td12690.html
> 
> 
> Cheers,
> Till
> 
> On Wed, Nov 16, 2016 at 3:10 AM, shijinkui  wrote:
> 
>> Hi all,
>> 
>> At present Flink has three state backends: memory, file system, and RocksDB.
>> MemoryStateBackend sends the snapshot to the JobManager, limited to 5MB by
>> default. Even when setting it bigger, it is not suitable for very big state
>> storage.
>> HDFS can meet the reliability guarantee, but it's slow. The file system and
>> RocksDB backends are fast, but they have no reliability guarantee.
>> So none of the three state backends gives a reliability guarantee on its own.
>> 
>> Can we have an HBase state backend, providing a reliability guarantee for
>> state snapshots?
>> For the user, it would only mean newing an HbaseStateBackend object and providing
>> the HBase parameters and optimization configuration.
>> Maybe HBase or other distributed key-value storage is heavyweight storage, but
>> we would only use the HBase client to read/write asynchronously.
>> 
>> -Jinkui Shi
>> 
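
From the user's side, the proposal might look roughly like this (entirely hypothetical; no HBaseStateBackend exists in Flink, and the constructor parameters below are made up):

    // Hypothetical sketch of the proposed backend, not an existing Flink API.
    val env = org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.getExecutionEnvironment
    val backend = new HbaseStateBackend(
      "zk1:2181,zk2:2181",     // assumed: ZooKeeper quorum for the HBase client
      "flink_state_snapshots"  // assumed: table that stores state snapshots
    )
    env.setStateBackend(backend)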



Unsubscribe

2016-07-19 Thread sjk


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: [outages] HE having an (IPv6) problem?

2016-06-06 Thread sjk via Outages
We've been having IPv4 peering issues since 14:30 CST. RTT to their
peering routers has been all over the map - from 16ms up to 500ms.

On 06/06/2016 02:19 PM, Jason Lixfeld via Outages wrote:
>
> My NLNOG ring node alerted me to issues that all seem to funnel
> through HE. From my vantage point, the funnel is in Chicago and is
> still present:
>
> BlackBox-3:~ jlixfeld$ mtr -rwc 1 amazon08.ring.nlnog.net
> Start: Mon Jun 6 15:18:49 2016
> HOST: BlackBox-3.local                                   Loss%   Snt   Last   Avg  Best  Wrst StDev
>   1.|-- vl4091.trs01.002.77mowatav01.yyz.beanfield.com     0.0%     1    2.7   2.7   2.7   2.7   0.0
>   2.|-- te0-2-0-2.bfr01.905kingstw01.yyz.beanfield.com     0.0%     1    2.3   2.3   2.3   2.3   0.0
>   3.|-- te-0-2-0-14.bfr01.151frontstw01.yyz.beanfield.com  0.0%     1    3.4   3.4   3.4   3.4   0.0
>   4.|-- he.ip4.torontointernetxchange.net                  0.0%     1    2.2   2.2   2.2   2.2   0.0
>   5.|-- 100ge13-1.core1.chi1.he.net                        0.0%     1   12.2  12.2  12.2  12.2   0.0
>   6.|-- ???                                               100.0     1    0.0   0.0   0.0   0.0   0.0
>
> BlackBox-3:~ jlixfeld$
>
> On Jun 6, 2016, at 3:15 PM, Josh Reynolds via Outages
>  wrote:
>
> Just started seeing ipv4 issues here, glad somebody else popped up
> on the list.
>
> Actually, it may have just resolved…
>
> On Mon, Jun 6, 2016 at 2:10 PM, Frank Bulk via Outages
>  wrote:
>
> Many IPv6 sites that I am monitoring are reporting down … the
> first few I checked are with HE:
>
> root@nagios:~/tmp# tcptraceroute 6lab.cisco.com
> Selected device eth0.3, address 96.31.0.5, port 50468 for outgoing packets
> Tracing the path to 6lab.cisco.com (173.38.154.157) on TCP port 80 (www), 30 hops max
>  1 router-core-inside.mtcnet.net (96.31.0.254) 0.252 ms 0.266 ms 0.207 ms
>  2 sxct.spnc.mtcnet.net (167.142.156.194) 0.269 ms 0.180 ms 0.142 ms
>  3 premier.spnc-mlx.fbnt.netins.net (173.215.60.1) 4.780 ms 4.752 ms 14.526 ms
>  4 ins-kb1-te-12-2-3031.kmrr.netins.net (167.142.64.253) 8.517 ms 8.685 ms 8.556 ms
>  5 ins-kc2-et-po101.kmrr.netins.net (167.142.67.49) 8.546 ms 8.579 ms 9.526 ms
>  6 v504.core1.oma1.he.net (184.105.18.169) 36.198 ms 30.499 ms 30.476 ms
>  7 *^C
> root@nagios:~/tmp# tcptraceroute6 www.informationweek.com
> traceroute to www.informationweek.com (2620:103::192:155:48:81) from 2607:fe28:0:1000::5, port 80, from port 43778, 30 hops max, 60 bytes packets
>  1 router-core.mtcnet.net (2607:fe28:0:1000::1) 0.245 ms 0.211 ms 0.210 ms
>  2 sxct.spnc.mtcnet.net (2607:fe28:11:1002::194) 0.259 ms 0.148 ms 0.157 ms
>  3 v6-premier.movl-mlx.fbnt.netins.net (2001:5f8:7f0a:1::1) 5.014 ms 11.906 ms 4.972 ms
>  4 v6-ins-kb1-te-12-2-3031.kmrr.netins.net (2001:5f8:2:2::1) 8.780 ms 8.597 ms 8.589 ms
>  5 v6-ins-kc2-et-9-3.kmrr.netins.net (2001:5f8::27:1) 8.643 ms 8.605 ms 8.678 ms
>  6 10gigabitethernet9.switch1.oma1.he.net (2001:470:1:9a::1) 13.342 ms 16.310 ms 16.173 ms
>  7 10ge15-8.core1.den1.he.net (2001:470:0:332::1) 23.416 ms 23.457 ms 38.828 ms
>  8 asn-qwest-us-as209.10gigabitethernet13-2.core1.den1.he.net (2001:470:0:39f::2) 30.422 ms 30.385 ms 30.395 ms
>  9 * * *
> 10 * * *
> 11 * * *
> 12 * * *
> 13 * * *
> ^C46% completed…
> root@nagios:~/tmp#
>
> ___
> Outages mailing list
> Outages@outages.org
> https://puck.nether.net/mailman/listinfo/outages

-- 
s...@cupacoffee.net
fingerprint: 1024D/89420B8E 2001-09-16

No one can understand the truth until
he drinks of coffee's frothy goodness.
~Sheik Abd-al-Kadir


Re: Bulk loading Serialized RDD into Hbase throws KryoException - IndexOutOfBoundsException

2016-05-30 Thread sjk
org.apache.hadoop.hbase.client.{Mutation, Put}
org.apache.hadoop.hbase.io.ImmutableBytesWritable

If you used Mutation, register the above classes too.
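
For example, the registration could be done on the SparkConf (a sketch; adapt it to however the job already configures Kryo):

    import org.apache.spark.SparkConf
    import org.apache.hadoop.hbase.client.{Mutation, Put}
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable

    val conf = new SparkConf()
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // register the HBase classes that end up inside the cached RDD
      .registerKryoClasses(Array(
        classOf[ImmutableBytesWritable],
        classOf[Put],
        classOf[Mutation]))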

> On May 30, 2016, at 08:11, Nirav Patel  wrote:
> 
> Sure, let me try that. But from the looks of it, it seems Kryo's
> kryo.util.MapReferenceResolver.getReadObject is trying to access an incorrect
> index (100).
> 
> On Sun, May 29, 2016 at 5:06 PM, Ted Yu  > wrote:
> Can you register Put with Kryo ?
> 
> Thanks
> 
> On May 29, 2016, at 4:58 PM, Nirav Patel  > wrote:
> 
>> I pasted code snipped for that method.
>> 
>> here's full def:
>> 
>>   def writeRddToHBase2(hbaseRdd: RDD[(ImmutableBytesWritable, Put)], tableName: String) {
>> 
>>     hbaseRdd.values.foreachPartition { itr =>
>>       val hConf = HBaseConfiguration.create()
>>       hConf.setInt("hbase.client.write.buffer", 16097152)
>>       val table = new HTable(hConf, tableName)
>>       //table.setWriteBufferSize(8388608)
>>       itr.grouped(100).foreach(table.put(_))   // << Exception happens at this point
>>       table.close()
>>     }
>>   }
>> 
>> 
>> 
>> I am using hbase 0.98.12 mapr distribution.
>> 
>> 
>> 
>> Thanks
>> 
>> Nirav
>> 
>> 
>> On Sun, May 29, 2016 at 4:46 PM, Ted Yu > > wrote:
>> bq.  at 
>> com.mycorpt.myprojjobs.spark.jobs.hbase.HbaseUtils$$anonfun$writeRddToHBase2$1.apply(HbaseUtils.scala:80)
>> 
>> Can you reveal related code from HbaseUtils.scala ?
>> 
>> Which hbase version are you using ?
>> 
>> Thanks
>> 
>> On Sun, May 29, 2016 at 4:26 PM, Nirav Patel > > wrote:
>> Hi,
>> 
>> I am getting the following Kryo deserialization error when trying to bulk load a
>> cached RDD into HBase. It works if I don't cache the RDD. I cache it with 
>> MEMORY_ONLY_SER.
>> 
>> here's the code snippet:
>> 
>> 
>> hbaseRdd.values.foreachPartition{ itr =>
>> val hConf = HBaseConfiguration.create()
>> hConf.setInt("hbase.client.write.buffer", 16097152)
>> val table = new HTable(hConf, tableName)
>> itr.grouped(100).foreach(table.put(_))
>> table.close()
>> }
>> hbaseRdd is of type RDD[(ImmutableBytesWritable, Put)]
>> 
>> 
>> The exception I am getting is below. I read on the Kryo JIRA that this may be an
>> issue with incorrect use of the serialization library. So could this be an issue
>> with the twitter-chill library or Spark core itself?
>> 
>> Job aborted due to stage failure: Task 16 in stage 9.0 failed 10 times, most 
>> recent failure: Lost task 16.9 in stage 9.0 (TID 28614, 
>> hdn10.mycorptcorporation.local): com.esotericsoftware.kryo.KryoException: 
>> java.lang.IndexOutOfBoundsException: Index: 100, Size: 6
>> Serialization trace:
>> familyMap (org.apache.hadoop.hbase.client.Put)
>>  at 
>> com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:626)
>>  at 
>> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
>>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
>>  at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:42)
>>  at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:33)
>>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
>>  at 
>> org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:192)
>>  at 
>> org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:181)
>>  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
>>  at 
>> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>>  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>  at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:966)
>>  at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:972)
>>  at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>  at 
>> com.mycorpt.myprojjobs.spark.jobs.hbase.HbaseUtils$$anonfun$writeRddToHBase2$1.apply(HbaseUtils.scala:80)
>>  at 
>> com.mycorpt.myprojjobs.spark.jobs.hbase.HbaseUtils$$anonfun$writeRddToHBase2$1.apply(HbaseUtils.scala:75)
>>  at 
>> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:902)
>>  at 
>> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:902)
>>  at 
>> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
>>  at 
>> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
>>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>>  at 

Re: mesos spark cluster mode error

2016-03-14 Thread sjk
When I changed to the default coarse-grained mode, it was OK.
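
(For reference, that mode is controlled by the spark.mesos.coarse property; a minimal sketch of setting it explicitly on the driver, in case it helps someone else:)

    import org.apache.spark.SparkConf

    // run against Mesos in coarse-grained mode rather than fine-grained
    val conf = new SparkConf().set("spark.mesos.coarse", "true")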

> On Mar 14, 2016, at 21:55, sjk <shijinkui...@163.com> wrote:
> 
> Hi all, when I run a task on Mesos, the task fails with the error below. Thanks a lot for any help.
> 
> 
> cluster mode, command:
> 
> $SPARK_HOME/spark-submit --class com.xxx.ETL --master 
> mesos://192.168.191.116:7077 --deploy-mode cluster --supervise 
> --driver-memory 2G --executor-memory 10G —
> total-executor-cores 4 http://jar.xxx.info/streaming-etl-assembly-1.0.jar 
> 
> 
> task stderr:
> 
> 
> I0314 21:13:17.520845 29008 fetcher.cpp:424] Fetcher Info: 
> {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7\/appweb","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"\/data\/program\/spark-1.6.0-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/data\/mesos\/slaves\/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7\/frameworks\/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044\/executors\/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7\/runs\/92509aa4-7804-459b-857d-cfc08c31a993","user":"appweb"}
> I0314 21:13:17.522541 29008 fetcher.cpp:379] Fetching URI 
> '/data/program/spark-1.6.0-bin-hadoop2.6.tgz'
> I0314 21:13:17.522562 29008 fetcher.cpp:250] Fetching directly into the 
> sandbox directory
> I0314 21:13:17.522586 29008 fetcher.cpp:187] Fetching URI 
> '/data/program/spark-1.6.0-bin-hadoop2.6.tgz'
> I0314 21:13:17.522603 29008 fetcher.cpp:167] Copying resource with command:cp 
> '/data/program/spark-1.6.0-bin-hadoop2.6.tgz' 
> '/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
> I0314 21:13:17.880008 29008 fetcher.cpp:84] Extracting with command: tar -C 
> '/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993'
>  -xf 
> '/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
> I0314 21:13:20.911213 29008 fetcher.cpp:92] Extracted 
> '/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
>  into 
> '/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993'
> I0314 21:13:20.911278 29008 fetcher.cpp:456] Fetched 
> '/data/program/spark-1.6.0-bin-hadoop2.6.tgz' to 
> '/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/spark/launcher/Main
> Caused by: java.lang.ClassNotFoundException: org.apache.spark.launcher.Main
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
> Could not find the main class: org.apache.spark.launcher.Main. Program will 
> exit.
> 
> 
> 
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



mesos spark cluster mode error

2016-03-14 Thread sjk
Hi all, when I run a task on Mesos, the task fails with the error below. Thanks a lot for any help.


cluster mode, command:

$SPARK_HOME/spark-submit --class com.xxx.ETL --master 
mesos://192.168.191.116:7077 --deploy-mode cluster --supervise --driver-memory 
2G --executor-memory 10G —
total-executor-cores 4 http://jar.xxx.info/streaming-etl-assembly-1.0.jar 


task stderr:


I0314 21:13:17.520845 29008 fetcher.cpp:424] Fetcher Info: 
{"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7\/appweb","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"\/data\/program\/spark-1.6.0-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/data\/mesos\/slaves\/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7\/frameworks\/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044\/executors\/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7\/runs\/92509aa4-7804-459b-857d-cfc08c31a993","user":"appweb"}
I0314 21:13:17.522541 29008 fetcher.cpp:379] Fetching URI 
'/data/program/spark-1.6.0-bin-hadoop2.6.tgz'
I0314 21:13:17.522562 29008 fetcher.cpp:250] Fetching directly into the sandbox 
directory
I0314 21:13:17.522586 29008 fetcher.cpp:187] Fetching URI 
'/data/program/spark-1.6.0-bin-hadoop2.6.tgz'
I0314 21:13:17.522603 29008 fetcher.cpp:167] Copying resource with command:cp 
'/data/program/spark-1.6.0-bin-hadoop2.6.tgz' 
'/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
I0314 21:13:17.880008 29008 fetcher.cpp:84] Extracting with command: tar -C 
'/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993'
 -xf 
'/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
I0314 21:13:20.911213 29008 fetcher.cpp:92] Extracted 
'/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
 into 
'/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993'
I0314 21:13:20.911278 29008 fetcher.cpp:456] Fetched 
'/data/program/spark-1.6.0-bin-hadoop2.6.tgz' to 
'/data/mesos/slaves/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/frameworks/dd8e95f7-3626-4e46-b48c-b3b58b573c4d-0044/executors/c2f100e1-13a8-40d9-a00f-68389300dfc1-S7/runs/92509aa4-7804-459b-857d-cfc08c31a993/spark-1.6.0-bin-hadoop2.6.tgz'
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/spark/launcher/Main
Caused by: java.lang.ClassNotFoundException: org.apache.spark.launcher.Main
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
Could not find the main class: org.apache.spark.launcher.Main. Program will 
exit.



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[jira] [Commented] (SPARK-6932) A Prototype of Parameter Server

2015-04-15 Thread sjk (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14497271#comment-14497271
 ] 

sjk commented on SPARK-6932:


SPARK-4590, just like its name "Early investigation of parameter server" says, focuses 
on investigating the feasibility of, and the differences between, parameter servers. It's 
a brainstorming issue with low activity. Thanks to its contribution, we know it's feasible 
to implement a PS on Spark, and now it should come to the next stage. This issue focuses 
on the design and implementation details of integrating a parameter server into Spark, 
with better usability and performance. In fact, a prototype is already implemented and 
workable, but some improvement is necessary before we commit it. And because there is 
some change to the core module, it's not proper to put it into spark-packages. [~srowen]

 A Prototype of Parameter Server
 ---

 Key: SPARK-6932
 URL: https://issues.apache.org/jira/browse/SPARK-6932
 Project: Spark
  Issue Type: New Feature
  Components: ML, MLlib
Reporter: Qiping Li

 h2. Introduction
 As specified in 
 [SPARK-4590|https://issues.apache.org/jira/browse/SPARK-4590],it would be 
 very helpful to integrate parameter server into Spark for machine learning 
 algorithms, especially for those with ultra high dimensions features. 
 After carefully studying the design doc of [Parameter 
 Servers|https://docs.google.com/document/d/1SX3nkmF41wFXAAIr9BgqvrHSS5mW362fJ7roBXJm06o/edit?usp=sharing],and
  the paper of [Factorbird|http://stanford.edu/~rezab/papers/factorbird.pdf], 
 we proposed a prototype of Parameter Server on Spark(Ps-on-Spark), with 
 several key design concerns:
 * *User friendly interface*
   Careful investigation is done to most existing Parameter Server 
 systems(including:  [petuum|http://petuum.github.io], [parameter 
 server|http://parameterserver.org], 
 [paracel|https://github.com/douban/paracel]) and a user friendly interface is 
 design by absorbing essence from all these system. 
 * *Prototype of distributed array*
 IndexRDD (see 
 [SPARK-4590|https://issues.apache.org/jira/browse/SPARK-4590]) doesn't seem 
 to be a good option for distributed array, because in most case, the #key 
 updates/second is not be very high. 
 So we implement a distributed HashMap to store the parameters, which can 
 be easily extended to get better performance.
 
 * *Minimal code change*
   Quite a lot of effort in done to avoid code change of Spark core. Tasks 
 which need parameter server are still created and scheduled by Spark's 
 scheduler. Tasks communicate with parameter server with a client object, 
 through *akka* or *netty*.
 With all these concerns we propose the following architecture:
 h2. Architecture
 !https://cloud.githubusercontent.com/assets/1285855/7158179/f2d25cc4-e3a9-11e4-835e-89681596c478.jpg!
 Data is stored in RDD and is partitioned across workers. During each 
 iteration, each worker gets parameters from parameter server then computes 
 new parameters based on old parameters and data in the partition. Finally 
 each worker updates parameters to parameter server.Worker communicates with 
 parameter server through a parameter server client,which is initialized in 
 `TaskContext` of this worker.
 The current implementation is based on YARN cluster mode, 
 but it should not be a problem to transplanted it to other modes. 
 h3. Interface
 We refer to existing parameter server systems(petuum, parameter server, 
 paracel) when design the interface of parameter server. 
 *`PSClient` provides the following interface for workers to use:*
 {code}
 //  get parameter indexed by key from parameter server
 def get[T](key: String): T
 // get multiple parameters from parameter server
 def multiGet[T](keys: Array[String]): Array[T]
 // add parameter indexed by `key` by `delta`;
 // if there are multiple `delta`s to update on the same parameter,
 // use `reduceFunc` to reduce these `delta`s first.
 def update[T](key: String, delta: T, reduceFunc: (T, T) => T): Unit
 // update multiple parameters at the same time, using the same `reduceFunc`.
 def multiUpdate[T](keys: Array[String], delta: Array[T], reduceFunc: (T, T) => T): Unit
 
 // advance clock to indicate that current iteration is finished.
 def clock(): Unit
  
 // block until all workers have reached this line of code.
 def sync(): Unit
 {code}
 *`PSContext` provides following functions to use on driver:*
 {code}
 // load parameters from existing rdd.
 def loadPSModel[T](model: RDD[String, T]) 
 // fetch parameters from parameter server to construct model.
 def fetchPSModel[T](keys: Array[String]): Array[T]
 {code} 
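 
 For illustration only, one worker-side iteration against this proposed interface might look like the following (a sketch; `psClient`, `partitionData`, and `computeGradient` are placeholders, not part of the proposal):
 {code}
 // sketch of one iteration on a worker using the proposed PSClient
 val weights = psClient.get[Array[Double]]("weights")            // pull current parameters
 val delta   = computeGradient(weights, partitionData)           // placeholder local computation
 psClient.update("weights", delta,
   (a: Array[Double], b: Array[Double]) => a.zip(b).map { case (x, y) => x + y })
 psClient.clock()                                                // mark this iteration finished
 {code}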
 
 *A new function has been added to `RDD` to run parameter server tasks:*
 {code}
 // run the provided `func` on each partition of this RDD. 
 // This function can use data of this partition

[jira] [Created] (SPARK-6494) rdd polymorphic method zipPartitions refactor

2015-03-24 Thread sjk (JIRA)
sjk created SPARK-6494:
--

 Summary: rdd polymorphic method zipPartitions refactor
 Key: SPARK-6494
 URL: https://issues.apache.org/jira/browse/SPARK-6494
 Project: Spark
  Issue Type: Improvement
Reporter: sjk


There is no need for so many polymorphic methods; just add default parameter values instead.
Also use partitions.size instead of partitions.length; partitions is an Array object.
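
The overload-reduction idea, in miniature (a sketch only, not the actual patch; the object and method below are made up to show a default value standing in for an extra overload):

object DefaultParamSketch {
  // one method with a default value replaces what would otherwise be two overloads
  def zipWith[A, B, C](xs: Seq[A], ys: Seq[B], strict: Boolean = false)(f: (A, B) => C): Seq[C] = {
    if (strict) require(xs.length == ys.length, "size mismatch")
    xs.zip(ys).map { case (a, b) => f(a, b) }
  }
}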



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-5062) Pregel use aggregateMessage instead of mapReduceTriplets function

2015-01-29 Thread sjk (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296601#comment-14296601
 ] 

sjk commented on SPARK-5062:


Does anyone care about this?

 Pregel use aggregateMessage instead of mapReduceTriplets function
 -

 Key: SPARK-5062
 URL: https://issues.apache.org/jira/browse/SPARK-5062
 Project: Spark
  Issue Type: Wish
  Components: GraphX
Reporter: sjk
 Attachments: graphx_aggreate_msg.jpg


 Since Spark 1.2 introduced aggregateMessage to replace mapReduceTriplets, and it 
 does improve the performance,
 it's time to replace mapReduceTriplets with aggregateMessage in Pregel.
 We can discuss it.
 I have drawn a graph of aggregateMessage to show why it can improve the 
 performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-5036) Better support sending partial messages in Pregel API

2015-01-02 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-5036:
---
Description: 
Better support sending partial messages in Pregel API

1. The requirement

In many iterative graph algorithms, only a part of the vertexes (we call them 
ActiveVertexes) need to send messages to their neighbours in each iteration. In 
many cases, ActiveVertexes are the vertexes whose attributes do not change 
between the previous and current iteration. To implement this requirement, we 
can use the Pregel API + a flag (e.g., `bool isAttrChanged`) in each vertex's 
attribute.

However, after the `aggregateMessage` or `mapReduceTriplets` of each iteration, we 
need to reset this flag to its initial value in every vertex, which needs a heavy 
`joinVertices`.

We found a more efficient way to meet this requirement and want to discuss it 
here.


Look at a simple example as follows:

In the i-th iteration, the previous attribute of each vertex is `Attr` and the 
newly computed attribute is `NewAttr`:

|VID| Attr| NewAttr| Neighbours|
|---|-----|--------|-----------|
| 1 | 4 | 5 | 2, 3 |
| 2 | 3 | 2 | 1, 4 |
| 3 | 2 | 2 | 1, 4 |
| 4 | 3 | 4 | 1, 2, 3 |

Our requirement is that: 

1.  Set each vertex's `Attr` to be `NewAttr` in the i-th iteration.
2.  For each vertex whose `Attr!=NewAttr`, send a message to its neighbours 
in the next iteration's `aggregateMessage`.


We found it hard to implement this requirement efficiently using the current 
Pregel API. The reason is that we not only need to perform `pregel()` to 
compute the `NewAttr` (2) but also need to perform `outJoin()` to satisfy (1).

A simple idea is to keep an `isAttrChanged:Boolean` (solution 1) or a `flag:Int` 
(solution 2) in each vertex's attribute.

 2. Two solutions  
---

2.1 Solution 1: label and reset `isAttrChanged:Boolean` of the vertex attr

![alt text](s1.jpeg Title)

1. init message by `aggregateMessage`
   it returns a messageRDD
2. `innerJoin`
   compute the messages on the receiving vertexes; return a new VertexRDD 
   which holds the value computed by the custom logic function `vprog`, and set 
   `isAttrChanged = true`
3. `outerJoinVertices`
   update the changed vertexes into the whole graph; now the graph is new.
4. `aggregateMessage`. it returns a messageRDD
5. `joinVertices`  reset every `isAttrChanged` of the vertex attr to false

```
//  here reset the isAttrChanged to false
g = updateG.joinVertices(updateG.vertices) {
  (vid, oriVertex, updateGVertex) => updateGVertex.reset()
}
```
   Here we need to reset the vertex attribute object's flag to false.

If we don't reset `isAttrChanged`, the vertex will directly send a message in the next 
iteration.

**result:**  

*   Edge: 890041895 
*   Vertex: 181640208
*   Iterate: 150 times
*   Cost total: 8.4h
*   cannot run to the 0-message point


Solution 2: color the vertex

![alt text](s2.jpeg Title)

iterate process:

1. innerJoin 
   `vprog` is used as a partial function, like `vprog(curIter, _: VertexId, 
_: VD, _: A)`
   `i = i + 1; val curIter = i`. 
   In `vprog`, the user can fetch `curIter` and assign it to `flag`.
2. outerJoinVertices
   `graph = graph.outerJoinVertices(changedVerts) { (vid, old, newOpt) => 
newOpt.getOrElse(old) }.cache()`
3. aggregateMessages 
   sendMsg is a partial function, like `sendMsg(curIter, _: 
EdgeContext[VD, ED, A])`
   **In `sendMsg`, compare `curIter` with `flag` to determine whether to 
send the message** (see the sketch just below).
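
A minimal sketch of what such a `sendMsg` could look like (illustrative only; the `Attr` case class and message type are placeholders, not the real patch):

```
import org.apache.spark.graphx.EdgeContext

// placeholder vertex attribute carrying the iteration "color"
case class Attr(value: Double, flag: Int)

// send along an edge only if the source vertex was updated in the current iteration
def sendMsg(curIter: Int, ctx: EdgeContext[Attr, Int, Double]): Unit = {
  if (ctx.srcAttr.flag == curIter) {      // vertex changed in this iteration
    ctx.sendToDst(ctx.srcAttr.value)      // so its neighbour receives a message next round
  }
}
```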

result

raw data from:

*   vertex: 181640208
*   edge: 890041895


|  | iteration average cost | 150 iterations cost | 420 iterations cost |
|--|------------------------|---------------------|---------------------|
| solution 1 | 188m | 7.8h | cannot finish |
| solution 2 | 24m  | 1.2h | 3.1h |
| compare    | 7x   | 6.5x | finished in 3.1h |


##  The end

I think the second solution (Pregel + a flag) is better.
It can really support the iterative graph algorithms in which only part of the 
vertexes send messages to their neighbours in each iteration.

We shall use it in a production environment.

pr: https://github.com/apache/spark/pull/3866

EOF


  was:
Better support sending partial messages in Pregel API

1. the reqirement

In many iterative graph algorithms, only a part of the vertexes (we call them 
ActiveVertexes) need to send messages to their neighbours in each iteration. In 
many cases, ActiveVertexes are the vertexes that their attributes do not change 
between the previous and current iteration. To implement this requirement, we 
can use Pregel API + a flag (e.g., `bool isAttrChanged`) in each vertex's 
attribute. 

However, after `aggregateMessage` or `mapReduceTriplets` of each iteration, we 
need to reset this flag to the init value in every vertex, which needs a heavy 
`joinVertices`. 

We find a more efficient way to meet this requirement and want to discuss

[jira] [Updated] (SPARK-5062) Pregel use aggregateMessage instead of mapReduceTriplets function

2015-01-02 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-5062:
---
Attachment: graphx_aggreate_msg.jpg

 Pregel use aggregateMessage instead of mapReduceTriplets function
 -

 Key: SPARK-5062
 URL: https://issues.apache.org/jira/browse/SPARK-5062
 Project: Spark
  Issue Type: Wish
  Components: GraphX
Reporter: sjk
 Attachments: graphx_aggreate_msg.jpg


 Since Spark 1.2 introduced aggregateMessage to replace mapReduceTriplets, and it 
 does improve the performance,
 it's time to replace mapReduceTriplets with aggregateMessage in Pregel.
 We can discuss it.
 I have drawn a graph of aggregateMessage to show why it can improve the 
 performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-5062) Pregel use aggregateMessage instead of mapReduceTriplets function

2015-01-02 Thread sjk (JIRA)
sjk created SPARK-5062:
--

 Summary: Pregel use aggregateMessage instead of mapReduceTriplets 
function
 Key: SPARK-5062
 URL: https://issues.apache.org/jira/browse/SPARK-5062
 Project: Spark
  Issue Type: Wish
  Components: GraphX
Reporter: sjk


Since Spark 1.2 introduced aggregateMessage to replace mapReduceTriplets, and it 
does improve the performance,

it's time to replace mapReduceTriplets with aggregateMessage in Pregel.

We can discuss it.

I have drawn a graph of aggregateMessage to show why it can improve the 
performance.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-5036) Better support sending partial messages in Pregel API

2014-12-31 Thread sjk (JIRA)
sjk created SPARK-5036:
--

 Summary: Better support sending partial messages in Pregel API
 Key: SPARK-5036
 URL: https://issues.apache.org/jira/browse/SPARK-5036
 Project: Spark
  Issue Type: Improvement
  Components: GraphX
Reporter: sjk


# Better support sending partial messages in Pregel API

### 1. The requirement

In many iterative graph algorithms, only a part of the vertexes (we call them 
ActiveVertexes) need to send messages to their neighbours in each iteration. In 
many cases, ActiveVertexes are the vertexes that their attributes do not change 
between the previous and current iteration. To implement this requirement, we 
can use Pregel API + a flag (e.g., `bool isAttrChanged`) in each vertex's 
attribute. 

However, after `aggregateMessage` or `mapReduceTriplets` of each iteration, we 
need to reset this flag to the init value in every vertex, which needs a heavy 
`joinVertices`. 

We find a more efficient way to meet this requirement and want to discuss it 
here.


Look at a simple example as follows:

In i-th iteartion, the previous attribute of each vertex is `Attr` and the 
newly computed attribute is `NewAttr`:

|VID| Attr| NewAttr| Neighbours|
|:|:-|:|:--|
| 1 | 4| 5| 2, 3 |
| 2 | 3| 2| 1, 4 |
| 3 | 2| 2| 1, 4 |
| 4|  3| 4| 1, 2, 3 |

Our requirement is that: 

1.  Set each vertex's `Attr` to be `NewAttr` in i-th iteration
2.  For each vertex whose `Attr!=NewAttr`, send message to its neighbours 
in the next iteration's `aggregateMessage`.


We found it is hard to implement this requirment using current Pregel API 
efficiently. The reason is that we not only need to perform `pregel()` to  
compute the `NewAttr`  (2) but also need to perform `outJoin()` to satisfy (1).

A simple idea is to keep a `isAttrChanged:Boolean` (solution 1)  or `flag:Int` 
(solution 2) in each vertex's attribute.

### 2. two solution  
---

2.1 solution 1: label and reset `isAttrChanged:Boolean` of Vertex Attr

![alt text](s1.jpeg Title)

1. init message by `aggregateMessage`
it return a messageRDD
2. `innerJoin`
compute the messages on the received vertex, return a new VertexRDD 
which have the computed value by customed logic function `vprog`, set 
`isAttrChanged = true`
3. `outerJoinVertices`
update the changed vertex to the whole graph. now the graph is new.
4. `aggregateMessage`. it return a messageRDD
5. `joinVertices`  reset erery `isAttrChanged` of Vertex attr to false

```
//  here reset the isAttrChanged to false
g = updateG.joinVertices(updateG.vertices) {
(vid, oriVertex, updateGVertex) => updateGVertex.reset()
}
   ```
   here need to reset the vertex attribute object's variable as false

if don't reset the `isAttrChanged`, it will send message next iteration 
directly.

**result:**  

*   Edge: 890041895 
*   Vertex: 181640208
*   Iterate: 150 times
*   Cost total: 8.4h
*   can't run until the 0 message 


solution 2. color vertex

![alt text](s2.jpeg Title)

iterate process:

1. innerJoin 
  `vprog` using as a partial function, looks like `vprog(curIter, _: VertexId, 
_: VD, _: A)`
  ` i = i + 1; val curIter = i`. 
  in `vprog`, the user can fetch `curIter` and assign it to `flag`.
2. outerJoinVertices
`graph = graph.outerJoinVertices(changedVerts) { (vid, old, newOpt) => 
newOpt.getOrElse(old) }.cache()`
3. aggregateMessages 
sendMsg is partial function, looks like `sendMsg(curIter, _: 
EdgeContext[VD, ED, A]`
**in `sendMsg`, compare `curIter` with `flag`, determine whether 
sending message**

result

raw data   from

*   vertex: 181640208
*   edge: 890041895


|  | iteration average cost | 150 iteration cost | 420 iteration cost | 
|  | - |  |  |
|  solution 1 | 188m | 7.8h | cannot finish  |
|  solution 2 | 24 | 1.2h   | 3.1h | 
| compare  | 7x  | 6.5x  | finished in 3.1 |


##  the end

i think the second solution(Pregel + a flag) is better.
this can really support the iterative graph algorithms which only part of the 
vertexes send messages to their neighbours in each iteration.

we shall use it in product environment.

EOF




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-5036) Better support sending partial messages in Pregel API

2014-12-31 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-5036:
---
Description: 
Better support sending partial messages in Pregel API

1. The requirement

In many iterative graph algorithms, only a part of the vertexes (we call them 
ActiveVertexes) need to send messages to their neighbours in each iteration. In 
many cases, ActiveVertexes are the vertexes that their attributes do not change 
between the previous and current iteration. To implement this requirement, we 
can use Pregel API + a flag (e.g., `bool isAttrChanged`) in each vertex's 
attribute. 

However, after `aggregateMessage` or `mapReduceTriplets` of each iteration, we 
need to reset this flag to the init value in every vertex, which needs a heavy 
`joinVertices`. 

We find a more efficient way to meet this requirement and want to discuss it 
here.


Look at a simple example as follows:

In i-th iteartion, the previous attribute of each vertex is `Attr` and the 
newly computed attribute is `NewAttr`:

|VID| Attr| NewAttr| Neighbours|
|:|:-|:|:--|
| 1 | 4| 5| 2, 3 |
| 2 | 3| 2| 1, 4 |
| 3 | 2| 2| 1, 4 |
| 4|  3| 4| 1, 2, 3 |

Our requirement is that: 

1.  Set each vertex's `Attr` to be `NewAttr` in i-th iteration
2.  For each vertex whose `Attr!=NewAttr`, send message to its neighbours 
in the next iteration's `aggregateMessage`.


We found it is hard to implement this requirment using current Pregel API 
efficiently. The reason is that we not only need to perform `pregel()` to  
compute the `NewAttr`  (2) but also need to perform `outJoin()` to satisfy (1).

A simple idea is to keep a `isAttrChanged:Boolean` (solution 1)  or `flag:Int` 
(solution 2) in each vertex's attribute.

 2. two solution  
---

2.1 solution 1: label and reset `isAttrChanged:Boolean` of Vertex Attr

![alt text](s1.jpeg Title)

1. init message by `aggregateMessage`
it return a messageRDD
2. `innerJoin`
compute the messages on the received vertex, return a new VertexRDD 
which have the computed value by customed logic function `vprog`, set 
`isAttrChanged = true`
3. `outerJoinVertices`
update the changed vertex to the whole graph. now the graph is new.
4. `aggregateMessage`. it return a messageRDD
5. `joinVertices`  reset erery `isAttrChanged` of Vertex attr to false

```
//  here reset the isAttrChanged to false
g = updateG.joinVertices(updateG.vertices) {
(vid, oriVertex, updateGVertex) => updateGVertex.reset()
}
   ```
   here need to reset the vertex attribute object's variable as false

if don't reset the `isAttrChanged`, it will send message next iteration 
directly.

**result:**  

*   Edge: 890041895 
*   Vertex: 181640208
*   Iterate: 150 times
*   Cost total: 8.4h
*   can't run until the 0 message 


solution 2. color vertex

![alt text](s2.jpeg Title)

iterate process:

1. innerJoin 
  `vprog` using as a partial function, looks like `vprog(curIter, _: VertexId, 
_: VD, _: A)`
  ` i = i + 1; val curIter = i`. 
  in `vprog`, the user can fetch `curIter` and assign it to `flag`.
2. outerJoinVertices
`graph = graph.outerJoinVertices(changedVerts) { (vid, old, newOpt) => 
newOpt.getOrElse(old) }.cache()`
3. aggregateMessages 
sendMsg is partial function, looks like `sendMsg(curIter, _: 
EdgeContext[VD, ED, A]`
**in `sendMsg`, compare `curIter` with `flag`, determine whether 
sending message**

result

raw data   from

*   vertex: 181640208
*   edge: 890041895


|  | iteration average cost | 150 iteration cost | 420 iteration cost | 
|  | - |  |  |
|  solution 1 | 188m | 7.8h | cannot finish  |
|  solution 2 | 24 | 1.2h   | 3.1h | 
| compare  | 7x  | 6.5x  | finished in 3.1 |


##  the end

i think the second solution(Pregel + a flag) is better.
this can really support the iterative graph algorithms which only part of the 
vertexes send messages to their neighbours in each iteration.

we shall use it in product environment.

EOF


  was:
# Better support sending partial messages in Pregel API

### 1. the reqirement

In many iterative graph algorithms, only a part of the vertexes (we call them 
ActiveVertexes) need to send messages to their neighbours in each iteration. In 
many cases, ActiveVertexes are the vertexes that their attributes do not change 
between the previous and current iteration. To implement this requirement, we 
can use Pregel API + a flag (e.g., `bool isAttrChanged`) in each vertex's 
attribute. 

However, after `aggregateMessage` or `mapReduceTriplets` of each iteration, we 
need to reset this flag to the init value in every vertex, which needs a heavy 
`joinVertices`. 

We find a more efficient way to meet this requirement and want to discuss it 
here.


Look at a simple example

[jira] [Updated] (SPARK-5036) Better support sending partial messages in Pregel API

2014-12-31 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-5036:
---
Attachment: s1.jpeg

 Better support sending partial messages in Pregel API
 -

 Key: SPARK-5036
 URL: https://issues.apache.org/jira/browse/SPARK-5036
 Project: Spark
  Issue Type: Improvement
  Components: GraphX
Reporter: sjk
 Attachments: s1.jpeg


 Better support sending partial messages in Pregel API
 1. The requirement
 In many iterative graph algorithms, only a part of the vertexes (we call them 
 ActiveVertexes) need to send messages to their neighbours in each iteration. 
 In many cases, ActiveVertexes are the vertexes that their attributes do not 
 change between the previous and current iteration. To implement this 
 requirement, we can use Pregel API + a flag (e.g., `bool isAttrChanged`) in 
 each vertex's attribute. 
 However, after `aggregateMessage` or `mapReduceTriplets` of each iteration, 
 we need to reset this flag to the init value in every vertex, which needs a 
 heavy `joinVertices`. 
 We find a more efficient way to meet this requirement and want to discuss it 
 here.
 Look at a simple example as follows:
 In i-th iteartion, the previous attribute of each vertex is `Attr` and the 
 newly computed attribute is `NewAttr`:
 |VID| Attr| NewAttr| Neighbours|
 |:|:-|:|:--|
 | 1 | 4| 5| 2, 3 |
 | 2 | 3| 2| 1, 4 |
 | 3 | 2| 2| 1, 4 |
 | 4|  3| 4| 1, 2, 3 |
 Our requirement is that: 
 1.Set each vertex's `Attr` to be `NewAttr` in i-th iteration
 2.For each vertex whose `Attr!=NewAttr`, send message to its neighbours 
 in the next iteration's `aggregateMessage`.
 We found it is hard to implement this requirment using current Pregel API 
 efficiently. The reason is that we not only need to perform `pregel()` to  
 compute the `NewAttr`  (2) but also need to perform `outJoin()` to satisfy 
 (1).
 A simple idea is to keep a `isAttrChanged:Boolean` (solution 1)  or 
 `flag:Int` (solution 2) in each vertex's attribute.
  2. two solution  
 ---
 2.1 solution 1: label and reset `isAttrChanged:Boolean` of Vertex Attr
 ![alt text](s1.jpeg Title)
 1. init message by `aggregateMessage`
   it return a messageRDD
 2. `innerJoin`
   compute the messages on the received vertex, return a new VertexRDD 
 which have the computed value by customed logic function `vprog`, set 
 `isAttrChanged = true`
 3. `outerJoinVertices`
   update the changed vertex to the whole graph. now the graph is new.
 4. `aggregateMessage`. it return a messageRDD
 5. `joinVertices`  reset erery `isAttrChanged` of Vertex attr to false
   ```
   //  here reset the isAttrChanged to false
   g = updateG.joinVertices(updateG.vertices) {
    (vid, oriVertex, updateGVertex) => updateGVertex.reset()
   }
```
here need to reset the vertex attribute object's variable as false
 if don't reset the `isAttrChanged`, it will send message next iteration 
 directly.
 **result:**  
 * Edge: 890041895 
 * Vertex: 181640208
 * Iterate: 150 times
 * Cost total: 8.4h
 * can't run until the 0 message 
 solution 2. color vertex
 ![alt text](s2.jpeg Title)
 iterate process:
 1. innerJoin 
   `vprog` using as a partial function, looks like `vprog(curIter, _: 
 VertexId, _: VD, _: A)`
   ` i = i + 1; val curIter = i`. 
   in `vprog`, user can fetch `curIter` and assign to `falg`.
 2. outerJoinVertices
   `graph = graph.outerJoinVertices(changedVerts) { (vid, old, newOpt) => 
 newOpt.getOrElse(old)}.cache()`
 3. aggregateMessages 
   sendMsg is partial function, looks like `sendMsg(curIter, _: 
 EdgeContext[VD, ED, A]`
   **in `sendMsg`, compare `curIter` with `flag`, determine whether 
 sending message**
   result
 raw data   from
 * vertex: 181640208
 * edge: 890041895
 |  | iteration average cost | 150 iteration cost | 420 iteration cost | 
 |  | - |  |  |
 |  solution 1 | 188m | 7.8h | cannot finish  |
 |  solution 2 | 24 | 1.2h   | 3.1h | 
 | compare  | 7x  | 6.5x  | finished in 3.1 |
 
 ##the end
 
 i think the second solution(Pregel + a flag) is better.
 this can really support the iterative graph algorithms which only part of the 
 vertexes send messages to their neighbours in each iteration.
 we shall use it in product environment.
 EOF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-5036) Better support sending partial messages in Pregel API

2014-12-31 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-5036:
---
Attachment: s2.jpeg

 Better support sending partial messages in Pregel API
 -

 Key: SPARK-5036
 URL: https://issues.apache.org/jira/browse/SPARK-5036
 Project: Spark
  Issue Type: Improvement
  Components: GraphX
Reporter: sjk
 Attachments: s1.jpeg, s2.jpeg


 Better support sending partial messages in Pregel API
 1. the reqirement
 In many iterative graph algorithms, only a part of the vertexes (we call them 
 ActiveVertexes) need to send messages to their neighbours in each iteration. 
 In many cases, ActiveVertexes are the vertexes that their attributes do not 
 change between the previous and current iteration. To implement this 
 requirement, we can use Pregel API + a flag (e.g., `bool isAttrChanged`) in 
 each vertex's attribute. 
 However, after `aggregateMessage` or `mapReduceTriplets` of each iteration, 
 we need to reset this flag to the init value in every vertex, which needs a 
 heavy `joinVertices`. 
 We find a more efficient way to meet this requirement and want to discuss it 
 here.
 Look at a simple example as follows:
 In i-th iteartion, the previous attribute of each vertex is `Attr` and the 
 newly computed attribute is `NewAttr`:
 |VID| Attr| NewAttr| Neighbours|
 |:|:-|:|:--|
 | 1 | 4| 5| 2, 3 |
 | 2 | 3| 2| 1, 4 |
 | 3 | 2| 2| 1, 4 |
 | 4|  3| 4| 1, 2, 3 |
 Our requirement is that: 
 1.Set each vertex's `Attr` to be `NewAttr` in i-th iteration
 2.For each vertex whose `Attr!=NewAttr`, send message to its neighbours 
 in the next iteration's `aggregateMessage`.
 We found it is hard to implement this requirment using current Pregel API 
 efficiently. The reason is that we not only need to perform `pregel()` to  
 compute the `NewAttr`  (2) but also need to perform `outJoin()` to satisfy 
 (1).
 A simple idea is to keep a `isAttrChanged:Boolean` (solution 1)  or 
 `flag:Int` (solution 2) in each vertex's attribute.
  2. two solution  
 ---
 2.1 solution 1: label and reset `isAttrChanged:Boolean` of Vertex Attr
 ![alt text](s1.jpeg Title)
 1. init message by `aggregateMessage`
   it return a messageRDD
 2. `innerJoin`
   compute the messages on the received vertex, return a new VertexRDD 
 which have the computed value by customed logic function `vprog`, set 
 `isAttrChanged = true`
 3. `outerJoinVertices`
   update the changed vertex to the whole graph. now the graph is new.
 4. `aggregateMessage`. it return a messageRDD
 5. `joinVertices`  reset erery `isAttrChanged` of Vertex attr to false
   ```
   //  here reset the isAttrChanged to false
   g = updateG.joinVertices(updateG.vertices) {
    (vid, oriVertex, updateGVertex) => updateGVertex.reset()
   }
```
here need to reset the vertex attribute object's variable as false
 if don't reset the `isAttrChanged`, it will send message next iteration 
 directly.
 **result:**  
 * Edge: 890041895 
 * Vertex: 181640208
 * Iterate: 150 times
 * Cost total: 8.4h
 * can't run until the 0 message 
 solution 2. color vertex
 ![alt text](s2.jpeg Title)
 iterate process:
 1. innerJoin 
   `vprog` using as a partial function, looks like `vprog(curIter, _: 
 VertexId, _: VD, _: A)`
   ` i = i + 1; val curIter = i`. 
   in `vprog`, user can fetch `curIter` and assign to `falg`.
 2. outerJoinVertices
   `graph = graph.outerJoinVertices(changedVerts) { (vid, old, newOpt) => 
 newOpt.getOrElse(old)}.cache()`
 3. aggregateMessages 
   sendMsg is partial function, looks like `sendMsg(curIter, _: 
 EdgeContext[VD, ED, A]`
   **in `sendMsg`, compare `curIter` with `flag`, determine whether 
 sending message**
   result
 raw data   from
 * vertex: 181640208
 * edge: 890041895
 |  | iteration average cost | 150 iteration cost | 420 iteration cost | 
 |  | - |  |  |
 |  solution 1 | 188m | 7.8h | cannot finish  |
 |  solution 2 | 24 | 1.2h   | 3.1h | 
 | compare  | 7x  | 6.5x  | finished in 3.1 |
 
 ##the end
 
 i think the second solution(Pregel + a flag) is better.
 this can really support the iterative graph algorithms which only part of the 
 vertexes send messages to their neighbours in each iteration.
 we shall use it in product environment.
 EOF



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



a question of Graph build api

2014-12-04 Thread jinkui . sjk
hi, all

When building a graph from edge tuples with the API Graph.fromEdgeTuples, 
the edges object type is RDD[Edge]. Inside EdgeRDD.fromEdges, it would be better 
for EdgePartitionBuilder.add to take the Edge object as its parameter, so there 
is no need to create a new Edge object again (a sketch of the suggested overload 
follows the excerpts below).



  def fromEdgeTuples[VD: ClassTag](
  rawEdges: RDD[(VertexId, VertexId)],
  defaultValue: VD,
  uniqueEdges: Option[PartitionStrategy] = None,
  edgeStorageLevel: StorageLevel = StorageLevel.MEMORY_ONLY,
  vertexStorageLevel: StorageLevel = StorageLevel.MEMORY_ONLY): Graph[VD, 
Int] =
  {
    val edges = rawEdges.map(p => Edge(p._1, p._2, 1))
val graph = GraphImpl(edges, defaultValue, edgeStorageLevel, 
vertexStorageLevel)
uniqueEdges match {
  case Some(p) => graph.partitionBy(p).groupEdges((a, b) => a + b)
  case None => graph
}
  }




  object GraphImpl {

  /** Create a graph from edges, setting referenced vertices to 
`defaultVertexAttr`. */
  def apply[VD: ClassTag, ED: ClassTag](
  edges: RDD[Edge[ED]],
  defaultVertexAttr: VD,
  edgeStorageLevel: StorageLevel,
  vertexStorageLevel: StorageLevel): GraphImpl[VD, ED] = {
fromEdgeRDD(EdgeRDD.fromEdges(edges), defaultVertexAttr, edgeStorageLevel, 
vertexStorageLevel)
  }



  object EdgeRDD {
  /**
   * Creates an EdgeRDD from a set of edges.
   *
   * @tparam ED the edge attribute type
   * @tparam VD the type of the vertex attributes that may be joined with the 
returned EdgeRDD
   */
  def fromEdges[ED: ClassTag, VD: ClassTag](edges: RDD[Edge[ED]]): EdgeRDD[ED, 
VD] = {
    val edgePartitions = edges.mapPartitionsWithIndex { (pid, iter) =>
      val builder = new EdgePartitionBuilder[ED, VD]
      iter.foreach { e =>
        builder.add(e.srcId, e.dstId, e.attr)
      }
      Iterator((pid, builder.toEdgePartition))
    }
EdgeRDD.fromEdgePartitions(edgePartitions)
  }
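
For illustration, the suggested change could look roughly like the sketch below:
an extra add() overload that accepts the Edge object itself. The class here is a
hypothetical, simplified stand-in (the real EdgePartitionBuilder is a private
GraphX class with column-oriented internals), so this only shows the shape of
the API, not a drop-in patch.

// hypothetical, simplified stand-in for EdgePartitionBuilder, only to show the
// suggested overload; the real class stores edges in primitive arrays
import scala.collection.mutable.ArrayBuffer
import org.apache.spark.graphx.Edge

class SimpleEdgePartitionBuilder[ED] {
  private val edges = new ArrayBuffer[Edge[ED]]

  // current style: the caller unpacks the Edge and the builder rebuilds it
  def add(src: Long, dst: Long, attr: ED): Unit = {
    edges += Edge(src, dst, attr)
  }

  // suggested overload: reuse the Edge object the iterator already yields
  def add(edge: Edge[ED]): Unit = {
    edges += edge
  }

  def size: Int = edges.size
}

With such an overload, the loop in fromEdges could simply call builder.add(e)
instead of builder.add(e.srcId, e.dstId, e.attr).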






[jira] [Commented] (SPARK-3894) Scala style: line length increase to 160

2014-10-13 Thread sjk (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14170414#comment-14170414
 ] 

sjk commented on SPARK-3894:


Too many functions have more than four parameters, so their lines exceed 
100 characters.
It's not friendly for reading code.

Maybe we can change the limit to 120.

For the existing code we would change nothing; only newly merged code would use 
the 120-character line length.

OK?

 Scala style: line length increase to 160
 

 Key: SPARK-3894
 URL: https://issues.apache.org/jira/browse/SPARK-3894
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 100 is shorter
 our screen is bigger



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-3894) Scala style: line length increase to 120 for standard

2014-10-13 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-3894:
---
Summary: Scala style: line length increase to 120 for standard  (was: Scala 
style: line length increase to 160)

 Scala style: line length increase to 120 for standard
 -

 Key: SPARK-3894
 URL: https://issues.apache.org/jira/browse/SPARK-3894
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 100 is shorter
 our screen is bigger



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Reopened] (SPARK-3894) Scala style: line length increase to 120 for standard

2014-10-13 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk reopened SPARK-3894:


Use 120 instead.

Scala function parameter lists affect the code's readability.

 Scala style: line length increase to 120 for standard
 -

 Key: SPARK-3894
 URL: https://issues.apache.org/jira/browse/SPARK-3894
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 100 is shorter
 our screen is bigger



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Closed] (SPARK-3897) Scala style: format example code

2014-10-13 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-3897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk closed SPARK-3897.
--

 Scala style: format example code
 

 Key: SPARK-3897
 URL: https://issues.apache.org/jira/browse/SPARK-3897
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 https://github.com/apache/spark/pull/2754



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-3895) Scala style: Indentation of method

2014-10-13 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-3895:
---
Description: 

{code:title=core/src/main/scala/org/apache/spark/Aggregator.scala|borderStyle=solid}
// for example
  def combineCombinersByKey(iter: Iterator[_ <: Product2[K, C]], context: 
TaskContext)
  : Iterator[(K, C)] =
  {

...

  def combineValuesByKey(iter: Iterator[_ <: Product2[K, V]],
 context: TaskContext): Iterator[(K, C)] = {

{code}

there are not conform to the 
rule.https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide

there are so much code like this


  was:


such as https://github.com/apache/spark/pull/2734

{code:title=core/src/main/scala/org/apache/spark/Aggregator.scala|borderStyle=solid}
// for example
  def combineCombinersByKey(iter: Iterator[_ : Product2[K, C]], context: 
TaskContext)
  : Iterator[(K, C)] =
  {

...

  def combineValuesByKey(iter: Iterator[_ : Product2[K, V]],
 context: TaskContext): Iterator[(K, C)] = {

{code}

there are not conform to the 
rule.https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide

there are so much code like this



 Scala style: Indentation of method
 --

 Key: SPARK-3895
 URL: https://issues.apache.org/jira/browse/SPARK-3895
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 {code:title=core/src/main/scala/org/apache/spark/Aggregator.scala|borderStyle=solid}
 // for example
   def combineCombinersByKey(iter: Iterator[_ : Product2[K, C]], context: 
 TaskContext)
   : Iterator[(K, C)] =
   {
 ...
   def combineValuesByKey(iter: Iterator[_ : Product2[K, V]],
  context: TaskContext): Iterator[(K, C)] = {
 {code}
 there are not conform to the 
 rule.https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
 there are so much code like this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-3895) Scala style: Indentation of method

2014-10-13 Thread sjk (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14170541#comment-14170541
 ] 

sjk commented on SPARK-3895:


I have closed the PR because the code format changes were too large.

But this sub-task is about current code whose braces do not conform to the rule. 

Shall we observe the rule of the `Spark+Code+Style+Guide`?

 Scala style: Indentation of method
 --

 Key: SPARK-3895
 URL: https://issues.apache.org/jira/browse/SPARK-3895
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 {code:title=core/src/main/scala/org/apache/spark/Aggregator.scala|borderStyle=solid}
 // for example
   def combineCombinersByKey(iter: Iterator[_ : Product2[K, C]], context: 
 TaskContext)
   : Iterator[(K, C)] =
   {
 ...
   def combineValuesByKey(iter: Iterator[_ : Product2[K, V]],
  context: TaskContext): Iterator[(K, C)] = {
 {code}
 there are not conform to the 
 rule.https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
 there are so much code like this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Reopened] (SPARK-3895) Scala style: Indentation of method

2014-10-13 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk reopened SPARK-3895:


 Scala style: Indentation of method
 --

 Key: SPARK-3895
 URL: https://issues.apache.org/jira/browse/SPARK-3895
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 {code:title=core/src/main/scala/org/apache/spark/Aggregator.scala|borderStyle=solid}
 // for example
   def combineCombinersByKey(iter: Iterator[_ : Product2[K, C]], context: 
 TaskContext)
   : Iterator[(K, C)] =
   {
 ...
   def combineValuesByKey(iter: Iterator[_ : Product2[K, V]],
  context: TaskContext): Iterator[(K, C)] = {
 {code}
 there are not conform to the 
 rule.https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
 there are so much code like this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-3897) Scala style: format example code

2014-10-10 Thread sjk (JIRA)
sjk created SPARK-3897:
--

 Summary: Scala style: format example code
 Key: SPARK-3897
 URL: https://issues.apache.org/jira/browse/SPARK-3897
 Project: Spark
  Issue Type: Sub-task
Reporter: sjk






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-3897) Scala style: format example code

2014-10-10 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-3897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-3897:
---

https://github.com/apache/spark/pull/2754

 Scala style: format example code
 

 Key: SPARK-3897
 URL: https://issues.apache.org/jira/browse/SPARK-3897
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-3897) Scala style: format example code

2014-10-10 Thread sjk (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-3897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sjk updated SPARK-3897:
---
Description: https://github.com/apache/spark/pull/2754

 Scala style: format example code
 

 Key: SPARK-3897
 URL: https://issues.apache.org/jira/browse/SPARK-3897
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: sjk

 https://github.com/apache/spark/pull/2754



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-3893) declare mutableMap/mutableSet explicitly

2014-10-09 Thread sjk (JIRA)
sjk created SPARK-3893:
--

 Summary: declare  mutableMap/mutableSet explicitly
 Key: SPARK-3893
 URL: https://issues.apache.org/jira/browse/SPARK-3893
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core
Affects Versions: 1.1.0
Reporter: sjk



{code:java}
  // current
  val workers = new HashSet[WorkerInfo]
  // sugguest
  val workers = new mutable.HashSet[WorkerInfo]
{code}

The other benefit is that it reminds us to consider whether an immutable 
collection can be used instead.

Most of the maps we use are mutable.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-3894) Scala style: line length increase to 160

2014-10-09 Thread sjk (JIRA)
sjk created SPARK-3894:
--

 Summary: Scala style: line length increase to 160
 Key: SPARK-3894
 URL: https://issues.apache.org/jira/browse/SPARK-3894
 Project: Spark
  Issue Type: Sub-task
Reporter: sjk


100 is shorter

our screen is bigger




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-3854) Scala style: require spaces before `{`

2014-10-09 Thread sjk (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166362#comment-14166362
 ] 

sjk commented on SPARK-3854:


I think all code after such a symbol should be separated by one space. 

 Scala style: require spaces before `{`
 --

 Key: SPARK-3854
 URL: https://issues.apache.org/jira/browse/SPARK-3854
 Project: Spark
  Issue Type: Sub-task
  Components: Project Infra
Reporter: Josh Rosen

 We should require spaces before opening curly braces.  This isn't in the 
 style guide, but it probably should be:
 {code}
 // Correct:
 if (true) {
   println("Wow!")
 }
 // Incorrect:
 if (true){
 println("Wow!")
 }
 {code}
 See https://github.com/apache/spark/pull/1658#discussion-diff-18611791 for an 
 example in the wild.
 {{git grep ){}} shows only a few occurrences of this style.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-3895) Scala style: Indentation of method

2014-10-09 Thread sjk (JIRA)
sjk created SPARK-3895:
--

 Summary: Scala style: Indentation of method
 Key: SPARK-3895
 URL: https://issues.apache.org/jira/browse/SPARK-3895
 Project: Spark
  Issue Type: Sub-task
Reporter: sjk




such as https://github.com/apache/spark/pull/2734

{code:title=core/src/main/scala/org/apache/spark/Aggregator.scala|borderStyle=solid}
// for example
  def combineCombinersByKey(iter: Iterator[_ <: Product2[K, C]], context: 
TaskContext)
  : Iterator[(K, C)] =
  {

...

  def combineValuesByKey(iter: Iterator[_ <: Product2[K, V]],
 context: TaskContext): Iterator[(K, C)] = {

{code}

there are not conform to the 
rule.https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide

there are so much code like this




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-3781) code style format

2014-10-03 Thread sjk (JIRA)
sjk created SPARK-3781:
--

 Summary: code style format
 Key: SPARK-3781
 URL: https://issues.apache.org/jira/browse/SPARK-3781
 Project: Spark
  Issue Type: Improvement
Reporter: sjk






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: [growl-discuss] Clearing Notifications

2012-12-25 Thread sjk
On Tuesday, December 18, 2012 8:53:36 PM UTC-8, Chas4 wrote:

 Does it still happen with Growl 2.0.1?


For me, yes.  I don't remember when this issue started.  More details:

The number in the "There were # notifications while you were away" text at the 
bottom of the Notification Rollup accurately decreases depending on which 'x' 
is clicked/tapped, but the specific corresponding alert details never disappear 
until the NR is closed.  Console messages look like this:

Dec 25 13:51:41 aura.local Growl[278]: Row not found, or not application
Dec 25 13:51:53 --- last message repeated 1 time ---
Dec 25 13:51:53 aura.local Growl[278]: *** -[NSArray objectsAtIndexes:]: 
index 1 in index set beyond bounds for empty array
Dec 25 13:51:53 aura.local Growl[278]: (
0   CoreFoundation  0x7fff9641e0a6 
__exceptionPreprocess + 198
1   libobjc.A.dylib 0x7fff954bc3f0 
objc_exception_throw + 43
2   CoreFoundation  0x7fff96446339 -[NSArray 
objectsAtIndexes:] + 137
3   Growl   0x00010defe0e0 
-[GrowlNotificationHistoryWindow deleteNotifications:] + 492
4   AppKit  0x7fff92df3a59 -[NSApplication 
sendAction:to:from:] + 342
5   AppKit  0x7fff92df38b7 -[NSControl 
sendAction:to:] + 85
6   AppKit  0x7fff92df37eb -[NSCell 
_sendActionFrom:] + 138
7   AppKit  0x7fff92df1cd3 -[NSCell 
trackMouse:inRect:ofView:untilMouseUp:] + 1855
8   AppKit  0x7fff92df1521 -[NSButtonCell 
trackMouse:inRect:ofView:untilMouseUp:] + 504
9   AppKit  0x7fff92df0c9c -[NSControl 
mouseDown:] + 820
10  AppKit  0x7fff92de860e -[NSWindow 
sendEvent:] + 6853
11  AppKit  0x7fff92de4744 -[NSApplication 
sendEvent:] + 5761
12  AppKit  0x7fff92cfa2fa -[NSApplication 
run] + 636
13  AppKit  0x7fff92c9ecb6 
NSApplicationMain + 869
14  Growl   0x00010dee2685 main + 99
15  Growl   0x00010dedcf84 start + 52
)

-- 
You received this message because you are subscribed to the Google Groups 
Growl Discuss group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/growldiscuss/-/AAS8zJnmMXIJ.
To post to this group, send email to growldiscuss@googlegroups.com.
To unsubscribe from this group, send email to 
growldiscuss+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/growldiscuss?hl=en.



Re: [growl-discuss] Clearing Notifications

2012-12-25 Thread sjk
I've also found these related topics:

Growl rollup clicking
https://groups.google.com/d/topic/growldiscuss/X_wis2tF9Yo/discussion 

Rollup notifications have no action
https://groups.google.com/d/topic/growldiscuss/jldTld67AmM/discussion

-- 
You received this message because you are subscribed to the Google Groups 
Growl Discuss group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/growldiscuss/-/oshBZKt_tsIJ.
To post to this group, send email to growldiscuss@googlegroups.com.
To unsubscribe from this group, send email to 
growldiscuss+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/growldiscuss?hl=en.



[id-android] Re: [Id -android]wta : samsung fascinate

2012-11-10 Thread sjk
Choose a language {any} > tap the 'emergency call' button > type **#83786633* > 
tap the 'home' button > then re-inject.

Hope this helps


Pada Sabtu, 10 November 2012 7:10:06 UTC+7, akhmed menulis:

 Morning
 I'd like to ask for help from everyone here. I happen to have a friend's 
 Fascinate that has just been factory reset. It turns out the inject was 
 lost... I tried to help re-inject my friend's Fascinate, but the display 
 always goes into the language menu... pressing the back or home button 
 doesn't work... maybe you all have a solution...
 Thank you


-- 
==
Download Aplikasi Kompas  versi Digital dan Keren
https://play.google.com/store/apps/details?id=com.kompas.android.kec
--
Download Aplikasi AR MONSTAR dari Indosat 
https://play.google.com/store/apps/details?id=com.ar.monstarunity
-
Samsung GALAXY SIII hanya Rp 5.965.000 di Multiply.com
Super Sale! Click http://bit.ly/drogadtor
--
GSM-AKU  http://www.gsmaku.com - BEC Bandung
E-mail: syaf...@gsmaku.com  Hp: 0881-1515151 
-
EceranShop  http://eceranshop.com - BEC  Bandung
E-mail: wi...@eceranshop.com  Hp: 0815-56599888

Web + email + domain .web.id: 75rb / TAHUN - http://www.hostune.com

Aturan Umum  ID-Android: http://goo.gl/MpVq8
Join Forum  ID-ANDROID: http://forum.android.or.id
==




Re: DS3 mux recommendation

2010-09-09 Thread sjk
We use Adtran MX2820s which have been pretty reliable. They are designed
for medium density, so I am not sure if they'll be applicable to your
situation. We pull and trap a fair amount of snmp from them with no
problems.

Jay Nakamura wrote:
 I haven't researched stand alone DS3 mux in a long time and was
 wondering if anyone can recommend a DS3 Mux.  I have used Adtran
 before. (Long ago)  The products back then worked fine on line level
 but management interface was awful and if you threw too much SNMP at
 it and the management interface locked up.
 
 Are there anything better out there these days?
 
 TIA,
 
 -Jay
 



Re: U.S. Plans Cyber Shield for Utilities, Companies

2010-07-11 Thread sjk
$100M is for the first phase, which I would think would be the initial
deployment of intrusions sensors with out of band data feeds, and the
building of a baseline traffic model. The real question is why do any
critical control networks ever touch anything remotely connected to a
public network? Laziness - that's why.

Tomas L. Byrnes wrote:
 Because no-one who could do it for less can afford to respond to government 
 contracts, and make sure they comply with all the applicable laws and 
 regulations, and keep the sort of records, and be prepared for the audits of 
 said records, required.
 
 As soon as you do business with the govt, the overhead goes through the roof.
 
 
 -Original Message-
 From: Patrick Giagnocavo [mailto:patr...@zill.net]
 Sent: Wednesday, July 07, 2010 7:02 PM
 To: nanog@nanog.org
 Subject: Re: U.S. Plans Cyber Shield for Utilities, Companies

 andrew.wallace wrote:
 Article:

 http://online.wsj.com/article/SB100014240527487045450045753529838504631
 08.html
 Why does it cost $100 million to install and configure OpenBSD on a
 bunch of old systems?

 --Patrick
 



Re: Strange practices?

2010-06-07 Thread sjk
Have seen it a few times -- usually with enterprise customers who are
unable to manage their own routers and one ISP which has problems
configuring BGP on its client-facing equipment.


Dale Cornman wrote:
 Has anyone ever heard of a multi-homed enterprise not running bgp with
 either of 2 providers, but instead, each provider statically routes a block
 to their common customer and also each originates this block in BGP?   One
 of the ISP's in this case owns the block and has even provided a letter of
 authorization to the other, allowing them to announce it in BGP as well.
   I had personally never heard of this and am curious if this is a common
 practice as well as if this would potentially create any problems by 2
 Autonomous Systems both originating the same prefix.
 
 Thanks
 
 -Bill



Cyclops Down?

2009-12-15 Thread sjk
Is anyone else seeing cyclops down -- or is it just me?

 mtr -c10 -r 131.179.96.253

4. osh-2828-peer.onshore.net 0.0%101.3   1.3   1.2   1.6   0.1
  5. ip65-47-181-105.z181-47-65.c  0.0%101.4   2.0   1.3   3.7   0.8
  6. ge11-1-4d0.mcr2.chicago-il.u  0.0%102.1   1.7   1.4   2.1   0.3
  7. ae1d0.mcr1.chicago-il.us.xo.  0.0%102.7  11.8   1.8  34.9  13.4
  8. 216.156.0.161.ptr.us.xo.net   0.0%10   62.2  62.3  62.0  62.8   0.3
  9. te-3-2-0.rar3.dallas-tx.us.x  0.0%10   61.1  61.8  61.0  64.2   1.0
 10. 207.88.12.46.ptr.us.xo.net0.0%10   61.6  61.6  60.7  63.8   1.1
 11. 207.88.12.158.ptr.us.xo.net   0.0%10   60.7  61.0  60.7  61.7   0.4
 12. lax-px1--xo-ge.cenic.net  0.0%10   60.5  60.8  60.4  61.4   0.4
 13. dc-lax-core1--lax-peer1-ge.c  0.0%10   61.5  61.5  61.1  62.1   0.4
 14. dc-lax-agg1--lax-core1-ge.ce  0.0%10   61.1  61.6  60.8  63.5   0.9
 15. dc-ucla--lax-agg1-ge-2.cenic  0.0%10   62.0  62.6  61.7  65.1   1.3
 16. border-2--core-1-ge.backbone  0.0%10   62.4  62.4  61.8  63.4   0.5
 17. core-1--mathsci-10ge.backbon  0.0%10   61.9  61.7  61.4  62.1   0.2
 18. ???  100.0100.0   0.0   0.0   0.0   0.0



Re: NetFlow analyzer software

2009-10-19 Thread sjk
We currently use nfsen - http://nfsen.sourceforge.net/ -- It works
pretty well, not as fancy as others I've worked with, but provides the
basic analytical needs.

Michael J McCafferty wrote:
 All,
I am looking for decent netflow analyzer and reporting  software with good 
 support for AS data. 
ManagEngine's product crashes or locks up my browser when I try to 
 list/sort the AS info because it's too large of a list and there is no way to 
 tell it to show just the top x results.
Plixer's Scrutenizer, while it seems like it's a pretty decent product, is 
 no longer supporting Linux... We are a Linux shop (servers, desktops, 
 laptops). 
What else is there that I might want to look at?
 
 Thanks!
 Mike
 M5Hosting.com
 Sent from my Verizon Wireless BlackBerry
 



Re: Invalid prefix announcement from AS9035 for 129.77.0.0/16

2009-10-09 Thread sjk
We are seeing the same thing with 66.146.192.0/19 and 66.251.224.0/19.
According to Cyclops this is still continuing. . .

Dylan Ebner wrote:
 We also received a notification that our IP block 67.135.55.0/24 (AS19629) is 
 being annouced by AS9035. Hopefully someone is receiving my emails.
 
 Thanks 
 
 
 Dylan Ebner, Network Engineer
 Consulting Radiologists, Ltd.
 1221 Nicollet Mall, Minneapolis, MN 55403
 ph. 612.573.2236 fax. 612.573.2250
 dylan.eb...@crlmed.com
 www.consultingradiologists.com
 
 
 -Original Message-
 From: Matthew Huff [mailto:mh...@ox.com] 
 Sent: Friday, October 09, 2009 7:28 AM
 To: nanog@nanog.org
 Subject: Invalid prefix announcement from AS9035 for 129.77.0.0/16
 
 About 4 hours ago BGPmon picked up a rogue announcement of 129.77.0.0 from 
 AS9035 (ASN-WIND Wind Telecomunicazioni spa) with an upstream of AS1267 
 (ASN-INFOSTRADA Infostrada S.p.A.). I don't see it now on any looking glass 
 sites. Hopefully this was just a typo that was quickly corrected. I would 
 appreciate if people have time and can double check let me know if any 
 announcements are active except from our AS6128/AS6395 upstreams.
 
 If this were to persist, what would be the best course of action to resolve 
 it, especially given that the AS was within RIPE.
 
 
 
 
 Matthew Huff   | One Manhattanville Rd OTA Management LLC | Purchase, NY 
 10577 http://www.ox.com  | Phone: 914-460-4039
 aim: matthewbhuff  | Fax:   914-460-4139
 
 
 
 
 



Residential BW Planning

2009-08-11 Thread sjk
I am trying to perform some capacity planning for some of our
residential pops, but the old calcs I used to use seem useless -- as
they were adapted from the dialup days and relied upon a percentage of
users online (~50%) and a percentage of concurrent transmission (~19%).
My present scenario involves a micro-pop terminating 250 residences
where users are expecting 4 Mb/s. So I am looking for some baseline to
begin at, so I am wondering what others are doing.
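
For illustration only, with made-up contention figures (not a recommendation):
the old dial-up style factors would give 250 homes x 50% online x 19%
concurrent x 4 Mb/s ~= 95 Mb/s of backhaul, while a flat residential
contention ratio of 20:1 would give 250 x 4 Mb/s / 20 = 50 Mb/s, and 10:1
would give 100 Mb/s.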

Any thoughts are appreciated.

Thanks
--steve




Re: DOS in progress ?

2009-08-06 Thread sjk
We are presently seeing some weird FB behavior -- timeouts and retry
issues. We've had several reports from our users and just began
investigating. Any info you have would be appreciated.

--sjk

Jorge Amodio wrote:
 Are folks seeing any major DOS in progress ?
 
 Twitter seems to be under one and FB is flaky.
 



Re: cisco.com

2009-08-04 Thread sjk
We have seen the route for cisco withdrawn from 208 and 2828. Facebook
seems fine

Dominic J. Eidson wrote:
 
 Both work from Austin, TX.
 
 
 
  - d.
 
 On Tue, 4 Aug 2009, Alex Nderitu wrote:
 
 Facebook seems to also be affected.


 -Original Message-
 From: R. Benjamin Kessler r...@mnsginc.com
 To: nanog@nanog.org
 Subject: cisco.com
 Date: Tue, 4 Aug 2009 09:34:46 -0400


 Hey Gang -

 I'm unable to get to cisco.com from multiple places on the 'net
 (including downforeveryoneorjustme.com); any ideas on the cause and ETR?

 Thanks,

 Ben




 



Re: cisco.com

2009-08-04 Thread sjk
Seeing them off of Sprint now. . . weird

sjk wrote:
 We have seen the route for cisco withdrawn from 208 and 2828. Facebook
 seems fine
 

 



Re: Anomalies with AS13214 ?

2009-07-28 Thread sjk


Russell Heilling wrote:
 2009/5/11 Ricardo Oliveira rvel...@cs.ucla.edu:
 Hi all,

 First, thanks for using Cyclops, and thanks for all the Cyclops users that
 drop me a message about this.

 It seems some router in AS13214 decided to originate all the prefixes and
 send them to AS48285 in the Caymans, all the ASPATHs are 48285 13214.
 The first announcement was on 2009-05-11 11:03:11 UTC and last on 2009-05-11
 12:16:32 UTC, there were 266,289 prefixes leaked (they were withdrawn
 afterwards)
 
 It looks like AS13214 are misbehaving again...  We have just started
 receiving cyclops alerts indicating that AS13214 is announcing our
 prefixes again:

We are seeing the same thing for two of our prefixes:

Offending attribute:  66.251.224.0/19-13214

Offending attribute:  66.146.192.0/19-48285

Pretty annoying

--steve




Bug#530673: xsmbrowser: Does not appear to utilize the supplied password

2009-05-26 Thread sjk
Package: xsmbrowser
Version: 3.4.0-16
Severity: grave
Justification: renders package unusable


xsmbrowser passes the following command: smbclient \\Server\share -U
username -I server.ip -N -W workgroup -c dir

returns the following error: tree connect failed: NT_STATUS_BAD_NETWORK_NAME

The -N flag causes the smbclient to attempt an anonymous connection --
without password. I believe Samba 3 requires full authentication for
browsing.

-- System Information:
Debian Release: squeeze/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: i386 (i686)

Kernel: Linux 2.6.26-2-686 (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash

Versions of packages xsmbrowser depends on:
ii  expectk   5.43.0-17  A program that can automate
intera
ii  smbclient 2:3.3.3-1  command-line SMB/CIFS
clients for

Versions of packages xsmbrowser recommends:
ii  smbfs 2:3.3.3-1  Samba file system utilities

Versions of packages xsmbrowser suggests:
pn  mc | gmc | nautilus | konquer none (no description available)

-- no debconf information




-- 
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Bug#530673: xsmbrowser: Does not appear to utilize the supplied password

2009-05-26 Thread sjk
Package: xsmbrowser
Version: 3.4.0-16
Severity: grave
Justification: renders package unusable


xsmbrowser passes the following command: smbclient \\Server\share -U
username -I server.ip -N -W workgroup -c dir

returns the following error: tree connect failed: NT_STATUS_BAD_NETWORK_NAME

The -N flag causes the smbclient to attempt an anonymous connection --
without password. I believe Samba 3 requires full authentication for
browsing.

-- System Information:
Debian Release: squeeze/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: i386 (i686)

Kernel: Linux 2.6.26-2-686 (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash

Versions of packages xsmbrowser depends on:
ii  expectk   5.43.0-17  A program that can automate
intera
ii  smbclient 2:3.3.3-1  command-line SMB/CIFS
clients for

Versions of packages xsmbrowser recommends:
ii  smbfs 2:3.3.3-1  Samba file system utilities

Versions of packages xsmbrowser suggests:
pn  mc | gmc | nautilus | konquer none (no description available)

-- no debconf information




-- 
To UNSUBSCRIBE, email to debian-bugs-rc-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



DSX cross-connect solution

2009-05-04 Thread sjk
I am trying to find hardware for a rebuild of our DS1 cross-connect
frame and can't seem to find much out there. We've got ~300 DS1s that
need to be x-connected between our M13s and I'm seeking an easy to
manage solution. I've looked at the Telect panels but I'm concerned that
my staff can't deal with wirewrap terminations. Has anyone seen, simply,
a high density 66 field that can fit in a 23 rack?

TIA -- steve



[Alsa-user] Questions/Problems with hda_intel

2008-07-29 Thread sjk
I am hoping someone can help -- I may just be a bit crazy here: I am 
running Debian w/alsa 1.0.16, kernel 2.6.26 on a Latitude D630 - Intel 
ICH8. I've got the snd_hda_intel driver to load and I can get some 
sound, but alsamixer shows only a single channel for master, pcm, and front 
-- no left/right, and no input channels. Any ideas appreciated...

Thanks -- sjk

---
00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio 
Controller (rev 02)

---

cat /proc/asound/cards
  0 [Intel  ]: HDA-Intel - HDA Intel
   HDA Intel at 0xfebfc000 irq 21
---
lsmod | grep snd

snd_hda_intel 313084  0
snd_pcm_oss32288  0
snd_mixer_oss  12544  1 snd_pcm_oss
snd_pcm62596  2 snd_hda_intel,snd_pcm_oss
snd_timer  17928  1 snd_pcm
snd_page_alloc  7944  2 snd_hda_intel,snd_pcm
snd_hwdep   6468  1 snd_hda_intel
snd45688  6 
snd_hda_intel,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer,snd_hwdep
soundcore   6472  1 snd



ii  alsa-base 1.0.16-2ALSA 
driver configuration files
ii  alsa-oss  1.0.15-1ALSA 
wrapper for OSS applications
ii  alsa-utils1.0.16-2ALSA 
utilities
ii  libsdl1.2debian-alsa  1.2.13-2Simple 
DirectMedia Layer (with X11 and ALSA


-- 
http://www.sleepycatz.com
[EMAIL PROTECTED]
fingerprint: 1024D/89420B8E 2001-09-16

No one can understand the truth until
he drinks of coffee's frothy goodness.
~Sheik Abd-al-Kadir

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK  win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100url=/
___
Alsa-user mailing list
Alsa-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/alsa-user


#44344 [Fbk-Opn]: duplicate symbol error during make

2008-03-10 Thread sam dot sjk at gmail dot com
 ID:   44344
 User updated by:  sam dot sjk at gmail dot com
 Reported By:  sam dot sjk at gmail dot com
-Status:   Feedback
+Status:   Open
 Bug Type: Compile Failure
 Operating System: OSX 10.5.2
 PHP Version:  5.2.5
 New Comment:

i tried the suggested  http://snaps.php.net/php5.2-latest.tar.gz but 
with same results.

I tried dozens of compile tests and determined that the issue only 
arrises for me if I configure/compile with --with-pic

# with just --with-pic
# make fails
./configure --with-pic

# all my options except --with-pic
# configures and makes fine
./configure --prefix=/usr/local/php5 \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--sysconfdir=/etc \
--without-mysql \
--without-sqlite \
--with-apxs2=/usr/sbin/apxs \
--enable-gd-native-ttf \
--with-jpeg-dir=/sw \
--with-pgsql=/Library/PostgreSQL8 \
--with-mime-magic=/etc/apache2/magic \
--with-png-dir=/usr/X11R6


Previous Comments:


[2008-03-10 11:59:43] [EMAIL PROTECTED]

Please try using this CVS snapshot:

  http://snaps.php.net/php5.2-latest.tar.gz
 
For Windows (zip):
 
  http://snaps.php.net/win32/php5.2-win32-latest.zip

For Windows (installer):

  http://snaps.php.net/win32/php5.2-win32-installer-latest.msi





[2008-03-05 22:07:08] sam dot sjk at gmail dot com

sorry had OSX10.5.1,  should be 10.5.2



[2008-03-05 22:03:58] sam dot sjk at gmail dot com

Description:

I've tried to compile 5.2.5 and latest 5.2 snap.  I get an error 
during 
make:
ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1

I'm on OSX 10.5.2 build 9c31 Intel Core 2 Duo
gcc version 4.0.1 (Apple Inc. build 5465)

Bug 42106 states same problem.  Not sure if it was proper to re-open 
that bug (and how thats done)







Reproduce code:
---
$ ./configure --prefix=/usr/local/php5 \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--sysconfdir=/etc \
--with-pgsql=/Library/PostgreSQL8 \
--without-mysql \
--with-apxs2=/usr/sbin/apxs \
--without-sqlite \
--with-mime-magic=/etc/apache2/magic \
--enable-gd-native-ttf \
--with-pic \
--with-jpeg-dir=/opt/local \
--with-png-dir=/opt/local

$ make 



ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1

Expected result:

finish make with out error

Actual result:
--
$ make

   ...

t/standard/reg.o ext/standard/soundex.o ext/standard/string.o 
ext/standard/scanf.o ext/standard/syslog.o ext/standard/type.o 
ext/standard/uniqid.o ext/standard/url.o ext/standard/url_scanner.o 
ext/standard/var.o ext/standard/versioning.o ext/standard/assert.o 
ext/standard/strnatcmp.o ext/standard/levenshtein.o 
ext/standard/incomplete_class.o ext/standard/url_scanner_ex.o 
ext/standard/ftp_fopen_wrapper.o ext/standard/http_fopen_wrapper.o 
ext/standard/php_fopen_wrapper.o ext/standard/credits.o 
ext/standard/css.o ext/standard/var_unserializer.o ext/standard/ftok.o

ext/standard/sha1.o ext/standard/user_filters.o 
ext/standard/uuencode.o ext/standard/filters.o 
ext/standard/proc_open.o ext/standard/streamsfuncs.o 
ext/standard/http.o ext/tokenizer/tokenizer.o 
ext/tokenizer/tokenizer_data.o ext/xml/xml.o ext/xml/compat.o 
ext/xmlreader/php_xmlreader.o ext/xmlwriter/php_xmlwriter.o 
TSRM/TSRM.o TSRM/tsrm_strtok_r.o TSRM/tsrm_virtual_cwd.o main/main.o 
main/snprintf.o main/spprintf.o main/php_sprintf.o main/safe_mode.o 
main/fopen_wrappers.o main/alloca.o main/php_scandir.o main/php_ini.o 
main/SAPI.o main/rfc1867.o main/php_content_types.o main/strlcpy.o 
main/strlcat.o main/mergesort.o main/reentrancy.o main/php_variables.o

main/php_ticks.o main/network.o main/php_open_temporary_file.o 
main/php_logos.o main/output.o main/streams/streams.o 
main/streams/cast.o main/streams/memory.o main/streams/filter.o 
main/streams/plain_wrapper.o main/streams/userspace.o 
main/streams/transports.o main/streams/xp_socket.o main/streams/mmap.o

Zend/zend_language_parser.o Zend/zend_language_scanner.o 
Zend/zend_ini_parser.o Zend/zend_ini_scanner.o Zend/zend_alloc.o 
Zend/zend_compile.o Zend/zend_constants.o Zend/zend_dynamic_array.o 
Zend/zend_execute_API.o Zend/zend_highlight.o Zend/zend_llist.o 
Zend/zend_opcode.o Zend/zend_operators.o Zend/zend_ptr_stack.o 
Zend/zend_stack.o Zend/zend_variables.o Zend/zend.o Zend/zend_API.o 
Zend/zend_extensions.o Zend/zend_hash.o Zend/zend_list.o 
Zend/zend_indent.o Zend/zend_builtin_functions.o Zend/zend_sprintf.o 
Zend/zend_ini.o Zend/zend_qsort.o Zend/zend_multibyte.o 
Zend/zend_ts_hash.o Zend/zend_stream.o Zend/zend_iterators.o 
Zend/zend_interfaces.o Zend

#44344 [NEW]: duplicate symbol error during make

2008-03-05 Thread sam dot sjk at gmail dot com
From: sam dot sjk at gmail dot com
Operating system: OSX 10.5.1
PHP version:  5.2.5
PHP Bug Type: Compile Failure
Bug description:  duplicate symbol error during make

Description:

I've tried to compile 5.2.5 and latest 5.2 snap.  I get an error 
during 
make:
ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1

I'm on OSX 10.5.2 build 9c31 Intel Core 2 Duo
gcc version 4.0.1 (Apple Inc. build 5465)

Bug 42106 states same problem.  Not sure if it was proper to re-open 
that bug (and how thats done)







Reproduce code:
---
$ ./configure --prefix=/usr/local/php5 \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--sysconfdir=/etc \
--with-pgsql=/Library/PostgreSQL8 \
--without-mysql \
--with-apxs2=/usr/sbin/apxs \
--without-sqlite \
--with-mime-magic=/etc/apache2/magic \
--enable-gd-native-ttf \
--with-pic \
--with-jpeg-dir=/opt/local \
--with-png-dir=/opt/local

$ make 



ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1

Expected result:

finish make with out error

Actual result:
--
$ make

   ...

t/standard/reg.o ext/standard/soundex.o ext/standard/string.o 
ext/standard/scanf.o ext/standard/syslog.o ext/standard/type.o 
ext/standard/uniqid.o ext/standard/url.o ext/standard/url_scanner.o 
ext/standard/var.o ext/standard/versioning.o ext/standard/assert.o 
ext/standard/strnatcmp.o ext/standard/levenshtein.o 
ext/standard/incomplete_class.o ext/standard/url_scanner_ex.o 
ext/standard/ftp_fopen_wrapper.o ext/standard/http_fopen_wrapper.o 
ext/standard/php_fopen_wrapper.o ext/standard/credits.o 
ext/standard/css.o ext/standard/var_unserializer.o ext/standard/ftok.o 
ext/standard/sha1.o ext/standard/user_filters.o 
ext/standard/uuencode.o ext/standard/filters.o 
ext/standard/proc_open.o ext/standard/streamsfuncs.o 
ext/standard/http.o ext/tokenizer/tokenizer.o 
ext/tokenizer/tokenizer_data.o ext/xml/xml.o ext/xml/compat.o 
ext/xmlreader/php_xmlreader.o ext/xmlwriter/php_xmlwriter.o 
TSRM/TSRM.o TSRM/tsrm_strtok_r.o TSRM/tsrm_virtual_cwd.o main/main.o 
main/snprintf.o main/spprintf.o main/php_sprintf.o main/safe_mode.o 
main/fopen_wrappers.o main/alloca.o main/php_scandir.o main/php_ini.o 
main/SAPI.o main/rfc1867.o main/php_content_types.o main/strlcpy.o 
main/strlcat.o main/mergesort.o main/reentrancy.o main/php_variables.o 
main/php_ticks.o main/network.o main/php_open_temporary_file.o 
main/php_logos.o main/output.o main/streams/streams.o 
main/streams/cast.o main/streams/memory.o main/streams/filter.o 
main/streams/plain_wrapper.o main/streams/userspace.o 
main/streams/transports.o main/streams/xp_socket.o main/streams/mmap.o 
Zend/zend_language_parser.o Zend/zend_language_scanner.o 
Zend/zend_ini_parser.o Zend/zend_ini_scanner.o Zend/zend_alloc.o 
Zend/zend_compile.o Zend/zend_constants.o Zend/zend_dynamic_array.o 
Zend/zend_execute_API.o Zend/zend_highlight.o Zend/zend_llist.o 
Zend/zend_opcode.o Zend/zend_operators.o Zend/zend_ptr_stack.o 
Zend/zend_stack.o Zend/zend_variables.o Zend/zend.o Zend/zend_API.o 
Zend/zend_extensions.o Zend/zend_hash.o Zend/zend_list.o 
Zend/zend_indent.o Zend/zend_builtin_functions.o Zend/zend_sprintf.o 
Zend/zend_ini.o Zend/zend_qsort.o Zend/zend_multibyte.o 
Zend/zend_ts_hash.o Zend/zend_stream.o Zend/zend_iterators.o 
Zend/zend_interfaces.o Zend/zend_exceptions.o Zend/zend_strtod.o 
Zend/zend_objects.o Zend/zend_object_handlers.o 
Zend/zend_objects_API.o Zend/zend_default_classes.o 
Zend/zend_execute.o sapi/apache2handler/mod_php5.o 
sapi/apache2handler/sapi_apache2.o sapi/apache2handler/apache_config.o 
sapi/apache2handler/php_functions.o main/internal_functions.o  -lpq -
liconv -liconv -lm -lxml2 -lz -licucore -lm -lxml2 -lz -licucore -lm -
lxml2 -lz -licucore -lm -lxml2 -lz -licucore -lm -lxml2 -lz -licucore 
-lm -lxml2 -lz -licucore -lm  -o libs/libphp5.bundle  cp 
libs/libphp5.bundle libs/libphp5.so
ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1







-- 
Edit bug report at http://bugs.php.net/?id=44344edit=1
-- 
Try a CVS snapshot (PHP 5.2): 
http://bugs.php.net/fix.php?id=44344r=trysnapshot52
Try a CVS snapshot (PHP 5.3): 
http://bugs.php.net/fix.php?id=44344r=trysnapshot53
Try a CVS snapshot (PHP 6.0): 
http://bugs.php.net/fix.php?id=44344r=trysnapshot60
Fixed in CVS: http://bugs.php.net/fix.php?id=44344r=fixedcvs
Fixed in release: 
http://bugs.php.net/fix.php?id=44344r=alreadyfixed
Need backtrace:   http://bugs.php.net/fix.php?id=44344r=needtrace
Need Reproduce Script:http://bugs.php.net/fix.php?id=44344r=needscript
Try newer version:http://bugs.php.net/fix.php?id=44344r

#44344 [Opn]: duplicate symbol error during make

2008-03-05 Thread sam dot sjk at gmail dot com
 ID:   44344
 User updated by:  sam dot sjk at gmail dot com
 Reported By:  sam dot sjk at gmail dot com
 Status:   Open
 Bug Type: Compile Failure
-Operating System: OSX 10.5.1
+Operating System: OSX 10.5.2
 PHP Version:  5.2.5
 New Comment:

sorry had OSX10.5.1,  should be 10.5.2


Previous Comments:


[2008-03-05 22:03:58] sam dot sjk at gmail dot com

Description:

I've tried to compile 5.2.5 and latest 5.2 snap.  I get an error 
during 
make:
ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1

I'm on OSX 10.5.2 build 9c31 Intel Core 2 Duo
gcc version 4.0.1 (Apple Inc. build 5465)

Bug 42106 states same problem.  Not sure if it was proper to re-open 
that bug (and how thats done)







Reproduce code:
---
$ ./configure --prefix=/usr/local/php5 \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--sysconfdir=/etc \
--with-pgsql=/Library/PostgreSQL8 \
--without-mysql \
--with-apxs2=/usr/sbin/apxs \
--without-sqlite \
--with-mime-magic=/etc/apache2/magic \
--enable-gd-native-ttf \
--with-pic \
--with-jpeg-dir=/opt/local \
--with-png-dir=/opt/local

$ make 



ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1

Expected result:

finish make with out error

Actual result:
--
$ make

   ...

t/standard/reg.o ext/standard/soundex.o ext/standard/string.o 
ext/standard/scanf.o ext/standard/syslog.o ext/standard/type.o 
ext/standard/uniqid.o ext/standard/url.o ext/standard/url_scanner.o 
ext/standard/var.o ext/standard/versioning.o ext/standard/assert.o 
ext/standard/strnatcmp.o ext/standard/levenshtein.o 
ext/standard/incomplete_class.o ext/standard/url_scanner_ex.o 
ext/standard/ftp_fopen_wrapper.o ext/standard/http_fopen_wrapper.o 
ext/standard/php_fopen_wrapper.o ext/standard/credits.o 
ext/standard/css.o ext/standard/var_unserializer.o ext/standard/ftok.o

ext/standard/sha1.o ext/standard/user_filters.o 
ext/standard/uuencode.o ext/standard/filters.o 
ext/standard/proc_open.o ext/standard/streamsfuncs.o 
ext/standard/http.o ext/tokenizer/tokenizer.o 
ext/tokenizer/tokenizer_data.o ext/xml/xml.o ext/xml/compat.o 
ext/xmlreader/php_xmlreader.o ext/xmlwriter/php_xmlwriter.o 
TSRM/TSRM.o TSRM/tsrm_strtok_r.o TSRM/tsrm_virtual_cwd.o main/main.o 
main/snprintf.o main/spprintf.o main/php_sprintf.o main/safe_mode.o 
main/fopen_wrappers.o main/alloca.o main/php_scandir.o main/php_ini.o 
main/SAPI.o main/rfc1867.o main/php_content_types.o main/strlcpy.o 
main/strlcat.o main/mergesort.o main/reentrancy.o main/php_variables.o

main/php_ticks.o main/network.o main/php_open_temporary_file.o 
main/php_logos.o main/output.o main/streams/streams.o 
main/streams/cast.o main/streams/memory.o main/streams/filter.o 
main/streams/plain_wrapper.o main/streams/userspace.o 
main/streams/transports.o main/streams/xp_socket.o main/streams/mmap.o

Zend/zend_language_parser.o Zend/zend_language_scanner.o 
Zend/zend_ini_parser.o Zend/zend_ini_scanner.o Zend/zend_alloc.o 
Zend/zend_compile.o Zend/zend_constants.o Zend/zend_dynamic_array.o 
Zend/zend_execute_API.o Zend/zend_highlight.o Zend/zend_llist.o 
Zend/zend_opcode.o Zend/zend_operators.o Zend/zend_ptr_stack.o 
Zend/zend_stack.o Zend/zend_variables.o Zend/zend.o Zend/zend_API.o 
Zend/zend_extensions.o Zend/zend_hash.o Zend/zend_list.o 
Zend/zend_indent.o Zend/zend_builtin_functions.o Zend/zend_sprintf.o 
Zend/zend_ini.o Zend/zend_qsort.o Zend/zend_multibyte.o 
Zend/zend_ts_hash.o Zend/zend_stream.o Zend/zend_iterators.o 
Zend/zend_interfaces.o Zend/zend_exceptions.o Zend/zend_strtod.o 
Zend/zend_objects.o Zend/zend_object_handlers.o 
Zend/zend_objects_API.o Zend/zend_default_classes.o 
Zend/zend_execute.o sapi/apache2handler/mod_php5.o 
sapi/apache2handler/sapi_apache2.o sapi/apache2handler/apache_config.o

sapi/apache2handler/php_functions.o main/internal_functions.o  -lpq -
liconv -liconv -lm -lxml2 -lz -licucore -lm -lxml2 -lz -licucore -lm -
lxml2 -lz -licucore -lm -lxml2 -lz -licucore -lm -lxml2 -lz -licucore 
-lm -lxml2 -lz -licucore -lm  -o libs/libphp5.bundle  cp 
libs/libphp5.bundle libs/libphp5.so
ld: duplicate symbol _yytext in Zend/zend_ini_scanner.o and 
Zend/zend_language_scanner.o

collect2: ld returned 1 exit status
make: *** [libs/libphp5.bundle] Error 1











-- 
Edit this bug report at http://bugs.php.net/?id=44344edit=1



Re: SSL/TLS and port 587

2008-01-23 Thread sjk

Ed Gerck wrote:

List,

I would like to address and request comments on the use of SSL/TLS and 
port 587 for email security.


The often expressed idea that SSL/TLS and port 587 are somehow able to 
prevent warrantless wiretapping and so on, or protect any private 
communications, is IMO simply not supported by facts.


Warrantless wiretapping and so on, and private communications 
eavesdropping are done more efficiently and covertly directly at the 
ISPs (hence the name warrantless wiretapping), where SSL/TLS 
protection does NOT apply. There is a security gap at every negotiated 
SSL/TLS session.


It is misleading to claim that port 587 solves the security problem of 
email eavesdropping, and gives people a false sense of security. It is 
worse than using a 56-bit DES key -- the email is in plaintext where it 
is most vulnerable.


Perhaps you'd like to expand upon this a bit. I am a bit confused by 
your assertion. tcp/587 is the standard authenticated submission port, 
while tcp/465 is the normal smtp/ssl port - of course one could run any 
mix of one or the other on either port. Are you suggesting that some 
postmasters/admins are claiming that their Submission ports are encrypted?


--

[EMAIL PROTECTED]
fingerprint: 1024D/89420B8E 2001-09-16

No one can understand the truth until
he drinks of coffee's frothy goodness.
~~Sheik Abd-al-Kadir

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: more reports of terrorist steganography

2007-08-20 Thread sjk
Dave Korn wrote:

 
   That's gotta stand out like a statistical sore thumb.
 
 
   The article is pretty poor if you ask me.  It outlines three techniques for
 stealth: steganography, using a shared email account as a dead-letter box, and
 blocking or redirecting known IP addresses from a mail server.  Then all of a
 sudden, there's this conclusion ...
 
  Internet-based attacks are extremely popular with terrorist organizations
 because they are relatively cheap to perform, offer a high degree of
 anonymity, and can be tremendously effective. 
 
 ... that comes completely out of left-field and has nothing to do with
 anything the rest of the article mentioned.  I would conclude that someone's
 done ten minutes worth of web searching and dressed up a bunch of
 long-established facts as 'research', then slapped a The sky is falling!
 Hay-ulp, hay-ulp security dramaqueen ending on it and will now be busily
 pitching for government grants or contracts of some sort.

This struck me oddly as well. I cannot think of a single significant
Internet attack which has been traced to any terrorist organizations. I
would agree that this article seems to be designed to alarm rather than
inform, and, no doubt, pick up a government contract.

Additionally, the author seems to make a big deal about asymmetric
encryption without considering how key exchange is accomplished. The
logistics of key exchange remains one of the vulnerabilities of any
asymmetric encryption system.


-- 
-
[EMAIL PROTECTED]
No one can understand the truth until
he drinks of coffee's frothy goodness.
~~Sheik Abd-al-Kadir

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Exchange Points

2006-02-17 Thread sjk


We're a small facilities-based ISP in Chicago and I am looking for a 
public exchange point for peering. I have been told, by someone at SBC, 
that the public NAP here is no longer accepting connections and is 
essentially going to shut down over time. Has anyone else heard this? Are 
there other exchange options - other than to haul transport to multiple 
net operators?


Thanks -- Steve


Re: Address Space ASN Allocation Process

2005-09-26 Thread sjk


On Mon, 26 Sep 2005, Vicky Rode wrote:


Hi,

Just trying to get some clarity and direction regarding obtaining
address space/ASN for my client.

Is there a minimum address space (?) an entity would need to justify to
go directly to RIR (ARIN in this case) as opposed to the upstream
provider? Is /20 the minimum allocation? Can my client approach RIR and
request a /23?

If my client does procure a /23, how do they make sure that this
address space will be globally routable?

Multihoming will also be part of their network implementation; can they
apply for an ASN?


Yes, the minimum assignment is a /20 (and this is considered temporary, as the 
official minimum is /19) -- there used to be some experimental /24s, 
but I believe these are now gone. ARIN will only assign a /20 or larger block -- 
anything smaller (a longer prefix, such as a /23) must come from your upstream 
provider. Being multihomed means you will be required to get an AS number. Once 
you have your address block, you can fill out the AS number request with ARIN.


--sjk


Re: FCC Issues Rule Allowing FBI to Dictate Wiretap-Friendly Design for In ternet Services

2005-08-06 Thread sjk


On Sat, 6 Aug 2005, Randy Bush wrote:




It also hobbles technical innovation by forcing companies involved in
broadband to redesign their products to meet government requirements.


As opposed to hobbling innovation by meeting customer requirements?


who's paying the bill?  and sorry to hear from a vendor that meeting
the customers' requirements is such a negative thing.

randy



We all pay the bill with higher equipment costs, the maintenance of 
configurations, and possible storage costs. CALEA was bound to include 
VoIP services - given the definition of telecom carrier in the act; however, 
as I recall -- and I may be wrong -- when CALEA was first passed the 
carriers were given tax breaks and subsidies to implement changes. Is 
such financial help being offered today?


--sjk


Moving Files to new hardware

2005-06-29 Thread SJK

Hi Guys:

This is my first question after subscribing .. that makes me the newest fool on 
the block ..:)

I have a FreeBSD 5.0 box online currently. I have just bought new hardware to 
replace the existing one online. The new box I am building is sitting in my 
basement on a broadband connection. I am using a router on this home network. My 
new box can go out on the internet using lynx. I have done no further 
configuration at this point, which is why I am sending you this.

Is it possible to copy the files from the old server already online directly to 
this new server using broadband? What do I need to know and do to accomplish 
this? I appreciate any other insight into making this change-over.
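
One commonly suggested approach -- sketched here with placeholder host names and paths, not a tested recipe for this particular pair of machines -- is to pull the data over ssh from the new box, assuming sshd is reachable on the old server:

# copy configuration with rsync, if it is installed on both ends
rsync -avz olduser@old.server.example:/usr/local/etc/ /usr/local/etc/

# or stream a whole tree with tar over ssh
ssh olduser@old.server.example 'tar cf - /home' | tar xpf - -C /

System binaries are usually better reinstalled from the release media; user data and configuration are the parts worth copying this way.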

Thanks,

Ketamia

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Service providers that NAT their whole network?

2005-04-15 Thread sjk



 A number of IETF documents(*) state that there are some service providers
 that place a NAT box in front of their entire network, so all their
 customers get private addresses rather than public address.
 It is often stated that these are primarily cable-based providers.

 I am trying to get a handle on how common this practice is.
 No one that I have asked seems to know any provider that does this,
 and a search of a few FAQs plus about an hour of Googling hasn't
 turned up anything definite (but maybe I am using the wrong keywords ...).

We NAT a portion of our residential users -- not all of our network. As I
recall, our current NAT pools are comprised of a /21.

--sjk




[JBoss-user] [Beginners Corner] - Monitoring Entity Bean Locks

2005-03-04 Thread sjk
Does anyone know of a way to monitor entity bean locks in JBoss 3.2.1 other 
than the entity lock monitor? I need more detail than what the entity lock 
monitor provides. Ideally, I would like to know every time a bean is locked and 
unlocked within a transaction. 

Thanks in advance for your help.
Stan

View the original post : 
http://www.jboss.org/index.html?module=bb&op=viewtopic&p=3868884#3868884

Reply to the post : 
http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=3868884


___
JBoss-user mailing list
JBoss-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/jboss-user


[JBoss-user] [Beginners Corner] - Identifying entity beans involved in an ApplicationDeadlockException

2004-12-08 Thread sjk
We are using JBoss 3.2.1. 
Our application has lengthy transactions that access many entity beans  through 
CMRs. Whenever an entity bean is accessed through a CMR, that bean is locked by 
JBoss. As a result, we've experienced ApplicationDeadlockException for a number 
of scenarios in our application. 

When an ApplicationDeadlockException occurs, one of the two (or more) entities 
involved in the deadlock can be identified easily by examining the stack trace. 
How can I identify the other entities that are involved in the deadlock?

Our application is complex enough that it is very difficult to keep track of 
the beans involved in a transaction.

I have tried using the entity lock monitor and it has been helpful from time to 
time but not always.

Thanks in advance for any help.
Stan

View the original post : 
http://www.jboss.org/index.html?module=bb&op=viewtopic&p=3857935#3857935

Reply to the post : 
http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=3857935


___
JBoss-user mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/jboss-user


Re: password managers

2004-06-14 Thread sjk
We use PMS (http://passwordms.sourceforge.net), but I keep meaning to
re-write parts of the code to make it multi-user friendly.
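
For comparison, a minimal single-user alternative -- shown only as an illustration, with an arbitrary file name and cipher choice -- is to keep the list in a symmetrically encrypted file:

$ gpg --symmetric --cipher-algo BLOWFISH passwords.txt    # writes passwords.txt.gpg
$ gpg --decrypt passwords.txt.gpg | less                  # view without leaving a plaintext copy behind

This does nothing for the multi-user case, of course; it is the same idea as the encrypted loopback, just per file.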


On Mon, 14 Jun 2004, andrew lattis wrote:

 currently i've got an ever growing password list in a plain text file
 stored on an encrypted loopback fs, this is getting cumbersome...

 figaro's password manager (package fpm) looks nice and uses blowfish to
 encrypt data but i can't find anything showing any type of third party
 audit.

 what does everyone else use to keep track of all there passwords?

 thanks,
 andrew

 --
 don't ask questions that lead to answers you don't want to hear



-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]




gdbserver and gdb cross debug?

2002-11-18 Thread sjk

William A. Gatliff wrote:

 Jikun:

 On Mon, Nov 11, 2002 at 03:39:07PM +0800, sjk wrote:
  I am porting Linux on EP405 board. So I use gdbserver and gdb to debug
  applications,but only the first
  breakpoint could be stopped.Then if i press 'step', the program will
  execute to end straightforwardly . Any other breakpoints could not break
  the execution.

 Have you compiled the application in question with optimizations?  Gdb
 isn't so good at debugging optimized code.  Compile with -g -O0 if you
 intend to debug.

I tried to compile hello.c as follows:

ppc_405-gcc -g -O0 -o hello hello.c
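
(The hello.c source is not shown in the thread; a minimal stand-in of the kind presumably being used, handy for reproducing the single-step problem, would be:)

/* hello.c -- trivial test program for cross-debugging */
#include <stdio.h>

int main(void)
{
    int i;
    for (i = 0; i < 3; i++)        /* set a breakpoint on this line */
        printf("Hello World %d\n", i);
    return 0;
}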

Then I set up the connection between gdbserver and ppc_405-gdb. The program could not
be debugged step by step either.
Is it possible that there is a bug in ptrace.c or traps.c?


Thanks a lot

Jikun

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/





gdbserver and gdb cross debug?

2002-11-11 Thread sjk

Hi,

I am porting Linux to an EP405 board. I use gdbserver and gdb to debug
applications, but only the first
breakpoint is ever hit. Then, if I press 'step', the program runs
straight through to the end. No other breakpoints stop
the execution.
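
For reference, the host/target setup in question looks roughly like this; the port number and target address are examples, not the values actually used on the board:

# on the EP405 target:
gdbserver host:2345 ./hello

# on the development host:
ppc_405-gdb ./hello
(gdb) target remote <target-ip>:2345
(gdb) break main
(gdb) continue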

Please give me some advice!


Best Regards


Jikun Sun

BMR

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/





gdbserver and gdb cross debug?

2002-11-11 Thread sjk

Dr. Craig,

I am using the MontaVista Linux LSP. The ibm-walnut-ppc405gp LSP works correctly
on the walnut board. We can cross-debug applications with ppc_405-gdb and
gdbserver (version 5.1).
But the following problem occurs on the EP405 board.
Is it possible that the problem is caused by the library, memory, or the
initialization of the PPC405GP registers?

Sun

Dr. Craig Hollabaugh wrote:

  I am porting Linux on EP405 board. So I use gdbserver and gdb to debug
  applications,but only the first
  breakpoint could be stopped.Then if i press 'step', the program will
  execute to end straightforwardly . Any other breakpoints could not break
  the execution.

 Which tools are you using? I've been using the fine ELDK from denx on a
 walnut board without problems like you mention above.

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/





Why the gdbserver can't debug step by step?

2002-11-04 Thread sjk

Hi,

I have a very curious problem with the EP405 board. I make a kernel image
from the ibm-walnut405gp LSP and boot the Linux kernel on the EP405 board. So
I use gdbserver and ppc_405-gdb to debug applications, but only the first
breakpoint is ever hit. Then the
program runs to the end if you press 'step'. No other breakpoints
break the execution.

We have encountered this problem on the CSB272 405GP as well. I think that
the IBM-WALNUT405gp LSP should not have this problem.

Please give me some advice!


Best Regards


Jikun Sun

BMR

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/





cross debug with gdb/gdbserver on 405GP

2002-09-20 Thread sjk

Hi,

I am porting Linux to a CSB272 405gp board. The Linux kernel works.
But I can't debug applications with gdb and gdbserver.
The problem is as follows:

I want to cross-debug applications with gdbserver and gdb, but only one
breakpoint can break. Any execute command, including stepi, makes the
application run to the end and exit. The application runs correctly. Why
can't I debug the program step by step?

ppc_405-gdb version 5.2.
debug information:
on host:
#ppc_405-gcc -g -ggdb -o hello hello.c
#ddd --debugger ppc_405-gdb  --gdb hello
(gdb) target remote target:1000
0x30013a00 in ?? ()
(gdb) break hello.c:4
Breakpoint 1 at 0x14f8: file hello.c, line 4.
(gdb) cont
Breakpoint 1, main () at hello.c:4
(gdb) step
Program exited with code 037.
(gdb)

on target:
#gdbserver host:1000 hello
Process hello created; pid = 97
Remote debugging using 192.9.200.69:1000
Hello World
Child exited with retcode = c
Child exited with status 253
GDBserver exiting

please give some suggestions!
thanks


Jikun Sun
jack.sun at bmrtech.com


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/





Re: Is snort-stat and 5snort really broken in sid?

2001-09-11 Thread sjk

What version are you using??
make sure the following line is in your snort.conf -- I think the debian
equiv is snort-lib:

output alert_syslog: LOG_AUTH
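
The other half, if the cron script only parses auth.log, is making sure snort itself is started with -s so alerts also reach syslog. An illustrative invocation -- the interface and config path are assumptions, adjust them to the ip-up.d script actually in use -- would be:

snort -D -s -i ppp0 -c /etc/snort/snort.conf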

--sjk

On 12 Sep, Andrew Pollock wrote:
 Hi,
 
 I've always had problems with 5snort killing snort daily when snort's running in
 dialup mode (I fixed that by commenting out the restart line) but I'm not
 getting anything in the daily notification emails either.
 
 /etc/ppp/ip-up.d/snort doesn't start snort with -s, so nothing goes into
 /var/log/auth.log, everything goes into /var/log/snort/alert
 
 /etc/cron.daily/5snort doesn't read this particular file, it only looks at
 auth.log
 
 Even if I run snort-stat manually on auth.log (after I've made snort start with
 -s) it doesn't return anything when there are alerts in the log.
 
 Any suggestions appreciated, I'd like to get daily summary emails.
 
 Andrew
 
 

-- 
 Aude Sepere ---
[EMAIL PROTECTED]
 Audax et Cautus ---



-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]





Re: Running/Compiling latest snort on potato

2001-09-03 Thread sjk

Compiled and ran fine for me with libpcap 0.4a6. 

--sjk

On  4 Sep, Shane Machon wrote:
 Greetings,
 
 Anyone had success compiling snort 1.81 on a stable potato box?
 
 Looking at the snort website, there is a question regarding libpcap 
 0.5 under Redhat that will cause problems; does anyone know if this is
 redhat-specific? Potato only offers libpcap0 0.4a6-3.
 
 I dont have to have 1.81 of snort (would be nice though!), just db
 support (1.7 or above)
 
 Any success stories?
 
 I know there are now debian packages for snort, but going to
 unstable/testing isnt an option ;)
 
 
 Any responces appreciated.
 
 Cheers,
  
 SHANE MACHON
 Network Administrator
 Technical Project Manager
 Two Purple Plums Pty Ltd.
 TPP Internet Development 
 (NetNames Australasia) 
 
   PO Box 334, Manly 
   NSW, 1655, Australia 
   Tel. +61 2 9970 5242 
   Fax. +61 2 9970 8262 
   Eml. [EMAIL PROTECTED] 
 
 == 
 TPP Internet Development (NetNames Australasia) 
 The International Domain Name Registry 
 Registering Domain Names in over 200 countries 
 http://www.netnames.com.au 
 http://www.internetdevelopment.com.au 
 http://www.twoplums.com.au 
 ==
 
 

-- 
 Aude Sepere ---
[EMAIL PROTECTED]
 Audax et Cautus ---



-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]






Re: FW: ArcServe Client for Linux

2000-05-31 Thread sjk

The only issue I am aware of is that shadow passwords must be turned off.
Other than that it was pretty simple to install and write an init.d script
for. If you have any other questions, please feel free to contact me.
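
A bare-bones init.d skeleton of the sort referred to above; the install directory and the agent's start/stop commands are placeholders, since the exact path and binary name depend on the CA package installed:

#!/bin/sh
# /etc/init.d/arcserve-agent -- illustrative only
AGENT_DIR=/opt/ARCserve/agent          # hypothetical install location
case "$1" in
  start)   $AGENT_DIR/agent start ;;   # placeholder start command
  stop)    $AGENT_DIR/agent stop ;;    # placeholder stop command
  restart) $0 stop; $0 start ;;
  *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac
exit 0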

- sjk

 On Tue, 30 May 2000, [iso-8859-1] Carlos Bambó wrote:

 Excuse me the previous message, it was rather messy, the question is as
 follows:
 
 
  Could you please tell me the release of the Linux agent  you've used and
 if
  you did anything special (on the installation) to make it work?.
 
  Thanks for your help.
 
 
  - Mensaje original -
  De: [EMAIL PROTECTED]
  Para: Carlos Bambó [EMAIL PROTECTED]
  Enviado: martes 23 de mayo de 2000 16:46
  Asunto: Re: ArcServe Client for Linux
 
 
  Yes, I have used it frequently - and perform both regular backups and
  restores. I have installed the agent usually on potato and woody and
  ArcServe versions 6.5 Enterprise and 6.61 Advanced running on NT 4 SP5.
 
 
 
  On Tue, 23 May 2000, [iso-8859-1] Carlos Bambó wrote:
 
   Has anybody tried to perform a backup of a Debian workstation using the
   ArcServe (CAI) client for Linux?.
   More specifically from a NT Server.
   If you did succeed please let me know the release and conditions.
  
   Thanks
  
 
  --
  --
   Carlos Bambo
   [EMAIL PROTECTED]
 
  --
  -
  
  
  
  
  
   --
   Unsubscribe?  mail -s unsubscribe [EMAIL PROTECTED] 
  /dev/null
  
  
 
 
 
 
 
 



potato install w/ aic7880

2000-05-20 Thread sjk

I am having a terrible time trying to get potato to install on a
machine with an aic7880 scsi controller. The current rescue.bin hangs
at loading sym53c416 - just after the aic78xxx mods. I have tried
compiling a new kernel with the options listed in the install doc -
and the install begins, but 1) it can't write the tmp keyboard config, 
and 2) the driver script fails. I can't seem to mount any of the
driver disks to update the modules.tgz file - what file system do
these disks use?? I have tried re-writing them several times.
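
For what it is worth, the driver floppies of that era were, as best I recall, plain ext2 images (treat that as a guess rather than a certainty), so mounting one should look something like:

mount -t ext2 /dev/fd0 /mnt
# or loop-mount the downloaded image directly instead of trusting a re-written floppy
# (the file name below is a placeholder):
mount -o loop -t ext2 driver-1.bin /mnt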

Any help would be much appreciated - Thanks



Re: X windows Crashes

1999-02-23 Thread SJK



 Old box works fine. A little slow, but stable.
 
 The Cyrix box works great outside of X windows.  It loads x windows fine,
 and appears to operate correctly.  But after about 5 minutes in X, it
 crashes, locking up completely and I haven't found a way out w/out a total
 re-boot.
 
 I played around with the video settings, but I'm unable to resolve it.
 

 I do not know what video card you are using. If you are using
 an accelerated card, try using the XF86_SVGA server instead of
 the accelerated server. I have seen such X crashes with the accelerated server.
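
On XFree86 3.x the server binary is selected by the distribution's configuration rather than rebuilt -- on some systems an /etc/X11/X symlink, on others a small config file -- so switching is only a matter of pointing that selection at the SVGA server. A sketch, with paths that may well differ on your system:

# point the X server selection at the unaccelerated SVGA server
ln -sf /usr/X11R6/bin/XF86_SVGA /etc/X11/X
# then restart X and see whether the lockups go away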

sjk


-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to [EMAIL PROTECTED]