[jira] [Resolved] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-08-05 Thread Erik van Oosten (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten resolved KAFKA-14972.
-
Resolution: Won't Fix

> Make KafkaConsumer usable in async runtimes
> ---
>
> Key: KAFKA-14972
> URL: https://issues.apache.org/jira/browse/KAFKA-14972
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Erik van Oosten
>Priority: Major
>  Labels: needs-kip
>
> KafkaConsumer contains a check that rejects nested invocations from different 
> threads (method {{{}acquire{}}}). For users of an async runtime, this 
> requirement is almost impossible to meet. Affected async runtimes include 
> Kotlin coroutines (see KAFKA-7143) and ZIO.
> It should be possible for a thread to pass on its capability to access the 
> consumer to another thread. See 
> [KIP-944|https://cwiki.apache.org/confluence/x/chw0Dw] for a proposal and 
> [https://github.com/apache/kafka/pull/13914] for an implementation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)



[jira] [Commented] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-08-05 Thread Erik van Oosten (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17751330#comment-17751330
 ] 

Erik van Oosten commented on KAFKA-14972:
-

I am closing this task as Won't Fix since the committers do not seem convinced 
that support for async runtimes is needed.

> Make KafkaConsumer usable in async runtimes
> ---
>
> Key: KAFKA-14972
> URL: https://issues.apache.org/jira/browse/KAFKA-14972
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Erik van Oosten
>Priority: Major
>  Labels: needs-kip
>
> KafkaConsumer contains a check that rejects nested invocations from different 
> threads (method {{{}acquire{}}}). For users of an async runtime, this 
> requirement is almost impossible to meet. Affected async runtimes include 
> Kotlin coroutines (see KAFKA-7143) and ZIO.
> It should be possible for a thread to pass on its capability to access the 
> consumer to another thread. See 
> [KIP-944|https://cwiki.apache.org/confluence/x/chw0Dw] for a proposal and 
> [https://github.com/apache/kafka/pull/13914] for an implementation.





[jira] [Assigned] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-08-05 Thread Erik van Oosten (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten reassigned KAFKA-14972:
---

Assignee: (was: Erik van Oosten)

> Make KafkaConsumer usable in async runtimes
> ---
>
> Key: KAFKA-14972
> URL: https://issues.apache.org/jira/browse/KAFKA-14972
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Erik van Oosten
>Priority: Major
>  Labels: needs-kip
>
> KafkaConsumer contains a check that rejects nested invocations from different 
> threads (method {{{}acquire{}}}). For users of an async runtime, this 
> requirement is almost impossible to meet. Affected async runtimes include 
> Kotlin coroutines (see KAFKA-7143) and ZIO.
> It should be possible for a thread to pass on its capability to access the 
> consumer to another thread. See 
> [KIP-944|https://cwiki.apache.org/confluence/x/chw0Dw] for a proposal and 
> [https://github.com/apache/kafka/pull/13914] for an implementation.





[jira] [Updated] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-06-28 Thread Erik van Oosten (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated KAFKA-14972:

Description: 
KafkaConsumer contains a check that rejects nested invocations from different 
threads (method {{{}acquire{}}}). For users of an async runtime, this 
requirement is almost impossible to meet. Affected async runtimes include 
Kotlin coroutines (see KAFKA-7143) and ZIO.

It should be possible for a thread to pass on its capability to access the 
consumer to another thread. See 
[KIP-944|https://cwiki.apache.org/confluence/x/chw0Dw] for a proposal and 
[https://github.com/apache/kafka/pull/13914] for an implementation.

  was:
KafkaConsumer contains a check that rejects nested invocations from different 
threads (method {{{}acquire{}}}). For users of an async runtime, this 
requirement is almost impossible to meet. Affected async runtimes include 
Kotlin coroutines (see KAFKA-7143) and ZIO.

It should be possible for a thread to pass on its capability to access the 
consumer to another thread. See KIP-944 for a proposal and


> Make KafkaConsumer usable in async runtimes
> ---
>
> Key: KAFKA-14972
> URL: https://issues.apache.org/jira/browse/KAFKA-14972
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Erik van Oosten
>Assignee: Erik van Oosten
>Priority: Major
>  Labels: needs-kip
>
> KafkaConsumer contains a check that rejects nested invocations from different 
> threads (method {{{}acquire{}}}). For users of an async runtime, this 
> requirement is almost impossible to meet. Affected async runtimes include 
> Kotlin coroutines (see KAFKA-7143) and ZIO.
> It should be possible for a thread to pass on its capability to access the 
> consumer to another thread. See 
> [KIP-944|https://cwiki.apache.org/confluence/x/chw0Dw] for a proposal and 
> [https://github.com/apache/kafka/pull/13914] for an implementation.





[jira] [Updated] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-06-28 Thread Erik van Oosten (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated KAFKA-14972:

Description: 
KafkaConsumer contains a check that rejects nested invocations from different 
threads (method {{{}acquire{}}}). For users of an async runtime, this 
requirement is almost impossible to meet. Affected async runtimes include 
Kotlin coroutines (see KAFKA-7143) and ZIO.

It should be possible for a thread to pass on its capability to access the 
consumer to another thread. See KIP-944 for a proposal and

  was:
KafkaConsumer contains a check that rejects nested invocations from different 
threads (method {{{}acquire{}}}). For users of an async runtime, this 
requirement is almost impossible to meet. Affected async runtimes include 
Kotlin coroutines (see KAFKA-7143) and ZIO.

We propose to replace the thread-id check with an access-id that is stored in a 
thread-local variable. Existing programs will not be affected. Developers 
working in an async runtime can pick up the access-id and set it in the 
thread-local variable from a thread of their choosing.

Every time a callback is invoked a new access-id is generated. When the 
callback completes, the previous access-id is restored.

This proposal does not make it impossible to use the client incorrectly. 
However, we think it strikes a good balance between making correct usage from 
an async runtime possible while making incorrect usage difficult.

Alternatives considered:
 # Configuration that switches off the check completely.


> Make KafkaConsumer usable in async runtimes
> ---
>
> Key: KAFKA-14972
> URL: https://issues.apache.org/jira/browse/KAFKA-14972
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Erik van Oosten
>Assignee: Erik van Oosten
>Priority: Major
>  Labels: needs-kip
>
> KafkaConsumer contains a check that rejects nested invocations from different 
> threads (method {{{}acquire{}}}). For users of an async runtime, this 
> requirement is almost impossible to meet. Affected async runtimes include 
> Kotlin coroutines (see KAFKA-7143) and ZIO.
> It should be possible for a thread to pass on its capability to access the 
> consumer to another thread. See KIP-944 for a proposal and





[jira] [Commented] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-06-28 Thread Erik van Oosten (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738014#comment-17738014
 ] 

Erik van Oosten commented on KAFKA-14972:
-

KIP-944 https://cwiki.apache.org/confluence/x/chw0Dw

> Make KafkaConsumer usable in async runtimes
> ---
>
> Key: KAFKA-14972
> URL: https://issues.apache.org/jira/browse/KAFKA-14972
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Erik van Oosten
>Assignee: Erik van Oosten
>Priority: Major
>  Labels: needs-kip
>
> KafkaConsumer contains a check that rejects nested invocations from different 
> threads (method {{{}acquire{}}}). For users of an async runtime, this 
> requirement is almost impossible to meet. Affected async runtimes include 
> Kotlin coroutines (see KAFKA-7143) and ZIO.
> We propose to replace the thread-id check with an access-id that is stored in 
> a thread-local variable. Existing programs will not be affected. Developers 
> working in an async runtime can pick up the access-id and set it in the 
> thread-local variable from a thread of their choosing.
> Every time a callback is invoked a new access-id is generated. When the 
> callback completes, the previous access-id is restored.
> This proposal does not make it impossible to use the client incorrectly. 
> However, we think it strikes a good balance between making correct usage from 
> an async runtime possible while making incorrect usage difficult.
> Alternatives considered:
>  # Configuration that switches off the check completely.
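The callback rule quoted above (a fresh access-id while a callback runs, the previous id restored afterwards) can be sketched in plain Java. The names here are invented for illustration and are not the actual KIP-944 API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the callback rule (hypothetical names, not the KIP-944 API):
// while a user callback runs, a freshly generated access-id is in effect,
// and the previous access-id is restored when the callback completes.
final class CallbackScope {
    private static final AtomicLong NEXT_ID = new AtomicLong(1);
    static final ThreadLocal<Long> CURRENT = ThreadLocal.withInitial(() -> 0L);

    // Invoke a user callback under a fresh access-id, restoring the old one.
    static void runCallback(Runnable callback) {
        long previous = CURRENT.get();
        CURRENT.set(NEXT_ID.getAndIncrement());  // new id for the callback's duration
        try {
            callback.run();
        } finally {
            CURRENT.set(previous);               // previous id restored on exit
        }
    }
}
```

The try/finally guarantees the restore even when the callback throws, so a misbehaving rebalance listener cannot leave a stale access-id behind.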





[jira] [Commented] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-06-27 Thread Erik van Oosten (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17737711#comment-17737711
 ] 

Erik van Oosten commented on KAFKA-14972:
-

I will complete the KIP tomorrow.

> Make KafkaConsumer usable in async runtimes
> ---
>
> Key: KAFKA-14972
> URL: https://issues.apache.org/jira/browse/KAFKA-14972
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Erik van Oosten
>Assignee: Erik van Oosten
>Priority: Major
>  Labels: needs-kip
>
> KafkaConsumer contains a check that rejects nested invocations from different 
> threads (method {{{}acquire{}}}). For users of an async runtime, this 
> requirement is almost impossible to meet. Affected async runtimes include 
> Kotlin coroutines (see KAFKA-7143) and ZIO.
> We propose to replace the thread-id check with an access-id that is stored in 
> a thread-local variable. Existing programs will not be affected. Developers 
> working in an async runtime can pick up the access-id and set it in the 
> thread-local variable from a thread of their choosing.
> Every time a callback is invoked a new access-id is generated. When the 
> callback completes, the previous access-id is restored.
> This proposal does not make it impossible to use the client incorrectly. 
> However, we think it strikes a good balance between making correct usage from 
> an async runtime possible while making incorrect usage difficult.
> Alternatives considered:
>  # Configuration that switches off the check completely.





[jira] [Commented] (KAFKA-10337) Wait for pending async commits in commitSync() even if no offsets are specified

2023-06-08 Thread Erik van Oosten (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17730665#comment-17730665
 ] 

Erik van Oosten commented on KAFKA-10337:
-

Thanks for your PR [~thomaslee]. It has now been merged with only minor changes.

> Wait for pending async commits in commitSync() even if no offsets are 
> specified
> ---
>
> Key: KAFKA-10337
> URL: https://issues.apache.org/jira/browse/KAFKA-10337
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Tom Lee
>Assignee: Erik van Oosten
>Priority: Major
> Fix For: 3.6.0
>
>
> The JavaDoc for commitSync() states the following:
> {quote}Note that asynchronous offset commits sent previously with the
> {@link #commitAsync(OffsetCommitCallback)}
>  (or similar) are guaranteed to have their callbacks invoked prior to 
> completion of this method.
> {quote}
> But should we happen to call the method with an empty offset map
> (i.e. commitSync(Collections.emptyMap())) the callbacks for any incomplete 
> async commits will not be invoked because of an early return in 
> ConsumerCoordinator.commitOffsetsSync() when the input map is empty.
> If users are doing manual offset commits and relying on commitSync as a 
> barrier for in-flight async commits prior to a rebalance, this could be an 
> important (though somewhat implementation-dependent) detail.
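The pitfall described above can be illustrated with a small stand-alone simulation in plain Java (this models the pattern, it is not the actual ConsumerCoordinator code): an empty offset map must not short-circuit past the pending async callbacks.

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Map;
import java.util.Queue;

// Stand-alone simulation of the KAFKA-10337 pitfall (not the real
// ConsumerCoordinator code): commitSync() should invoke the callbacks of
// earlier commitAsync() calls even when given an empty offset map.
final class CommitBarrier {
    private final Queue<Runnable> pendingAsyncCallbacks = new ArrayDeque<>();

    void commitAsync(Runnable callback) {
        pendingAsyncCallbacks.add(callback); // completion deferred, as in the real client
    }

    // Buggy shape: the early return on an empty map skips pending callbacks.
    void commitSyncBuggy(Map<String, Long> offsets) {
        if (offsets.isEmpty()) return;       // early return: callbacks never fire
        drainPending();
    }

    // Fixed shape: drain pending async commits before the empty-map shortcut,
    // so commitSync() still acts as a barrier for in-flight async commits.
    void commitSyncFixed(Map<String, Long> offsets) {
        drainPending();
        if (offsets.isEmpty()) return;
    }

    private void drainPending() {
        Runnable cb;
        while ((cb = pendingAsyncCallbacks.poll()) != null) cb.run();
    }

    int pendingCount() { return pendingAsyncCallbacks.size(); }
}
```

Callers who use `commitSync(Collections.emptyMap())` as a barrier before a rebalance rely on the fixed shape.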





[jira] [Created] (KAFKA-14972) Make KafkaConsumer usable in async runtimes

2023-05-07 Thread Erik van Oosten (Jira)
Erik van Oosten created KAFKA-14972:
---

 Summary: Make KafkaConsumer usable in async runtimes
 Key: KAFKA-14972
 URL: https://issues.apache.org/jira/browse/KAFKA-14972
 Project: Kafka
  Issue Type: Wish
  Components: consumer
Reporter: Erik van Oosten


KafkaConsumer contains a check that rejects nested invocations from different 
threads (method {{{}acquire{}}}). For users of an async runtime, this 
requirement is almost impossible to meet. Affected async runtimes include 
Kotlin coroutines (see KAFKA-7143) and ZIO.

We propose to replace the thread-id check with an access-id that is stored in a 
thread-local variable. Existing programs will not be affected. Developers 
working in an async runtime can pick up the access-id and set it in the 
thread-local variable from a thread of their choosing.

Every time a callback is invoked a new access-id is generated. When the 
callback completes, the previous access-id is restored.

This proposal does not make it impossible to use the client incorrectly. 
However, we think it strikes a good balance between making correct usage from 
an async runtime possible while making incorrect usage difficult.

Alternatives considered:
 # Configuration that switches off the check completely.
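The proposal above can be sketched in plain Java. This is an illustrative sketch only; the class and method names (ConsumerAccess, exportAccessId, adoptAccessId) are invented for the example and are not the actual KIP-944 API:

```java
import java.util.ConcurrentModificationException;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of an access-id check replacing the thread-id check (hypothetical
// names, not the real KIP-944 API). The first caller claims access with a
// fresh id; any thread holding the same id in its thread-local may call in.
final class ConsumerAccess {
    private static final AtomicLong NEXT_ID = new AtomicLong(1);
    private static final ThreadLocal<Long> CURRENT = ThreadLocal.withInitial(() -> 0L);

    private volatile long grantedId = 0L;

    // Stands in for KafkaConsumer's acquire(): reject callers without the id.
    void acquire() {
        long granted = grantedId;
        if (granted == 0L) {
            grantedId = NEXT_ID.getAndIncrement();   // first caller claims access
            CURRENT.set(grantedId);
        } else if (CURRENT.get() != granted) {
            throw new ConcurrentModificationException(
                "KafkaConsumer is not safe for multi-threaded access");
        }
    }

    // A thread that holds access can export its id ...
    long exportAccessId() {
        return CURRENT.get();
    }

    // ... and another thread can adopt it before calling into the consumer.
    void adoptAccessId(long id) {
        CURRENT.set(id);
    }
}
```

A worker thread of an async runtime would call adoptAccessId with the id exported by the thread that created the consumer, after which acquire() passes; threads that never adopt the id are still rejected, as today.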






[jira] [Commented] (KAFKA-10337) Wait for pending async commits in commitSync() even if no offsets are specified

2023-05-06 Thread Erik van Oosten (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720114#comment-17720114
 ] 

Erik van Oosten commented on KAFKA-10337:
-

[~thomaslee] when we use commitAsync from the rebalance listener (potentially 
with empty offsets), no polling takes place anymore. Shall I amend the PR so 
that it does polling from commitAsync as well? WDYT?

> Wait for pending async commits in commitSync() even if no offsets are 
> specified
> ---
>
> Key: KAFKA-10337
> URL: https://issues.apache.org/jira/browse/KAFKA-10337
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Tom Lee
>Assignee: Kirk True
>Priority: Major
>
> The JavaDoc for commitSync() states the following:
> {quote}Note that asynchronous offset commits sent previously with the
> {@link #commitAsync(OffsetCommitCallback)}
>  (or similar) are guaranteed to have their callbacks invoked prior to 
> completion of this method.
> {quote}
> But should we happen to call the method with an empty offset map
> (i.e. commitSync(Collections.emptyMap())) the callbacks for any incomplete 
> async commits will not be invoked because of an early return in 
> ConsumerCoordinator.commitOffsetsSync() when the input map is empty.
> If users are doing manual offset commits and relying on commitSync as a 
> barrier for in-flight async commits prior to a rebalance, this could be an 
> important (though somewhat implementation-dependent) detail.





[jira] [Commented] (KAFKA-10337) Wait for pending async commits in commitSync() even if no offsets are specified

2023-05-06 Thread Erik van Oosten (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720113#comment-17720113
 ] 

Erik van Oosten commented on KAFKA-10337:
-

Opened [~thomaslee] 's PR again: https://github.com/apache/kafka/pull/13678

> Wait for pending async commits in commitSync() even if no offsets are 
> specified
> ---
>
> Key: KAFKA-10337
> URL: https://issues.apache.org/jira/browse/KAFKA-10337
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Tom Lee
>Assignee: Kirk True
>Priority: Major
>
> The JavaDoc for commitSync() states the following:
> {quote}Note that asynchronous offset commits sent previously with the
> {@link #commitAsync(OffsetCommitCallback)}
>  (or similar) are guaranteed to have their callbacks invoked prior to 
> completion of this method.
> {quote}
> But should we happen to call the method with an empty offset map
> (i.e. commitSync(Collections.emptyMap())) the callbacks for any incomplete 
> async commits will not be invoked because of an early return in 
> ConsumerCoordinator.commitOffsetsSync() when the input map is empty.
> If users are doing manual offset commits and relying on commitSync as a 
> barrier for in-flight async commits prior to a rebalance, this could be an 
> important (though somewhat implementation-dependent) detail.





[jira] [Resolved] (SPARK-27025) Speed up toLocalIterator

2019-03-05 Thread Erik van Oosten (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-27025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten resolved SPARK-27025.
-
Resolution: Incomplete

> Speed up toLocalIterator
> 
>
> Key: SPARK-27025
> URL: https://issues.apache.org/jira/browse/SPARK-27025
> Project: Spark
>  Issue Type: Wish
>  Components: Spark Core
>Affects Versions: 2.3.3
>Reporter: Erik van Oosten
>Priority: Major
>
> Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
> However, as far as I can see, computation for the yet-to-be-fetched 
> partitions is not kicked off until each partition is fetched. Effectively, 
> only one partition is computed at a time. 
> Desired behavior: immediately start computing all partitions while 
> retaining the fetch-one-partition-at-a-time behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-27025) Speed up toLocalIterator

2019-03-05 Thread Erik van Oosten (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784854#comment-16784854
 ] 

Erik van Oosten commented on SPARK-27025:
-

If there is no obvious way to improve Spark, then it's probably better to close 
this issue until someone finds a better angle.

BTW, the cache/count/iterate/unpersist cycle did not make it faster for my use 
case. I will try the 2-partition implementation of toLocalIterator.

[~srowen], [~hyukjin.kwon], thanks for your input!

> Speed up toLocalIterator
> 
>
> Key: SPARK-27025
> URL: https://issues.apache.org/jira/browse/SPARK-27025
> Project: Spark
>  Issue Type: Wish
>  Components: Spark Core
>Affects Versions: 2.3.3
>Reporter: Erik van Oosten
>Priority: Major
>
> Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
> However, as far as I can see, computation for the yet-to-be-fetched 
> partitions is not kicked off until each partition is fetched. Effectively, 
> only one partition is computed at a time. 
> Desired behavior: immediately start computing all partitions while 
> retaining the fetch-one-partition-at-a-time behavior.






[jira] [Comment Edited] (SPARK-27025) Speed up toLocalIterator

2019-03-04 Thread Erik van Oosten (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783220#comment-16783220
 ] 

Erik van Oosten edited comment on SPARK-27025 at 3/4/19 10:36 AM:
--

[~hyukjin.kwon] maybe I misunderstood Sean's comment. I understood that every 
invocation of toLocalIterator will either benefit, or not have any negative 
side effect.

Under this assumption, it would be better to put the 
cache/count/iterate/unpersist logic directly in toLocalIterator.

I cannot make any assumptions about the number of use cases.


was (Author: erikvanoosten):
[~hyukjin.kwon] maybe I misunderstood Sean's comment. I understood that every 
invocation of toLocalIterator will either benefit, or not have any negative 
side effect.

Under this assumption, it would be better to put the 
cache/count/iterate/unpersist logic directly in toLocalIterator.

> Speed up toLocalIterator
> 
>
> Key: SPARK-27025
> URL: https://issues.apache.org/jira/browse/SPARK-27025
> Project: Spark
>  Issue Type: Wish
>  Components: Spark Core
>Affects Versions: 2.3.3
>Reporter: Erik van Oosten
>Priority: Major
>
> Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
> However, as far as I can see, computation for the yet-to-be-fetched 
> partitions is not kicked off until each partition is fetched. Effectively, 
> only one partition is computed at a time. 
> Desired behavior: immediately start computing all partitions while 
> retaining the fetch-one-partition-at-a-time behavior.






[jira] [Commented] (SPARK-27025) Speed up toLocalIterator

2019-03-04 Thread Erik van Oosten (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783220#comment-16783220
 ] 

Erik van Oosten commented on SPARK-27025:
-

[~hyukjin.kwon] maybe I misunderstood Sean's comment. I understood that every 
invocation of toLocalIterator will either benefit, or not have any negative 
side effect.

Under this assumption, it would be better to put the 
cache/count/iterate/unpersist logic directly in toLocalIterator.

> Speed up toLocalIterator
> 
>
> Key: SPARK-27025
> URL: https://issues.apache.org/jira/browse/SPARK-27025
> Project: Spark
>  Issue Type: Wish
>  Components: Spark Core
>Affects Versions: 2.3.3
>Reporter: Erik van Oosten
>Priority: Major
>
> Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
> However, as far as I can see, computation for the yet-to-be-fetched 
> partitions is not kicked off until each partition is fetched. Effectively, 
> only one partition is computed at a time. 
> Desired behavior: immediately start computing all partitions while 
> retaining the fetch-one-partition-at-a-time behavior.






[jira] [Commented] (SPARK-27025) Speed up toLocalIterator

2019-03-04 Thread Erik van Oosten (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783110#comment-16783110
 ] 

Erik van Oosten commented on SPARK-27025:
-

Thanks Sean, that is very useful.

In my use case the entire data set is too big for the driver, but I can easily 
fit 1/10th of it. So even with as little as 20 partitions, 2 partitions on the 
driver would be fine.
In the use case there are 2 joins, and a groupby/count so this is probably a 
wide transformation. So it seems that the cache/count/toLocalIterator/unpersist 
approach is applicable.

The ergonomics of this approach are way worse, so I don't agree that it is 
'better' to do this in application code.

> Speed up toLocalIterator
> 
>
> Key: SPARK-27025
> URL: https://issues.apache.org/jira/browse/SPARK-27025
> Project: Spark
>  Issue Type: Wish
>  Components: Spark Core
>Affects Versions: 2.3.3
>Reporter: Erik van Oosten
>Priority: Major
>
> Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
> However, as far as I can see, computation for the yet-to-be-fetched 
> partitions is not kicked off until each partition is fetched. Effectively, 
> only one partition is computed at a time. 
> Desired behavior: immediately start computing all partitions while 
> retaining the fetch-one-partition-at-a-time behavior.






[jira] [Comment Edited] (SPARK-27025) Speed up toLocalIterator

2019-03-02 Thread Erik van Oosten (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782322#comment-16782322
 ] 

Erik van Oosten edited comment on SPARK-27025 at 3/2/19 8:43 AM:
-

The point is to _not_ fetch proactively.

I have a program in which several steps need to be executed before anything can 
be transferred to the driver. So why can't the executors start executing 
immediately, and only transfer the results to the driver once they are ready?


was (Author: erikvanoosten):
I have a program in which several steps need to be executed before anything can 
be transferred to the driver. So why can't the executors start executing 
immediately, and only transfer the results to the driver once they are ready?

> Speed up toLocalIterator
> 
>
> Key: SPARK-27025
> URL: https://issues.apache.org/jira/browse/SPARK-27025
> Project: Spark
>  Issue Type: Wish
>  Components: Spark Core
>Affects Versions: 2.3.3
>Reporter: Erik van Oosten
>Priority: Major
>
> Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
> However, as far as I can see, computation for the yet-to-be-fetched 
> partitions is not kicked off until each partition is fetched. Effectively, 
> only one partition is computed at a time. 
> Desired behavior: immediately start computing all partitions while 
> retaining the fetch-one-partition-at-a-time behavior.






[jira] [Commented] (SPARK-27025) Speed up toLocalIterator

2019-03-02 Thread Erik van Oosten (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782322#comment-16782322
 ] 

Erik van Oosten commented on SPARK-27025:
-

I have a program in which several steps need to be executed before anything can 
be transferred to the driver. So why can't the executors start executing 
immediately, and only transfer the results to the driver once they are ready?

> Speed up toLocalIterator
> 
>
> Key: SPARK-27025
> URL: https://issues.apache.org/jira/browse/SPARK-27025
> Project: Spark
>  Issue Type: Wish
>  Components: Spark Core
>Affects Versions: 2.3.3
>Reporter: Erik van Oosten
>Priority: Major
>
> Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
> However, as far as I can see, computation for the yet-to-be-fetched 
> partitions is not kicked off until each partition is fetched. Effectively, 
> only one partition is computed at a time. 
> Desired behavior: immediately start computing all partitions while 
> retaining the fetch-one-partition-at-a-time behavior.






[jira] [Created] (SPARK-27025) Speed up toLocalIterator

2019-03-01 Thread Erik van Oosten (JIRA)
Erik van Oosten created SPARK-27025:
---

 Summary: Speed up toLocalIterator
 Key: SPARK-27025
 URL: https://issues.apache.org/jira/browse/SPARK-27025
 Project: Spark
  Issue Type: Wish
  Components: Spark Core
Affects Versions: 2.3.3
Reporter: Erik van Oosten


Method {{toLocalIterator}} fetches the partitions to the driver one by one. 
However, as far as I can see, computation for the yet-to-be-fetched partitions 
is not kicked off until each partition is fetched. Effectively, only one 
partition is computed at a time.

Desired behavior: immediately start computing all partitions while retaining 
the fetch-one-partition-at-a-time behavior.






[jira] [Commented] (KAFKA-960) Upgrade Metrics to 3.x

2018-08-20 Thread Erik van Oosten (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585713#comment-16585713
 ] 

Erik van Oosten commented on KAFKA-960:
---

Metrics 4.x was released not so long ago. Its core is binary compatible with 
Metrics 3.x; however, many modules were split out of the core and got a 
different package name (and are therefore not compatible). For just collecting 
metrics, you're probably fine.

Please also be aware that Metrics 5.x has been on standby for more than half a 
year. Metrics 5 will support tags, but it is not binary compatible.

I recommend upgrading to Metrics 4.

> Upgrade Metrics to 3.x
> --
>
> Key: KAFKA-960
> URL: https://issues.apache.org/jira/browse/KAFKA-960
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 0.8.1
>Reporter: Cosmin Lehene
>Priority: Major
>
> Now that metrics 3.0 has been released 
> (http://metrics.codahale.com/about/release-notes/) we can upgrade back





[jira] [Commented] (FLINK-5633) ClassCastException: X cannot be cast to X when re-submitting a job.

2017-11-09 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245474#comment-16245474
 ] 

Erik van Oosten commented on FLINK-5633:


bq. Just curious, why are you creating a new reader for each record?

It's just a bit easier than caching a reader for each writer/reader schema 
combination.

> ClassCastException: X cannot be cast to X when re-submitting a job.
> ---
>
> Key: FLINK-5633
> URL: https://issues.apache.org/jira/browse/FLINK-5633
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, YARN
>Affects Versions: 1.1.4
>Reporter: Giuliano Caliari
>Priority: Minor
>
> I’m running a job on my local cluster and the first time I submit the job 
> everything works but whenever I cancel and re-submit the same job it fails 
> with:
> {quote}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
>   at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:101)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:634)
>   at au.com.my.package.pTraitor.OneTrait.execute(Traitor.scala:147)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$.delayedEndpoint$au$com$my$package$pTraitor$TraitorAppOneTrait$1(TraitorApp.scala:22)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$delayedInit$body.apply(TraitorApp.scala:21)
>   at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
>   at 
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.App$class.main(App.scala:76)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$.main(TraitorApp.scala:21)
>   at au.com.my.package.pTraitor.TraitorAppOneTrait.main(TraitorApp.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:339)
>   at 
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
>   at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
>   at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
>   at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:29)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1116)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:900)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> 

[jira] [Commented] (FLINK-5633) ClassCastException: X cannot be cast to X when re-submitting a job.

2017-11-07 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241701#comment-16241701
 ] 

Erik van Oosten commented on FLINK-5633:


[~StephanEwen] We need to process 130K msg/s, I guess that can be called often 
:) . Our process is CPU bound and parsing Avro is ±15% of that. Any improvement 
means we can run with fewer machines.

For every message we create a new SpecificDatumReader. If I follow the code 
correctly that should _not_ give a large overhead. The Schema instances we pass 
to it _are_ cached.

Then we call {{SpecificDatumReader.read}} to parse each Avro message. In that 
call you eventually end up in {{SpecificData.newInstance}} to create a new 
instance of the target class. The constructor of that class is looked up in a 
cache. That cache is declared as {{static}}. I do not understand how 
instantiating a new {{SpecificData}} for every call to {{read}} helps because 
it would still use the same cache. The code I pasted above also uses a 
constructor cache but the cache is not {{static}}. Reversing the class loader 
order should also work.
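For reference, caching one reader per writer/reader schema pair could look like the following sketch (untested; the class and method names are illustrative, not part of Avro's API). It mirrors the {{ConcurrentHashMap}}-backed cache idiom used elsewhere in this thread:

```scala
import java.util.concurrent.ConcurrentHashMap

import org.apache.avro.Schema
import org.apache.avro.specific.SpecificDatumReader
import scala.collection.JavaConverters._

// Illustrative sketch: one SpecificDatumReader per (writer schema, reader schema) pair,
// instead of constructing a new reader for every message.
class ReaderCache[T] {
  private val readers: scala.collection.concurrent.Map[(Schema, Schema), SpecificDatumReader[T]] =
    new ConcurrentHashMap[(Schema, Schema), SpecificDatumReader[T]]().asScala

  def readerFor(writer: Schema, reader: Schema): SpecificDatumReader[T] =
    readers.getOrElseUpdate((writer, reader), new SpecificDatumReader[T](writer, reader))
}
```

Since the cache is an instance field (not {{static}}), each class loader gets its own readers, avoiding the cross-classloader leak discussed above.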

> ClassCastException: X cannot be cast to X when re-submitting a job.
> ---
>
> Key: FLINK-5633
> URL: https://issues.apache.org/jira/browse/FLINK-5633
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, YARN
>Affects Versions: 1.1.4
>Reporter: Giuliano Caliari
>Priority: Minor
>
> I’m running a job on my local cluster and the first time I submit the job 
> everything works but whenever I cancel and re-submit the same job it fails 
> with:
> {quote}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
>   at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:101)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:634)
>   at au.com.my.package.pTraitor.OneTrait.execute(Traitor.scala:147)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$.delayedEndpoint$au$com$my$package$pTraitor$TraitorAppOneTrait$1(TraitorApp.scala:22)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$delayedInit$body.apply(TraitorApp.scala:21)
>   at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
>   at 
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.App$class.main(App.scala:76)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$.main(TraitorApp.scala:21)
>   at au.com.my.package.pTraitor.TraitorAppOneTrait.main(TraitorApp.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:339)
>   at 
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
>   at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
>   at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
>   at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:29)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1116)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:900)
>   at 
> 

[jira] [Commented] (AVRO-2076) Combine already serialized Avro records to an Avro file

2017-09-15 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/AVRO-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167422#comment-16167422
 ] 

Erik van Oosten commented on AVRO-2076:
---

Awesome! Thanks Doug. Somehow I missed that method.
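Assuming the method in question is {{DataFileWriter#appendEncoded}} (which accepts an already-encoded datum as a {{ByteBuffer}}), its use could be sketched like this (untested; the helper name is illustrative, and {{schema}} must be the writer schema the incoming bytes were produced with):

```scala
import java.io.File
import java.nio.ByteBuffer

import org.apache.avro.Schema
import org.apache.avro.file.DataFileWriter
import org.apache.avro.generic.{GenericDatumWriter, GenericRecord}

// Sketch: write already-serialized Avro records straight into a container file,
// skipping the deserialize/re-serialize round trip.
def writeEncoded(schema: Schema, file: File, encodedRecords: Iterator[Array[Byte]]): Unit = {
  val writer = new DataFileWriter[GenericRecord](new GenericDatumWriter[GenericRecord](schema))
  writer.create(schema, file)
  try encodedRecords.foreach(bytes => writer.appendEncoded(ByteBuffer.wrap(bytes)))
  finally writer.close()
}
```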

> Combine already serialized Avro records to an Avro file
> ---
>
> Key: AVRO-2076
> URL: https://issues.apache.org/jira/browse/AVRO-2076
> Project: Avro
>  Issue Type: Wish
>Reporter: Erik van Oosten
>
> In some use cases Avro events arrive already serialized (e.g. when listening 
> to a Kafka topic). It would be great if there were an API that allows 
> writing an Avro file without the need to deserialize and re-serialize these 
> Avro records.
> Providing such an API allows for very efficient creation of Avro files: given 
> that these Avro records are written with the same schema, the Avro file would 
> contain the exact same bytes anyway (before block compression).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AVRO-2076) Combine already serialized Avro records to an Avro file

2017-09-15 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/AVRO-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten resolved AVRO-2076.
---
Resolution: Not A Problem

> Combine already serialized Avro records to an Avro file
> ---
>
> Key: AVRO-2076
> URL: https://issues.apache.org/jira/browse/AVRO-2076
> Project: Avro
>  Issue Type: Wish
>Reporter: Erik van Oosten
>
> In some use cases Avro events arrive already serialized (e.g. when listening 
> to a Kafka topic). It would be great if there were an API that allows 
> writing an Avro file without the need to deserialize and re-serialize these 
> Avro records.
> Providing such an API allows for very efficient creation of Avro files: given 
> that these Avro records are written with the same schema, the Avro file would 
> contain the exact same bytes anyway (before block compression).





[jira] [Commented] (FLINK-4796) Add new Sink interface with access to more meta data

2017-09-14 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166443#comment-16166443
 ] 

Erik van Oosten commented on FLINK-4796:


I am not sure why this is marked as a duplicate. The problem here is 
inconsistent handling of the runtime context inside the different layers under 
FlinkKafkaProducer: method {{getRuntimeContext}} gives {{null}} even though 
{{setRuntimeContext}} was called.

How does that relate to the addition of a new interface?

> Add new Sink interface with access to more meta data
> 
>
> Key: FLINK-4796
> URL: https://issues.apache.org/jira/browse/FLINK-4796
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Aljoscha Krettek
>
> The current {{SinkFunction}} cannot access the timestamps of elements which 
> resulted in the (somewhat hacky) {{FlinkKafkaProducer010}}. Due to other 
> limitations {{GenericWriteAheadSink}} is currently also a {{StreamOperator}} 
> and not a {{SinkFunction}}.
> We should add a new interface for sinks that takes a context parameter, 
> similar to {{ProcessFunction}}. This will allow sinks to query additional 
> meta data about the element that they're receiving. 
> This is one ML thread where a user ran into a problem caused by this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Why-I-am-getting-Null-pointer-exception-while-accessing-RuntimeContext-in-FlinkKafkaProducer010-td12633.html#a12635
> h3. Original Text (that is still valid but not general)
> The Kafka 0.10 connector supports writing event timestamps to Kafka.
> Currently, the regular DataStream APIs don't allow user code to access the 
> event timestamp easily. That's why the Kafka connector is using a custom 
> operator ({{transform()}}) to access the event time.
> With this JIRA, I would like to provide the event timestamp in the regular 
> DataStream APIs.
> Once I'll look into the issue, I'll post some proposals how to add the 
> timestamp. 





[jira] [Created] (AVRO-2076) Combine already serialized Avro records to an Avro file

2017-09-14 Thread Erik van Oosten (JIRA)
Erik van Oosten created AVRO-2076:
-

 Summary: Combine already serialized Avro records to an Avro file
 Key: AVRO-2076
 URL: https://issues.apache.org/jira/browse/AVRO-2076
 Project: Avro
  Issue Type: Wish
Reporter: Erik van Oosten


In some use cases Avro events arrive already serialized (e.g. when listening to 
a Kafka topic). It would be great if there were an API that allows writing an 
Avro file without the need to deserialize and re-serialize these Avro records.

Providing such an API allows for very efficient creation of Avro files: given 
that these Avro records are written with the same schema, the Avro file would 
contain the exact same bytes anyway (before block compression).





[jira] [Commented] (FLINK-4796) Add new Sink interface with access to more meta data

2017-09-11 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160810#comment-16160810
 ] 

Erik van Oosten commented on FLINK-4796:


A workaround is to override {{setRuntimeContext}} (make sure to call 
{{super.setRuntimeContext}}) and use the passed-in context, possibly storing it 
in a private field for later access.
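In code, the workaround could look like the following sketch (untested; the class and field names are illustrative):

```scala
import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

// Sketch of the workaround: capture the context when setRuntimeContext is called,
// since getRuntimeContext may return null in the affected layers.
class MySink[T] extends RichSinkFunction[T] {
  @transient private var capturedContext: RuntimeContext = _

  override def setRuntimeContext(ctx: RuntimeContext): Unit = {
    super.setRuntimeContext(ctx) // keep the normal wiring intact
    capturedContext = ctx        // keep our own reference for later access
  }

  override def invoke(value: T): Unit = {
    // use capturedContext here instead of getRuntimeContext
  }
}
```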

> Add new Sink interface with access to more meta data
> 
>
> Key: FLINK-4796
> URL: https://issues.apache.org/jira/browse/FLINK-4796
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>
> The current {{SinkFunction}} cannot access the timestamps of elements which 
> resulted in the (somewhat hacky) {{FlinkKafkaProducer010}}. Due to other 
> limitations {{GenericWriteAheadSink}} is currently also a {{StreamOperator}} 
> and not a {{SinkFunction}}.
> We should add a new interface for sinks that takes a context parameter, 
> similar to {{ProcessFunction}}. This will allow sinks to query additional 
> meta data about the element that they're receiving. 
> This is one ML thread where a user ran into a problem caused by this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Why-I-am-getting-Null-pointer-exception-while-accessing-RuntimeContext-in-FlinkKafkaProducer010-td12633.html#a12635
> h3. Original Text (that is still valid but not general)
> The Kafka 0.10 connector supports writing event timestamps to Kafka.
> Currently, the regular DataStream APIs don't allow user code to access the 
> event timestamp easily. That's why the Kafka connector is using a custom 
> operator ({{transform()}}) to access the event time.
> With this JIRA, I would like to provide the event timestamp in the regular 
> DataStream APIs.
> Once I'll look into the issue, I'll post some proposals how to add the 
> timestamp. 





[jira] [Commented] (FLINK-1390) java.lang.ClassCastException: X cannot be cast to X

2017-06-21 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058708#comment-16058708
 ] 

Erik van Oosten commented on FLINK-1390:


See 
https://issues.apache.org/jira/browse/FLINK-5633?focusedCommentId=16058706=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16058706
 for a proper solution.

>  java.lang.ClassCastException: X cannot be cast to X
> 
>
> Key: FLINK-1390
> URL: https://issues.apache.org/jira/browse/FLINK-1390
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 0.8.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>
> A user is affected by an issue, which is probably caused by different 
> classloaders being used for loading user classes.
> Current state of investigation:
> - the error happens in yarn sessions (there is only a YARN environment 
> available)
> - the error doesn't happen on the first time the job is being executed. It 
> only happens on subsequent executions.





[jira] [Commented] (FLINK-5633) ClassCastException: X cannot be cast to X when re-submitting a job.

2017-06-21 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058706#comment-16058706
 ] 

Erik van Oosten commented on FLINK-5633:


In case you need throughput (like we do), the caching is indispensable. In 
those cases you can use the following {{SpecificData}} implementation. Simply 
instantiate it once and then pass that singleton instance to every 
{{SpecificDatumReader}}.

{code:scala|title=LocalCachingSpecificData.scala}
import java.lang.reflect.Constructor
import java.util.concurrent.ConcurrentHashMap

import org.apache.avro.Schema
import org.apache.avro.specific.SpecificData
import scala.collection.JavaConverters._

/**
  * This can be used instead of [[SpecificData]] in multi-classloader environments like Flink.
  * This variation removes the JVM singleton constructor cache and replaces it with a
  * cache that is local to the current class loader.
  *
  * If two Flink jobs use the same generated Avro code, they will still have separate
  * instances of the classes because they live in separate class loaders.
  * However, a JVM-wide singleton cache keeps a reference to the class from the first
  * class loader that loaded it. Any subsequent jobs will fail with a [[ClassCastException]]
  * because they will get incompatible classes.
  */
class LocalCachingSpecificData extends SpecificData {
  private val NO_ARG: Array[Class[_]] = Array.empty
  private val SCHEMA_ARG: Array[Class[_]] = Array(classOf[Schema])
  private val CTOR_CACHE: scala.collection.concurrent.Map[Class[_], Constructor[_]] =
    new ConcurrentHashMap[Class[_], Constructor[_]]().asScala

  /** Create an instance of a class.
    * If the class implements [[org.apache.avro.specific.SpecificData.SchemaConstructable]],
    * call a constructor with a [[org.apache.avro.Schema]] parameter, otherwise use a
    * no-arg constructor.
    */
  private def newInstance(c: Class[_], s: Schema): AnyRef = {
    val useSchema = classOf[SpecificData.SchemaConstructable].isAssignableFrom(c)
    val constructor = CTOR_CACHE.getOrElseUpdate(c, {
      val ctor = c.getDeclaredConstructor((if (useSchema) SCHEMA_ARG else NO_ARG): _*)
      ctor.setAccessible(true)
      ctor
    })
    if (useSchema) constructor.newInstance(s).asInstanceOf[AnyRef]
    else constructor.newInstance().asInstanceOf[AnyRef]
  }

  override def createFixed(old: AnyRef, schema: Schema): AnyRef = {
    val c = getClass(schema)
    if (c == null) super.createFixed(old, schema) // delegate to generic
    else if (c.isInstance(old)) old
    else newInstance(c, schema)
  }

  override def newRecord(old: AnyRef, schema: Schema): AnyRef = {
    val c = getClass(schema)
    if (c == null) super.newRecord(old, schema) // delegate to generic
    else if (c.isInstance(old)) old
    else newInstance(c, schema)
  }
}
{code}

> ClassCastException: X cannot be cast to X when re-submitting a job.
> ---
>
> Key: FLINK-5633
> URL: https://issues.apache.org/jira/browse/FLINK-5633
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, YARN
>Affects Versions: 1.1.4
>Reporter: Giuliano Caliari
>Priority: Minor
>
> I’m running a job on my local cluster and the first time I submit the job 
> everything works but whenever I cancel and re-submit the same job it fails 
> with:
> {quote}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
>   at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:101)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:634)
>   at au.com.my.package.pTraitor.OneTrait.execute(Traitor.scala:147)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$.delayedEndpoint$au$com$my$package$pTraitor$TraitorAppOneTrait$1(TraitorApp.scala:22)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$delayedInit$body.apply(TraitorApp.scala:21)
>   at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
>   at 
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.App$class.main(App.scala:76)
>   at 

[jira] [Comment Edited] (FLINK-6928) Kafka sink: default topic should not need to exist

2017-06-15 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050925#comment-16050925
 ] 

Erik van Oosten edited comment on FLINK-6928 at 6/15/17 6:39 PM:
-

In my ideal world method {{getTargetTopic}} would be removed from 
{{*SerializationSchema}} and moved to a new interface, e.g. 
{{DestinationTopic}}.
Then there are two constructor variants for {{FlinkKafkaProducer}}: one would 
take a topic ({{String}}), the other would take a {{DestinationTopic}}. Both 
would have the simplified {{*SerializationSchema}} as argument. To make things 
simple internally, the first variant could wrap the topic in an implementation 
of {{DestinationTopic}} that always returns the same topic.


was (Author: erikvanoosten):
In my ideal world method {{getTargetTopic}} would be removed from 
{{SerializationSchema}} and moved to a new interface, e.g. {{DestinationTopic}}.
Then there are two constructor variants for {{FlinkKafkaProducer}}: one would 
take a topic ({{String}}), the other would take a {{DestinationTopic}}. Both 
would have the simplified {{SerializationSchema}} as argument. To make things 
simple internally, the first variant could wrap the topic in an implementation 
of {{DestinationTopic}} that always returns the same topic.

> Kafka sink: default topic should not need to exist
> --
>
> Key: FLINK-6928
> URL: https://issues.apache.org/jira/browse/FLINK-6928
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Erik van Oosten
>
> When using a Kafka sink, the defaultTopic needs to exist even when it is 
> never used. It would be nice if fetching partition information for the 
> default topic would be delayed until the moment a topic is actually used.
> Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
> default topic.
> In addition, it would be nice if we could signal that the defaultTopic is not 
> needed by passing {{null}}. Currently, a value for the defaultTopic is 
> required.





[jira] [Commented] (FLINK-6928) Kafka sink: default topic should not need to exist

2017-06-15 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050925#comment-16050925
 ] 

Erik van Oosten commented on FLINK-6928:


In my ideal world method {{getTargetTopic}} would be removed from 
{{SerializationSchema}} and moved to a new interface, e.g. {{DestinationTopic}}.
Then there are two constructor variants for {{FlinkKafkaProducer}}: one would 
take a topic ({{String}}), the other would take a {{DestinationTopic}}. Both 
would have the simplified {{SerializationSchema}} as argument. To make things 
simple internally, the first variant could wrap the topic in an implementation 
of {{DestinationTopic}} that always returns the same topic.
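The proposal could be sketched roughly as follows (the interface and class names are only illustrative, not an actual Flink API):

```scala
// Illustrative sketch of the proposed split: topic selection moves out of the
// serialization schema into its own small interface.
trait DestinationTopic[T] extends Serializable {
  def topicFor(element: T): String
}

// The plain-String constructor variant could wrap its topic in a constant implementation.
class ConstantTopic[T](topic: String) extends DestinationTopic[T] {
  override def topicFor(element: T): String = topic
}
```

With this split, the serialization schema only serializes, and the producer asks the {{DestinationTopic}} for a topic per element.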

> Kafka sink: default topic should not need to exist
> --
>
> Key: FLINK-6928
> URL: https://issues.apache.org/jira/browse/FLINK-6928
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Erik van Oosten
>
> When using a Kafka sink, the defaultTopic needs to exist even when it is 
> never used. It would be nice if fetching partition information for the 
> default topic would be delayed until the moment a topic is actually used.
> Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
> default topic.
> In addition, it would be nice if we could signal that the defaultTopic is not 
> needed by passing {{null}}. Currently, a value for the defaultTopic is 
> required.





[jira] [Updated] (FLINK-6928) Kafka sink: default topic should not need to exist

2017-06-15 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated FLINK-6928:
---
Description: 
When using a Kafka sink, the defaultTopic needs to exist even when it is never 
used. It would be nice if fetching partition information for the default topic 
would be delayed until the moment a topic is actually used.

Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
default topic.

It would be nice if we could signal that the defaultTopic is not needed by 
passing {{null}}. Currently, a value for the defaultTopic is required.

  was:
When using a Kafka sink, the defaultTopic needs to exist even when it is never 
used. It would be nice if fetching partition information for the default topic 
would be delayed until the moment a topic is actually used.

Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
default topic.


> Kafka sink: default topic should not need to exist
> --
>
> Key: FLINK-6928
> URL: https://issues.apache.org/jira/browse/FLINK-6928
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Erik van Oosten
>
> When using a Kafka sink, the defaultTopic needs to exist even when it is 
> never used. It would be nice if fetching partition information for the 
> default topic would be delayed until the moment a topic is actually used.
> Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
> default topic.
> It would be nice if we could signal that the defaultTopic is not needed by 
> passing {{null}}. Currently, a value for the defaultTopic is required.





[jira] [Updated] (FLINK-6928) Kafka sink: default topic should not need to exist

2017-06-15 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated FLINK-6928:
---
Description: 
When using a Kafka sink, the defaultTopic needs to exist even when it is never 
used. It would be nice if fetching partition information for the default topic 
would be delayed until the moment a topic is actually used.

Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
default topic.

In addition, it would be nice if we could signal that the defaultTopic is not 
needed by passing {{null}}. Currently, a value for the defaultTopic is required.

  was:
When using a Kafka sink, the defaultTopic needs to exist even when it is never 
used. It would be nice if fetching partition information for the default topic 
would be delayed until the moment a topic is actually used.

Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
default topic.

It would be nice if we could signal that the defaultTopic is not needed by 
passing {{null}}. Currently, a value for the defaultTopic is required.


> Kafka sink: default topic should not need to exist
> --
>
> Key: FLINK-6928
> URL: https://issues.apache.org/jira/browse/FLINK-6928
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Erik van Oosten
>
> When using a Kafka sink, the defaultTopic needs to exist even when it is 
> never used. It would be nice if fetching partition information for the 
> default topic would be delayed until the moment a topic is actually used.
> Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
> default topic.
> In addition, it would be nice if we could signal that the defaultTopic is not 
> needed by passing {{null}}. Currently, a value for the defaultTopic is 
> required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-6928) Kafka sink: default topic should not need to exist

2017-06-15 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated FLINK-6928:
---
Summary: Kafka sink: default topic should not need to exist  (was: Kafka 
source: default topic should not need to exist)

> Kafka sink: default topic should not need to exist
> --
>
> Key: FLINK-6928
> URL: https://issues.apache.org/jira/browse/FLINK-6928
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Erik van Oosten
>
> When using a Kafka source, the defaultTopic needs to exist even when it is 
> never used. It would be nice if fetching partition information for the 
> default topic would be delayed until the moment a topic is actually used.
> Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
> default topic.





[jira] [Updated] (FLINK-6928) Kafka sink: default topic should not need to exist

2017-06-15 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated FLINK-6928:
---
Description: 
When using a Kafka sink, the defaultTopic needs to exist even when it is never 
used. It would be nice if fetching partition information for the default topic 
would be delayed until the moment a topic is actually used.

Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
default topic.

  was:
When using a Kafka source, the defaultTopic needs to exist even when it is 
never used. It would be nice if fetching partition information for the default 
topic would be delayed until the moment a topic is actually used.

Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
default topic.


> Kafka sink: default topic should not need to exist
> --
>
> Key: FLINK-6928
> URL: https://issues.apache.org/jira/browse/FLINK-6928
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Erik van Oosten
>
> When using a Kafka sink, the defaultTopic needs to exist even when it is 
> never used. It would be nice if fetching partition information for the 
> default topic would be delayed until the moment a topic is actually used.
> Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
> default topic.





[jira] [Updated] (FLINK-6928) Kafka source: default topic should not need to exist

2017-06-15 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated FLINK-6928:
---
Summary: Kafka source: default topic should not need to exist  (was: Kafka 
source: default topic needs to exist)

> Kafka source: default topic should not need to exist
> 
>
> Key: FLINK-6928
> URL: https://issues.apache.org/jira/browse/FLINK-6928
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Erik van Oosten
>
> When using a Kafka source, the defaultTopic needs to exist even when it is 
> never used. It would be nice if fetching partition information for the 
> default topic would be delayed until the moment a topic is actually used.
> Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
> default topic.





[jira] [Created] (FLINK-6928) Kafka source: default topic needs to exist

2017-06-15 Thread Erik van Oosten (JIRA)
Erik van Oosten created FLINK-6928:
--

 Summary: Kafka source: default topic needs to exist
 Key: FLINK-6928
 URL: https://issues.apache.org/jira/browse/FLINK-6928
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.2.1, 1.3.0
Reporter: Erik van Oosten


When using a Kafka source, the defaultTopic needs to exist even when it is 
never used. It would be nice if fetching partition information for the default 
topic would be delayed until the moment a topic is actually used.

Cause: {{FlinkKafkaProducerBase.open}} fetches partition information for the 
default topic.







[jira] [Commented] (AVRO-2022) IDL does not allow `schema` as identifier

2017-05-23 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/AVRO-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021085#comment-16021085
 ] 

Erik van Oosten commented on AVRO-2022:
---

After working with this change for some time we decided to abandon this idea 
and change the schema after all. The problem is that many code generation tools 
assume they can create a method {{Schema getSchema()}}. Unfortunately this 
collides with the value we want it to return.

> IDL does not allow `schema` as identifier
> -
>
> Key: AVRO-2022
> URL: https://issues.apache.org/jira/browse/AVRO-2022
> Project: Avro
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.7.7, 1.8.1
>Reporter: Erik van Oosten
>
> The keyword {{schema}} is not allowed as an escaped identifier in IDL. E.g. the 
> following does not compile:
> {noformat}
> record {
>string `schema`;
> }
> {noformat}
> Patches are available for the master branch: 
> https://github.com/apache/avro/pull/209 and 1.7 branch: 
> https://github.com/apache/avro/pull/211



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (AVRO-2022) IDL does not allow `schema` as identifier

2017-05-23 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/AVRO-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten resolved AVRO-2022.
---
Resolution: Invalid

> IDL does not allow `schema` as identifier
> -
>
> Key: AVRO-2022
> URL: https://issues.apache.org/jira/browse/AVRO-2022
> Project: Avro
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.7.7, 1.8.1
>Reporter: Erik van Oosten
>
> The keyword {{schema}} is not allowed as an escaped identifier in IDL. E.g. the 
> following does not compile:
> {noformat}
> record {
>string `schema`;
> }
> {noformat}
> Patches are available for the master branch: 
> https://github.com/apache/avro/pull/209 and 1.7 branch: 
> https://github.com/apache/avro/pull/211





[jira] [Updated] (AVRO-2022) IDL does not allow `schema` as identifier

2017-04-11 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/AVRO-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated AVRO-2022:
--
Description: 
The keyword {{schema}} is not allowed as an escaped identifier in IDL. E.g. the 
following does not compile:

{noformat}
record {
   string `schema`;
}
{noformat}

Patches are available for the master branch: 
https://github.com/apache/avro/pull/209 and 1.7 branch: 
https://github.com/apache/avro/pull/211

  was:
The keyword {{schema}} is not allowed as an escaped identifier in IDL. E.g. the 
following does not compile:

{noformat}
record {
   string `schema`;
}
{noformat}

Patches are available for the master and 1.7 branches here: (todo)


> IDL does not allow `schema` as identifier
> -
>
> Key: AVRO-2022
> URL: https://issues.apache.org/jira/browse/AVRO-2022
> Project: Avro
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.7.7, 1.8.1
>Reporter: Erik van Oosten
>
> The keyword {{schema}} is not allowed as an escaped identifier in IDL. E.g. the 
> following does not compile:
> {noformat}
> record {
>string `schema`;
> }
> {noformat}
> Patches are available for the master branch: 
> https://github.com/apache/avro/pull/209 and 1.7 branch: 
> https://github.com/apache/avro/pull/211





[jira] [Created] (AVRO-2022) IDL does not allow `schema` as identifier

2017-04-11 Thread Erik van Oosten (JIRA)
Erik van Oosten created AVRO-2022:
-

 Summary: IDL does not allow `schema` as identifier
 Key: AVRO-2022
 URL: https://issues.apache.org/jira/browse/AVRO-2022
 Project: Avro
  Issue Type: Bug
  Components: java
Affects Versions: 1.8.1, 1.7.7
Reporter: Erik van Oosten


The keyword {{schema}} is not allowed as an escaped identifier in IDL. E.g. the 
following does not compile:

{noformat}
record {
   string `schema`;
}
{noformat}

Patches are available for the master and 1.7 branches here: (todo)





[jira] [Issue Comment Deleted] (THRIFT-3867) Specify BinaryProtocol and CompactProtocol

2016-06-29 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/THRIFT-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated THRIFT-3867:

Comment: was deleted

(was: Pull request in https://github.com/apache/thrift/pull/1036.)

> Specify BinaryProtocol and CompactProtocol
> --
>
> Key: THRIFT-3867
> URL: https://issues.apache.org/jira/browse/THRIFT-3867
> Project: Thrift
>  Issue Type: Documentation
>  Components: Documentation
>Reporter: Erik van Oosten
>
> It would be nice if the protocol(s) were specified somewhere. This would 
> improve communication between developers, and also open the way for 
> alternative implementations so that Thrift can thrive even better.
> I have a fairly complete description of the BinaryProtocol and 
> CompactProtocol which I will submit as a patch for further review and 
> discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (THRIFT-3867) Specify BinaryProtocol and CompactProtocol

2016-06-29 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/THRIFT-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15355043#comment-15355043
 ] 

Erik van Oosten commented on THRIFT-3867:
-

Pull request in https://github.com/apache/thrift/pull/1036.

> Specify BinaryProtocol and CompactProtocol
> --
>
> Key: THRIFT-3867
> URL: https://issues.apache.org/jira/browse/THRIFT-3867
> Project: Thrift
>  Issue Type: Documentation
>  Components: Documentation
>Reporter: Erik van Oosten
>
> It would be nice if the protocol(s) were specified somewhere. This would 
> improve communication between developers, and also open the way for 
> alternative implementations so that Thrift can thrive even better.
> I have a fairly complete description of the BinaryProtocol and 
> CompactProtocol which I will submit as a patch for further review and 
> discussion.





[jira] [Created] (SPARK-6878) Sum on empty RDD fails with exception

2015-04-13 Thread Erik van Oosten (JIRA)
Erik van Oosten created SPARK-6878:
--

 Summary: Sum on empty RDD fails with exception
 Key: SPARK-6878
 URL: https://issues.apache.org/jira/browse/SPARK-6878
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Erik van Oosten
Priority: Minor


{{Sum}} on an empty RDD throws an exception. Expected result is {{0}}.

A simple fix is to replace

{noformat}
class DoubleRDDFunctions {
  def sum(): Double = self.reduce(_ + _)
{noformat} 

with:

{noformat}
class DoubleRDDFunctions {
  def sum(): Double = self.aggregate(0.0)(_ + _, _ + _)
{noformat}
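The identity-less-reduce pitfall behind this bug also exists in plain Java streams, which makes it easy to demonstrate without a Spark cluster. A minimal sketch by way of analogy (not Spark code; the class name is ours): a reduce with no identity value has no defined result on an empty source, while supplying a zero element, as the {{aggregate}}-based fix does, makes the empty case return 0.

```java
import java.util.stream.DoubleStream;

// Analogy to the RDD bug using plain Java streams (not Spark code).
public class EmptySumDemo {

    // Identity-based reduce, like the proposed aggregate(0.0)(_ + _, _ + _) fix:
    // an empty stream yields the identity, 0.0.
    static double sumWithZero() {
        return DoubleStream.empty().reduce(0.0, Double::sum);
    }

    // Identity-less reduce, like self.reduce(_ + _): on an empty stream there
    // is no result at all (Spark's RDD.reduce throws in the same situation).
    static boolean identityLessHasNoResult() {
        return !DoubleStream.empty().reduce(Double::sum).isPresent();
    }

    public static void main(String[] args) {
        System.out.println(sumWithZero());              // 0.0
        System.out.println(identityLessHasNoResult());  // true
    }
}
```

The fix in the pull request follows the same idea: provide the zero element explicitly so the empty partition case is well defined.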





-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-6878) Sum on empty RDD fails with exception

2015-04-13 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492302#comment-14492302
 ] 

Erik van Oosten commented on SPARK-6878:


Ah, yes. I now see that fold also first reduces per partition.

 Sum on empty RDD fails with exception
 -

 Key: SPARK-6878
 URL: https://issues.apache.org/jira/browse/SPARK-6878
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Erik van Oosten
Priority: Minor

 {{Sum}} on an empty RDD throws an exception. Expected result is {{0}}.
 A simple fix is to replace
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.reduce(_ + _)
 {noformat} 
 with:
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.aggregate(0.0)(_ + _, _ + _)
 {noformat}







[jira] [Commented] (SPARK-6878) Sum on empty RDD fails with exception

2015-04-13 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492336#comment-14492336
 ] 

Erik van Oosten commented on SPARK-6878:


Pull request: https://github.com/apache/spark/pull/5489

 Sum on empty RDD fails with exception
 -

 Key: SPARK-6878
 URL: https://issues.apache.org/jira/browse/SPARK-6878
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Erik van Oosten
Priority: Minor

 {{Sum}} on an empty RDD throws an exception. Expected result is {{0}}.
 A simple fix is to replace
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.reduce(_ + _)
 {noformat} 
 with:
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.aggregate(0.0)(_ + _, _ + _)
 {noformat}







[jira] [Commented] (SPARK-6878) Sum on empty RDD fails with exception

2015-04-13 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492282#comment-14492282
 ] 

Erik van Oosten commented on SPARK-6878:


The answer is only defined because the RDD is an {{RDD[Double]}} :)

Sure, I'll make a PR.

 Sum on empty RDD fails with exception
 -

 Key: SPARK-6878
 URL: https://issues.apache.org/jira/browse/SPARK-6878
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Erik van Oosten
Priority: Minor

 {{Sum}} on an empty RDD throws an exception. Expected result is {{0}}.
 A simple fix is to replace
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.reduce(_ + _)
 {noformat} 
 with:
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.aggregate(0.0)(_ + _, _ + _)
 {noformat}







[jira] [Updated] (SPARK-6878) Sum on empty RDD fails with exception

2015-04-13 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated SPARK-6878:
---
Flags: Patch

 Sum on empty RDD fails with exception
 -

 Key: SPARK-6878
 URL: https://issues.apache.org/jira/browse/SPARK-6878
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Erik van Oosten
Priority: Minor

 {{Sum}} on an empty RDD throws an exception. Expected result is {{0}}.
 A simple fix is to replace
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.reduce(_ + _)
 {noformat} 
 with:
 {noformat}
 class DoubleRDDFunctions {
   def sum(): Double = self.aggregate(0.0)(_ + _, _ + _)
 {noformat}







[jira] [Commented] (KAFKA-960) Upgrade Metrics to 3.x

2014-11-03 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194990#comment-14194990
 ] 

Erik van Oosten commented on KAFKA-960:
---

If 2.20 and 2.1.5 are indeed binary compatible (how do you test that?), _all 
existing_ releases could be patched by simply replacing a jar :)

 Upgrade Metrics to 3.x
 --

 Key: KAFKA-960
 URL: https://issues.apache.org/jira/browse/KAFKA-960
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1
Reporter: Cosmin Lehene

 Now that metrics 3.0 has been released 
 (http://metrics.codahale.com/about/release-notes/) we can upgrade back





[jira] [Created] (AMQ-4610) ActiveMQ shows icon in Dock on MacOSX (with solution)

2013-07-01 Thread Erik van Oosten (JIRA)
Erik van Oosten created AMQ-4610:


 Summary: ActiveMQ shows icon in Dock on MacOSX (with solution)
 Key: AMQ-4610
 URL: https://issues.apache.org/jira/browse/AMQ-4610
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0
Reporter: Erik van Oosten


On macOS, ActiveMQ shows a really annoying icon in the Dock.

Please add the following option to the startup script to get rid of it:

{noformat}
-Djava.awt.headless=true
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (THRIFT-1836) Java compiler does not generate constructor with all fields

2013-01-21 Thread Erik van Oosten (JIRA)
Erik van Oosten created THRIFT-1836:
---

 Summary: Java compiler does not generate constructor with all 
fields
 Key: THRIFT-1836
 URL: https://issues.apache.org/jira/browse/THRIFT-1836
 Project: Thrift
  Issue Type: Improvement
  Components: Java - Compiler
Affects Versions: 0.9
Reporter: Erik van Oosten


The java compiler does not generate a constructor with all fields when some 
fields are required and some are optional. It only generates a constructor with 
all required fields, or a constructor with all fields when all fields are 
optional.

Rationale: We currently do not specify the requiredness of any field (making 
them optional). If we change some of the fields to required, we would also have 
to rewrite so much code that it is no longer practical.

The attached patch will generate 3 constructors instead of 2:
- the default constructor
- a constructor with all required fields
- a constructor with all fields (added by this patch)
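A hand-written sketch of the three constructors, for a hypothetical struct with one required and one optional field (the struct and its fields are illustrative, not actual generated Thrift code):

```java
// Illustration (not generated Thrift code) of the constructor set the patch
// produces for: struct Person { 1: required string name; 2: optional i32 age; }
public class Person {
    private String name;   // required field
    private Integer age;   // optional field

    // 1) the default constructor
    public Person() {}

    // 2) constructor with all required fields (what the compiler already emits)
    public Person(String name) {
        this.name = name;
    }

    // 3) constructor with all fields (the constructor this patch adds)
    public Person(String name, Integer age) {
        this(name);
        this.age = age;
    }

    public String getName() { return name; }
    public Integer getAge() { return age; }
}
```

With only required fields in scope, the compiler would previously stop after the second constructor; the third one covers the mixed required/optional case described above.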




[jira] [Updated] (THRIFT-1836) Java compiler does not generate constructor with all fields

2013-01-21 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/THRIFT-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated THRIFT-1836:


Attachment: thrift-1836-additional-java-constructor.patch

 Java compiler does not generate constructor with all fields
 ---

 Key: THRIFT-1836
 URL: https://issues.apache.org/jira/browse/THRIFT-1836
 Project: Thrift
  Issue Type: Improvement
  Components: Java - Compiler
Affects Versions: 0.9
Reporter: Erik van Oosten
 Attachments: thrift-1836-additional-java-constructor.patch


 The java compiler does not generate a constructor with all fields when some 
 fields are required and some are optional. It only generates a constructor 
 with all required fields, or a constructor with all fields when all fields 
 are optional.
 Rationale: We currently do not specify the requiredness of any field (making 
 them optional). If we change some of the fields to required, we would also have 
 to rewrite so much code that it is no longer practical.
 The attached patch will generate 3 constructors instead of 2:
 - the default constructor
 - a constructor with all required fields
 - a constructor with all fields (added by this patch)



[jira] [Commented] (THRIFT-1836) Java compiler does not generate constructor with all fields

2013-01-21 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/THRIFT-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558631#comment-13558631
 ] 

Erik van Oosten commented on THRIFT-1836:
-

Some comments on the patch:
* I was not able to build the code (even before my changes).
* I was not able to find tests that could be updated.
* This was my first cpp code in 14 years.

Please see the patch as a starting point, and not as working code.

 Java compiler does not generate constructor with all fields
 ---

 Key: THRIFT-1836
 URL: https://issues.apache.org/jira/browse/THRIFT-1836
 Project: Thrift
  Issue Type: Improvement
  Components: Java - Compiler
Affects Versions: 0.9
Reporter: Erik van Oosten
 Attachments: thrift-1836-additional-java-constructor.patch


 The java compiler does not generate a constructor with all fields when some 
 fields are required and some are optional. It only generates a constructor 
 with all required fields, or a constructor with all fields when all fields 
 are optional.
 Rationale: We currently do not specify the requiredness of any field (making 
 them optional). If we change some of the fields to required, we would also have 
 to rewrite so much code that it is no longer practical.
 The attached patch will generate 3 constructors instead of 2:
 - the default constructor
 - a constructor with all required fields
 - a constructor with all fields (added by this patch)



[jira] [Created] (WICKET-3557) Can not add validator to AjaxEditableLabel unless it was added to page

2011-03-25 Thread Erik van Oosten (JIRA)
Can not add validator to AjaxEditableLabel unless it was added to page
--

 Key: WICKET-3557
 URL: https://issues.apache.org/jira/browse/WICKET-3557
 Project: Wicket
  Issue Type: Bug
  Components: wicket-extensions
Reporter: Erik van Oosten
Priority: Minor


Method AjaxEditableLabel#add(IValidator) tries to add the validator to the 
editor. As the editor initially does not exist, it is created. Creation of the 
editor fails when the component has not been added to the page yet.

Workaround: add the AjaxEditableLabel to page before adding the validator(s).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] Commented: (WICKET-1973) Messages lost upon session failover with redirect_to_buffer

2010-12-27 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-1973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975285#action_12975285
 ] 

Erik van Oosten commented on WICKET-1973:
-

The other use case is the odd-client that insists on using a round robin load 
balancer.

 Messages lost upon session failover with redirect_to_buffer
 ---

 Key: WICKET-1973
 URL: https://issues.apache.org/jira/browse/WICKET-1973
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.4-RC1
Reporter: Erik van Oosten

 Using the redirect_to_buffer render strategy, messages in the session get 
 cleared after the render.
 If the redirected request comes in at another node, the buffer is not found 
 and the page is re-rendered. In this case the messages are no longer 
 available.
 See the javadoc of WebApplication#popBufferedResponse(String,String).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-1973) Messages lost upon session failover with redirect_to_buffer

2010-12-27 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-1973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975442#action_12975442
 ] 

Erik van Oosten commented on WICKET-1973:
-

None from me.

 Messages lost upon session failover with redirect_to_buffer
 ---

 Key: WICKET-1973
 URL: https://issues.apache.org/jira/browse/WICKET-1973
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.4-RC1
Reporter: Erik van Oosten

 Using the redirect_to_buffer render strategy, messages in the session get 
 cleared after the render.
 If the redirected request comes in at another node, the buffer is not found 
 and the page is re-rendered. In this case the messages are no longer 
 available.
 See the javadoc of WebApplication#popBufferedResponse(String,String).




[jira] Commented: (WICKET-1355) Autocomplete window has wrong position in scrolled context

2010-08-28 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12903873#action_12903873
 ] 

Erik van Oosten commented on WICKET-1355:
-

Thanks Igor. But indeed, lets just drop it. The situation in which this happens 
is too specific (and hard to reproduce), and apparently hard to solve in all 
browsers, to warrant large changes.

 Autocomplete window has wrong position in scrolled context
 --

 Key: WICKET-1355
 URL: https://issues.apache.org/jira/browse/WICKET-1355
 Project: Wicket
  Issue Type: Bug
  Components: wicket-extensions
Affects Versions: 1.3.1
Reporter: Erik van Oosten
Assignee: Igor Vaynberg
 Attachments: Safari autocomplete in Modal Window.jpg, 
 wicket-1355-wicket-1.3.x-autocomplete.patch, 
 wicket-1355-wicket-1.4.x-autocomplete.patch, wicket-autocomplete.js


 When the autocompleted field is located in a scrolled div, the drop-down 
 window is positioned too far down.




[jira] Commented: (WICKET-2846) Store Application in InheritableThreadLocal instead of ThreadLocal

2010-07-15 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12888729#action_12888729
 ] 

Erik van Oosten commented on WICKET-2846:
-

Juliano Viana on the Wicket user mailing list at 2010-07-14 15:55 wrote:

Hi everyone,

I know this issue has already been debated and that a decision was made to
revert this change in a future version of Wicket.
However, the discussions about this issue were centered on the fact that
starting threads in web applications is not a good idea anyway, and hence this
not break applications that are not already broken.
I have found a real case where this breaks an innocent application:
redeploying an application based on  Wicket 1.4.9 on Glassfish 3.0.1 causes
a memory leak due to the use of InheritableThreadLocal.
The problem is that when the application accesses a JDBC resource for the
first time, Glassfish lazily starts a timer (connector-timer-proxy) that has
an associated thread. This timer is started  from the web request processing
thread. This thread never dies, and inherits a reference to the Wicket
Application object.
This only happens on redeployments, but it really hurts development as you
keep having to restart Glassfish due to OOM exceptions.
Removing the InheritableThreadLocal resolves the issue completely and makes
development really smooth again.
So if you are using Wicket 1.4.9 with Glassfish v3 you should consider
patching it until a new Wicket release is out.

Regards,
  - Juliano

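The leak mechanism described above, a child thread copying the parent's inheritable values at construction time, can be shown in isolation. A minimal sketch (the class and the stored string are illustrative, not Wicket or Glassfish code):

```java
// Demonstrates why a value in an InheritableThreadLocal stays reachable from
// threads spawned by a request thread: the child thread copies the parent's
// inheritable values when it is constructed, while a plain ThreadLocal is not
// propagated at all.
public class InheritDemo {
    static final ThreadLocal<String> plain = new ThreadLocal<>();
    static final InheritableThreadLocal<String> inheritable = new InheritableThreadLocal<>();

    // Returns { value of plain, value of inheritable } as seen by a child thread.
    static String[] probe() {
        plain.set("application");
        inheritable.set("application");
        String[] seen = new String[2];
        Thread child = new Thread(() -> {
            seen[0] = plain.get();        // null: plain ThreadLocal is not inherited
            seen[1] = inheritable.get();  // "application": inherited copy keeps the reference alive
        });
        child.start();
        try {
            child.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return seen;
    }

    public static void main(String[] args) {
        String[] seen = probe();
        System.out.println(seen[0] + " " + seen[1]);  // null application
    }
}
```

If the child thread outlives the web application, as Glassfish's lazily started timer thread does, the inherited reference pins the Application object (and through it the whole classloader) in memory across redeployments.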

 Store Application in InheritableThreadLocal instead of ThreadLocal
 --

 Key: WICKET-2846
 URL: https://issues.apache.org/jira/browse/WICKET-2846
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Reporter: Alexandru Objelean
Assignee: Jeremy Thomerson
Priority: Minor
 Fix For: 1.4.10

 Attachments: wicket-application-leak.tar.gz


 Is there any particular reason why Application class wouldn't be stored in 
 InheritableThreadLocal instead of ThreadLocal? The problem is that I need to 
 be able to access Application class from a thread created when a button is 
 pressed. Using InheritableThreadLocal instead of ThreadLocal would solve 
 this problem. 
 Use case example:
 public class MyPage extends Page {
   @SpringBean
   private MyService service;
   // perform a polling of a long-running process triggered by a button click
   onClickButton() {
     new Thread() {
       run() {
         service.executeLongRunningProcess();
       }
     }.start();
   }
 }
 The following example won't work well if the Application is not stored in 
 InheritableThreadLocal. The reason why it doesn't work, as I understand that, 
 is because @SpringBean lookup depends on Application instance which is not 
 accessible from within the thread. Having it stored inside of ITL would solve 
 the problem. 
 Thanks!
 Alex




[jira] Commented: (WICKET-2881) Cannot substitute RelativePathPrefixHandler

2010-06-19 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880475#action_12880475
 ] 

Erik van Oosten commented on WICKET-2881:
-

[WICKET-1974] provides an alternative way to get rid of relative paths.

 Cannot substitute RelativePathPrefixHandler
 ---

 Key: WICKET-2881
 URL: https://issues.apache.org/jira/browse/WICKET-2881
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.4.8
 Environment: All
Reporter: bernard
 Attachments: DirectoryStructure.gif, HomePage.html


 In IPageSettings
 Get the (modifiable) list of IComponentResolvers.
 List<IComponentResolver> getComponentResolvers();
 This looks very useful and easy indeed, and in Application.init() one can 
 find and remove
 RelativePathPrefixHandler and replace it with a different 
 AbstractMarkupFilter implementation e.g. XRelativePathPrefixHandler.
 But even while the List.remove(Object o) returns true, and the handler 
 appears to be removed, it is still active.
 I don't know why and what holds on to it or what creates a new 
 RelativePathPrefixHandler.
 If I add my XRelativePathPrefixHandler, it is not used.
 Consider
 public class MarkupParser
 public final void appendMarkupFilter(final IMarkupFilter filter)
 {
 appendMarkupFilter(filter, RelativePathPrefixHandler.class);
 }
 So RelativePathPrefixHandler seems to be something special and I am afraid of 
 other potential complications in case replacement would work.
 Can Wicket be fixed to make a replacement as easy as it appears to be?
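The replacement the reporter attempts can be sketched with stand-in types (the interface and handler names mirror the real Wicket API, but in a real application the list would come from getPageSettings().getComponentResolvers() inside Application.init()):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Stand-ins for Wicket's types, used only to make the sketch self-contained.
interface IComponentResolver {}
class RelativePathPrefixHandler implements IComponentResolver {}
class XRelativePathPrefixHandler implements IComponentResolver {}

public class ResolverSwap {
    public static void main(String[] args) {
        List<IComponentResolver> resolvers = new ArrayList<IComponentResolver>();
        resolvers.add(new RelativePathPrefixHandler());

        // Remove the stock handler from the modifiable list...
        for (Iterator<IComponentResolver> it = resolvers.iterator(); it.hasNext();) {
            if (it.next() instanceof RelativePathPrefixHandler) {
                it.remove();
            }
        }
        // ...and register the custom one in its place.
        resolvers.add(new XRelativePathPrefixHandler());

        System.out.println(resolvers.get(0).getClass().getSimpleName());
        // prints: XRelativePathPrefixHandler
    }
}
```

As the sketch shows, the list edit itself succeeds (consistent with List.remove returning true); the reported problem is that something else, such as MarkupParser referencing RelativePathPrefixHandler.class directly, keeps the old handler in play.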

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2631) wicket:message within wicket:head not processed

2010-06-17 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2631:


Attachment: messageintitle-quickstart.tar.gz

Quickstart to demonstrate that the problem persists in Wicket 1.4.9.

 wicket:message within wicket:head not processed
 ---

 Key: WICKET-2631
 URL: https://issues.apache.org/jira/browse/WICKET-2631
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Ivo Maixner
Assignee: Juergen Donnerstag
Priority: Minor
 Attachments: messageintitle-quickstart.tar.gz


 My pages extend a base page, so they use the wicket:extend tag. For such a 
 page to specify its html title, the wicket:head tag has to be used. At the 
 same time, my pages require localization, so the page title cannot be 
 hardcoded but needs to be loaded from properties files instead, so I have to 
 use the wicket:message tag inside the wicket:head tag. Overall, the page 
 looks like this:
 <wicket:head>
   <title><wicket:message key="page_title">[page_title]</wicket:message></title>
 </wicket:head>
 <wicket:extend>
   ... page content ...
 </wicket:extend>
 In this setup, the content of the title html tag is passed over to the page 
 as-is, i.e. the wicket:message tag is not recognized and processed by Wicket.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2631) wicket:message within wicket:head not processed

2010-06-17 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2631:


Comment: was deleted

(was: Seeing the same here in Wicket 1.4.9. Will try to reproduce it in a 
quickstart.)

 wicket:message within wicket:head not processed
 ---

 Key: WICKET-2631
 URL: https://issues.apache.org/jira/browse/WICKET-2631
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Ivo Maixner
Assignee: Juergen Donnerstag
Priority: Minor

 My pages extend a base page, so they use the wicket:extend tag. For such a 
 page to specify its html title, the wicket:head tag has to be used. At the 
 same time, my pages require localization, so the page title cannot be 
 hardcoded but needs to be loaded from properties files instead, so I have to 
 use the wicket:message tag inside the wicket:head tag. Overall, the page 
 looks like this:
 <wicket:head>
   <title><wicket:message key="page_title">[page_title]</wicket:message></title>
 </wicket:head>
 <wicket:extend>
   ... page content ...
 </wicket:extend>
 In this setup, the content of the title html tag is passed over to the page 
 as-is, i.e. the wicket:message tag is not recognized and processed by Wicket.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2631) wicket:message within wicket:head not processed

2010-06-17 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2631:


Attachment: (was: messageintitle-quickstart.tar.gz)

 wicket:message within wicket:head not processed
 ---

 Key: WICKET-2631
 URL: https://issues.apache.org/jira/browse/WICKET-2631
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Ivo Maixner
Assignee: Juergen Donnerstag
Priority: Minor

 My pages extend a base page, so they use the wicket:extend tag. For such a 
 page to specify its html title, the wicket:head tag has to be used. At the 
 same time, my pages require localization, so the page title cannot be 
 hardcoded but needs to be loaded from properties files instead, so I have to 
 use the wicket:message tag inside the wicket:head tag. Overall, the page 
 looks like this:
 <wicket:head>
   <title><wicket:message key="page_title">[page_title]</wicket:message></title>
 </wicket:head>
 <wicket:extend>
   ... page content ...
 </wicket:extend>
 In this setup, the content of the title html tag is passed over to the page 
 as-is, i.e. the wicket:message tag is not recognized and processed by Wicket.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2631) wicket:message within wicket:head not processed

2010-06-17 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2631:


Comment: was deleted

(was: Quickstart to demonstrate that the problem persists in Wicket 1.4.9.)

 wicket:message within wicket:head not processed
 ---

 Key: WICKET-2631
 URL: https://issues.apache.org/jira/browse/WICKET-2631
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Ivo Maixner
Assignee: Juergen Donnerstag
Priority: Minor

 My pages extend a base page, so they use the wicket:extend tag. For such a 
 page to specify its html title, the wicket:head tag has to be used. At the 
 same time, my pages require localization, so the page title cannot be 
 hardcoded but needs to be loaded from properties files instead, so I have to 
 use the wicket:message tag inside the wicket:head tag. Overall, the page 
 looks like this:
 <wicket:head>
   <title><wicket:message key="page_title">[page_title]</wicket:message></title>
 </wicket:head>
 <wicket:extend>
   ... page content ...
 </wicket:extend>
 In this setup, the content of the title html tag is passed over to the page 
 as-is, i.e. the wicket:message tag is not recognized and processed by Wicket.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-2631) wicket:message within wicket:head not processed

2010-06-17 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12879704#action_12879704
 ] 

Erik van Oosten commented on WICKET-2631:
-

Don't forget to call
  getMarkupSettings().setStripWicketTags(true);
in the init() of your WebApplication subclass.

If you don't, the wicket:message elements will be displayed by the browser as 
part of the title.

 wicket:message within wicket:head not processed
 ---

 Key: WICKET-2631
 URL: https://issues.apache.org/jira/browse/WICKET-2631
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Ivo Maixner
Assignee: Juergen Donnerstag
Priority: Minor

 My pages extend a base page, so they use the wicket:extend tag. For such a 
 page to specify its html title, the wicket:head tag has to be used. At the 
 same time, my pages require localization, so the page title cannot be 
 hardcoded but needs to be loaded from properties files instead, so I have to 
 use the wicket:message tag inside the wicket:head tag. Overall, the page 
 looks like this:
 <wicket:head>
   <title><wicket:message key="page_title">[page_title]</wicket:message></title>
 </wicket:head>
 <wicket:extend>
   ... page content ...
 </wicket:extend>
 In this setup, the content of the title html tag is passed over to the page 
 as-is, i.e. the wicket:message tag is not recognized and processed by Wicket.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-2631) wicket:message within wicket:head not processed

2010-06-15 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12878955#action_12878955
 ] 

Erik van Oosten commented on WICKET-2631:
-

Seeing the same here in Wicket 1.4.9. Will try to reproduce it in a quickstart.

 wicket:message within wicket:head not processed
 ---

 Key: WICKET-2631
 URL: https://issues.apache.org/jira/browse/WICKET-2631
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Ivo Maixner
Assignee: Juergen Donnerstag
Priority: Minor

 My pages extend a base page, so they use the wicket:extend tag. For such a 
 page to specify its html title, the wicket:head tag has to be used. At the 
 same time, my pages require localization, so the page title cannot be 
 hardcoded but needs to be loaded from properties files instead, so I have to 
 use the wicket:message tag inside the wicket:head tag. Overall, the page 
 looks like this:
 <wicket:head>
   <title><wicket:message key="page_title">[page_title]</wicket:message></title>
 </wicket:head>
 <wicket:extend>
   ... page content ...
 </wicket:extend>
 In this setup, the content of the title html tag is passed over to the page 
 as-is, i.e. the wicket:message tag is not recognized and processed by Wicket.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-1469) New Wicket tag 'wicket:for'

2009-12-23 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-1469:


Attachment: WICKET-1469-for-wicket-1.4.x.patch

Attachment WICKET-1469-for-wicket-1.4.x.patch can be applied to branch 1.4.x.

Supported:
 - wicket:for attribute, value refers to any component
 - during rendering, the for attribute is generated with the referred 
component's markup id as its value
 - the referred component will automatically render its markup id when it is 
located /after/ the wicket:for attribute in the markup stream

Not supported:
 - the referred component will /not/ automatically render its markup id when it 
is located before the wicket:for attribute in the markup stream

Any clues on how to support the latter are much appreciated. The alternative is 
to remove auto rendering of markup ids completely.

 New Wicket tag 'wicket:for'
 ---

 Key: WICKET-1469
 URL: https://issues.apache.org/jira/browse/WICKET-1469
 Project: Wicket
  Issue Type: New Feature
  Components: wicket
Affects Versions: 1.3.2
Reporter: Jan Kriesten
Priority: Minor
 Fix For: 1.5-M1

 Attachments: WICKET-1469-for-wicket-1.4.x.patch


 This often happens during my daily work:
 You create a form with labels and corresponding input fields. As it is now, 
 you have to bind all those Labels and FormComponents together with some 
 boilerplate code within Java.
 I'd like to suggest the following enhancement Wicket tag:
 <label wicket:for="username" wicket:message="key">default message</label>
 where wicket:for contains the referenced wicket:id

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (WICKET-2602) Display upload progress bar only when a file is selected

2009-12-16 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten reopened WICKET-2602:
-


Please apply the attached patch to optimize serialization.

 Display upload progress bar only when a file is selected
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
Assignee: Igor Vaynberg
 Fix For: 1.4.5, 1.5-M1

 Attachments: Serialization_optimization_.patch, 
 WICKET-2602-1.3.patch, WICKET-2602-1.4_and_1.5.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Display upload progress bar only when a file is selected

2009-12-16 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Attachment: Serialization_optimization_.patch

 Display upload progress bar only when a file is selected
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
Assignee: Igor Vaynberg
 Fix For: 1.4.5, 1.5-M1

 Attachments: Serialization_optimization_.patch, 
 WICKET-2602-1.3.patch, WICKET-2602-1.4_and_1.5.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-1469) New Wicket tag 'wicket:for'

2009-12-14 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12790036#action_12790036
 ] 

Erik van Oosten commented on WICKET-1469:
-

See e-mail discussion: 
http://old.nabble.com/Request-for-input-on-new-feature-idea%3A-wicket%3Afor-attribute-td26765933.html

 New Wicket tag 'wicket:for'
 ---

 Key: WICKET-1469
 URL: https://issues.apache.org/jira/browse/WICKET-1469
 Project: Wicket
  Issue Type: New Feature
  Components: wicket
Affects Versions: 1.3.2
Reporter: Jan Kriesten
Priority: Minor
 Fix For: 1.5-M1


 This often happens during my daily work:
 You create a form with labels and corresponding input fields. As it is now, 
 you have to bind all those Labels and FormComponents together with some 
 boilerplate code within Java.
 I'd like to suggest the following enhancement Wicket tag:
 <label wicket:for="username" wicket:message="key">default message</label>
 where wicket:for contains the referenced wicket:id

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-2602) Delay display of upload progress bar

2009-12-09 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12787976#action_12787976
 ] 

Erik van Oosten commented on WICKET-2602:
-

I deleted the patch as I have a new patch in the making that will detect if a 
file will actually be uploaded for a specific file field.

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten

 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Delay display of upload progress bar

2009-12-09 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Comment: was deleted

(was: Please apply this patch to:
- trunk
- branch 1.4.x
- branch 1.3.x)

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten

 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.
 The patch will display the upload bar only after 1 second. Presumably a 
 submit without file will be finished by then.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Delay display of upload progress bar

2009-12-09 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Attachment: (was: WICKET-2602.patch)

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten

 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.
 The patch will display the upload bar only after 1 second. Presumably a 
 submit without file will be finished by then.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Delay display of upload progress bar

2009-12-09 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Description: When the UploadProgressBar is part of a larger form structure 
where not all submits actually start a file upload, it is disturbing to see the 
'upload starting...' message.  (was: When the UploadProgressBar is part of a 
larger form structure where not all submits actually start a file upload, it is 
disturbing to see the 'upload starting...' message.

The patch will display the upload bar only after 1 second. Presumably a submit 
without file will be finished by then.)

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten

 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Delay display of upload progress bar

2009-12-09 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Comment: was deleted

(was: An even better fix would show the upload bar only when there is a file to 
upload.)

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten

 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.
 The patch will display the upload bar only after 1 second. Presumably a 
 submit without file will be finished by then.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-2600) Redirect to home page still does not work (regression)

2009-12-09 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12787981#action_12787981
 ] 

Erik van Oosten commented on WICKET-2600:
-

Yes, I tested this with Firefox 3.5 (Ubuntu), Chrome 3.0 (Windows), and IE 5.5, 
6, 7 and 8.

BTW, it is within reason that any other browser that understands "./" will also 
understand ".".
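The assumption that "." and "./" resolve to the same location matches RFC 3986 relative-reference resolution, which can be checked with java.net.URI (a sketch with a made-up base URL; this says nothing about IE's redirect quirk itself):

```java
import java.net.URI;

public class DotResolve {
    public static void main(String[] args) {
        URI base = URI.create("http://example.com/app/page");
        // Per RFC 3986, both "." and "./" drop the last path segment of the base.
        System.out.println(base.resolve("."));   // http://example.com/app/
        System.out.println(base.resolve("./"));  // http://example.com/app/
    }
}
```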

 Redirect to home page still does not work (regression)
 --

 Key: WICKET-2600
 URL: https://issues.apache.org/jira/browse/WICKET-2600
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.3.7, 1.4.4, 1.5-M1
Reporter: Erik van Oosten
 Attachments: WICKET-2600.patch


 It is still not possible to redirect to the home under all circumstances with 
 Tomcat + IE (6, 7 and 8).
 WICKET-847 fixed a problem by removing any "./" at the start of the redirect 
 URL.
 WICKET-1916 undid this for redirect URLs that are exactly equal to "./".
 The latter fix is not correct: IE cannot redirect to "./".
 The correct addition to WICKET-847 would be to redirect to ".". See patch.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (WICKET-2600) Redirect to home page still does not work (regression)

2009-12-09 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12787981#action_12787981
 ] 

Erik van Oosten edited comment on WICKET-2600 at 12/9/09 9:10 AM:
--

Yes, I tested this with Firefox 3.5 (Ubuntu), Chrome 3.0 (Windows), and IE 5.5, 
6, 7 and 8.

BTW, it is within reason that any other browser that understands "./" will also 
understand ".". Under this assumption the patch can make the situation only 
better, not worse.

  was (Author: erikvanoosten):
Yes, I tested this with Firefox 3.5 (Ubuntu), Chrome 3.0 (Windows), and IE 
5.5, 6, 7 and 8.

BTW, it is within reason that any other browser that understands "./" will also 
understand ".".
  
 Redirect to home page still does not work (regression)
 --

 Key: WICKET-2600
 URL: https://issues.apache.org/jira/browse/WICKET-2600
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.3.7, 1.4.4, 1.5-M1
Reporter: Erik van Oosten
 Attachments: WICKET-2600.patch


 It is still not possible to redirect to the home under all circumstances with 
 Tomcat + IE (6, 7 and 8).
 WICKET-847 fixed a problem by removing any "./" at the start of the redirect 
 URL.
 WICKET-1916 undid this for redirect URLs that are exactly equal to "./".
 The latter fix is not correct: IE cannot redirect to "./".
 The correct addition to WICKET-847 would be to redirect to ".". See patch.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Do not display upload progress bar when no file is selected

2009-12-09 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Summary: Do not display upload progress bar when no file is selected  (was: 
Delay display of upload progress bar)

 Do not display upload progress bar when no file is selected
 ---

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
 Attachments: WICKET-2602-1.3.patch, WICKET-2602-1.4_and_1.5.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Delay display of upload progress bar

2009-12-09 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Attachment: WICKET-2602-1.4_and_1.5.patch
WICKET-2602-1.3.patch

These patches add an option to make UploadProgressBar display the upload 
progress bar only when a file is selected.

Please apply the 1.3 patch in branch 1.3.x.
Please apply the 1.4_and_1.5 patch in trunk and branch 1.4.x.

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
 Attachments: WICKET-2602-1.3.patch, WICKET-2602-1.4_and_1.5.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-2602) Do not display upload progress bar when no file is selected

2009-12-09 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788023#action_12788023
 ] 

Erik van Oosten commented on WICKET-2602:
-

Tested the patch with Chrome 3.0 (Windows), IE 8.0 and Firefox 3.5 (Ubuntu).

 Do not display upload progress bar when no file is selected
 ---

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
 Attachments: WICKET-2602-1.3.patch, WICKET-2602-1.4_and_1.5.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Do not display upload progress bar when no file is selected

2009-12-09 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Comment: was deleted

(was: I deleted the patch as I have a new patch in the making that will detect 
if a file will actually be uploaded for a specific file field.)

 Do not display upload progress bar when no file is selected
 ---

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
 Attachments: WICKET-2602-1.3.patch, WICKET-2602-1.4_and_1.5.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (WICKET-2600) Redirect to home page still does not work (regression)

2009-12-08 Thread Erik van Oosten (JIRA)
Redirect to home page still does not work (regression)
--

 Key: WICKET-2600
 URL: https://issues.apache.org/jira/browse/WICKET-2600
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.4.4, 1.3.7, 1.5-M1
Reporter: Erik van Oosten


It is still not possible to redirect to the home under all circumstances with 
Tomcat + IE (6, 7 and 8).

WICKET-847 fixed a problem by removing any "./" at the start of the redirect 
URL.
WICKET-1916 undid this for redirect URLs that are exactly equal to "./".

The latter fix is not correct: IE cannot redirect to "./".
The correct addition to WICKET-847 would be to redirect to ".". See patch.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2600) Redirect to home page still does not work (regression)

2009-12-08 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2600:


Attachment: WICKET-2600.patch

Patch can be applied in:
- trunk
- branch 1.4.x
- branch 1.3.x


 Redirect to home page still does not work (regression)
 --

 Key: WICKET-2600
 URL: https://issues.apache.org/jira/browse/WICKET-2600
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.3.7, 1.4.4, 1.5-M1
Reporter: Erik van Oosten
 Attachments: WICKET-2600.patch


 It is still not possible to redirect to the home under all circumstances with 
 Tomcat + IE (6, 7 and 8).
 WICKET-847 fixed a problem by removing any "./" at the start of the redirect 
 URL.
 WICKET-1916 undid this for redirect URLs that are exactly equal to "./".
 The latter fix is not correct: IE cannot redirect to "./".
 The correct addition to WICKET-847 would be to redirect to ".". See patch.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (WICKET-2602) Delay display of upload progress bar

2009-12-08 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2602:


Attachment: WICKET-2602.patch

Please apply this patch to:
- trunk
- branch 1.4.x
- branch 1.3.x

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
 Attachments: WICKET-2602.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.
 The patch will display the upload bar only after 1 second. Presumably a 
 submit without file will be finished by then.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-2602) Delay display of upload progress bar

2009-12-08 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12787513#action_12787513
 ] 

Erik van Oosten commented on WICKET-2602:
-

An even better fix would show the upload bar only when there is a file to 
upload.

 Delay display of upload progress bar
 

 Key: WICKET-2602
 URL: https://issues.apache.org/jira/browse/WICKET-2602
 Project: Wicket
  Issue Type: Improvement
  Components: wicket-extensions
Reporter: Erik van Oosten
 Attachments: WICKET-2602.patch


 When the UploadProgressBar is part of a larger form structure where not all 
 submits actually start a file upload, it is disturbing to see the 'upload 
 starting...' message.
 The patch will display the upload bar only after 1 second. Presumably a 
 submit without file will be finished by then.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WICKET-1355) Autocomplete window has wrong position in scrolled context

2009-12-04 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12785895#action_12785895
 ] 

Erik van Oosten commented on WICKET-1355:
-

One disadvantage of positioning the div as a sibling of the textfield is that the 
dropdown div can never extend beyond the area of the first parent with 
position: absolute. That is, you need to make sure that there is enough room below 
the textfield within that parent (for example by letting some more input fields 
follow, or by adding some empty space). This will come up with modal windows, but 
it is not a problem at all when there is no such parent.

This disadvantage is outweighed by the disadvantage of having no visible dropdown 
div at all.

 Autocomplete window has wrong position in scrolled context
 --

 Key: WICKET-1355
 URL: https://issues.apache.org/jira/browse/WICKET-1355
 Project: Wicket
  Issue Type: Bug
  Components: wicket-extensions
Affects Versions: 1.3.1
Reporter: Erik van Oosten
Assignee: Igor Vaynberg
 Attachments: wicket-1355-wicket-1.3.x-autocomplete.patch, 
 wicket-1355-wicket-1.4.x-autocomplete.patch, wicket-autocomplete.js


 When the autocompleted field is located in a scrolled div, the drop-down 
 window is positioned too far down.




[jira] Commented: (WICKET-2579) tabbedpanel (and ajaxtabbedpanel) only submit the selected tab. A mode which instead submits all loaded tabs would be helpful.

2009-11-19 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12779860#action_12779860
 ] 

Erik van Oosten commented on WICKET-2579:
-

Roger, please note that what you ask is not weird; it's just that it cannot be 
accomplished with the current TabbedPanel. Another component that does what you 
ask would of course be welcome.

 tabbedpanel (and ajaxtabbedpanel) only submit the selected tab. A mode which 
 instead submits all loaded tabs would be helpful.
 --

 Key: WICKET-2579
 URL: https://issues.apache.org/jira/browse/WICKET-2579
 Project: Wicket
  Issue Type: Wish
  Components: wicket-extensions
Affects Versions: 1.4.3
Reporter: Roger Armstrong
Assignee: Igor Vaynberg

 If I want to split the contents of a form across multiple tabs (for example a 
 user profile form split into basic and advanced settings), there seems to be 
 no way to validate the form properly.
 The user should be able to fill out, say, first name and last name in the 
 basic tab, then switch to the advanced tab and fill out some settings there, 
 then click the Save button. If the user forgot to fill out a required field 
 on the basic, (say, email address), there's no way to handle this (because 
 the first tab is already gone when you switch to the second tab).
 I've tried to use an AjaxFormValidatingBehavior on blur of all form 
 components, but this is not a good solution since validation occurs on lost 
 focus instead of when the user clicks the Save button.
 What I would like would be that the TabbedPanel keeps all visited panels 
 around (but all hidden except the selected tab) so that they are all 
 submitted together. That way, you have lazy loading, but standard submit and 
 validate behavior (at the expense of keeping the loaded panels around).
 This seems like a fairly standard pattern for using a tabbed panel, so it 
 would seem useful to have it in the standard tab panel instead of everyone 
 having to reinvent it (like at 
 http://www.xaloon.org/blog/advanced-wicket-tabs-with-jquery).




[jira] Created: (WICKET-2580) Javadoc of Component#setOutputMarkupPlaceholderTag is wrong

2009-11-18 Thread Erik van Oosten (JIRA)
Javadoc of Component#setOutputMarkupPlaceholderTag is wrong
---

 Key: WICKET-2580
 URL: https://issues.apache.org/jira/browse/WICKET-2580
 Project: Wicket
  Issue Type: Bug
  Components: wicket
Affects Versions: 1.4.3, 1.3.7
Reporter: Erik van Oosten


The javadoc of Component#setOutputMarkupPlaceholderTag uses the term 
componentid where it should use markupid.

Please update the javadoc from:

  The tag is of form: <componenttag style="display:none;" id="componentid"/>.

to

  The tag is of form: <componenttag style="display:none;" id="markupid"/>.





[jira] Commented: (WICKET-2579) tabbedpanel (and ajaxtabbedpanel) only submit the selected tab. A mode which instead submits all loaded tabs would be helpful.

2009-11-18 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12779472#action_12779472
 ] 

Erik van Oosten commented on WICKET-2579:
-

This requires an entirely new implementation of TabbedPanel. The current tabbed 
panel is based on the idea that only the visible tab is rendered. What is 
needed to implement what you want is to render all tabs to the browser and then 
only display one of the panels at a time. This can only be done with javascript 
(another departure from the current TabbedPanel implementation).

 tabbedpanel (and ajaxtabbedpanel) only submit the selected tab. A mode which 
 instead submits all loaded tabs would be helpful.
 --

 Key: WICKET-2579
 URL: https://issues.apache.org/jira/browse/WICKET-2579
 Project: Wicket
  Issue Type: Wish
  Components: wicket-extensions
Affects Versions: 1.4.3
Reporter: Roger Armstrong

 If I want to split the contents of a form across multiple tabs (for example a 
 user profile form split into basic and advanced settings), there seems to be 
 no way to validate the form properly.
 The user should be able to fill out, say, first name and last name in the 
 basic tab, then switch to the advanced tab and fill out some settings there, 
 then click the Save button. If the user forgot to fill out a required field 
 on the basic, (say, email address), there's no way to handle this (because 
 the first tab is already gone when you switch to the second tab).
 I've tried to use an AjaxFormValidatingBehavior on blur of all form 
 components, but this is not a good solution since validation occurs on lost 
 focus instead of when the user clicks the Save button.
 What I would like would be that the TabbedPanel keeps all visited panels 
 around (but all hidden except the selected tab) so that they are all 
 submitted together. That way, you have lazy loading, but standard submit and 
 validate behavior (at the expense of keeping the loaded panels around).
 This seems like a fairly standard pattern for using a tabbed panel, so it 
 would seem useful to have it in the standard tab panel instead of everyone 
 having to reinvent it (like at 
 http://www.xaloon.org/blog/advanced-wicket-tabs-with-jquery).




[jira] Updated: (WICKET-1355) Autocomplete window has wrong position in scrolled context

2009-11-17 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-1355:


Attachment: wicket-1355-wicket-1.4.x-autocomplete.patch
wicket-1355-wicket-1.3.x-autocomplete.patch

Attached 2 patches:

- wicket-1355-wicket-1.3.x-autocomplete.patch
to be applied in 
http://svn.apache.org/repos/asf/wicket/branches/wicket-1.3.x/jdk-1.4/wicket-extensions/src/main/java/org/apache/wicket/extensions/ajax/markup/html/autocomplete

- wicket-1355-wicket-1.4.x-autocomplete.patch
to be applied in 
http://svn.apache.org/repos/asf/wicket/branches/wicket-1.4.x/wicket-extensions/src/main/java/org/apache/wicket/extensions/ajax/markup/html/autocomplete

 Autocomplete window has wrong position in scrolled context
 --

 Key: WICKET-1355
 URL: https://issues.apache.org/jira/browse/WICKET-1355
 Project: Wicket
  Issue Type: Bug
  Components: wicket-extensions
Affects Versions: 1.3.1
Reporter: Erik van Oosten
Assignee: Igor Vaynberg
 Attachments: wicket-1355-wicket-1.3.x-autocomplete.patch, 
 wicket-1355-wicket-1.4.x-autocomplete.patch, wicket-autocomplete.js


 When the autocompleted field is located in a scrolled div, the drop-down 
 window is positioned too far down.




[jira] Commented: (WICKET-1355) Autocomplete window has wrong position in scrolled context

2009-11-17 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12778918#action_12778918
 ] 

Erik van Oosten commented on WICKET-1355:
-

- wicket-1355-wicket-1.4.x-autocomplete.patch
can ALSO be applied in trunk (e.g. in 
http://svn.apache.org/repos/asf/wicket/trunk/wicket-extensions/src/main/java/org/apache/wicket/extensions/ajax/markup/html/autocomplete)

 Autocomplete window has wrong position in scrolled context
 --

 Key: WICKET-1355
 URL: https://issues.apache.org/jira/browse/WICKET-1355
 Project: Wicket
  Issue Type: Bug
  Components: wicket-extensions
Affects Versions: 1.3.1
Reporter: Erik van Oosten
Assignee: Igor Vaynberg
 Attachments: wicket-1355-wicket-1.3.x-autocomplete.patch, 
 wicket-1355-wicket-1.4.x-autocomplete.patch, wicket-autocomplete.js


 When the autocompleted field is located in a scrolled div, the drop-down 
 window is positioned too far down.




[jira] Commented: (WICKET-1355) Autocomplete window has wrong position in scrolled context

2009-11-17 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12778940#action_12778940
 ] 

Erik van Oosten commented on WICKET-1355:
-

The fix is based on the idea that it is very hard to position a div relative to 
a textfield when that div is added to the document body. Instead, the div is 
added as a sibling of the textfield. The idea came from my colleague Tim Taylor 
(toolman), who also gave me the initial patch.

 Autocomplete window has wrong position in scrolled context
 --

 Key: WICKET-1355
 URL: https://issues.apache.org/jira/browse/WICKET-1355
 Project: Wicket
  Issue Type: Bug
  Components: wicket-extensions
Affects Versions: 1.3.1
Reporter: Erik van Oosten
Assignee: Igor Vaynberg
 Attachments: wicket-1355-wicket-1.3.x-autocomplete.patch, 
 wicket-1355-wicket-1.4.x-autocomplete.patch, wicket-autocomplete.js


 When the autocompleted field is located in a scrolled div, the drop-down 
 window is positioned too far down.




[jira] Commented: (WICKET-2395) add MixedParamHybridUrlCodingStrategy

2009-09-27 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/WICKET-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12760086#action_12760086
 ] 

Erik van Oosten commented on WICKET-2395:
-

Igor committed WICKET-2439 earlier. WICKET-2439 also contains this class.

 add MixedParamHybridUrlCodingStrategy
 -

 Key: WICKET-2395
 URL: https://issues.apache.org/jira/browse/WICKET-2395
 Project: Wicket
  Issue Type: New Feature
Affects Versions: 1.4-RC5
Reporter: Vladimir Kovalyuk
Assignee: Juergen Donnerstag
 Fix For: 1.4.2


 /**
  * Apache 2 license.
  */
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Set;

 import org.apache.wicket.Page;
 import org.apache.wicket.PageParameters;
 import org.apache.wicket.RequestCycle;
 import org.apache.wicket.request.target.coding.HybridUrlCodingStrategy;
 import org.apache.wicket.request.target.coding.MixedParamUrlCodingStrategy;
 import org.apache.wicket.util.string.AppendingStringBuffer;
 import org.apache.wicket.util.value.ValueMap;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 /**
  * @see MixedParamUrlCodingStrategy
  *
  * @author Erik van Oosten
  */
 public class MixedParamHybridUrlCodingStrategy extends HybridUrlCodingStrategy {

     private static Logger logger =
             LoggerFactory.getLogger(MixedParamHybridUrlCodingStrategy.class);

     private final String[] parameterNames;
     private boolean ignoreUndeclaredParameters = true;

     /**
      * Construct.
      *
      * @param mountPath
      *            mount path
      * @param pageClass
      *            class of mounted page
      * @param redirectOnBookmarkableRequest
      *            ?
      * @param parameterNames
      *            the parameter names (not null)
      */
     public MixedParamHybridUrlCodingStrategy(String mountPath,
             Class<? extends Page> pageClass,
             boolean redirectOnBookmarkableRequest, String[] parameterNames) {
         super(mountPath, pageClass, redirectOnBookmarkableRequest);
         this.parameterNames = parameterNames;
     }

     /**
      * Construct.
      *
      * @param mountPath
      *            mount path
      * @param pageClass
      *            class of mounted page
      * @param parameterNames
      *            the parameter names (not null)
      */
     public MixedParamHybridUrlCodingStrategy(String mountPath,
             Class<? extends Page> pageClass, String[] parameterNames) {
         super(mountPath, pageClass);
         this.parameterNames = parameterNames;
     }

     /** {@inheritDoc} */
     @Override
     protected void appendParameters(AppendingStringBuffer url, Map<String, ?> parameters) {
         if (!url.endsWith("/")) {
             url.append("/");
         }
         Set<String> parameterNamesToAdd = new HashSet<String>(parameters.keySet());

         // Find index of last specified parameter
         boolean foundParameter = false;
         int lastSpecifiedParameter = parameterNames.length;
         while (lastSpecifiedParameter != 0 && !foundParameter) {
             foundParameter = parameters.containsKey(parameterNames[--lastSpecifiedParameter]);
         }

         if (foundParameter) {
             for (int i = 0; i <= lastSpecifiedParameter; i++) {
                 String parameterName = parameterNames[i];
                 final Object param = parameters.get(parameterName);
                 String value = param instanceof String[] ? ((String[]) param)[0] : (String) param;
                 if (value == null) {
                     value = "";
                 }
                 url.append(urlEncodePathComponent(value)).append("/");
                 parameterNamesToAdd.remove(parameterName);
             }
         }

         if (!parameterNamesToAdd.isEmpty()) {
             boolean first = true;
             final Iterator iterator = parameterNamesToAdd.iterator();
             while (iterator.hasNext()) {
                 url.append(first ? '?' : '&');
                 String parameterName = (String) iterator.next();
                 final Object param = parameters.get(parameterName);
                 String value = param instanceof String[] ? ((String[]) param)[0] : (String) param;
                 url.append(urlEncodeQueryComponent(parameterName)).append("=")
                         .append(urlEncodeQueryComponent(value));
                 first = false;
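The encoding scheme the class above implements can be illustrated with a self-contained sketch (plain Java, no Wicket dependencies; the class and method names here are illustrative, not the actual Wicket API): declared parameters become path segments in declaration order, and every remaining parameter goes to the query string.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch of the mixed-param URL scheme, not Wicket code. */
public class MixedParamUrlSketch {

    static String encode(String mountPath, String[] declared, Map<String, String> params) {
        try {
            StringBuilder url = new StringBuilder(mountPath);
            if (url.charAt(url.length() - 1) != '/') {
                url.append('/');
            }
            Map<String, String> rest = new LinkedHashMap<String, String>(params);

            // Find the last declared parameter that is actually present.
            int last = -1;
            for (int i = declared.length - 1; i >= 0; i--) {
                if (params.containsKey(declared[i])) {
                    last = i;
                    break;
                }
            }
            // Declared parameters up to that index become path segments.
            for (int i = 0; i <= last; i++) {
                String value = rest.remove(declared[i]);
                url.append(URLEncoder.encode(value == null ? "" : value, "UTF-8")).append('/');
            }
            // Everything else becomes a query-string pair.
            boolean first = true;
            for (Map.Entry<String, String> e : rest.entrySet()) {
                url.append(first ? '?' : '&')
                   .append(URLEncoder.encode(e.getKey(), "UTF-8"))
                   .append('=')
                   .append(URLEncoder.encode(e.getValue(), "UTF-8"));
                first = false;
            }
            return url.toString();
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<String, String>();
        params.put("id", "42");
        params.put("sort", "asc");
        // "id" is declared, so it becomes a path segment; "sort" is not, so
        // it ends up in the query string: /products/42/?sort=asc
        System.out.println(encode("/products", new String[] {"id", "name"}, params));
    }
}
```

With declared names {"id", "name"} and values {id=42, sort=asc}, this yields /products/42/?sort=asc, which matches the path-then-query split performed by appendParameters above.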
   

[jira] Created: (WICKET-2439) Improve MixedParamUrlCodingStrategy, introduce Hybrid

2009-08-27 Thread Erik van Oosten (JIRA)
Improve MixedParamUrlCodingStrategy, introduce Hybrid
-

 Key: WICKET-2439
 URL: https://issues.apache.org/jira/browse/WICKET-2439
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Erik van Oosten
 Fix For: 1.4.2


The MixedParamUrlCodingStrategy can be improved.

The current form has the following shortcomings:
- it simply fails when something extra is appended to the URL; solution: add an 
option to ignore the extra parts (in fact I made this the default)
- when something extra is appended to the URL, the error message is not very 
clear; solution: rewrite the message and add more information
- it does not accept non-String parameter values; solution: use 
String.valueOf(paramValue)

In addition the patch adds a Hybrid variant.
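The third fix in the list above (tolerating non-String values) is small enough to sketch standalone; the class and helper names here are hypothetical, not part of Wicket:

```java
// Hypothetical helper illustrating the non-String fix described above:
// String.valueOf accepts any value type, where a blind (String) cast
// would throw ClassCastException for e.g. Integer parameter values.
public class ParamValueSketch {

    static String asString(Object paramValue) {
        return paramValue == null ? "" : String.valueOf(paramValue);
    }

    public static void main(String[] args) {
        System.out.println(asString(Integer.valueOf(42))); // "42"
        System.out.println(asString("plain"));             // "plain"
    }
}
```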




[jira] Updated: (WICKET-2439) Improve MixedParamUrlCodingStrategy, introduce Hybrid

2009-08-27 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2439:


Attachment: WICKET-2439.patch

Please apply to the 1.4 branch, and possibly also to the 1.5 branch if still 
applicable.

 Improve MixedParamUrlCodingStrategy, introduce Hybrid
 -

 Key: WICKET-2439
 URL: https://issues.apache.org/jira/browse/WICKET-2439
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4.1
Reporter: Erik van Oosten
 Fix For: 1.4.2

 Attachments: WICKET-2439.patch


 The MixedParamUrlCodingStrategy can be improved.
 The current form has the following shortcomings:
 - it just fails when something is added to the URL, solution: add the option 
 to ignore the added parts (in fact I made this the default)
 - when something is added to the URL, the message is not very clear, 
 solution: rewrite message and add more information
 - it does not accept non-String parameter values, solution: use 
 String.valueOf(paramValue)
 In addition the patch adds a Hybrid variant.




[jira] Updated: (WICKET-2404) Quickstart for 1.4 uses 1.3 dtd in HomePage.html

2009-08-24 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2404:


Attachment: WICKET-2404-1.5.patch
WICKET-2404-1.4.patch

Patch for Wicket 1.4 (please apply in 1.4 branch)
and Wicket 1.5 (please apply in trunk).

 Quickstart for 1.4 uses 1.3 dtd in HomePage.html
 

 Key: WICKET-2404
 URL: https://issues.apache.org/jira/browse/WICKET-2404
 Project: Wicket
  Issue Type: Bug
  Components: wicket-quickstart
Affects Versions: 1.4.0
Reporter: Erik van Oosten
Priority: Trivial
 Attachments: WICKET-2404-1.4.patch, WICKET-2404-1.5.patch


 The generated HomePage.html contains the following header:
 <html xmlns:wicket="http://wicket.apache.org/dtds.data/wicket-xhtml1.3-strict.dtd">
 That should be:
 <html xmlns:wicket="http://wicket.apache.org/dtds.data/wicket-xhtml1.4-strict.dtd">




[jira] Updated: (WICKET-2404) Quickstart for 1.4 uses 1.3 dtd in HomePage.html (with patch)

2009-08-24 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten updated WICKET-2404:


Summary: Quickstart for 1.4 uses 1.3 dtd in HomePage.html (with patch)  
(was: Quickstart for 1.4 uses 1.3 dtd in HomePage.html)

 Quickstart for 1.4 uses 1.3 dtd in HomePage.html (with patch)
 -

 Key: WICKET-2404
 URL: https://issues.apache.org/jira/browse/WICKET-2404
 Project: Wicket
  Issue Type: Bug
  Components: wicket-quickstart
Affects Versions: 1.4.0
Reporter: Erik van Oosten
Priority: Trivial
 Attachments: WICKET-2404-1.4.patch, WICKET-2404-1.5.patch


 The generated HomePage.html contains the following header:
 <html xmlns:wicket="http://wicket.apache.org/dtds.data/wicket-xhtml1.3-strict.dtd">
 That should be:
 <html xmlns:wicket="http://wicket.apache.org/dtds.data/wicket-xhtml1.4-strict.dtd">




[jira] Resolved: (WICKET-2288) Refactor DefaultPageFactory#constructor

2009-08-24 Thread Erik van Oosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/WICKET-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik van Oosten resolved WICKET-2288.
-

Resolution: Invalid

Seems to be fixed already.

 Refactor DefaultPageFactory#constructor
 ---

 Key: WICKET-2288
 URL: https://issues.apache.org/jira/browse/WICKET-2288
 Project: Wicket
  Issue Type: Improvement
  Components: wicket
Affects Versions: 1.4-RC3
Reporter: Erik van Oosten
Priority: Trivial
   Original Estimate: 0.03h
  Remaining Estimate: 0.03h

 Method DefaultPageFactory#constructor should lose the second parameter 
 (argumentType), as it looks up cached Constructor instances without regard to 
 the argument type. Instead the type (always PageParameters.class) should be 
 hard-coded in DefaultPageFactory#constructor.



