On 28 November 2015 at 18:35, Sterpu Victor wrote:
> Can I make a distinct STRING_AGG?
> This is my query :
> SELECT atjs.id, STRING_AGG(CAST(aqjs1.id AS VARCHAR), ',' ORDER BY
> aqjs1.to_left) AS children
> FROM administration.ad_query_join_select atjs
> JOIN
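(PostgreSQL does accept DISTINCT inside STRING_AGG, but only when the ORDER BY expression also appears in the aggregate's argument list — which rules out ordering by aqjs1.to_left directly. A minimal sketch, with the remaining joins elided as in the original query:

```sql
-- Works: DISTINCT with ORDER BY on the aggregated expression itself.
SELECT atjs.id,
       STRING_AGG(DISTINCT CAST(aqjs1.id AS VARCHAR), ','
                  ORDER BY CAST(aqjs1.id AS VARCHAR)) AS children
FROM administration.ad_query_join_select atjs
JOIN ...;

-- Fails with "in an aggregate with DISTINCT, ORDER BY expressions
-- must appear in argument list":
-- STRING_AGG(DISTINCT CAST(aqjs1.id AS VARCHAR), ',' ORDER BY aqjs1.to_left)
```

Hence the later suggestions in this thread to de-duplicate before aggregating.)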
On Sun, 29 Nov 2015 05:04:42 -0500, "Steve Petrie, P.Eng."
wrote:
>"George Neuner" wrote in message
>news:kaed5btl92qr4v8ndevlgtv0f28qaae...@4ax.com...
>
>> My vote for an email client would be Thunderbird. It runs on XP or
>> higher and you can
On Nov 29, 2015 18:34, "George Neuner" wrote:
>
> On Sun, 29 Nov 2015 05:04:42 -0500, "Steve Petrie, P.Eng."
> wrote:
>
> >"George Neuner" wrote in message
> >news:kaed5btl92qr4v8ndevlgtv0f28qaae...@4ax.com...
> >
> >> My vote
Bill Moran writes:
> Tom Smith wrote:
>> Is there a plan for 9.6 to resolve the issue of very slow
>> query/retrieval of jsonb fields when there are large number (maybe
>> several thousands) of top level keys. Currently, if I save a large
>>
Hi:
The goal is fast retrieval of a field value within a row once the row is
already picked; one scenario is downloading a particular field value (if it
exists) for all rows in the table.
It is actually a very common use case: exporting data for several
user-selected fields.
The performance is
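(The export use case described above amounts to projecting one jsonb key per row. A minimal sketch, assuming a hypothetical table docs(id int, doc jsonb):

```sql
-- Export one user-selected field; ->> returns text, or NULL when the
-- key is absent in that row's document.
SELECT id, doc ->> 'field_name' AS field_value
FROM docs;
```

Note that even this per-key access must detoast the whole document for each row, which is the cost being discussed in this thread.)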
On 29 November 2015 at 18:59, Sterpu Victor wrote:
> I can't skip the ordering.
> I'm sure aqjs3 is the one that produces the duplication.
> I guess subqueries are the only option, like this:
Well you could look at the intarray extension and a combination of array_agg,
On 29 November 2015 at 20:51, I wrote:
> Well you could look at the intarray extension and a combination of
> array_agg, uniq() and
> *string_to_array*:
Mind blip, apologies, obviously I meant array_to_string :)
Geoff
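(A minimal sketch of Geoff's intarray suggestion, assuming aqjs1.id is an integer — intarray's uniq() works only on int[]. Note that uniq() removes only *adjacent* duplicates, so this relies on duplicated ids sorting next to each other under the to_left ordering; joins are elided as in the original query:

```sql
-- Requires: CREATE EXTENSION intarray;
SELECT atjs.id,
       array_to_string(
           uniq(array_agg(aqjs1.id ORDER BY aqjs1.to_left)),
           ',') AS children
FROM administration.ad_query_join_select atjs
JOIN ...
GROUP BY atjs.id;
```
)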
I can't skip the ordering.
I'm sure aqjs3 is the one that produces the duplication.
I guess subqueries are the only option, like this:
SELECT atjs.id, tmp.children AS children
FROM administration.ad_query_join_select atjs
JOIN (SELECT
atjs.id, STRING_AGG(CAST(aqjs1.id AS VARCHAR), ',' ORDER
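(One possible completion of this subquery approach — de-duplicate with SELECT DISTINCT first, then aggregate; the join list and the exact duplicate-producing join on aqjs3 are elided as in the original, so this is only a hypothetical shape:

```sql
SELECT atjs.id, tmp.children AS children
FROM administration.ad_query_join_select atjs
JOIN (SELECT sub.id,
             STRING_AGG(CAST(sub.child_id AS VARCHAR), ','
                        ORDER BY sub.to_left) AS children
      FROM (SELECT DISTINCT atjs2.id, aqjs1.id AS child_id, aqjs1.to_left
            FROM administration.ad_query_join_select atjs2
            JOIN ...) AS sub          -- remaining joins as in the original
      GROUP BY sub.id) AS tmp ON tmp.id = atjs.id;
```
)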
On 11/29/2015 02:04 AM, Steve Petrie, P.Eng. wrote:
Thanks to George for the Thunderbird email client recommendation. My remarks
are below.
"George Neuner" wrote in message
news:kaed5btl92qr4v8ndevlgtv0f28qaae...@4ax.com...
On Wed, 25 Nov 2015 23:39:39 -0500, "Steve
On 11/29/2015 01:59 AM, Steve Petrie, P.Eng. wrote:
Thanks to Jeff for the helpful response. My remarks are below.
- Original Message - From: "Jeff Janes"
To: "Steve Petrie, P.Eng."
Cc: "Tim Uckun" ; "Merlin Moncure"
On Sun, 29 Nov 2015 08:24:12 -0500
Tom Smith wrote:
> Hi, Thanks for everyone's response.
>
> The issue is not just compression, but lack of "indexing" or "segmentation"
> when a
> single doc has, say 2000 top level keys (with multiple levels of subkeys).
> right now,
Thanks to Jeff for the helpful response. My remarks are below.
- Original Message -
From: "Jeff Janes"
To: "Steve Petrie, P.Eng."
Cc: "Tim Uckun" ; "Merlin Moncure" ;
"John R Pierce"
Thanks to George for the helpful comments. My remarks are below.
"George Neuner" wrote in message
news:gvad5bllba9slstdhkn6ql2jbplgd78...@4ax.com...
On Wed, 25 Nov 2015 23:39:39 -0500, "Steve Petrie, P.Eng."
wrote:
My plan was always, to
Thanks to George for the Thunderbird email client recommendation. My remarks
are below.
"George Neuner" wrote in message
news:kaed5btl92qr4v8ndevlgtv0f28qaae...@4ax.com...
On Wed, 25 Nov 2015 23:39:39 -0500, "Steve Petrie, P.Eng."
wrote:
My stupid
Hi, thanks for everyone's response.
The issue is not just compression, but the lack of "indexing" or
"segmentation" when a
single doc has, say, 2000 top-level keys (with multiple levels of subkeys).
Right now, if I query for one key, the whole doc
has to be uncompressed and loaded first, and then
Unfortunately, the keys cannot be predefined or fixed; it is a document,
which is the reason jsonb
is used. It works well for small docs with a small number of keys,
but is really slow with a large number of keys. If this issue is resolved, I
think PostgreSQL
would be an absolutely superior choice over MongoDB.
On Sat, 28 Nov 2015 21:27:51 -0500
Tom Smith wrote:
>
> Is there a plan for 9.6 to resolve the issue of very slow query/retrieval
> of jsonb fields
> when there are large number (maybe several thousands) of top level keys.
> Currently, if I save a large json document
Hi:
On Sun, Nov 29, 2015 at 1:09 PM, Bill Moran wrote:
> On Sat, 28 Nov 2015 21:27:51 -0500
>> Currently, if I save a large json document with top level keys of thousands
** LARGE **
> The default storage for a JSONB field is EXTENDED. Switching it to
> EXTERNAL will
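(The storage change Bill describes is a one-line ALTER; a minimal sketch, assuming a hypothetical table docs with a jsonb column doc:

```sql
-- EXTENDED (the default) compresses the out-of-line TOAST data;
-- EXTERNAL stores it uncompressed, so partial reads need not
-- decompress the whole document.
ALTER TABLE docs ALTER COLUMN doc SET STORAGE EXTERNAL;
```

This affects only newly written rows; existing rows keep their current storage until rewritten.)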
Tom Smith wrote on 29.11.2015 at 03:27:
Hello:
Is there a plan for 9.6 to resolve the issue of very slow query/retrieval of
jsonb fields
when there is a large number (maybe several thousand) of top-level keys?
Currently, if I save a large json document with top-level keys in the thousands and
Is this correct? I'm fairly sure jsonb supports lazily parsing objects and
each object level is actually searched using binary search.
On 29/11/2015 11:25 AM, "Tom Smith" wrote:
> Hi, Thanks for everyone's response.
>
> The issue is not just compression, but lack of
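(Whatever jsonb's in-memory lookup cost, a filter on a single key can avoid touching the stored document entirely via an expression index. A minimal sketch with hypothetical names — table docs(id, doc jsonb) and a 'status' key:

```sql
-- The index stores just the extracted text value, so this filter
-- never detoasts the full document:
CREATE INDEX docs_status_idx ON docs ((doc ->> 'status'));

SELECT id FROM docs WHERE doc ->> 'status' = 'active';
```

This helps selection, not extraction: returning the key's value for every row still reads each document.)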
On Sun, 29 Nov 2015 05:02:58 -0500, "Steve Petrie, P.Eng."
wrote:
>I should have clarified that I use the term "fuzzy" to refer to the
>probability mechanism, that hooks a small burst of session row deletion
>activity, to each one of a randomly-selected portion