Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-06-15 Thread Ronan Dunklau
On Monday, June 16, 2014 at 11:32:38, Atri Sharma wrote:
> On Mon, Jun 16, 2014 at 11:28 AM, Michael Paquier wrote:
> > Just wondering: what about the case where the same data type is
> > defined on both local and remote, but with *different* definitions? Is
> > it the responsibility of the fdw to check for type incompatibilities?
> 
> Logically, should be.
> 

This is akin to what Stephen proposed, to allow IMPORT FOREIGN SCHEMA to also 
import types. 

The problem with checking whether a type is the same is deciding where to
stop. For composite types, sure, it should be easy. But what about built-in
types? Or types provided by an extension or a native library? These could
theoretically change from one release to another.

> Just wondering, can't the proposed function, typedef List
> *(*ImportForeignSchema_function) (ForeignServer *server,
> ImportForeignSchemaStmt * parsetree), be changed a bit to also give exact
> type definitions from the remote side?

I toyed with this idea, but the more I think about it the less sure I am
what the API should look like, should we ever decide to go beyond the
standard and import more than tables. Should the proposed function's return
value be changed to void, letting the FDW execute any DDL statement? The
advantage of returning a list of statements was to make it clear that tables
should be imported, and to let core enforce the "INTO local_schema" part of
the clause.

I would prefer the API to be limited by design to importing tables. This
limitation can always be bypassed by executing arbitrary statements before
returning the list of ImportForeignSchemaStmt*.

For the postgres_fdw specific case, we could add two IMPORT options (since it 
looked like we had a consensus on this):

 - import_types
 - check_types

The import_types option would import simple and composite types, issuing
the corresponding statements before returning to core.
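A sketch of how the statement might look with these options (the option
names were only proposals at this point, so treat the exact syntax as
hypothetical):

```sql
-- Hypothetical: import the tables of a remote schema, also creating
-- the types they depend on and verifying that type definitions match.
IMPORT FOREIGN SCHEMA remote_schema
    FROM SERVER some_server
    INTO local_schema
    OPTIONS (import_types 'true', check_types 'true');
```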

The check_types option would compare the local and remote definitions of
composite types. For types installed by an extension, it would check that
the local type was also created by an extension of the same name, installed
in the same schema, raising a warning if the local and remote versions
differ.  For built-in types, a warning would be raised if the local and
remote versions of PostgreSQL differ.
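One way such a comparison could be driven is by fetching each composite
type's attributes from the catalogs on both sides (a sketch only; the patch
may well do this differently, and the schema/type names are illustrative):

```sql
-- Sketch: list the attributes of composite type s2.typ1; running the
-- same query locally and remotely allows a field-by-field comparison.
SELECT a.attname, a.atttypid::regtype AS atttype
FROM pg_type t
JOIN pg_namespace n ON n.oid = t.typnamespace
JOIN pg_attribute a ON a.attrelid = t.typrelid
WHERE n.nspname = 's2'
  AND t.typname = 'typ1'
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;
```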

However, I don't know what we should do about types located in a
different schema. For example, if the remote table s1.t1 has a column of
composite type s2.typ1, should we import typ1 into s1? Into s2, optionally
creating the schema if it does not exist? Or raise an error?

Regards,

-- 
Ronan Dunklau
http://dalibo.com - http://dalibo.org

signature.asc
Description: This is a digitally signed message part.


[HACKERS] 9.5 CF1

2014-06-15 Thread Abhijit Menon-Sen
Hi.

There are 92 outstanding patches in this CommitFest, and 63 of them do
not have any reviewer. Those are very large numbers, so I hope everyone
will pitch in to keep things moving along.

There's quite a variety of patches available for review this time, and
any level of feedback about them is useful, from "no longer applies to
HEAD" or "doesn't build" to more detailed reviews.

If you don't have the time to do a full review, or are getting bogged
down, post whatever you do have (the same amount of fame and fortune
will still be yours!).

If you're wondering where to start, here are some suggestions, picked
almost at random:

Using Levenshtein distance to HINT a candidate column name

http://archives.postgresql.org/message-id/cam3swzs9-xr2ud_j9yrkdctt6xxy16h1eugtswmlu6or4ct...@mail.gmail.com

Better partial index-only scans

http://archives.postgresql.org/message-id/CABz-M-GrkvrMc9ni5S0mX53rtZg3=szneyru_a8rigq2b3m...@mail.gmail.com

Use unique index for longer pathkeys

http://archives.postgresql.org/message-id/20140613.164133.160845727.horiguchi.kyot...@lab.ntt.co.jp

SQL access to database attributes
http://archives.postgresql.org/message-id/53868e57.3030...@dalibo.com

pg_resetxlog option to change system identifier
http://archives.postgresql.org/message-id/539b97fc.8040...@2ndquadrant.com

pg_xlogdump --stats
http://archives.postgresql.org/message-id/20140604104716.ga3...@toroid.org

tab completion for set search_path TO

http://archives.postgresql.org/message-id/CAMkU=1xJzK0h7=0_sOLLKGaf7zSwp_YzcKwuG41Ns+_Qcn+t=g...@mail.gmail.com

idle_in_transaction_timeout
http://archives.postgresql.org/message-id/538e600e.1020...@dalibo.com

I'll post a periodic summary to the list, and will send out reminders by
private mail as usual.

Please feel free to contact me with questions.

-- Abhijit

P.S. If you tag your reviews with [REVIEW] in the Subject, it'll be
easier to keep track of them.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-06-15 Thread Atri Sharma
On Mon, Jun 16, 2014 at 11:28 AM, Michael Paquier  wrote:

> On Mon, May 26, 2014 at 6:23 AM, Ronan Dunklau 
> wrote:
> > On Sunday, May 25, 2014 at 12:41:18, David Fetter wrote:
> >> On Fri, May 23, 2014 at 10:08:06PM +0200, Ronan Dunklau wrote:
> >> > Hello,
> >> >
> >> > Since my last proposal didn't get any strong rebuttal, please find
> >> > attached a more complete version of the IMPORT FOREIGN SCHEMA
> >> > statement.
> >>
> >> Thanks!
> >>
> >> Please send future patches to this thread so people can track them
> >> in their mail.
> >
> > Will do.
> >
> > I didn't for the previous one because it was a few months ago, and no
> > patch had been added to the commit fest.
> >
> >>
> >> > I tried to follow the SQL-MED specification as closely as possible.
> >> >
> >> > This adds discoverability to foreign servers. The structure of the
> >> > statement as I understand it is simple enough:
> >> >
> >> > IMPORT FOREIGN SCHEMA remote_schema FROM SERVER some_server
> >> > [ (LIMIT TO | EXCEPT) table_list ] INTO local_schema.
> >> >
> >> > The import_foreign_schema patch adds the infrastructure, and a new FDW
> >> > routine:
> >> >
> >> > typedef List *(*ImportForeignSchema_function) (ForeignServer *server,
> >> > ImportForeignSchemaStmt * parsetree);
> >> >
> >> > This routine must return a list of CreateForeignTableStmt mirroring
> >> > whatever tables were found on the remote side, which will then be
> >> > executed.
> >> >
> >> > The import_foreign_schema_postgres_fdw patch proposes an implementation
> >> > of this API for postgres_fdw. It will import a foreign schema using the
> >> > right types as well as nullable information.
> >>
> >> In the case of PostgreSQL, "the right types" are obvious until there's
> >> a user-defined one.  What do you plan to do in that case?
> >>
> >
> > The current implementation fetches the types as regtype, and when
> > receiving a custom type, two things can happen:
> >
> >  - the type is defined locally: everything will work as expected
> >  - the type is not defined locally: the conversion function will fail, and
> >    raise an error of the form: ERROR:  type "schema.typname" does not exist
>
> Just wondering: what about the case where the same data type is
> defined on both local and remote, but with *different* definitions? Is
> it the responsibility of the fdw to check for type incompatibilities?
>

Logically, should be.

Just wondering, can't the proposed function, typedef List
*(*ImportForeignSchema_function) (ForeignServer *server,
ImportForeignSchemaStmt * parsetree), be changed a bit to also give exact
type definitions from the remote side?

Regards,

Atri




Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-06-15 Thread Michael Paquier
On Mon, May 26, 2014 at 6:23 AM, Ronan Dunklau  wrote:
> On Sunday, May 25, 2014 at 12:41:18, David Fetter wrote:
>> On Fri, May 23, 2014 at 10:08:06PM +0200, Ronan Dunklau wrote:
>> > Hello,
>> >
>> > Since my last proposal didn't get any strong rebuttal, please find
>> > attached a more complete version of the IMPORT FOREIGN SCHEMA statement.
>>
>> Thanks!
>>
>> Please send future patches to this thread so people can track them
>> in their mail.
>
> Will do.
>
> I didn't for the previous one because it was a few months ago, and no patch
> had been added to the commit fest.
>
>>
>> > I tried to follow the SQL-MED specification as closely as possible.
>> >
>> > This adds discoverability to foreign servers. The structure of the
>> > statement as I understand it is simple enough:
>> >
>> > IMPORT FOREIGN SCHEMA remote_schema FROM SERVER some_server [ (LIMIT TO |
>> > EXCEPT) table_list ] INTO local_schema.
>> >
>> > The import_foreign_schema patch adds the infrastructure, and a new FDW
>> > routine:
>> >
>> > typedef List *(*ImportForeignSchema_function) (ForeignServer *server,
>> > ImportForeignSchemaStmt * parsetree);
>> >
>> > This routine must return a list of CreateForeignTableStmt mirroring
>> > whatever tables were found on the remote side, which will then be
>> > executed.
>> >
>> > The import_foreign_schema_postgres_fdw patch proposes an implementation of
>> > this API for postgres_fdw. It will import a foreign schema using the right
>> > types as well as nullable information.
>>
>> In the case of PostgreSQL, "the right types" are obvious until there's
>> a user-defined one.  What do you plan to do in that case?
>>
>
> The current implementation fetches the types as regtype, and when receiving a
> custom type, two things can happen:
>
>  - the type is defined locally: everything will work as expected
>  - the type is not defined locally: the conversion function will fail, and
> raise an error of the form: ERROR:  type "schema.typname" does not exist

Just wondering: what about the case where the same data type is
defined on both local and remote, but with *different* definitions? Is
it the responsibility of the fdw to check for type incompatibilities?
-- 
Michael




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-15 Thread Stephen Frost
Kevin,

* Kevin Grittner (kgri...@ymail.com) wrote:
> Robert Haas  wrote:
> > Even aside from security exposures, how
> > does a non-superuser who runs pg_dump know whether they've got a
> > complete backup or a filtered dump that's missing some rows?
> 
> This seems to me to be a killer objection to the feature as
> proposed, and points out a huge difference between column level
> security and the proposed implementation of row level security. 

I really hate this notion of a "killer objection".  At least one suggestion
for how to address this specific issue has been discussed (though perhaps
not seen by all), and there are other ways to address it (e.g. having COPY
share the behavior of the GUC being discussed instead of adding a GUC,
though I feel the GUC is the better approach).

> (In fact it is a difference between just about any GRANTed
> permission and row level security.)  If you try to SELECT * FROM
> sometable and you don't have rights to all the columns, you get an
> error.  A dump would always either work as expected or generate an
> error.

Provided you know all of the tables and other objects which need to be
included in such a partial dump (as a full dump, today, must be run by a
superuser to be sure you're actually getting everything anyway...).

> The proposed approach would leave the validity of any dump which
> was not run as a superuser in doubt.  The last thing we need, in
> terms of improving security, is another thing you can't do without
> connecting as a superuser.

Any dump not run by a superuser is already in doubt, imv.  That is a
problem we already have which really needs to be addressed, but I view
that as an independent issue.

I agree with avoiding adding another superuser-only capability; see the
other sub-thread about making this a per-user capability.

Thanks,

Stephen




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-15 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> On Wed, Jun 11, 2014 at 8:59 PM, Stephen Frost  wrote:
> > In this case the user-defined code needs to return a boolean.  We don't
> > currently do anything to prevent it from having side-effects, no, but
> > the same is true with views which incorporate functions.  I agree that
> > it makes a difference when compared to column-level privileges, but my
> > point was that we have provided easier ways to do things which were
> > possible using more complicated methods before.  Perhaps the risk with
> RLS is higher but these issues look manageable to me and the level of
> > doubt about our ability to provide this feature in a reasonable and
> > principled way that our users will understand surprises me.
> 
> I'm glad the issues look manageable to you, but you haven't really
> explained how to manage them.  

There have been a number of suggestions made, and it'd be great to get more
feedback on them: running the quals as the table owner, or having a GUC
which can be set to run 'as normal', to ignore RLS (if the user has that
right), or to error out if RLS would be applied; undoubtedly there are
other ideas along those same lines to address the pg_dump and other
concerns.

> For my part, I'm mildly surprised that anyone thinks it's a good idea
> to have SELECT * FROM tab to mean different things depending on who is
> typing it.  

Realistically, in the RDBMS realm we're in, and which we're working to
break into, this is absolutely a given and expected.  It's new to
PostgreSQL, certainly, but it's not uncommon or surprising at all in our
industry.

> To me, that seems very confusing; how does an unprivileged
> user with no ability to assume some other role validate that the row
> security policy they've configured works at all and exposes precisely
> the intended set of rows?  

While I see what you're getting at, I'm not convinced it's really all
that different from being set up without access to some schema or table
which the administrator setting up accounts didn't include for you.
Sure, in the case of a schema or table, you can get an error back
instead of just not seeing the data, but if you're looking for specific
data, chances are pretty good you'll realize the lack of data quickly
and ask the same question regarding access.

To wit, I've certainly had users ask exactly that question -- "do I
have access to all the data in this table?" -- even when using PG where
it's a bit tricky to limit such access.  Clearly, the same risk applies
when using views and so the question is understandable.  Perhaps these
were users with more experience in other RDBMS's where it's more common
to have RLS, but there are at least a couple cases which I can think of
where that wouldn't apply.

> Even aside from security exposures, how
> does a non-superuser who runs pg_dump know whether they've got a
> complete backup or a filtered dump that's missing some rows?  

This would be addressed with the GUC that's been proposed.  As would the
previous paragraph, though I wanted to reply to that independently.

> I'm not referring to the proposed implementation particularly; or at
> least not that aspect of it.  I don't think trying to run the view
> quals as the defining user is likely to be very appealing, because I
> think it's going to hurt performance, for example by preventing
> function inlining and requiring lots of user-ID switches.  

I understand that there are performance implications.  As mentioned to
Tom, realistically, there's no way to optimize at least some of these
use-cases because they require a completely external module (eg:
SELinux) to be involved in the decision about who can view what
records.  If we can optimize that, it'd be by a completely different
approach whereby we pull up the qual higher because we know the whole
query only involves leakproof functions or similar, allowing us to only
apply the filter to the final set of records prior to them being
returned to the user.  The point being that such optimizations would
happen independently and regardless of the quals or user-defined
functions involved.  At the end of the day, I can't think of a better
optimization for such a case (where we have to ask an external security
module if a row is acceptable to return to the user) than that.  Is
there something specific you're thinking about that we'd be missing out
on?

> But I'm not
> gonna complain if someone wants to mull it over and make a proposal
> for how to make it work.  Rather, my concern is that all we've got is
> what might be called the core of the feature; the actual guts of it.
> There are a lot of ancillary details that seem to me to be not worked
> out at all yet, or only half-baked.

Perhaps it's just my experience, but I've been focused on the main core
feature for quite some time and it feels like we're really close to
having it there.  I agree that a few additional bits would be nice to
have but these strike me as relative

Re: [HACKERS] Proposal for CSN based snapshots

2014-06-15 Thread Craig Ringer
On 05/30/2014 11:14 PM, Heikki Linnakangas wrote:
> 
> Yeah. To recap, the failure mode is that if the master crashes and
> restarts, the transaction becomes visible in the master even though it
> was never replicated.

Wouldn't another pg_clog bit for the transaction be able to sort that out?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] How to change the pgsql source code and build it??

2014-06-15 Thread Craig Ringer
On 06/13/2014 07:08 AM, Shreesha wrote:
> I need to initialize the db as the root and start the database server

Assuming there's no way around doing this (it's generally not a good
idea), you can just use the simple program 'fakeroot'.

This program changes the return values from system calls via LD_PRELOAD,
so PostgreSQL thinks that the user it is running as isn't root. It's
commonly used in testing and packaging systems.

http://man.he.net/man1/fakeroot

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Built-in support for a memory consumption ulimit?

2014-06-15 Thread Craig Ringer
On 06/16/2014 11:56 AM, Amit Kapila wrote:
> On Sat, Jun 14, 2014 at 8:07 PM, Tom Lane wrote:
>>
>> After giving somebody advice, for the Nth time, to install a
>> memory-consumption ulimit instead of leaving his database to the tender
>> mercies of the Linux OOM killer, it occurred to me to wonder why we don't
>> provide a built-in feature for that, comparable to the "ulimit -c max"
>> option that already exists in pg_ctl.
> 
> Considering that we have quite some stuff which is backend local (prepared
> statement cache, pl compiled body cache, etc..) due to which memory
> usage can increase and keep on increasing depending on operations
> performed by user

AFTER trigger queues, anybody?

Though they're bad enough that they really need to spill to disk, adding
a limit for them would be at best a temporary workaround.

> Providing such a feature via GUC is a good idea, but I think changing
> limit for usage of system resources should be allowed to privileged
> users.

I don't think we have the facility to do what I'd really like: let
users lower it, but not raise it above the system-provided max, just
like ulimit itself.

So SUSET seems OK to me. I don't think it should be PGC_BACKEND, not
least because I can see the utility of a superuser-owned SECURITY
DEFINER procedure applying system specific policy to who can set what limit.
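To make the shape of this concrete, a hypothetical sketch (the GUC name
backend_memory_limit does not exist; it stands in for whatever the feature
would be called, with the SUSET context discussed above):

```sql
-- Hypothetical SUSET GUC: a superuser caps memory for a given role...
ALTER ROLE batch_user SET backend_memory_limit = '512MB';

-- ...while an unprivileged session could not change it at all,
-- which is exactly the "lower but not raise" gap noted above:
SET backend_memory_limit = '256MB';  -- would fail for non-superusers
```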

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-15 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Craig Ringer  writes:
> > I agree, and now that the urgency of trying to deliver this for 9.4 is
> > over it's worth seeing if we can just run as table owner.
> 
> > Failing that, we could take the approach a certain other RDBMS does and
> > make the ability to define row security quals a GRANTable right
> > initially held only by the superuser.
> 
> Hmm ... that might be a workable compromise.  I think the main issue here
> is whether we expect that RLS quals will be something that the planner
> could optimize to any meaningful extent.  If they're always (in effect)
> wrapped in SECURITY DEFINER functions, I think that largely blocks any
> optimizations; but maybe that wouldn't matter in practice.

From what I've heard from actual users of other RDBMSs who are coming
to PostgreSQL, the reality is that they're going to be using a security
module (eg: SELinux) whose responsibility it is to manage this whole
question of "can this user see this row", meaning there's zero chance of
optimization.

I'd certainly like to see the ability to optimize remain in cases where
the qual itself gives us a way to filter (eg: a table partitioned based
on some security level, where another table maps users to levels), but
that is, from a practical standpoint, not an immediate concern from real
users and I don't believe our approach paints us into a corner which
would prevent that.  What that would require is better support for true
partitioning rather than constraint exclusion.

Thanks,

Stephen




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-15 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Adam Brightwell  writes:
> > Through this effort, we have concluded that for RLS the case of
> > invalidating a plan is only necessary when switching between a superuser
> > and a non-superuser.  Obviously, re-planning on every role change would be
> > too costly, but this approach should help minimize that cost.  As well,
> > there were not any cases outside of this one that were immediately apparent
> > with respect to RLS that would require re-planning on a per userid basis.
> 
> Hm ... I'm not following why we'd need a special case for superusers and
> not anyone else?  Seems like any useful RLS scheme is going to require
> more privilege levels than just superuser and not-superuser.

Just to clarify this- the proposal allows RLS to be implemented
essentially by any user-defined qual, where that qual can include the
current user, the IP the user is connecting from, or more-or-less
anything else, possibly even via a user-defined function or security
module.  It is not superuser-or-not.  This discussion is about how to
support users for whom RLS should not be applied.  I can see that being
useful at a more granular level than superuser-or-not, but even at that
level, RLS is still extremely useful.
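As a concrete sketch of such a user-defined qual (hypothetical DDL; the
patch's actual syntax was still in flux at this point, so take only the
idea, not the spelling):

```sql
-- Hypothetical: each row is visible only to its owner, or to members
-- of an auditor role; the qual is arbitrary user-supplied SQL and
-- could equally call out to a security module.
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY account_owner ON accounts
    USING (owner = current_user
           OR pg_has_role(current_user, 'auditor', 'member'));
```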

> Could we put the "if superuser then ok" test into the RLS condition test
> and thereby not need more than one plan at all?

As discussed, that unfortunately doesn't quite work.

This discussion, in general, has been quite useful and I'll work on
adding documentation to the wiki pages covering the considerations and
suggestions for a GUC to disable-or-error when RLS is encountered,
along with a per-role capability to bypass RLS; that is in line with the
goal of avoiding adding superuser-specific capabilities.

Thanks,

Stephen




Re: [HACKERS] Built-in support for a memory consumption ulimit?

2014-06-15 Thread Amit Kapila
On Sat, Jun 14, 2014 at 8:07 PM, Tom Lane  wrote:
>
> After giving somebody advice, for the Nth time, to install a
> memory-consumption ulimit instead of leaving his database to the tender
> mercies of the Linux OOM killer, it occurred to me to wonder why we don't
> provide a built-in feature for that, comparable to the "ulimit -c max"
> option that already exists in pg_ctl.

Considering that we have quite a bit of backend-local state (prepared
statement cache, PL compiled-body cache, etc.) due to which memory
usage can increase and keep on increasing depending on the operations
performed by the user, or due to some bug, I think having such a feature
will be useful.  In fact, I have heard such complaints from users.

> A reasonably low-overhead way
> to do that would be to define it as something a backend process sets
> once at startup, if told to by a GUC.  The GUC could possibly be
> PGC_BACKEND level though I'm not sure if we want unprivileged users
> messing with it.

Providing such a feature via GUC is a good idea, but I think changing
limit for usage of system resources should be allowed to privileged
users.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] postgresql.auto.conf read from wrong directory

2014-06-15 Thread Amit Kapila
On Sun, Jun 15, 2014 at 6:29 PM, Christoph Berg  wrote:
>
> Re: Amit Kapila 2014-06-13 
> > Agreed, I had mentioned in Notes section of document.  Apart from that
> > I had disallowed parameters that are excluded from postgresql.conf by
> > initdb (Developer options) and they are recommended in user manual
> > to be not used in production.
>
> Excluding developer options seems too excessive to me. ALTER SYSTEM
> should be useful for developing too.

Developer options are mainly for debugging, or might help in specific
situations, so I thought somebody might not want them to remain part of the
server configuration once they are set.  We already disallow parameters like
config_file, transaction_isolation, etc., which cannot be set in
postgresql.conf.  Could you please explain a bit in which
situations/scenarios you think that allowing developer options via ALTER
SYSTEM would be helpful?
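For illustration, the distinction under discussion (as I understand the
patch; the file path is made up, and treat the second statement's rejection
as the intended behavior rather than verified output):

```sql
ALTER SYSTEM SET work_mem = '64MB';            -- ordinary GUC: accepted
ALTER SYSTEM SET config_file = '/tmp/pg.conf'; -- rejected, just as it is
                                               -- disallowed in postgresql.conf
```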


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [GSoC] Clustering in MADlib - status update

2014-06-15 Thread Maxence Ahlouche
Hi! Here is my report for the last two weeks.

Weeks 3 and 4 - 2014/06/15

During my third week, I didn't have much time to work on GSoC, because of
my exams and my relocation (that's why I didn't deem it necessary to post a
report last Sunday). But last week has been much more productive, as I am
now working full time!

I have developed an aggregate that computes the sum of pairwise
dissimilarities in a cluster, for a given medoid. Thanks to Hai and Atri, I
have also developed the main SQL function that actually computes the
k-medoids. This function is still being debugged, so I have not committed
it yet.
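The aggregate's job can be sketched in plain SQL (hypothetical points and
medoids tables with x/y columns, Euclidean distance as the dissimilarity
measure; the actual MADlib code is more general):

```sql
-- Total dissimilarity (cost) of each cluster: the sum of distances
-- from every point to its cluster's medoid.  k-medoids repeatedly
-- swaps medoids to minimize this sum.
SELECT p.cluster_id,
       sum(sqrt((p.x - m.x)^2 + (p.y - m.y)^2)) AS total_cost
FROM points p
JOIN medoids m ON m.cluster_id = p.cluster_id
GROUP BY p.cluster_id;
```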

According to my timeline, I am behind schedule: I should have finished
working on k-medoids by Friday. When I made this timeline, I largely
underestimated the time needed to get started on this project, and
overestimated the time I could spend on GSoC during my exams. But things
will now go much faster!

As for our weekly phone call, I have a lot of difficulty understanding
what is said, partly because I am not used to hearing spoken English, but
mostly because of the low sound quality. Last time, I hardly understood
half of what was said, which is quite unfortunate, given that I'm supposed
to take advice during this call. So I'd like to suggest an alternative:
an IRC channel, for example. For those who don't have an IRC client
ready: http://webchat.freenode.net/ . For example, the channel #gsoc-madlib
would surely be appropriate :) Also, I've had a change in my timetable,
which makes Tuesday inconvenient for this phone call. Is it possible to
change the day? I'm available at this hour on Monday, Wednesday and
Thursday. Of course, if this change annoys too many people, I'll deal with
Tuesday :)

Finally, for the coming week, I'll finish debugging k-medoids, write all
the secondary functions (e.g. random initial medoids), and write the docs.


Regards,

Maxence A.

-- 
Maxence Ahlouche
06 06 66 97 00


Re: [HACKERS] Why is it "JSQuery"?

2014-06-15 Thread Andrew Dunstan


On 06/15/2014 04:58 PM, Josh Berkus wrote:

> I've been poking at the various json-query syntaxes you forwarded, and
> none of them really work for the actual jsquery features.  Also, the
> existing syntax has the advantage of being *simple*, relatively
> speaking, and reasonably similar to JSONPATH.
>
> In other words, what I'm saying is: I don't think there's an existing,
> popular syntax we could reasonably use.




Not to mention the similarity to tsquery, which is something not to be 
despised.


cheers

andrew




Re: [HACKERS] Why is it "JSQuery"?

2014-06-15 Thread Josh Berkus
On 06/10/2014 02:46 PM, David E. Wheeler wrote:
> On Jun 10, 2014, at 12:06 PM, Oleg Bartunov  wrote:
> 
>> we have many other tasks than guessing the language name.
>> jsquery is just an extension, which we invent to test our indexing
>> stuff.  Eventually, it grew out.  I think we'll think on better name
>> if developers agree to have it in core. For now, jsquery is good
>> enough to us.
>>
>> jsquery name doesn't need to be used at all, by the way.
> 
> Yeah, I was more on about syntax than the name. We can change that any time 
> before you release it.

I've been poking at the various json-query syntaxes you forwarded, and
none of them really work for the actual jsquery features.  Also, the
existing syntax has the advantage of being *simple*, relatively
speaking, and reasonably similar to JSONPATH.

In other words, what I'm saying is: I don't think there's an existing,
popular syntax we could reasonably use.
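For readers who haven't seen it, the existing syntax in question looks
roughly like this (modeled on the jsquery extension's documentation;
treat the details as illustrative):

```sql
-- jsquery: match jsonb documents where path a.b equals 1
SELECT '{"a": {"b": 1}}'::jsonb @@ 'a.b = 1'::jsquery;
```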

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] make check For Extensions

2014-06-15 Thread David E. Wheeler
On Jun 15, 2014, at 12:25 AM, Fabien COELHO  wrote:

> I'm not sure the extension is sought for in the cluster (ie the database data 
> directory). If you do "make install" the shared object is installed in some 
> /usr/lib/postgresql/... directory (under unix), and it is loaded from there, 
> but I understood that you wanted to test WITHOUT installing against the 
> current postgresql.

I would assume there is a way to do it with a path… it's just a SMOP, of course.

D





Re: [HACKERS] delta relations in AFTER triggers

2014-06-15 Thread Kevin Grittner
David Fetter  wrote:

> Any chance we might be able to surface the old version for the
> case of UPDATE ... RETURNING?

Not as part of this patch.

Of course, once delta relations are available, who knows what
people might do with them.  I have a hard time imagining exactly
how you would expose what you're talking about, but a column to
distinguish before and after images might work.  Incremental
maintenance of materialized views will require that in the form of
a count column with -1 for deleted and +1 for inserted, so there
might be some common ground when we get to that.
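A sketch of how statement-level access to those delta relations might
eventually be exposed (hypothetical syntax, modeled on the SQL-standard
REFERENCING clause rather than on this WIP patch):

```sql
-- Hypothetical: an AFTER trigger that can see both images of an UPDATE
CREATE TRIGGER t_audit
    AFTER UPDATE ON t
    REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE t_audit_fn();
```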

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] delta relations in AFTER triggers

2014-06-15 Thread David Fetter
On Sat, Jun 14, 2014 at 04:56:44PM -0700, Kevin Grittner wrote:
> Attached is a WIP patch for implementing the capture of delta
> relations for a DML statement, in the form of two tuplestores --
> one for the old versions and one for the new versions.

Thanks!

Any chance we might be able to surface the old version for the case of
UPDATE ... RETURNING?

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] postgresql.auto.conf read from wrong directory

2014-06-15 Thread Christoph Berg
Re: Amit Kapila 2014-06-13 

> Agreed, I had mentioned in Notes section of document.  Apart from that
> I had disallowed parameters that are excluded from postgresql.conf by
> initdb (Developer options) and they are recommended in user manual
> to be not used in production.

Excluding developer options seems too excessive to me. ALTER SYSTEM
should be useful for developing too.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] make check For Extensions

2014-06-15 Thread Fabien COELHO



I would suggest to add that to https://wiki.postgresql.org/wiki/Todo.

I may look into it when I have time, over the summer. The key point is 
that there is no need for a temporary installation, but only of a 
temporary cluster, and to trick this cluster into loading the 
uninstalled extension, maybe by playing with dynamic_library_path in 
the temporary cluster.


The temporary cluster will be in a temporary `initdb`ed directory, no?


Yep.


If so, you can just install the extension there.


I'm not sure the extension is sought for in the cluster (ie the database 
data directory). If you do "make install" the shared object is installed 
in some /usr/lib/postgresql/... directory (under unix), and it is loaded 
from there, but I understood that you wanted to test WITHOUT installing 
against the current postgresql.


--
Fabien.

