[GENERAL] tzdata timezone boundaries

2017-07-29 Thread Jerome Wagner
Hello,

Is there a way to access the tzdata transition boundaries from within Postgres?

the zdump linux command gives something like

--
zdump -v /usr/share/zoneinfo/America/Los_Angeles | grep 2017

Sun Mar 12 09:59:59 2017 UTC = Sun Mar 12 01:59:59 2017 PST isdst=0
gmtoff=-28800
Sun Mar 12 10:00:00 2017 UTC = Sun Mar 12 03:00:00 2017 PDT isdst=1
gmtoff=-25200
Sun Nov  5 08:59:59 2017 UTC = Sun Nov  5 01:59:59 2017 PDT isdst=1
gmtoff=-25200
Sun Nov  5 09:00:00 2017 UTC = Sun Nov  5 01:00:00 2017 PST isdst=0
gmtoff=-28800
--

I guess that Postgres has access to this data in order to handle time zones
correctly, but I can't find a way to surface these boundaries (except maybe
with a smart generate_series scheme?).
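
For what it's worth, here is a rough sketch of such a generate_series scheme
(the step and the year range are arbitrary; it only localizes each transition
to within the chosen step, and the first row simply reports the offset in
effect at the start of the range):

--
-- Probe 2017 hour by hour and keep the instants where the UTC offset
-- of America/Los_Angeles changes.
WITH probe AS (
    SELECT ts,
           (ts AT TIME ZONE 'America/Los_Angeles') - (ts AT TIME ZONE 'UTC') AS utc_offset
    FROM generate_series(timestamptz '2017-01-01 00:00 UTC',
                         timestamptz '2018-01-01 00:00 UTC',
                         interval '1 hour') AS g(ts)
)
SELECT ts AS change_at_utc, utc_offset
FROM (SELECT ts, utc_offset,
             lag(utc_offset) OVER (ORDER BY ts) AS prev_offset
      FROM probe) s
WHERE utc_offset IS DISTINCT FROM prev_offset;
--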

NB: I asked the question on Stack Overflow, to no avail:
https://stackoverflow.com/questions/45381924/postgresql-how-to-extract-a-list-of-past-and-known-future-offset-changes-for-a

Thanks,
Jérôme Wagner


Re: [GENERAL] tzdata version

2017-07-28 Thread Jerome Wagner
Thank you for your answer.

When compiled with "--with-system-tzdata", does Postgres need a restart
after the system is updated with the new tzdata?

  regards, jérôme wagner

On Fri, Jul 28, 2017 at 4:06 PM, Tom Lane  wrote:

> Jerome Wagner  writes:
> > As I understand it, the tzdata on which the timezone information is based
> > is updated regularly (semi-manually) in https://github.com/
> > postgres/postgres/tree/master/src/timezone
>
> > Am I correct in stating that there is currently no way to update the
> tzdata
> > database except by recompiling Postgres after a new version of the tz
> > database has been adapted for Postgres?
>
> It's in the standard tzdata format, so if you have a copy of zic laying
> about, you could download a new tzdata file set and run zic to install
> the new files into the PG installation tree.  There's no need to recompile
> Postgres per se.
>
> However, if this seems like a problem to you and you are on a platform
> whose vendor updates tzdata reliably, you should consider building PG
> with --with-system-tzdata so that it will rely on the vendor copy.
> We've always seen distributing a copy of tzdata as being mainly a
> service to people stuck on platforms where that doesn't happen.
>
> > Is there a way to introspect the current version of the tzdata release
> > (2014j, 2016h, 2017b, ..) that is embedded in a pre-compiled postgres (at
> > runtime or in another way) ?
>
> I don't know of any version labeling in the tzdata files themselves
> (not that I've looked very hard for one).  If you know your PG minor
> release you could look into our release notes to see what tzdata
> release it shipped with.
>
> regards, tom lane
>


[GENERAL] tzdata version

2017-07-28 Thread Jerome Wagner
Hello,

As I understand it, the tzdata on which the time zone information is based
is updated regularly (semi-manually) in
https://github.com/postgres/postgres/tree/master/src/timezone

Am I correct in stating that there is currently no way to update the tzdata
database except by recompiling Postgres after a new version of the tz
database has been adapted for Postgres?

Is there a way to introspect the current version of the tzdata release
(2014j, 2016h, 2017b, ...) that is embedded in a pre-compiled Postgres (at
runtime or in another way)?

Thanks
Jérôme


Re: [GENERAL] Storing files: 2.3TBytes, 17M file count

2016-11-29 Thread Jerome Wagner
Two other options that you may want to look at:

- cephfs

This has nothing to do with Postgres, but it is a distributed filesystem
that handles very large numbers of files (think next-generation NFS).
I haven't tried it myself yet, but they have reached a "stable" milestone
for the distributed fs.
See https://en.wikipedia.org/wiki/Ceph_(software)

- FUSE-based Postgres storage

This can:
 - relieve the issue of pg_largeobject being a system table (tablespace, ...)
 - give you fs-like access to your data

The closest I have seen to this so far is
https://github.com/andreasbaumann/pgfuse, which would probably need some
tinkering.





On Tue, Nov 29, 2016 at 10:50 AM, Thomas Güttler <
guettl...@thomas-guettler.de> wrote:

>
>
> Am 29.11.2016 um 01:52 schrieb Mike Sofen:
>
>> From: Thomas Güttler   Sent: Monday, November 28, 2016 6:28 AM
>>
>> ...I have 2.3TBytes of files. File count is 17M
>>
>> Since we already store our structured data in postgres, I think about
>> storing the files in PostgreSQL, too.
>>
>> Is it feasible to store file in PostgreSQL?
>>
>> ---
>>
>> I am doing something similar, but in reverse.  The legacy mysql databases
>> I’m converting into a modern Postgres data
>> model, have very large genomic strings stored in 3 separate columns.  Out
>> of the 25 TB of legacy data storage (in 800
>> dbs across 4 servers, about 22b rows), those 3 columns consume 90% of the
>> total space, and they are just used for
>> reference, never used in searches or calculations.  They range from 1k to
>> several MB.
>>
>>
>>
>> Since I am collapsing all 800 dbs into a single PG db, being very smart
>> about storage was critical.  Since we’re also
>> migrating everything to AWS, we’re placing those 3 strings (per row) into
>> a single json document and storing the
>> document in S3 bins, with the pointer to the file being the globally
>> unique PK for the row…super simple.  The app tier
>> knows to fetch the data from the db and large string json from the S3
>> bins.  The retrieval time is surprisingly fast,
>> this is all real time web app stuff.
>>
>>
>>
>> This is a model that could work for anyone dealing with large objects
>> (text or binary).  The nice part is, the original
>> 25TB of data storage drops to 5TB – a much more manageable number,
>> allowing for significant growth, which is on the horizon.
>>
>
> Thank you Mike for your feedback.
>
> Yes, I think I will drop my idea. Encoding binary (the file content) to
> text and decoding to binary again makes no sense. I was not aware that this
> is needed.
>
> I guess I will use some key-to-blob store like s3. AFAIK there are open
> source s3 implementations available.
>
> Thank you all for your feeback!
>
>  Regards, Thomas
>
>
>
>
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
>
>
>


Re: [GENERAL] Converting a TimestampTz into a C# DateTime

2016-11-14 Thread Jerome Wagner
Hello,
Seeing your answer, I have a question for which I found no answer a few weeks
ago: is there a way to know at runtime which internal representation
timestamps have?
I am trying to deal with the COPY binary protocol with only SQL access to
the remote server and would like to find a way to know the internal
representation in order to read/write the correct timestamps.
Thanks for your help!
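
One check that seems relevant here (a minimal sketch, assuming plain SQL
access is enough): the read-only integer_datetimes setting reports which of
the two representations the server was built with.

--
-- 'on'  => timestamps are 64-bit integer microseconds since 2000-01-01 00:00:00
-- 'off' => timestamps are double precision seconds since the same epoch
SHOW integer_datetimes;

-- equivalent form, usable inside a larger query
SELECT current_setting('integer_datetimes');
--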

On Mon, Nov 14, 2016 at 1:12 PM, Albe Laurenz 
wrote:

> valeriof wrote:
> > I'm handling a TimestampTz value inside a plugin to stream WAL changes
> to a
> > .NET client application. What I'm trying to do is to return all possible
> > column changes as binary (don't like to have Postgres handle the
> conversion
> > to string as I may need to have access to the bytes at the client
> level). In
> > case of a TimestampTz, is it possible to return the 8-bytes long integer
> and
> > then from the C# application convert the value to Ticks?
>
> Sure, if you know how it is stored internally.
>
> One of your problems will be that the format depends on whether PostgreSQL
> was configured with --disable-integer-datetimes or not.
>
> With that switch, a timestamp is a double precision value, otherwise a
> 64-bit integer value. In the former case, it measures seconds after
> midnight 2000-01-01, while in the latter case it measures microseconds
> after that timestamp.
>
> Yours,
> Laurenz Albe
>
>


Re: [GENERAL] Multi tenancy : schema vs databases

2016-09-30 Thread Jerome Wagner
You could also use a hybrid approach:
 - have a systematic tenant_id field in your tables, allowing for single-db /
single-schema multi-tenancy
 - give your application the ability to set the schema path for a tenant,
so it will locate the tenant schema if it has a dedicated schema
 - maybe go to the extreme of being able to specialize the db per tenant
 - ...

This would allow you to easily reorganize your tenants to find the best
compromise depending on their status (small tenants, huge tenants,
security-conscious tenants, ...).

If going with schema-based tenants, make sure you have administrative tasks
to check the diffs between the schemas, because if thousands of schemas
diverge it will bring technical debt down the line.
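
A rough sketch of what the hybrid layout could look like (all names here are
hypothetical):

--
-- Shared layout: every table carries a tenant_id, so small tenants can
-- coexist in one schema.
CREATE SCHEMA shared;
CREATE TABLE shared.invoice (
    tenant_id  integer NOT NULL,
    invoice_id bigint  NOT NULL,
    amount     numeric,
    PRIMARY KEY (tenant_id, invoice_id)
);

-- A tenant that needs stronger isolation gets its own schema with the same
-- table definitions; the application only has to point search_path at it.
CREATE SCHEMA tenant_42;
CREATE TABLE tenant_42.invoice (LIKE shared.invoice INCLUDING ALL);

SET search_path TO tenant_42, shared;
--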




On Fri, Sep 30, 2016 at 11:47 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> Via schemata if the tenants represent sub entities of the same
> organization.
> This gives the top level mgmt the ability to have a consolidated view of
> the whole organization.
>
> On 30/09/2016 12:06, Rakesh Kumar wrote:
>
>>
>> 
>> From: Venkata B Nagothi 
>> Sent: Friday, September 30, 2016 02:48
>> To: Rakesh Kumar
>> Cc: pgsql-general@postgresql.org
>> Subject: Re: [GENERAL] Multi tenancy : schema vs databases
>>
>> On Fri, Sep 30, 2016 at 10:16 AM, Rakesh Kumar <
>> rakeshkumar...@outlook.com> wrote:
>>
>>
>> 
>> From: Venkata B Nagothi mailto:nag1...@gmail.com>>
>> Sent: Thursday, September 29, 2016 17:25
>> To: Rakesh Kumar
>> Cc: pgsql-general@postgresql.org
>> Subject: Re: [GENERAL] Multi tenancy : schema vs databases
>>
>> On Fri, Sep 30, 2016 at 5:18 AM, Rakesh Kumar > > >> wrote:
>>
>> Hi
>>
>> I would like to know which technique is better for supporting
>> multi-tenancy=
>>   applications, going upto hundreds or even thousands of tenants.
>>
>> 1 - One database with difference schemas (one schema per tenant)
>> or
>> 2 - One database per tenant.
>>
>> Did you mean one database with-in a postgresql cluster ?
>>
>> Yes.  Say something like this within a PG cluster
>>
>> db4978
>> db6234
>> ...
>> 100s of such databases.
>>
>> That would make things worst if you are going for one database per
>> tenant. As said by John just now, it would end up in an very complex and
>> bad design contributing to very poor performance and high maintenance
>> overhead.
>> A schema per tenant would be a good idea and its hard to say without
>> knowing the data isolation levels you require for each tenant.
>> 
>>
>> We require complete data isolation. Absolutely nothing should be shared
>> between two tenants.
>>
>> WHy would multiple dbs be any worse than multiple schemas in performance?
>>
>>
>>
>>
>>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
>
>


[GENERAL] question on error during COPY FROM

2016-08-23 Thread Jerome Wagner
Hello,

In the documentation I read:
https://www.postgresql.org/docs/current/static/sql-copy.html


COPY stops operation at the first error. This should not lead to problems
in the event of a COPY TO, but the target table will already have received
earlier rows in a COPY FROM. These rows will not be visible or accessible,
but they still occupy disk space. This might amount to a considerable
amount of wasted disk space if the failure happened well into a large copy
operation. You might wish to invoke VACUUM to recover the wasted space.

Does that mean that I should always execute a VACUUM to recover the wasted
space when an error is triggered, or will the autovacuum mechanism do the
job by itself?
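
For reference, a minimal way to inspect the situation on the target table
(the table name here is hypothetical):

--
-- Estimated dead rows and last (auto)vacuum runs for the COPY target.
SELECT n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'public' AND relname = 'my_copy_target';

-- Reclaim the space explicitly instead of waiting for autovacuum.
VACUUM VERBOSE public.my_copy_target;
--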

Thanks
Jerome


[GENERAL] question on parsing postgres sql queries

2016-07-27 Thread Jerome Wagner
Hello,

I am doing some research on Postgres SQL query parsing.

I have found the https://github.com/lfittl/libpg_query project, which
manages to reuse the native Postgres server parser. To use it, you need to
accept an external dependency on a library compiled out of the Postgres
source.

I was wondering what people think of the conformance, with regard to the
real parser, of the documentation at
 - https://www.postgresql.org/docs/current/static/sql-select.html
 - https://www.postgresql.org/docs/current/static/sql-copy.html
... and more generally the SGML files in
https://github.com/postgres/postgres/tree/master/doc/src/sgml/ref

Would it make sense to use these SGML synopses as some kind of source of
truth, parse them, and automatically generate a parser for a specific
language?

This could enable the creation of parsers for different languages, using
parser generators based on the synopses.

I feel like the conformance level of the documentation is high, and the SGML
synopses seem to be nearly sufficient, programmatically, to create parsers.

What do you think?

Could the parser committers shed some light on how the documentation
process interacts with the parser commits?

Thanks,
Jerome


Re: [GENERAL] question on parsing postgres sql queries

2016-07-27 Thread Jerome Wagner
> What problem are you trying to solve here?  to whit not everything that
can be parsed is documented - usually intentionally.

I am trying to see whether we could use the documentation as a kind of
formal specification of the language, but I understand that the devil is
in the details and that even formal specifications can lead to incompatible
implementations.

I would have found it nice if the clean documentation of the project could
be used as a meta-grammar sufficient to generate the grammar, but I will
have to dig further into the Bison grammar files.

The project I mentioned, which isolates the parser from the PostgreSQL
binary as a reusable library, is probably the closest you can currently get
to a parser matching the real engine.

Otherwise, yes, parsing the synopses could provide a sanity check that the
documentation is in line with the grammar. This could produce warnings or
help uncover unexpected corner cases not mentioned in the documentation.

Thanks for your answers
Jerome


On Tue, Jul 26, 2016 at 9:52 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Jul 26, 2016 at 3:20 PM, Jerome Wagner 
> wrote:
>
>>
>> Would it make sense to use these sgml synopsis as some kind of source of
>> truth, parse them, and automatically generate a parser for a specifc
>> language ?
>>
>
> What problem are you trying to solve here?  to whit not everything that
> can be parsed is documented - usually intentionally.
>
>
>> Could the parser commiters share some lights on how the documentation
>> process interacts with the parser commits ?
>>
>>
> Commits that modify the parser are expected to have manual modifications
> to the relevant documentation as well.
>
> David J.
>
>


Re: [GENERAL] pg_largeobject

2016-03-29 Thread Jerome Wagner
I am not saying that this will solve your problem (I have never tried it,
even though I keep it on my radar), but this project seems to implement
something close to what Daniel is describing:

https://github.com/andreasbaumann/pgfuse

Plus, it gives you a FUSE wrapper so the client can use fs calls.

The proposed schema is here:
https://github.com/andreasbaumann/pgfuse/blob/master/schema.sql



On Tue, Mar 29, 2016 at 5:09 PM, Sridhar N Bamandlapally <
sridhar@gmail.com> wrote:

> We are doing application/database migration compatible with postgresql on
> cloud, DR/replication also in plan
>
> at present I feel need of configurable multi-table storage instead of
> pg_largeobject only
>
> Thanks
> Sridhar
>
>
> On Tue, Mar 29, 2016 at 6:08 PM, Alvaro Aguayo Garcia-Rada <
> aagu...@opensysperu.com> wrote:
>
>> Some time ago I had to setup a replicated file system between multiple
>> linux servers. I tried everything I could based on postgres, including
>> large objects, but everything was significantly slower than a regular
>> filesystem.
>>
>> My conclussion: postgres is not suitable for storing large files
>> efficiently.
>>
>> Do you need that for replication, or just for file storage?
>>
>> Alvaro Aguayo
>> Jefe de Operaciones
>> Open Comb Systems E.I.R.L.
>>
>> Oficina: (+51-1) 3377813 | RPM: #034252 / (+51) 995540103  | RPC:
>> (+51) 954183248
>> Website: www.ocs.pe
>>
>> Sent from my Sony Xperia™ smartphone
>>
>>
>>  Sridhar N Bamandlapally wrote 
>>
>>
>> all media files are stored in database with size varies from 1MB - 5GB
>>
>> based on media file types and user-group we storing in different tables,
>> but PostgreSQL store OID/Large-object in single table (pg_largeobject), 90%
>> of database size is with table pg_largeobject
>>
>> due to size limitation BYTEA was not considered
>>
>> Thanks
>> Sridhar
>>
>>
>>
>> On Tue, Mar 29, 2016 at 3:05 PM, John R Pierce 
>> wrote:
>>
>>> On 3/29/2016 2:13 AM, Sridhar N Bamandlapally wrote:
>>>
 Hi

 pg_largeobject is creating performance issues as it grow due to single
 point storage(for all tables)

 is there any alternate apart from bytea ?

 like configuration large-object-table at table-column level and oid
 PK(primary key) stored at pg_largeobject


>>> I would as soon use a NFS file store for larger files like images,
>>> audio, videos, or whatever.   use SQL for the relational metadata.
>>>
>>> just sayin'
>>>
>>>
>>>
>>> --
>>> john r pierce, recycling bits in santa cruz
>>>
>>>
>>>
>>>
>>
>>
>


Re: [GENERAL] Let's Do the CoC Right

2016-01-22 Thread Jerome Wagner
Hello,

I do not intervene much on the list and am not a native English speaker,
but here are some thoughts:

It seems to me that it is very hard to find good words (which should find
their way in other languages) to summarize what is a decent conduct in an
open source project.

Don't we all (or at least peaceful people) want to have decent conduct,
respectful of others, be it in open source projects, in conferences, or in
life in general?

Are we not going to end up with some sort of "human rights declaration"?
Which, by the way, is already translated into many languages here:
http://www.ohchr.org/EN/UDHR/Pages/SearchByLang.aspx

I am not saying that the declaration of human rights is perfect (I should
re-read it), but are we going to write something better on this thread?
Shouldn't we rather use all that energy to modify the declaration of human
rights if there is an obvious problem with it?

What is the goal of this? To reject people who have sub-par conduct? To have
some kind of legal way to ban them from the project? Is this like a
"constitution" for the project?

Anyone can participate in an open source project. Communication and human
interactions, even hidden behind a computer screen, are key to this.

We should maybe try and fix things without needing to write complicated
texts to say that one person's freedom ends where another's begins.

I understand that some people sometimes feel rejected or blamed or hurt by
writings or acts that are inappropriate to them or inappropriate in general.

If they can speak out, a healthy community will help them sort out and
maybe fix the problem.
If they cannot speak out, then maybe there needs to be someone in the
community who has this "I am all ears and happy to try and protect
everyone's freedom" attitude so that this person can try and sort things
out anonymously.

I am maybe too naïve and put too much trust in the good sides of human
nature, but I hope this helps in some way.

Jérôme






On Fri, Jan 22, 2016 at 8:47 PM, Luz Violeta 
wrote:

> Hi David !
> I totally share your toughts. I was following the whole CoC discussion,
> and as a transgender woman found myself with a lot of sadness. Because what
> happened in that discussion, happens in some other projects that I liked
> technically and used for a long time.
>
> It's sad, because all those who participated in the discussion were people
> that are not exposed to the experiences we live (and by that, I mean
> everyone not fitting in the hegemony of that white guy in the IT industry),
> and by consequence they don't have sensibility/empathy to notice or
> understand what's out of place ... or these people are totally limited in
> how much of that sensibility/empathy can get. And that's the foundation on
> which the CoC is being written. I saw the CoC go down, down, and down in
> content and quality, not taking stances for nothing and falling into
> generalizations.
>
> I truly hope that open source communities can move forward on the social
> aspect (the community), so more people can feel ok/safe to come by and put
> hands to the work without feeling exposed to violent situations. And, about
> some comments/signatures I saw floating around the CoC discussion, I will
> just say that this is not being about weak, pitiful, etc ... sometimes, you
> just get tired or you just cant have your armor all day on, all week on,
> all month on, all year on ... all life on, and sometimes you prefer to
> avoid these situations, and do something else in a safer enviroment, so you
> have a moment when you can take the damn armor off and simply worrying
> about having fun.
>
> This is pretty much my personal opinion.
>
> Hugs ~
>
> P.S → even now, I'm kinda terrified of a shitstorm in my first mail to the
> mailing list ... but definitely this spark of hope made me come forward and
> say something, dunno.
>
> On 01/22/2016 04:00 AM, Rajeev Bhatta wrote:
>
>> On Friday 22 January 2016 10:55 AM, David E. Wheeler wrote:
>>
>>> Fellow PostgreSQLers,
>>>
>>> I can’t help that there are a whole lot of white guys working on this
>>> document, with very little feedback from the people who it’s likely to
>>> benefit (only exception I spotted in a quick scan was Regina; sorry if I
>>> missed you). I suspect that most of you, like me, have never been the
>>> target of the kinds os behaviors we want to forbid. Certainly not to the
>>> level of many women, transgendered, and people of color I know of
>>> personally, in this community and others, who have. If those people are not
>>> speaking up here, I suspect it’s because they don’t expect to be heard. A
>>> bunch of white guys who run the project have decided what it’s gonna be,
>>> and mostly cut things out since these threads started.
>>>
>>> But a *whole* lot of thought has gone into the creation of CoCs by the
>>> people who need them, and those who care about them. They have considered
>>> what sorts of things should be covered, what topics specifically addressed,
>>> and how

Re: [GENERAL] FDW wrapper for clustered app management

2014-10-17 Thread Jerome Wagner
Hello Albe,

Thanks for your answer.

The application is a sort of supervisor of sub-applications, mainly web
oriented. It manages access rights, updates versions of web services,
launches services, etc.

No, the data is not sharded across the servers, but each server can have its
own set of data.

Yes, I guess that the predicates (the WHERE clause) could be pushed down to
the servers.

But I am not sure how an FDW works for joins between a remote server and
a local table, so in that case it is hard for me to answer you.

For example, I have yet to find out how an FDW handles these two types
of requests:

-- without join
SELECT s.name FROM server1.service AS s
WHERE s.uptime > 2;

-- with join
SELECT s.name FROM server1.service AS s
JOIN uptime_threshold u ON (u.name = s.name)
WHERE s.uptime > u.uptime;
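
One way to check this empirically (a sketch assuming postgres_fdw and the
hypothetical tables above): EXPLAIN with the VERBOSE option prints the
"Remote SQL" actually sent to the foreign server, which shows whether the
WHERE clause is pushed down or evaluated locally after fetching the rows.

--
-- The "Remote SQL:" line in the plan shows the query shipped to server1.
EXPLAIN (VERBOSE, COSTS OFF)
SELECT s.name
FROM server1.service AS s
JOIN uptime_threshold u ON (u.name = s.name)
WHERE s.uptime > u.uptime;
--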

I also have not thought about multi-server requests. A view over all the
FDW servers could probably enable SELECT requests communicating with all
the servers (e.g.: which servers have a specific property).
For the write part, a stored procedure could probably be used; I don't
know if there is another option.

Clearly, I am trying to see how I could twist foreign data wrappers into a
sort of manhole into the application, without resorting to a classic
event-based mechanism.

I could also imagine very risky things (security-wise) like:

SELECT line FROM server1.execute WHERE command = 'wc -l /my/file' ORDER BY
lineno;

and use pgAdmin as a general control mechanism for the cluster: the cluster
and the data would be on the same SQL data plane :-)

Thanks,
Jerome

On Fri, Oct 17, 2014 at 12:00 PM, Albe Laurenz 
wrote:

> Jerome Wagner wrote:
> > I am considering (postgres 9.3+) the idea of opening a R/W access into a
> clustered application by
> > creating one fdw server from a central database to each server a cluster.
> >
> > That would imply opening a port on each server inside the application,
> listening for incoming
> > connections from the database and this way all the servers would become
> visible with R/W access.
> >
> > Is that a sound idea or does it look horrible ? Would it be reasonable
> to connect in this way to a
> > cluster of 1, 10, 100, 1000 or more servers ?
> >
> > is there an existing _fdw wrapper that would look like a good
> candidate for such a direct access
> > inside an application ? Then I would have to implement the protocol
> corresponding to this _fdw
> > inside my application.
>
> What is the application you want to access this way?
>
> Does "cluster" mean that data is distributed (sharded) across these
> servers?
>
> Can you reasonably translate SQL access predicates so that you can push
> down
> WHERE conditions to the remote servers?  If not, every access would have
> to pull
> the entire foreign table into the PostgreSQL server before applying the
> conditions.
>
> Under the right conditions, such a setup could work, e.g. to join data from
> different data sources.  How many servers make sense probably depends a
> lot on how
> you want to use them.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] FDW wrapper for clustered app management

2014-10-17 Thread Jerome Wagner
Hello John,
Thanks for your answer. I am also considering a publish/subscribe MQ-based
solution (and this may indeed be needed for queuing RPCs).

The data I would like to access R/W is more like:
 - configuration data
 - states of different state machines
 - cache values for some keys (reading, invalidating)

So you could, in a sense, say that it is table oriented.

I agree that this becomes 'tangential' with write access.
A request like "UPDATE server.service SET start_requested = true WHERE name
= 'myService' AND started = false" seems weird, but it could probably work.

Thanks,
Jerome

On Fri, Oct 17, 2014 at 11:57 AM, John R Pierce  wrote:

> On 10/17/2014 2:35 AM, Jerome Wagner wrote:
>
>> Hello,
>>
>> I am considering (postgres 9.3+) the idea of opening a R/W access into a
>> clustered application by creating one fdw server from a central database to
>> each server a cluster.
>>
>> That would imply opening a port on each server inside the application,
>> listening for incoming connections from the database and this way all the
>> servers would become visible with R/W access.
>>
>> Is that a sound idea or does it look horrible ? Would it be reasonable to
>> connect in this way to a cluster of 1, 10, 100, 1000 or more servers ?
>>
>> is there an existing _fdw wrapper that would look like a good
>> candidate for such a direct access inside an application ? Then I would
>> have to implement the protocol corresponding to this _fdw inside my
>> application.
>>
>>
> is the application running on these 10, 100, 1000 nodes something
> resembling a table oriented relational database?
>
> I would suggest instead you look at using a MQ style message queueing
> system, with publish-subscribe semantics for your distributed remote
> procedure calls. and not from within a database, rather, from your central
> control application to your distributed application workers...
>
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
>


[GENERAL] FDW wrapper for clustered app management

2014-10-17 Thread Jerome Wagner
Hello,

I am considering (Postgres 9.3+) the idea of opening R/W access into a
clustered application by creating one FDW server from a central database to
each server of the cluster.

That would imply opening a port on each server inside the application,
listening for incoming connections from the database; this way all the
servers would become visible with R/W access.

Is that a sound idea or does it look horrible? Would it be reasonable to
connect in this way to a cluster of 1, 10, 100, 1000 or more servers?

Is there an existing _fdw wrapper that would look like a good candidate
for such direct access inside an application? Then I would have to
implement the protocol corresponding to this _fdw inside my application.

Thank you for your feedback.
Jerome