Re: [GENERAL] query syntax to combine 2 set returning functions ?
> I can't find the syntax to get the results in multiple columns instead of a
> single 'record' column:
>
>   select get_table_depends('cicpg_logs', t) FROM get_modeltablelist('efeeds') t

I got it (note the added parentheses and .*):

  select (get_table_depends('cicpg_logs', t)).* FROM get_modeltablelist('efeeds') t

Marc

From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Marc Mamin
Sent: Friday, 1 March 2013 13:51
To: pgsql-general@postgresql.org
Subject: [GENERAL] query syntax to combine 2 set returning functions ?

Hello,

I have 2 set returning functions:

  select get_modeltablelist('foo') t

  t
  --------------
  _emc_cubelist
  _efm_modellist
  _efm_sequences

  select get_table_depends('cicpg_logs', '_efm_sequences') d

  Typ  Table           Object
  ---  -----           ------
  TR   _efm_sequences  _efm_sequences_tr

Now I want to combine both. This is valid, but I can't find the syntax to get the results in multiple columns instead of a single 'record' column:

  select get_table_depends('cicpg_logs', t) FROM get_modeltablelist('efeeds') t

  get_table_depends
  ---------------------------------------
  (TR,_efm_sequences,_efm_sequences_tr)
  (TR,_emr_reportlist,_emr_reportlist_tr)
  (TR,_efm_models,_efm_models_del_tr)

regards,

Marc Mamin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
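The `(f(x)).*` trick above works for any function returning a composite type. A minimal self-contained sketch (the type and function below are invented stand-ins, since Marc's actual functions aren't shown):

```sql
-- Hypothetical composite type standing in for get_table_depends' return type
CREATE TYPE depend_row AS (typ text, tbl text, obj text);

CREATE FUNCTION fake_depends(p text)
RETURNS SETOF depend_row LANGUAGE sql AS $$
    SELECT 'TR'::text, p, p || '_tr'
$$;

-- Without the syntax: one composite 'record' column
SELECT fake_depends(t)
FROM (VALUES ('_efm_sequences'), ('_efm_models')) v(t);

-- With it: the composite is expanded into separate typ/tbl/obj columns
SELECT (fake_depends(t)).*
FROM (VALUES ('_efm_sequences'), ('_efm_models')) v(t);
```

On newer servers (9.3 and up), a LATERAL join such as `SELECT d.* FROM v, LATERAL fake_depends(v.t) d` is usually the cleaner spelling, since the `(...).*` form can evaluate the function once per output column on older releases.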
Re: [HACKERS] [GENERAL] Floating point error
Maciek Sakrejda writes:
> [ a bunch of questions that boil down to: ]
> Isn't full fidelity possible assuming sensible rounding semantics and
> enough characters of precision?

The fundamental issue is that the underlying representation is binary, and so its precision limit doesn't correspond to an exact number of decimal digits. The default print format tries to hide that from you by printing only as many decimal digits as the stored format is certain to be able to reproduce; ie, if you enter "1.23456" you should see that printed again, not "1.23455" or "1.23457". However, the stored value is not going to be exactly equal to "1.23456".

The real difficulty is that there may be more than one storable value that corresponds to "1.23456" to six decimal digits. To be certain that we can reproduce the stored value uniquely, we have to err in the other direction, and print *more* decimal digits than the underlying precision justifies, rather than a bit less. Some of those digits are going to look like garbage to the naked eye.

pg_dump cares about reproducing values exactly, and not about whether things are nice-looking, so it cranks up extra_float_digits. The JDBC driver might be justified in doing likewise, to ensure that the identical binary float value is stored on both client and server --- but that isn't even a valid goal unless you assume that the server's float implementation is the same as Java's, which is a bit of a leap of faith, even if IEEE 754 is nigh universal these days.

> I think extra_float_digits is an awkward, surprising implementation
> detail

We could have dumbed it down to a boolean "look nice versus reproduce the value exactly" switch, but it seemed like there might be applications that could use some additional flexibility. In any case, it's not Postgres' fault that there is an issue here; it's fundamental to the use of binary rather than decimal stored values.
regards, tom lane
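Tom's point, that you need *more* decimal digits than the precision "justifies" in order to pin down a unique binary value, is easy to demonstrate outside Postgres. A sketch in Python, whose float is (on common platforms) the same IEEE 754 double as float8: 15 significant digits can map two different doubles to the same string, while 17 digits always round-trip.

```python
x = 0.1 + 0.2          # a double very close to, but not exactly, 0.3

short = f"{x:.15g}"    # 15 significant digits: the "nice-looking" default form
full = f"{x:.17g}"     # 17 significant digits: the extra_float_digits-style form

print(short)                # prints 0.3 -- looks clean, but...
print(float(short) == x)    # False: parsing it back yields a *different* double
print(full)                 # prints 0.30000000000000004 -- "garbage" digits...
print(float(full) == x)     # True: round-trips to the identical bits
```

The same tension drives the extra_float_digits setting: the short form is what humans want to read, the long form is what a dump/restore or a driver needs to reproduce the value exactly.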
Re: [HACKERS] [GENERAL] Floating point error
While having more docs around extra_float_digits is a great idea, I don't think the patch really clarifies much. (Disclaimer: I honestly have only a vague idea of the reasoning behind extra_float_digits existing in the first place, but perhaps that means I'm a good target audience for the doc patch. Also, I apologize if anything below sounds abrasive--having worked on both my own driver and a few things here and there on the JDBC one, plus running into this in assorted places in the wild, I still find extra_float_digits baffling at best. I immensely appreciate the effort to make improvements here.)

That is, the patch explains some of the reasoning behind the setting, but it doesn't really help the reader identify where setting this is useful and/or necessary. E.g., the JDBC driver just indiscriminately sets extra_float_digits to 3 (the maximum allowable) if the server supports it. Is this a bogus approach?

+ The (inherently inaccurate) textual representation of a real
+ or double precision value...

Isn't full fidelity possible assuming sensible rounding semantics and enough characters of precision? Isn't that the point of extra_float_digits?

+ will by default contain only as many
+ significant digits as can be represented without losing precision
+ on any platform supported by PostgreSQL.

How would providing more digits lose precision? Platform as in a non-IEEE 754 server restoring a dump? A non-IEEE 754 client running queries? Something more subtle? And how does having more precision in the serialized form cause these platforms to lose precision when decoding the value?

I think extra_float_digits is an awkward, surprising implementation detail--at first glance it seems like a "flip this switch for correct behavior" knob. I'd love to have a clear explanation in the docs of why it's needed, and when you should and should not set it (e.g., is the JDBC driver wrong to set it all the time? should every driver set it all the time?).
Thanks, Maciek
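For what it's worth, the behavior Maciek is asking about can be seen directly in a psql session. A sketch, with output as typically seen on an IEEE 754 platform (exact digits may vary by platform and version):

```sql
-- Default output: up to 15 significant digits, so two distinct
-- doubles can print identically while comparing unequal
SELECT 0.1::float8 + 0.2::float8 AS sum,
       0.3::float8 AS lit,
       0.1::float8 + 0.2::float8 = 0.3::float8 AS equal;
--  sum | lit | equal
-- -----+-----+-------
--  0.3 | 0.3 | f         (typically)

-- What the JDBC driver does: ask for the full digits needed
-- to reproduce the stored bits exactly
SET extra_float_digits = 3;
SELECT 0.1::float8 + 0.2::float8 AS sum,
       0.3::float8 AS lit;
--         sum          |         lit
-- ---------------------+---------------------
--  0.30000000000000004 | 0.29999999999999999  (typically)
```

This is the concrete answer to "how would providing more digits lose precision?": it doesn't; the extra digits only look ugly, while too few digits can silently merge distinct values.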
Fwd: [GENERAL] pg_upgrade
Ack! Sorry. Bad list etiquette in so many ways...

-- Forwarded message --
From: Ian Harding
Date: Sun, Mar 3, 2013 at 8:26 PM
Subject: Re: [GENERAL] pg_upgrade
To: Bruce Momjian

It doesn't seem to though. Here's what I get when I leave it alone (no changes to postgresql.conf, no environment variables set):

postgres@server:~$ echo $PGPORT

postgres@server:~$ echo $PGOPTIONS

postgres@server:~$ /opt/postgres/9.2/bin/pg_upgrade -d /var/lib/postgresql/8.4/main/ -D /srv/data/ -b /usr/lib/postgresql/8.4/bin/ -B /opt/postgres/9.2/bin/

Performing Consistency Checks
-----------------------------
Checking current, bin, and data directories                 ok
Checking cluster versions                                   ok

connection to database failed: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.50432"?

could not connect to old postmaster started with the command:
"/usr/lib/postgresql/8.4/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/postgresql/8.4/main/" -o "-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=20 -c listen_addresses='' -c unix_socket_permissions=0700" start
Failure, exiting

When I try to start postgresql with those options, it doesn't appear to start, because -w makes it wait, where "wait" means try "psql -l" -- which doesn't work because of the changed port, I assume.

postgres@server:~$ psql -l
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
postgres@server:~$ psql -p 50432 -l
                               List of databases
    Name    |  Owner   | Encoding  | Collation | Ctype |   Access privileges
------------+----------+-----------+-----------+-------+-----------------------
 fordparts  | iharding | SQL_ASCII | C         | C     |
 gmparts    | iharding | SQL_ASCII | C         | C     |
 leaguemgmt | iharding | SQL_ASCII | C         | C     |
 postgres   | postgres | SQL_ASCII | C         | C     |
 template0  | postgres | SQL_ASCII | C         | C     | =c/postgres
            |          |           |           |       : postgres=CTc/postgres
 template1  | postgres | SQL_ASCII | C         | C     | =c/postgres
            |          |           |           |       : postgres=CTc/postgres
(6 rows)

So it actually did start, as in the case above. However, pg_upgrade is looking for /tmp/.s.PGSQL.50432, which isn't going to work. I'm puzzled. I've tried PGPORT and different combinations of PGOPTIONS to no beneficial effect.

On Sun, Mar 3, 2013 at 8:16 PM, Bruce Momjian wrote:
> On Sun, Mar 3, 2013 at 07:58:03PM -0800, Ian Harding wrote:
> > Hi! Sorry for the delay. Just getting back to it. Looks like postgres was
> > compiled with a funky socket_directory, so even if I change it in the
> > postgresql.conf things break. I've tried tricking it with PGOPTIONS=
> > "socket_directory = '/var/run/postgresql'" but that didn't help.
>
> You need the -o and -O options which were added to pg_upgrade in 9.2.
> Actually, I think we handle that in 9.2 automatically by checking the
> server settings, but I am not sure.
>
> Actually, if the client and server libraries were compiled with the same
> default, and you are using the default, it should work fine.
>
> ---
>
> > On Fri, Feb 15, 2013 at 5:54 PM, Bruce Momjian wrote:
> > > On Fri, Feb 15, 2013 at 10:36:25AM -0800, Ian Harding wrote:
> > > > Maybe this is it. 8.4 pg_ctl docs say it uses "psql -l" to see if it's
> > > > finished when you use -w. It also says
> > > >
> > > >     PGPORT
> > > >         Default port for psql (used by the -w option).
> > > >
> > > > And since pg_upgrade uses a funky port, it might miss unless the PGPORT
> > > > environment variable is set to match.
> > > >
> > > > I'll try that tonight.
> > Yes, you are getting close to the answer. ;-) The problem is that
> > Postgres isn't checking the right port number or socket location
> > or something else. This was all improved in Postgres 9.1:
> >
> >     The wait mode is now significantly more robust. It will not get
> >     confused by non-default postmaster port numbers, non-default
> >     Unix-domain socket locations, permission problems, or stale
> >     postmaster lock files.
> >
> > I am guessing there is something non-standard about your old cluster,
> > and 8.4's pg_ctl -w can't handle it. Tell me what is non-standard and I
> > can help further. Another idea is to make the old cluster use defaults
> > for everything and do the upgrade.
> >
> > --
> > Bruce Momjian   http://momjian.us
> >   EnterpriseDB
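Ian's symptom (pg_upgrade probing /tmp/.s.PGSQL.50432 while the old server's socket lives elsewhere) comes down to how libpq builds Unix-socket paths. A hypothetical sketch of the two settings involved; the directory /var/run/postgresql and port 50432 are just this thread's values, not universal defaults:

```shell
# When PGHOST is a directory path rather than a hostname, libpq clients
# (psql, and pg_ctl -w's internal check) look for a Unix socket named
#   <PGHOST>/.s.PGSQL.<PGPORT>
# so both variables must match where the server actually listens.
PGPORT=50432                 # pg_upgrade's temporary port
PGHOST=/var/run/postgresql   # compiled-in socket dir of the old server
export PGPORT PGHOST

echo "clients will probe: ${PGHOST}/.s.PGSQL.${PGPORT}"
```

If the old and new builds were compiled with different default socket directories, the two halves of the upgrade disagree about this path, which matches the mismatch shown in the transcript above.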
[GENERAL] out of memory issue
Hello Friends,

Hope you are all well. I have a specific issue: my query fails with the error below while trying to export data from the pgAdmin SQL tool. There are no such issues when the result set is small, but it returns this error when the result set is a bit larger. Any inputs, please? Where and how should memory be increased in this case?

    out of memory for query result

--GN
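One note on that message: "out of memory for query result" typically comes from the client side (libpq buffers the entire result set in client memory before the tool sees any rows), so raising server memory settings usually won't help. A sketch of two common workarounds that stream rows instead of buffering them; the table and file names here are invented:

```sql
-- 1. Have the server write the file directly (requires superuser and a
--    path on the server's filesystem):
COPY (SELECT * FROM big_table) TO '/tmp/export.csv' WITH (FORMAT csv, HEADER);

-- 2. Or from psql, stream to a client-side file without holding the
--    whole result set in RAM:
\copy (SELECT * FROM big_table) TO 'export.csv' WITH (FORMAT csv, HEADER)
```

A cursor with repeated FETCH of modest batches is a third option when the export has to go through application code rather than to a file.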
Re: [GENERAL] Scalable cluster
On 3 March 2013 22:56, John R Pierce wrote:
>
> did you look at pgbouncer ? thats the simple pooler for postgres, and its
> quite robust, because its so simple.
>

Yes, it is one of the solutions I do consider, with applications deciding whether they should write to the master, or use slaves and/or master for read queries (for instance, depending on whether it is a transaction, etc.).

I wonder, however, how others are handling it. There seems to be nothing out there apart from pgbouncer and pgpool, and only the latter can handle (albeit not really that quickly) pooling between master and slaves.

How do you guys go about designing such a cluster?

-- GJ
Re: [GENERAL] Scalable cluster
On 3/3/2013 1:57 PM, Gregg Jaskiewicz wrote:
> I don't know of any other pooling solution that would be capable of
> handling the job, but focus only on the task of pooling (pgpool's fault
> probably is that it is trying to be a jack of all trades) in an HA
> replicated scenario.

Did you look at pgbouncer? That's the simple pooler for postgres, and it's quite robust, because it's so simple.

--
john r pierce                            37N 122W
somewhere on the middle of the left coast
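For reference, pgbouncer's simplicity shows in its configuration: one ini file, and each database entry points at exactly one backend, so master/slave read-write splitting still has to happen in the application or another layer. A minimal hypothetical pgbouncer.ini (names and addresses invented):

```ini
[databases]
; each client-visible name maps to one backend server
; appdb -> the master (writes), appdb_ro -> a streaming-replication slave (reads)
appdb    = host=10.0.0.1 port=5432 dbname=appdb
appdb_ro = host=10.0.0.2 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; pool_mode: session, transaction, or statement
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

Applications then connect to port 6432 and choose `appdb` or `appdb_ro` themselves, which is exactly the application-side routing Gregg describes above.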
[GENERAL] Scalable cluster
Hi guys,

I'm looking into setting up an HA scalable DB cluster. So far my tests with streaming replication prove that it is very, very good indeed. However, the problem seems to be on the connection pooling side. Ideally, we would love to have a single point of connection to the cluster, but I do realise that it might not be feasible.

So far I've been testing pgpool-II 3.2.3 with 3 DB servers. And as much as I'm impressed by postgresql itself, pgpool simply fails on pretty much every front. In terms of scalability, running dbbench against it proves to be much slower than a direct connection to the master. It also goes into strange states when you overallocate connections, etc. Not really something I'd trust on a production server.

I don't know of any other pooling solution that would be capable of handling the job but focus only on the task of pooling (pgpool's fault probably is that it is trying to be a jack of all trades) in an HA replicated scenario. What is out there, free or paid, that would serve an HA DB cluster running Postgresql (ideally 9.2), that you guys could suggest?

Personally, I think that having a single-connection-point DB cluster is not going to be the ideal solution anyway. So the question is: how would you guys go about designing a cluster that handles Java/C/C++ applications connecting from some number of servers? It has to scale (adding more servers to improve performance, or to help in case storage has become a problem), but also be redundant in case hardware fails.

Thanks.

-- GJ