Re: [GENERAL] PSQL 9.1.3 segmentation fault
On Fri, 2012-03-30 at 20:11 +, Hu, William wrote: > I used --with-openssl option with the configure, after starting the > server, ... > Psql would cause a segmentation fault, createuser did too. Can you try with plain "./configure --prefix=/your/install/path" and see if there is still a problem? It might be a problem related to openssl. If it is, try to figure out whether the library matches the headers. You should be able to see what's happening during "make" when it links the "psql" and "createuser" binaries. Regards, Jeff Davis -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
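To check the openssl hypothesis concretely, one approach (not from the thread; the install path is an assumption, adjust to your --prefix) is to compare what the binaries actually linked against with what configure found:

```shell
# Which libssl/libcrypto did the psql binary link against?
ldd /usr/local/pgsql/bin/psql | grep -i -E 'ssl|crypto'

# What did configure detect at build time (run in the source tree)?
grep -i openssl config.log | head
```

If ldd reports a different OpenSSL than the headers configure picked up, a mismatch like that can produce exactly this kind of segfault.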
Re: [GENERAL] pg_dump incredibly slow dumping a single schema from a large db
> That could be out-of-date info though. Here's some info about another possibility: http://wiki.postgresql.org/wiki/Profiling_with_perf

There we go, perf worked on the VM. That was running perf for the length of the pg_dump command and then a perf report -n. The file is 6 megs so I've dropped it here: http://dl.dropbox.com/u/13153/output.txt
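For reference, the wiki recipe used above boils down to roughly this (a sketch, not the poster's exact command line; the pg_dump arguments are taken from the original report later in the thread):

```shell
# Record call-graph samples for the whole pg_dump run...
perf record -g -- pg_dump -f /dumps/test.backup -Fc -n miketest aspdata

# ...then print the annotated report with raw sample counts
perf report -n > output.txt
```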
[GENERAL] PSQL 9.1.3 segmentation fault
I downloaded the stable version of postgresql-9.1.3.tar.gz and installed it on a CentOS 5.7 (final) server. I used the --with-openssl option with configure. After starting the server, psql would cause a segmentation fault, and createuser did too. I don't think the postgresql log shows anything about this. Could you help point out where I can look for a clue? Thanks in advance
Re: [GENERAL] PANIC: corrupted item pointer
On Fri, 2012-03-30 at 16:02 +0200, Janning Vygen wrote: > The PANIC occurred first on March 19. My server's uptime is 56 days, so > since about the 4th of February. There was no power failure since I started to use > this machine. This machine has been in use since March 7. I checked it twice: > no power failure. Just to be sure: the postgres instance didn't exist before you started to use it, right? > > Did you get the PANIC and WARNINGs on the primary or the replica? It > > might be worth doing some comparisons between the two systems. > > It only happened on my primary server. My backup server has no suspicious > log entries. Do you have a full copy of the two data directories? It might be worth exploring the differences there, but that could be a tedious process. > It is pretty obvious to me the segmentation fault is the main reason for > getting the PANIC afterwards. What can cause a segmentation fault? Is > there anything to analyse further? It's clear that they are connected, but it's not clear which is the cause. To speculate: it might be that disk corruption caused the segfault as well as the PANICs. Do you have any core files? Can you get backtraces? Regards, Jeff Davis
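For getting the backtraces Jeff asks about, a minimal sketch (the binary and data-directory paths are assumptions, adjust them for your install):

```shell
# Make sure crashing backends may leave core files at all,
# set in the environment that starts the postmaster:
ulimit -c unlimited

# After a crash, core files usually land in the data directory:
gdb /usr/local/pgsql/bin/postgres /var/lib/pgsql/9.1/data/core
# then at the (gdb) prompt:
#   bt full
```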
Re: [GENERAL] where to find initdb log file?
On Saturday, March 31, 2012 01:52:37 AM clover white wrote: > Hi, I have a problem when using pg, thanks for the help. :) > I used the command initdb, but nothing was created in my pgdata directory. > However, I used ps to list all the processes, and I found the initdb > process kept running the whole time. > I don't know what happened, and I want to read the log file of initdb. > Could someone tell me where the log is? Thank you. > My pg version is 9.1.2 I don't believe there is a log. initdb logs what it's doing on stdout and then finishes. It does have a debug option (-d) to print more info while running.
[GENERAL] where to find initdb log file?
Hi, I have a problem when using pg, thanks for the help. :) I used the command initdb, but nothing was created in my pgdata directory. However, when I used ps to list all the processes, I found the initdb process kept running the whole time. I don't know what happened, and I want to read the log file of initdb. Could someone tell me where the log is? Thank you. My pg version is 9.1.2
Re: [GENERAL] pg_dump incredibly slow dumping a single schema from a large db
Mike Roest writes: > That was on the CentOS 5.8 x64 machine. The one I'm trying it from now is > Ubuntu 11.10 x64 Hm. On current Red-Hat-derived systems I'd recommend oprofile, although you need root privileges to use that. Not real sure what is available on Ubuntu, but our crib sheet for oprofile says it doesn't currently work there: http://wiki.postgresql.org/wiki/Profiling_with_OProfile That could be out-of-date info though. Here's some info about another possibility: http://wiki.postgresql.org/wiki/Profiling_with_perf regards, tom lane
Re: [GENERAL] ERROR: invalid byte sequence for encoding "UTF8": 0xc325
On Friday, March 30, 2012 10:00:31 AM Prashant Bharucha wrote: > Hello All > > Could you help me to automatically convert all db request into UTF8 encode ? > Set your session client_encoding to match your data. That's about as close as you can get to automatic. http://www.postgresql.org/docs/9.1/static/multibyte.html A better bet is to make your application end-to-end UTF8, which is doable, more or less, in a web environment, although misbehaving clients will still sometimes send you bad data. For any other data source (especially email) you'll probably get tons of badly encoded data. If you're looking for a silver bullet, there isn't one.
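To make the error in the subject line concrete: 0xc3 opens a two-byte UTF-8 sequence, but the following 0x25 ('%') is not a valid continuation byte, which is why the server rejects it. A small Python illustration (the byte string is hypothetical; one plausible source of such bytes is half-decoded percent-encoding):

```python
def is_valid_utf8(raw: bytes) -> bool:
    """Cheap client-side check before handing data to the server."""
    try:
        raw.decode('utf-8')
        return True
    except UnicodeDecodeError:
        return False

# 0xc3 followed by '%' (0x25): exactly the sequence from the error message
assert not is_valid_utf8(b'\xc3%')
# a correctly encoded string passes
assert is_valid_utf8('schön'.encode('utf-8'))
```

A check like this only tells you the bytes are invalid; the real fix is still to set client_encoding to whatever the data actually is, or to sanitize at the source.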
Re: [GENERAL] pg_dump incredibly slow dumping a single schema from a large db
That was on the CentOS 5.8 x64 machine. The one I'm trying it from now is Ubuntu 11.10 x64.

On Fri, Mar 30, 2012 at 11:30 AM, Tom Lane wrote:
> Mike Roest writes:
> > Ok I just realized that's probably not going to be much help :)
> gmon.out would be of no value to anybody else anyway --- making sense of it requires the exact executable you took the measurements with.
> > 0.00 0.00 0.00 5 0.00 0.00 canonicalize_path
> > 0.00 0.00 0.00 5 0.00 0.00 trim_trailing_separator
> > 0.00 0.00 0.00 3 0.00 0.00 strlcpy
> Ugh. There are some platforms on which gprof is busted to various degrees; you may have one. What platform is this exactly?
> regards, tom lane
Re: [GENERAL] pg_dump incredibly slow dumping a single schema from a large db
Mike Roest writes: > Ok I just realized that's probably not going to be much help :)

gmon.out would be of no value to anybody else anyway --- making sense of it requires the exact executable you took the measurements with.

> 0.00 0.00 0.00 5 0.00 0.00 canonicalize_path
> 0.00 0.00 0.00 5 0.00 0.00 trim_trailing_separator
> 0.00 0.00 0.00 3 0.00 0.00 strlcpy

Ugh. There are some platforms on which gprof is busted to various degrees; you may have one. What platform is this exactly? regards, tom lane
Re: [GENERAL] pg_dump incredibly slow dumping a single schema from a large db
Ok I just realized that's probably not going to be much help :)

 0.00 0.00 0.00 5 0.00 0.00 canonicalize_path
 0.00 0.00 0.00 5 0.00 0.00 trim_trailing_separator
 0.00 0.00 0.00 3 0.00 0.00 strlcpy
 0.00 0.00 0.00 2 0.00 0.00 join_path_components
 0.00 0.00 0.00 2 0.00 0.00 last_dir_separator
 0.00 0.00 0.00 1 0.00 0.00 find_my_exec
 0.00 0.00 0.00 1 0.00 0.00 first_dir_separator
 0.00 0.00 0.00 1 0.00 0.00 get_etc_path
 0.00 0.00 0.00 1 0.00 0.00 get_progname
 0.00 0.00 0.00 1 0.00 0.00 help
 0.00 0.00 0.00 1 0.00 0.00 make_relative_path
 0.00 0.00 0.00 1 0.00 0.00 resolve_symlinks
 0.00 0.00 0.00 1 0.00 0.00 set_pglocale_pgservice
 0.00 0.00 0.00 1 0.00 0.00 trim_directory
 0.00 0.00 0.00 1 0.00 0.00 validate_exec

That's the output of gprof pg_dump gmon.out. (I built the -pg build on my dev box, then ran it on the server. Since that doesn't really seem to have much data in it, I'm now running the actual dump from my dev box against the server to see if I get something more useful.)

On Fri, Mar 30, 2012 at 11:09 AM, Mike Roest wrote:
> Here's the gmon.out from a -pg compiled 9.1.1 pg_dump.
> --Mike
> On Fri, Mar 30, 2012 at 10:40 AM, Mike Roest wrote:
>> For sure, I'll work on that now. One thing I noticed looking through the pg_dump code based on the messages: it seems to be grabbing the full dependency graph for the whole db rather than limiting it by the schema (not sure if limiting this would be possible). This query returns 9843923 rows from the DB, so processing it seems like it'll take quite a while. I'll get a -pg build of pg_dump going here on a dev box so I can get you a profile.
>> On Fri, Mar 30, 2012 at 10:18 AM, Tom Lane wrote:
>>> Mike Roest writes:
>>> > This dump is currently taking around 8 minutes. While dumping, the pg_dump process is using 100% of one core in the server (24 core machine). Doing a -v pg_dump I found that the following stages are taking the majority of the time:
>>> > reading user-defined tables (2 minutes and 20 seconds)
>>> > reading dependency data (5 minutes and 30 seconds)
>>> Can you get an execution profile with oprofile or gprof or similar tool? It doesn't surprise me a lot that pg_dump might have some issues with large numbers of objects, but guessing which inefficiencies are hurting you is difficult without more info.
>>> regards, tom lane
[GENERAL] ERROR: invalid byte sequence for encoding "UTF8": 0xc325
Hello All Could you help me to automatically convert all db requests into UTF8 encoding? Thx Prashant
Re: [GENERAL] pg_dump incredibly slow dumping a single schema from a large db
For sure, I'll work on that now. One thing I noticed looking through the pg_dump code based on the messages: it seems to be grabbing the full dependency graph for the whole db rather than limiting it by the schema (not sure if limiting this would be possible). This query returns 9843923 rows from the DB, so processing it seems like it'll take quite a while. I'll get a -pg build of pg_dump going here on a dev box so I can get you a profile.

On Fri, Mar 30, 2012 at 10:18 AM, Tom Lane wrote:
> Mike Roest writes:
> > This dump is currently taking around 8 minutes. While dumping, the pg_dump process is using 100% of one core in the server (24 core machine). Doing a -v pg_dump I found that the following stages are taking the majority of the time:
> > reading user-defined tables (2 minutes and 20 seconds)
> > reading dependency data (5 minutes and 30 seconds)
> Can you get an execution profile with oprofile or gprof or similar tool? It doesn't surprise me a lot that pg_dump might have some issues with large numbers of objects, but guessing which inefficiencies are hurting you is difficult without more info.
> regards, tom lane
Re: [GENERAL] pg_dump incredibly slow dumping a single schema from a large db
Mike Roest writes: > This dump is currently taking around 8 minutes. While dumping the pg_dump > process is using 100% of one core in the server (24 core machine). Doing a > -v pg_dump I found that the following stages are taking the majority of the > time > reading user_defined tables (2 minutes and 20 seconds) > reading dependency data (5 minutes and 30 seconds) Can you get an execution profile with oprofile or gprof or similar tool? It doesn't surprise me a lot that pg_dump might have some issues with large numbers of objects, but guessing which inefficiencies are hurting you is difficult without more info. regards, tom lane
[GENERAL] pg_dump incredibly slow dumping a single schema from a large db
Hey Everyone, I've got an interesting issue. We're running postgres 9.1.1, linux x64, centos 5.8.

aspdata=# select version();
PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit

We currently have 1 DB we use for multiple independent tenant schemas. The database size is currently 56227005240 bytes as reported by pg_database_size. There are 557 schemas, each with about 1300 objects (760 tables, 520 views). We are using pg_dump to back up a single schema with a total size of 5480448 bytes (calculated with SELECT sum(pg_relation_size(schemaname || '.' || tablename))::bigint FROM pg_tables WHERE schemaname = 'miketest';):

pg_dump -f /dumps/test.backup -Fc -n miketest aspdata

This dump is currently taking around 8 minutes. While dumping, the pg_dump process is using 100% of one core in the server (24 core machine). Doing a -v pg_dump I found that the following stages are taking the majority of the time:

reading user-defined tables (2 minutes and 20 seconds)
reading dependency data (5 minutes and 30 seconds)

The size of the schema doesn't really seem to affect this; the times are almost identical for a 700 meg schema as well (obviously the data dump portion takes longer with the bigger db).

During "reading user-defined tables" the following query shows up for 10-20 seconds, then the pg_dump connection sits idle for the rest of the 2 minutes:

SELECT c.tableoid, c.oid, c.relname, c.relacl, c.relkind, c.relnamespace, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = c.relowner) AS rolname, c.relchecks, c.relhastriggers, c.relhasindex, c.relhasrules, c.relhasoids, c.relfrozenxid, tc.oid AS toid, tc.relfrozenxid AS tfrozenxid, c.relpersistence, CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, d.refobjid AS owning_tab, d.refobjsubid AS owning_col, (SELECT spcname FROM pg_tablespace t WHERE t.oid = c.reltablespace) AS reltablespace, array_to_string(c.reloptions, ', ') AS reloptions, array_to_string(array(SELECT 'toast.' || x FROM unnest(tc.reloptions) x), ', ') AS toast_reloptions FROM pg_class c LEFT JOIN pg_depend d ON (c.relkind = 'S' AND d.classid = c.tableoid AND d.objid = c.oid AND d.objsubid = 0 AND d.refclassid = c.tableoid AND d.deptype = 'a') LEFT JOIN pg_class tc ON (c.reltoastrelid = tc.oid) WHERE c.relkind in ('r', 'S', 'v', 'c', 'f') ORDER BY c.oid

During "reading dependency data" the following queries show up for a few seconds, then the connection sits idle for the rest of the 5.5 minutes:

SELECT tableoid, oid, typname, typnamespace, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = typowner) AS rolname, typinput::oid AS typinput, typoutput::oid AS typoutput, typelem, typrelid, CASE WHEN typrelid = 0 THEN ' '::"char" ELSE (SELECT relkind FROM pg_class WHERE oid = typrelid) END AS typrelkind, typtype, typisdefined, typname[0] = '_' AND typelem != 0 AND (SELECT typarray FROM pg_type te WHERE oid = pg_type.typelem) = oid AS isarray FROM pg_type

SELECT classid, objid, refclassid, refobjid, deptype FROM pg_depend WHERE deptype != 'p' AND deptype != 'e' ORDER BY 1,2

Non-production servers with fewer schemas don't seem to have any issue and perform the same dump in under 10 seconds on much lower classed hardware.

Server specs:
2 x Intel Xeon X5650
32 GB of RAM
Dell Perc H700 controller
Data drive - 6x SAS2 15K in RAID10, FS: xfs
Log drive - 2x SAS2 15K in RAID1, FS: xfs

There are 2 of these machines, one master, the other slaved via streaming replication over a gigabit network. Thanks
[GENERAL] Surge 2012 CFP is Open!
Surge 2012, the scalability conference, September 27-28, Baltimore, MD has opened its CFP. Please visit http://omniti.com/surge/2012/cfp for details. -- Katherine Jeschke Director of Marketing and Creative Services OmniTI Computer Consulting, Inc. 7070 Samuel Morse Drive, Ste.150 Columbia, MD 21046 O: 443-325-1357, 222 F: 410/872-4911 C: 443/643-6140 omniti.com Surge2012: http://omniti.com/surge/2012 PG Corridor Days - DC: http://pgday.bwpug.org/ The information contained in this electronic message and any attached documents is privileged, confidential, and protected from disclosure. If you are not the intended recipient, note that any review, disclosure, copying, distribution, or use of the contents of this electronic message or any attached documents is prohibited. If you have received this communication in error, please destroy it and notify us immediately by telephone (1-443-325-1360) or by electronic mail (i...@omniti.com). Thank you.
[GENERAL] Re: double check the role has what's kind of the privilege? And the same for the objects. Thanks.
I think they don't care about the grantee. They only care about the users (for example the application team user, develop team user) and the privileges they have. Thanks, guys. Grace
Re: [GENERAL] Re: double check the role has what's kind of the privilege? And the same for the objects. Thanks.
Tom Lane wrote: >>> My bosses ask me to list all the users and all the privilege which >> the superuser granted to the >>> users. >>> Then they can double check that I did right thing or not? > >> Unlike Oracle, PostgreSQL does not have a concept of "grantor", >> so it is not possible to find out which privileges were granted >> by a superuser. > > Um, we *do* have a concept of grantors, and that information is > recorded. However, in Postgres any grant or revoke executed by > a superuser is treated as having been done by the object's owner, > so what gets recorded as the grantor in such a case is the owner. > So, right answer, wrong reasoning. Oops, you're right of course. Sorry for causing confusion. Yours, Laurenz Albe
[GENERAL] Re: how to pass the function caller's parameter to inside the function. syntax error at or near "$1"
merlin: Thanks.
Re: [GENERAL] PostgreSQL crashed server
On 03/26/2012 07:41 PM, Martín Marqués wrote: > Any ideas? Last year I had memory problems with this server (changed the faulty bank), I wouldn't want it to be happening again. It's a crash deep in memory management for inode storage used by the ext3 filesystem code. Chances are it's a hardware fault or (small chance) a kernel bug. If Pg could cause a crash like that it'd be a kernel DoS vulnerability, but I'm much more inclined to suspect the hardware even without the dodgy history. -- Craig Ringer
Re: [GENERAL] Re: double check the role has what's kind of the privilege? And the same for the objects. Thanks.
"Albe Laurenz" writes: > leaf_yxj wrote: >> My bosses ask me to list all the users and all the privilege which > the superuser granted to the >> users. >> Then they can double check that I did right thing or not? > Unlike Oracle, PostgreSQL does not have a concept of "grantor", > so it is not possible to find out which privileges were granted > by a superuser. Um, we *do* have a concept of grantors, and that information is recorded. However, in Postgres any grant or revoke executed by a superuser is treated as having been done by the object's owner, so what gets recorded as the grantor in such a case is the owner. So, right answer, wrong reasoning. regards, tom lane
Re: [GENERAL] PANIC: corrupted item pointer
Hi, thanks so much for answering. I found a "segmentation fault" in my logs, so please check below:

> On Tue, 2012-03-27 at 11:47 +0200, Janning Vygen wrote:
>> I am running postgresql-9.1 from the debian backport package with fsync=on and full_page_writes=off
> That may be unsafe (and usually is) depending on your I/O system and filesystem. However, because you didn't have any power failures, I don't think this is the cause of the problem.

I think I should switch to full_page_writes=on. But as my hard disks are rather cheap, I had tuned it to get maximum performance.

> These WARNINGs below could also be caused by a power failure. Can you verify that no power failure occurred? E.g. check uptime, and maybe look at a few logfiles?

The PANIC occurred first on March 19. My server's uptime is 56 days, so since about the 4th of February. There was no power failure since I started to use this machine. This machine has been in use since March 7. I checked it twice: no power failure. But I found more strange things, so let me show you a summary (some things were shortened for readability)

1. Segmentation fault

Mar 13 19:01 LOG: server process (PID 32464) was terminated by signal 11: Segmentation fault
Mar 13 19:01 FATAL: the database system is in recovery mode
Mar 13 19:01 LOG: unexpected pageaddr 22/8D402000 in log file 35, segment 208, offset 4202496
Mar 13 19:01 LOG: redo done at 23/D0401F78
Mar 13 19:01 LOG: last completed transaction was at log time 2012-03-13 19:01:58.667779+01
Mar 13 19:01 LOG: checkpoint starting: end-of-recovery immediate

2. PANICs

Mar 19 22:14 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 20 23:38 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 21 23:30 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 23 02:10 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 24 06:12 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 25 01:28 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 26 22:16 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 27 09:17 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 27 09:21 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 27 09:36 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 27 09:48 PANIC: corrupted item pointer: offset = 21248, size = 16
Mar 27 10:01 PANIC: corrupted item pointer: offset = 21248, size = 16

What I additionally see is that my table rankingentry was not autovacuumed anymore after the first PANIC on March 19. It was still autovacuumed without error after the segmentation fault.

3. Then I rebuilt all indexes on this table, dropped the old indexes, and ran vacuum on the table:

WARNING: relation "rankingentry" page 424147 is uninitialized --- fixing
WARNING: relation "rankingentry" page 424154 is uninitialized --- fixing
WARNING: relation "rankingentry" page 424155 is uninitialized --- fixing
WARNING: relation "rankingentry" page 424166 is uninitialized --- fixing
WARNING: relation "rankingentry" page 424167 is uninitialized --- fixing
WARNING: relation "rankingentry" page 424180 is uninitialized --- fixing

After this everything is running just fine. No more problems, just headache.

> Did you get the PANIC and WARNINGs on the primary or the replica? It might be worth doing some comparisons between the two systems.

It only happened on my primary server. My backup server has no suspicious log entries. It is pretty obvious to me that the segmentation fault is the main reason for getting the PANICs afterwards. What can cause a segmentation fault? Is there anything to analyse further?

kind regards Janning -- Kicktipp GmbH Venloer Straße 8, 40477 Düsseldorf Sitz der Gesellschaft: Düsseldorf Geschäftsführung: Janning Vygen Handelsregister Düsseldorf: HRB 55639 http://www.kicktipp.de/
[GENERAL] Re: double check the role has what's kind of the privilege? And the same for the objects. Thanks.
Hi Albe, My bosses ask me to list 1) all the users and the roles associated with the users, and 2) all the users and the privileges associated with those users. Thanks. Regards, Grace

At 2012-03-30 16:07:08, "Albe Laurenz *EXTERN* [via PostgreSQL]" wrote:
> leaf_yxj wrote:
>> My bosses ask me to list all the users and all the privilege which the superuser granted to the users.
>> Then they can double check that I did right thing or not?
> Unlike Oracle, PostgreSQL does not have a concept of "grantor", so it is not possible to find out which privileges were granted by a superuser. It is possible to find out all privileges for a certain user, but it's probably a bit complicated. What exactly should be checked? Yours, Laurenz Albe
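Since the thread never spells out concrete queries, here is one possible starting point (a sketch against the standard catalogs; the column lists may need tailoring to what the bosses want to see):

```sql
-- 1) users and the roles granted to them
SELECT m.rolname AS member, r.rolname AS granted_role
FROM pg_auth_members am
JOIN pg_roles r ON r.oid = am.roleid
JOIN pg_roles m ON m.oid = am.member
ORDER BY 1, 2;

-- 2) table-level privileges per grantee
SELECT grantee, table_schema, table_name, privilege_type
FROM information_schema.table_privileges
WHERE grantee <> 'PUBLIC'
ORDER BY grantee, table_schema, table_name;
```

psql's \du and \dp commands show much of the same information interactively.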
[GENERAL] Adding new and changed data
Hi there, Below is a snippet of summary data (24 hours, i.e. ~86400 seconds) and detail data (~300 seconds) that gets generated every day and then posted to a central server, which needs to load these data into a (preferably PostgreSQL) database. Now, my problem is that the majority of the previous data (except the first entry) is the same between the files of different days, as the extraction is a tad brain-dead/difficult to do per previous day, and we prefer to have those extra data just in case. My problem is importing the data in bulk while keeping the version of each record that has the longest interval value (the fourth field in the CSV below). Refer to the 03/29 entries in the *.gs files. The *.gd files have the same issue, however there would only be a single entry that might be shorter, while we'll also be adding a full day's worth of extra ~5min interval data. If I set a unique key on date,time,interval_length (or secondsSince1970,interval_length), I can reject the identical entries, but then I'm left with duplicates on date,time (or secondsSince1970) and would need to clean up with duplicate-detection SQL code, which I'd prefer to prevent at load time. Any advice/ideas as to the K.I.S.S. way to implement insert/update instead of doing the select/delete search for duplicates? (Columns: Date, Time, secondsSince1970, interval_length in seconds, performance data...)
[visagehe@tsysl01 capacity]$ tail -n4 daily/dnba01-201203*gs
==> daily/dnba01-20120329.gs <==
03/26/2012,00:00:00,133272,86411.1, 4.10, 0.18, 53.36, 93,
03/27/2012,00:00:00,1332806400,86406.9, 4.23, 0.21, 53.63, 96,
03/28/2012,00:00:00,1332892800,86409.0, 4.15, 0.18, 53.94, 93,
03/29/2012,00:00:00,1332979200,57606.0, 4.17, 0.17, 54.19, 78,

==> daily/dnba01-20120330.gs <==
03/27/2012,00:00:00,1332806400,86406.9, 4.23, 0.21, 53.63, 96,
03/28/2012,00:00:00,1332892800,86409.0, 4.15, 0.18, 53.94, 93,
03/29/2012,00:00:00,1332979200,86409.5, 4.16, 0.20, 54.18, 93,
03/30/2012,00:00:00,1333065600, 300.4, 14.73, 5.22, 54.64, 79,

[visagehe@tsysl01 capacity]$ tail -n4 daily/dnba01-201203*gd
==> daily/dnba01-20120329.gd <==
03/29/2012,15:40:00,1333035600, 300.1, 4.35, 0.06, 54.12, 77,
03/29/2012,15:45:00,1333035900, 300.0, 3.62, 0.18, 54.14, 80,
03/29/2012,15:50:00,1333036200, 300.0, 3.67, 0.16, 54.17, 79,
03/29/2012,15:55:00,1333036500, 299.0, 5.27, 0.39, 54.54, 82,

==> daily/dnba01-20120330.gd <==
03/29/2012,23:45:00,1333064700, 300.1, 3.60, 0.03, 54.29, 73,
03/29/2012,23:50:00,1333065000, 300.0, 3.71, 0.04, 54.30, 77,
03/29/2012,23:55:00,1333065300, 300.0, 5.17, 0.05, 54.68, 76,
03/30/2012,00:00:00,1333065600, 300.4, 14.73, 5.22, 54.64, 79,
[visagehe@tsysl01 capacity]$
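One way to K.I.S.S. this in the load script itself rather than in SQL: collapse the rows before they ever hit the database, keeping the row with the longest interval per timestamp. A sketch in Python (the sample rows are cut down from the .gs output above; an actual loader would read the daily files and feed the survivors to COPY):

```python
import csv
from io import StringIO

# Two days' files overlap: 03/29 appears with a partial (57606.0s)
# and a full (86409.5s) interval. Keep the longer one.
data = """\
03/29/2012,00:00:00,1332979200,57606.0, 4.17, 0.17, 54.19, 78,
03/29/2012,00:00:00,1332979200,86409.5, 4.16, 0.20, 54.18, 93,
03/30/2012,00:00:00,1333065600, 300.4, 14.73, 5.22, 54.64, 79,
"""

best = {}  # secondsSince1970 -> row with the longest interval so far
for row in csv.reader(StringIO(data)):
    key = row[2].strip()
    if key not in best or float(row[3]) > float(best[key][3]):
        best[key] = row

rows = [best[k] for k in sorted(best, key=int)]
assert len(rows) == 2
assert float(best['1332979200'][3]) == 86409.5
```

With the input de-duplicated up front, a plain unique key on secondsSince1970 suffices on the database side, and no duplicate-cleanup SQL is needed afterwards.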
Re: [GENERAL] user get notification when postgresql database updated
On 29 Mar 2012, at 11:43, Albert wrote: > it a browser based application. so would you advice me about the best way to > poll the database for notifications ? > > I've been read about DB triggers but still can't tell if it will help me. The HTTP protocol doesn't have a push mechanism, so you can't notify your browser-based application from your server. You will have to poll. Googling for "ajax push" turned up this explanation: http://www.subbu.org/blog/2006/04/dissecting-ajax-server-push Alban Hertroys -- The scale of a problem often equals the size of an ego.
Re: [GENERAL] default value returned from sql stmt
On 30 Mar 2012, at 10:22, Richard Huxton wrote: > On 30/03/12 08:46, Pavel Stehule wrote: >> 2012/3/30 Richard Huxton: >>> On 29/03/12 23:28, Pavel Stehule wrote: >>>> select anum from t1 where anum = 4 union all select 100 limit 1; >>> I'm not sure the ordering here is guaranteed by the standard though, is it? >>> You could end up with the 4 being discarded. >> The order is random for only "UNION"; "UNION ALL" should respect order. But I didn't check it in the standard. > Let's put it this way - a quick bit of googling can't find anything that says the order *is* guaranteed, and (almost?) no other operations do so by default. Obviously, UNION needs to sort the results to filter out any duplicate rows, so it would change the order of the results of the above query and return the 100-valued row for anum values > 100. UNION ALL will not do so by default, so it would probably behave as Pavel describes. Until you add an ORDER BY to your query. A more robust implementation would be: select anum, 0 from t1 where anum = 4 union all select 100, 1 order by 2 limit 1; If you don't want the extra column in your query results, you can wrap the query in another select. Alban Hertroys -- The scale of a problem often equals the size of an ego.
Re: [GENERAL] default value returned from sql stmt
2012/3/30 Richard Huxton : > On 30/03/12 08:46, Pavel Stehule wrote: >> 2012/3/30 Richard Huxton: >>> On 29/03/12 23:28, Pavel Stehule wrote: >>>> select anum from t1 where anum = 4 union all select 100 limit 1; >>> I'm not sure the ordering here is guaranteed by the standard though, is it? >>> You could end up with the 4 being discarded. >> The order is random for only "UNION"; "UNION ALL" should respect order. But I didn't check it in the standard. > Let's put it this way - a quick bit of googling can't find anything that > says the order *is* guaranteed, and (almost?) no other operations do so by > default. Yes, it should work in pg, but it might not work elsewhere. A secure solution is: SELECT x FROM (SELECT * FROM (SELECT 1, x FROM tab WHERE x = 10 LIMIT 1) s1 UNION ALL SELECT 2, -1000 ORDER BY 1 LIMIT 1) s2; Regards Pavel Stehule > -- > Richard Huxton > Archonet Ltd
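Another pattern that sidesteps UNION ALL ordering entirely is COALESCE over a scalar subquery. This is not from the thread, just a common alternative; it is sketched against SQLite purely because that is easy to run standalone, and the SQL shape is the same in PostgreSQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (anum INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?)", [(1,), (4,), (7,)])

# Scalar subquery yields either a row or NULL; COALESCE supplies the default.
q = "SELECT COALESCE((SELECT anum FROM t1 WHERE anum = ? LIMIT 1), 100)"
hit = con.execute(q, (4,)).fetchone()[0]    # matching row exists -> its value
miss = con.execute(q, (42,)).fetchone()[0]  # no row -> the default
assert hit == 4
assert miss == 100
```

No ordering guarantee is relied on here, so the question of whether UNION ALL preserves branch order never arises.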
Re: [GENERAL] user get notification when postgresql database updated
Another question: my app should display notifications to each user depending on his selected cars. Can I do that using just AJAX and PHP? -- View this message in context: http://postgresql.1045698.n5.nabble.com/user-get-notification-when-postgresql-database-updated-tp5600187p5606001.html Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Re: [GENERAL] default value returned from sql stmt
On 30/03/12 08:46, Pavel Stehule wrote: 2012/3/30 Richard Huxton: On 29/03/12 23:28, Pavel Stehule wrote: select anum from t1 where anum = 4 union all select 100 limit 1; I'm not sure the ordering here is guaranteed by the standard though, is it? You could end up with the 4 being discarded. A order is random for only "UNION", "UNION ALL" should to respect order. But I didn't check it in standard. Let's put it this way - a quick bit of googling can't find anything that says the order *is* guaranteed, and (almost?) no other operations do so by default. -- Richard Huxton Archonet Ltd
Re: [GENERAL] octet_length operator: what encoding?
Chris Angelico wrote: > We have a number of varchar fields and I'm looking to see what the > greatest data length in any is, after UTF-8 encoding. The two-argument > length function appears (I think) to take a byte array, so it's the > opposite of what I'm looking for (give it a UTF-8 encoded string and > the second parameter 'UTF-8' and it'll count characters). The > octet_length function, though, doesn't accept an encoding argument. > What does it use? octet_length counts bytes in the database encoding. To measure UTF-8 byte lengths regardless of the database encoding, you probably want something like this: test=> SELECT length(convert_to('schön', 'UTF8')); length 6 (1 row) Yours, Laurenz Albe
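[Editor's note: the distinction Chris is after (characters vs. UTF-8 bytes) can be sanity-checked outside the database; a quick Python illustration of why 'schön' measures 6 bytes but 5 characters:]

```python
s = "schön"

n_chars = len(s)                  # character count: 5
n_bytes = len(s.encode("utf-8"))  # byte count: 6, since 'ö' (U+00F6)
                                  # encodes as two bytes in UTF-8
print(n_chars, n_bytes)
```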
Re: [GENERAL] Re: double check the role has what's kind of the privilege? And the same for the objects. Thanks.
leaf_yxj wrote: > My bosses ask me to list all the users and all the privilege which the superuser granted to the > users. > Then they can double check that I did right thing or not? Unlike Oracle, PostgreSQL does not have a concept of "grantor", so it is not possible to find out which privileges were granted by a superuser. It is possible to find out all privileges for a certain user, but it's probably a bit complicated. What exactly should be checked? Yours, Laurenz Albe
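[Editor's note: one starting point for the "list all privileges for a certain user" part, a sketch only. The role name 'some_user' is a placeholder, and this covers table-level grants only; column-, function-, and database-level privileges would need additional queries.]

```sql
-- Table privileges held by one role; 'some_user' is a placeholder.
SELECT grantee, table_schema, table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'some_user'
ORDER BY table_schema, table_name, privilege_type;
```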
Re: [GENERAL] default value returned from sql stmt
2012/3/30 Richard Huxton : > On 29/03/12 23:28, Pavel Stehule wrote: >> >> select anum from t1 where anum = 4 >> union all select 100 limit 1; > > > I'm not sure the ordering here is guaranteed by the standard though, is it? > You could end up with the 4 being discarded. Order is only random for "UNION"; "UNION ALL" should respect order. But I didn't check it in the standard. Pavel > > -- > Richard Huxton > Archonet Ltd
Re: [GENERAL] default value returned from sql stmt
On 29/03/12 23:28, Pavel Stehule wrote: select anum from t1 where anum = 4 union all select 100 limit 1; I'm not sure the ordering here is guaranteed by the standard though, is it? You could end up with the 4 being discarded. -- Richard Huxton Archonet Ltd