Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-10 Thread Kevin Grittner
Chris Travers  wrote:
> John R Pierce  wrote:

> I often find in those cases it is a choice between vacuum full
> and dumpall/initdb/reload/analyze.

Way back in the 8.1 days I often found CLUSTER to be my best option
-- as long as I had room enough for a second copy (without the
bloat) while it was running.  If I didn't have that much room the
dump/initdb/restore option was almost always faster than VACUUM
FULL.  Personally, I would follow the restore with VACUUM FREEZE
ANALYZE rather than just ANALYZE, so that the entire database
didn't come due for wraparound prevention vacuums at an inopportune
time.
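A minimal sketch of that dump/initdb/restore sequence (the paths and the
database name here are placeholders, not taken from the original report):

    pg_dumpall > /backup/all.sql                  # dump the whole old cluster
    initdb -D /var/lib/pgsql/data.new             # create a fresh cluster
    pg_ctl -D /var/lib/pgsql/data.new start
    psql -f /backup/all.sql -d postgres           # reload everything
    psql -d somedb -c 'VACUUM FREEZE ANALYZE;'    # freeze now, at a convenient time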

The main thing here seems to be to upgrade to a supported version
(preferably 9.2 or 9.3).

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-10 Thread Chris Travers
On Wed, Oct 9, 2013 at 7:04 PM, John R Pierce  wrote:

> On 10/8/2013 8:35 AM, Chris Travers wrote:
>
>> First, while vacuum is usually preferred to vacuum full, in this case, I
>> usually find that vacuum full clears up enough cruft to be worth it (not
>> always, but especially if you are also having performance issues).
>>
>
>
> IIRC, vacuum full was pretty broken in 8.1, which the output the original
> poster showed indicates they were running.


I certainly wouldn't recommend it for routine maintenance.  The problem I
have run into is that sometimes folks don't vacuum db's and you find this
out after 7 years of write-heavy workloads.  In this case, there aren't
a lot of great options.  In 8.1 a normal vacuum will usually lead to tons
of bloat in this case because the FSM isn't big enough to accommodate all
the free space, which is a problem.  So at that point, vacuum without the
full option is pretty broken in 8.1 :-P  I often find in those cases it is
a choice between vacuum full and dumpall/initdb/reload/analyze.  It is
better now that there is no maximum size for the free space map though.
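For reference, the free space map in 8.1 was capped by two postgresql.conf
settings (both removed in 8.4); a sketch of raising them, with purely
illustrative numbers:

    # postgresql.conf (8.1.x) -- size these to the actual number of pages and
    # relations with reclaimable space, e.g. as reported by VACUUM VERBOSE
    max_fsm_pages = 2000000
    max_fsm_relations = 10000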

Best Wishes,
Chris Travers

>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-09 Thread John R Pierce

On 10/8/2013 8:35 AM, Chris Travers wrote:
First, while vacuum is usually preferred to vacuum full, in this case, 
I usually find that vacuum full clears up enough cruft to be worth it 
(not always, but especially if you are also having performance issues).



IIRC, vacuum full was pretty broken in 8.1, which the output the 
original poster showed indicates they were running.


--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver

On 10/08/2013 09:03 AM, David Johnston wrote:



postgres -D /var/lib/pgsql/data patnadbold < fix.sql


What the heck is the point of feeding "VACUUM FULL;" into the standard input
of the postgres command?  "postgres" simply starts the server, it does not
execute arbitrary SQL.  Once the database is started you want to use "psql"
- either interactively or in a similar manner to the above - to connect to
the  database and run "VACUUM".


I was under that impression also. So I went back in the archives, and in 
8.1 this was possible.


http://www.postgresql.org/docs/8.1/interactive/app-postgres.html






PostgreSQL stand-alone backend 8.1.11


8.1 is unsupported at this point; plus you say 8.4 below which this
contradicts.  Your configuration is very unclear.


This part is still unclear to me. I would like some more information on 
where the 8.4 cluster comes into play, if at all. My hunch is the OP is 
working with two different Postgres instances unintentionally.





David J.




--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help Required

2013-10-08 Thread David Johnston

> postgres -D /var/lib/pgsql/data patnadbold < fix.sql

What the heck is the point of feeding "VACUUM FULL;" into the standard input
of the postgres command?  "postgres" simply starts the server, it does not
execute arbitrary SQL.  Once the database is started you want to use "psql"
- either interactively or in a similar manner to the above - to connect to
the  database and run "VACUUM".

Note that:  "su postgres" -> "psql" will likely connect you to the
"postgres" database as that is the normal default.  You will need to tell
psql which database to connect to.
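For example, something along these lines (database name taken from the thread):

    su - postgres
    psql -d patnaonlinedb    # name the database explicitly; plain "psql"
                             # would connect to the "postgres" database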


> PostgreSQL stand-alone backend 8.1.11

8.1 is unsupported at this point; plus you say 8.4 below which this
contradicts.  Your configuration is very unclear.


> ERROR: could not access status of transaction 33011
> DETAIL: could not open file "pg_clog/": No such file or directory
> exit

This is bothersome as well...


> After this i am able to stop /start my db server but i am not able to
> connect to my databases (it tells to run vacuum full first on patnadbold
> databases)
> 
> 
> 1)I am using postgres 8.4 version.
> 2) I had two databases on this server i) patnadbold  ii) patnaonlinedb
> 
> For me patnadbold is of no use if at this moment i lost this database that
> also fine to me.
> I wanted to connect patnaonlinedb any how and wanted to perform backup of
> this , Solution please.

The only thing not mentioned is starting postgres in single-user mode; this
may be necessary, though I am not sure and others can be of more help.
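For what it's worth, a sketch of the single-user route on a current build
(on 8.1 the standalone backend is started exactly as in the original post,
i.e. "postgres -D datadir dbname" without --single); stop the regular server
first:

    pg_ctl -D /var/lib/pgsql/data stop
    postgres --single -D /var/lib/pgsql/data patnadbold

    backend> VACUUM;
    (Ctrl-D to exit, then start the server normally again)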

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Urgent-Help-Required-tp5773675p5773716.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread Chris Travers
First, while vacuum is usually preferred to vacuum full, in this case, I
usually find that vacuum full clears up enough cruft to be worth it (not
always, but especially if you are also having performance issues).

Secondly I would recommend using the vacuumdb command from the shell
instead of psql because it has some features that will help avoid some
issues.

Try:

vacuumdb -a

Or if you want to forcibly reclaim space (if you are suffering from db bloat
related to not vacuuming):

vacuumdb -f -a

You may need to run this as the postgres user.  It accepts the same
parameters for authentication that psql does.
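As a concrete sketch, run as the postgres OS user with the long-form flags:

    su - postgres
    vacuumdb --all --verbose          # plain VACUUM of every database
    # only if you really need to reclaim space (takes exclusive locks):
    vacuumdb --all --full --verbose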

Best Wishes,
Chris Travers



On Tue, Oct 8, 2013 at 8:25 AM, shailesh singh wrote:

> yes i am executing psql to connect to this database.
>
> su - postgres
> psql patnadbold
>
>
>
>
>
>
>
>
>
>
>
> On Tue, Oct 8, 2013 at 8:48 PM, bricklen  wrote:
>
>>
>>
>>
>> On Tue, Oct 8, 2013 at 8:13 AM, shailesh singh wrote:
>>
>>>
>>> On Tue, Oct 8, 2013 at 8:36 PM, bricklen  wrote:
>>>

 On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh 
 wrote:

> HINT: To avoid a database shutdown, execute a full-database VACUUM in
> "patnadbold".
> ERROR: could not access status of transaction 33011
> DETAIL: could not open file "pg_clog/": No such file or directory
> exit
>
>
> After this i am able to stop /start my db server but i am not able to
> connect to my databases (it tells to run vacuum full first on patnadbold
> databases)
>

 The message does *not* say to run "VACUUM FULL", it says to run a
 "full-database VACUUM". Different things.
 Connect to "patnadbold" and issue "VACUUM;" (without double-quotes) as
 the others have suggested.

>>>
>>> When i am trying to connect "patnadbold" , it is giving error for
>>> "execute a full-database VACUUM in "patnadbold" " .
>>> is there any way to connect this database using backend process . pl let
>>> me know the command sequnce i need to run.
>>> Thanks.
>>>
>>
>> Please keep replies CC'd to the pgsql-general list, and follow the format
>> of the other messages, which is to bottom-post.
>>
>> You need to show (again?) exactly what you are executing. Are you
>> connecting via psql or using another command? Are you connecting directly
>> to that "patnadbold" database? If you are able to connect to it, are you
>> able to issue just "VACUUM;" ?
>>
>>
>>
>>
>>
>
>
> --
>   With Regards,
>   शैलेश सिंह |Shailesh Singh
>  +९१-९६५०३१७५१७ | +91-9650317517
>
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread bricklen
On Tue, Oct 8, 2013 at 8:25 AM, shailesh singh wrote:

> yes i am executing psql to connect to this database.
>
> su - postgres
> psql patnadbold
>

...and then what? Does it immediately throw an error stating that you must
issue a VACUUM?


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread shailesh singh
yes i am executing psql to connect to this database.

su - postgres
psql patnadbold











On Tue, Oct 8, 2013 at 8:48 PM, bricklen  wrote:

>
>
>
> On Tue, Oct 8, 2013 at 8:13 AM, shailesh singh wrote:
>
>>
>> On Tue, Oct 8, 2013 at 8:36 PM, bricklen  wrote:
>>
>>>
>>> On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh 
>>> wrote:
>>>
 HINT: To avoid a database shutdown, execute a full-database VACUUM in
 "patnadbold".
 ERROR: could not access status of transaction 33011
 DETAIL: could not open file "pg_clog/": No such file or directory
 exit


 After this i am able to stop /start my db server but i am not able to
 connect to my databases (it tells to run vacuum full first on patnadbold
 databases)

>>>
>>> The message does *not* say to run "VACUUM FULL", it says to run a
>>> "full-database VACUUM". Different things.
>>> Connect to "patnadbold" and issue "VACUUM;" (without double-quotes) as
>>> the others have suggested.
>>>
>>
>> When i am trying to connect "patnadbold" , it is giving error for
>> "execute a full-database VACUUM in "patnadbold" " .
>> is there any way to connect this database using backend process . pl let
>> me know the command sequnce i need to run.
>> Thanks.
>>
>
> Please keep replies CC'd to the pgsql-general list, and follow the format
> of the other messages, which is to bottom-post.
>
> You need to show (again?) exactly what you are executing. Are you
> connecting via psql or using another command? Are you connecting directly
> to that "patnadbold" database? If you are able to connect to it, are you
> able to issue just "VACUUM;" ?
>
>
>
>
>


-- 
  With Regards,
  शैलेश सिंह |Shailesh Singh
 +९१-९६५०३१७५१७ | +91-9650317517


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread bricklen
On Tue, Oct 8, 2013 at 8:13 AM, shailesh singh wrote:

>
> On Tue, Oct 8, 2013 at 8:36 PM, bricklen  wrote:
>
>>
>> On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh wrote:
>>
>>> HINT: To avoid a database shutdown, execute a full-database VACUUM in
>>> "patnadbold".
>>> ERROR: could not access status of transaction 33011
>>> DETAIL: could not open file "pg_clog/": No such file or directory
>>> exit
>>>
>>>
>>> After this i am able to stop /start my db server but i am not able to
>>> connect to my databases (it tells to run vacuum full first on patnadbold
>>> databases)
>>>
>>
>> The message does *not* say to run "VACUUM FULL", it says to run a
>> "full-database VACUUM". Different things.
>> Connect to "patnadbold" and issue "VACUUM;" (without double-quotes) as
>> the others have suggested.
>>
>
> When i am trying to connect "patnadbold" , it is giving error for "execute
> a full-database VACUUM in "patnadbold" " .
> is there any way to connect this database using backend process . pl let
> me know the command sequnce i need to run.
> Thanks.
>

Please keep replies CC'd to the pgsql-general list, and follow the format
of the other messages, which is to bottom-post.

You need to show (again?) exactly what you are executing. Are you
connecting via psql or using another command? Are you connecting directly
to that "patnadbold" database? If you are able to connect to it, are you
able to issue just "VACUUM;" ?


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread Martijn van Oosterhout
On Tue, Oct 08, 2013 at 08:06:50AM -0700, bricklen wrote:
> On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh wrote:
> 
> > HINT: To avoid a database shutdown, execute a full-database VACUUM in
> > "patnadbold".
> > ERROR: could not access status of transaction 33011
> > DETAIL: could not open file "pg_clog/": No such file or directory
> > exit
> >
> >
> > After this i am able to stop /start my db server but i am not able to
> > connect to my databases (it tells to run vacuum full first on patnadbold
> > databases)
> >
> 
> The message does *not* say to run "VACUUM FULL", it says to run a
> "full-database VACUUM". Different things.
> Connect to "patnadbold" and issue "VACUUM;" (without double-quotes) as the
> others have suggested.

In case it isn't clear to the original poster, VACUUM FULL will take a
lot longer than a simple VACUUM and probably not really help much.

Just plain VACUUM.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver

On 10/08/2013 08:03 AM, shailesh singh wrote:

Dear all,
  First of all i wish to share actual error meassge,

Below are the queries i had executed on the terminal on my server


-bash-3.2$ touch fix.sql
-bash-3.2$ echo "VACUUM FULL;" > fix.sql
-bash-3.2$ postgres -D /var/lib/pgsql/data patnadbold < fix.sql
WARNING: database "patnadbold" must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".

PostgreSQL stand-alone backend 8.1.11




After this i am able to stop /start my db server but i am not able to
connect to my databases (it tells to run vacuum full first on patnadbold
databases)


1)I am using postgres 8.4 version.


This seems to be at odds with "PostgreSQL stand-alone backend 8.1.11".
Are you sure you are working on the correct database cluster?


2) I had two databases on this server i) patnadbold  ii) patnaonlinedb

For me patnadbold is of no use if at this moment i lost this database
that also fine to me.
I wanted to connect patnaonlinedb any how and wanted to perform backup
of this , Solution please.





--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread bricklen
On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh wrote:

> HINT: To avoid a database shutdown, execute a full-database VACUUM in
> "patnadbold".
> ERROR: could not access status of transaction 33011
> DETAIL: could not open file "pg_clog/": No such file or directory
> exit
>
>
> After this i am able to stop /start my db server but i am not able to
> connect to my databases (it tells to run vacuum full first on patnadbold
> databases)
>

The message does *not* say to run "VACUUM FULL", it says to run a
"full-database VACUUM". Different things.
Connect to "patnadbold" and issue "VACUUM;" (without double-quotes) as the
others have suggested.


Re: [GENERAL] Urgent Help Required

2013-10-08 Thread shailesh singh
Dear all,
 First of all, I wish to share the actual error message.

Below are the commands I executed in the terminal on my server


-bash-3.2$ touch fix.sql
-bash-3.2$ echo "VACUUM FULL;" > fix.sql
-bash-3.2$ postgres -D /var/lib/pgsql/data patnadbold < fix.sql
WARNING: database "patnadbold" must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".

PostgreSQL stand-alone backend 8.1.11
backend> WARNING: database "patnadbold" must be vacuumed within 99
transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 98 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 97 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 96 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 95 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 94 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 93 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 92 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 91 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 90 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 89 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 88 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 87 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 86 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 85 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
WARNING: database "patnadbold" must be vacuumed within 84 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
"patnadbold".
ERROR: could not access status of transaction 33011
DETAIL: could not open file "pg_clog/": No such file or directory
exit


After this I am able to stop/start my db server but I am not able to
connect to my databases (it tells me to run vacuum full first on the
patnadbold database).


1) I am using postgres 8.4 version.
2) I had two databases on this server: i) patnadbold  ii) patnaonlinedb

For me patnadbold is of no use; if at this moment I lose this database, that
is also fine with me.
I want to connect to patnaonlinedb anyhow and perform a backup of
it. Solution please.


On Tue, Oct 8, 2013 at 6:19 PM, Adrian Klaver wrote:

> On 10/08/2013 03:55 AM, shailesh singh wrote:
>
>> I had got this message while running vacuum full from backend . Now My
>> database is not starting , Help pls.
>>
>> backend> vacuum full debug;
>> WARNING:  database "debug" must be vacuumed within 99 transactions
>> HINT:  To avoid a database shutdown, execute a full-database VACUUM in
>> "debug".
>> ERROR:  relation "debug" does not exist
>>
>
>
>> Now what?
>>
>
> First some information.
>
> 1) What version of Postgres are you using?
>
> 2) Does database debug in fact exist or not?
> In other words does it show up with \l in psql?
>
> Also it not necessary to use FULL with the VACUUM.
>
>
>
>> Thanks in advance.
>>
>> Shailesh Singh
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>



-- 
  With Regards,
  शैलेश सिंह |Shailesh Singh
 +९१-९६५०३१७५१७ | +91-9650317517


Re: [GENERAL] Urgent Help Required

2013-10-08 Thread David Johnston
Adrian Klaver-3 wrote
> On 10/08/2013 03:55 AM, shailesh singh wrote:
>> I had got this message while running vacuum full from backend . Now My
>> database is not starting , Help pls.
>>
>> backend> vacuum full debug;
>> WARNING:  database "debug" must be vacuumed within 99 transactions
>> HINT:  To avoid a database shutdown, execute a full-database VACUUM in
>> "debug".
>> ERROR:  relation "debug" does not exist
> 
>>
>> Now what?
> 
> First some information.
> 
> 1) What version of Postgres are you using?
> 
> 2) Does database debug in fact exist or not?
>   In other words does it show up with \l in psql?
> 
> Also it not necessary to use FULL with the VACUUM.
> 
>>
>> Thanks in advance.
>>
>> Shailesh Singh

From the documentation:

http://www.postgresql.org/docs/9.2/interactive/sql-vacuum.html

"With no parameter, VACUUM processes every table in the current database
that the current user has permission to vacuum. With a parameter, VACUUM
processes only that table."

Since you must be connected to a database to issue VACUUM, specifying which
database would be redundant.  Your specification of "debug" in the above
command was not taken to be a database but rather a relation/table.

The basic steps are:

1) connect to the "debug" database.
2) issue the command  "VACUUM" with no parameters

The reason for the error is that transaction id wraparound is on the verge
of occurring.  After having solved the immediate problem by manually
vacuuming, you should try to describe to us why the auto-vacuum
service failed to vacuum the debug database in time to prevent the
warning.
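A quick way to see how close each database is to that limit is the standard
catalog query (not specific to this report); values approaching two billion
mean the anti-wraparound safety cutoff is near:

    psql -d postgres -c 'SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;'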

The immediate solution will work on any version but the cause analysis will
require knowing the PostgreSQL versions, its configuration, and basic usage
characteristics. Regardless, though, always provide version information when
asking for help.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Urgent-Help-Required-tp5773675p5773692.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver

On 10/08/2013 03:55 AM, shailesh singh wrote:

I had got this message while running vacuum full from backend . Now My
database is not starting , Help pls.

backend> vacuum full debug;
WARNING:  database "debug" must be vacuumed within 99 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
ERROR:  relation "debug" does not exist




Now what?


First some information.

1) What version of Postgres are you using?

2) Does database debug in fact exist or not?
In other words does it show up with \l in psql?

Also, it is not necessary to use FULL with the VACUUM.



Thanks in advance.

Shailesh Singh



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Urgent Help Required

2013-10-08 Thread shailesh singh
I got this message while running vacuum full from the backend. Now my
database is not starting. Help please.

backend> vacuum full debug;
WARNING:  database "debug" must be vacuumed within 99 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
ERROR:  relation "debug" does not exist
backend> vacuum full;
WARNING:  database "debug" must be vacuumed within 98 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 97 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 96 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 95 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 94 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 93 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 92 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 91 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 90 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 89 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 88 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 87 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 86 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 85 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 84 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
WARNING:  database "debug" must be vacuumed within 83 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
"debug".
ERROR:  could not access status of transaction 449971277
DETAIL:  could not open file "pg_clog/01AD": No such file or directory


Now what?

Thanks in advance.

Shailesh Singh


Re: [GENERAL] URGENT: temporary table not recognized?

2012-02-13 Thread Jasen Betts
On 2012-01-06, Phoenix Kiula  wrote:
> On Fri, Jan 6, 2012 at 6:53 PM, Steve Crawford
> wrote:



> Thanks Steve.
>
> The file has 350 million lines. Sed, Awk etc are a little painful when
> the file is 18GB with that many lines.

On files of that size they're a lot nicer than an interactive editor.

It's not like you need to find space on the disk for an edited copy:

(
echo "copy table_name from stdin other_parameters;"   # issue a COPY ... FROM STDIN
sed script_or_scriptfile bigfile                       # clean the data on the fly
echo '\.'                                              # COPY end-of-data marker
) | psql connection_parameters

> I'd want Postgresql to ignore the line altogether when something is
> missing. Is this an option we can use, or are rules hoisted on us?

The copy command is optimised and intended for use with data that is 
known to be good, or at least acceptable to the database.



-- 
⚂⚃ 100% natural


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-07 Thread Phoenix Kiula
On Fri, Jan 6, 2012 at 10:38 PM, John R Pierce  wrote:

> you should check your attitude at the door.  this isn't Microsoft Pay per
> Incident Tech Support.


I saw the door. Found some other attitudes that were allowed to be let
in. Like asking me to write my own patch. You see, attitudes come in
different shapes.



> what you're asking for is right on that very URL you so conveniently quoted,
> both the configuration file format ("Global Configuration Section"), and
> where the sample pgloader.conf can be found...


No it isn't. The config file is not clear. And it's available in three
different forms in three different places, which I found only thanks
to Google. It's a community project, I get it.

Moving on. I'll do the awk/sed thing. COPY from other databases
already has several such convenient features. Postgresql is not
interested in implementing them. Sure.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread John R Pierce

On 01/06/12 5:33 PM, Phoenix Kiula wrote:

>  http://pgloader.projects.postgresql.org/

Sorry. That I already did.

But where's the config file? How to configure the config file?
Where's the simple doc (not on that ugly PGFoundry website, I mean in
English that people can understand what to do, with a starting
sample?)





you should check your attitude at the door.  this isn't Microsoft Pay 
per Incident Tech Support.




what you're asking for is right on that very URL you so conveniently 
quoted, both the configuration file format ("Global Configuration 
Section"), and where the sample pgloader.conf can be found...



Please see the given configuration example which should be distributed 
in /usr/share/doc/pgloader/examples/pgloader.conf.


The example configuration file comes with example data and can be used as 
a unit test of pgloader.





so, yes, you need to download the package and un-tar it to see the 
sample .conf


$ wget http://pgfoundry.org/frs/download.php/2294/pgloader-2.3.2.tar.gz
$ tar xzf pgloader-2.3.2.tar.gz
$ cd pgloader-2.3.2
$ more examples/pgloader.conf



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Phoenix Kiula
On Fri, Jan 6, 2012 at 8:19 PM, Adrian Klaver  wrote:
> On Friday, January 06, 2012 4:16:09 pm Phoenix Kiula wrote:
>> On Fri, Jan 6, 2012 at 6:54 PM, Adrian Klaver  
>> wrote:
>> > Try:
>> > copy vl from 'data.txt' WITH CSV DELIMITER '|';
>>
>> Doesn't work. Can't see what the different in CSV is from a text file.
>> Same errors are thrown.
>>
>> > If that doesn't work take a look at pgloader:
>> > http://pgfoundry.org/projects/pgloader/
>>
>> Wow, another geeky tool. Hard to find documentation. Archaic
>> presentation, no simple steps to install and get using. Anyway doesn't
>> seem to provide the options I need
>> (http://pgloader.projects.postgresql.org/) --
>
> Install:
> Download
> tar -xzvf pgloader-2.3.2.tar.gz
> cd pgloader-2.3.2/
> sudo make
>
> Very difficult.
>
> http://pgloader.projects.postgresql.org/


Sorry. That I already did.

But where's the config file? How to configure the config file?
Where's the simple doc (not on that ugly PGFoundry website, I mean in
English that people can understand what to do, with a starting
sample?)



>> a) Ability to assign more than one NULL value
...

So Null, or Empty Value? (I will ignore the Phd I need for all that
geeky "reformat" jazz)

My request is simple. If a value is missing, ignore the line and move
on. This is an option that's perfectly alright to expect/need. I know
PG is being super-disciplined and strict, which is great if that's the
default, but to remove any option of flexibility on this pretext is a
little silly. And sounds like an excuse.


>> b) Ability to ignore lines altogether that have any problem
>
> reject_log
> In case of errors processing input data, a human readable log per rejected 
> input
> data line is produced into the reject_log file.
> reject_data
> In case of errors processing input data, the rejected input line is appended 
> to
> the reject_data file.


I don't need a reject log if the main command will croak and be
utterly useless, as COPY is already. If the full command will go but
some lines will be ignored then a reject log is useful so I can
identify the ignored lines. Which is it? My sense is it's the former,
which means this entire software is just a more complicated way of
(non-)achieving the same thing I do with COPY.



> I await with bated breath your most excellent patch to COPY.


Thanks for your help on this list. I'll be moving to more modern
couchDB type databases eventually anyway, but for now I'm stuck with
PG.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Adrian Klaver
On Friday, January 06, 2012 4:16:09 pm Phoenix Kiula wrote:
> On Fri, Jan 6, 2012 at 6:54 PM, Adrian Klaver  wrote:
> > Try:
> > copy vl from 'data.txt' WITH CSV DELIMITER '|';
> 
> Doesn't work. Can't see what the different in CSV is from a text file.
> Same errors are thrown.
> 
> > If that doesn't work take a look at pgloader:
> > http://pgfoundry.org/projects/pgloader/
> 
> Wow, another geeky tool. Hard to find documentation. Archaic
> presentation, no simple steps to install and get using. Anyway doesn't
> seem to provide the options I need
> (http://pgloader.projects.postgresql.org/) --

Install:
Download
tar -xzvf pgloader-2.3.2.tar.gz 
cd pgloader-2.3.2/
sudo make

Very difficult.

http://pgloader.projects.postgresql.org/
> 
> a) Ability to assign more than one NULL value
null
You can configure here how null value is represented into your flat data file.

This parameter is optional and defaults to '' (that is empty string). If 
defined 
on a table level, this local value will overwrite the global one.

empty_string
You can configure here how empty values are represented into your flat data 
file.

This parameter is optional and defaults to \ (that is backslash followed by 
space). If defined on a table level, this local value will overwrite the global 
one.

reformat
Use this option when you need to preprocess some column data with pgloader 
reformatting modules, or your own. The value of this option is a comma 
separated 
list of columns to rewrite, which are a colon separated list of column name, 
reformat module name, reformat function name. Here's an example to reformat 
column dt_cx with the mysql.timestamp() reformatting function:

reformat = dt_cx:mysql:timestamp
See global setting option reformat_path for configuring where pgloader will 
look 
for reformat packages and modules.

If you want to write a new formating function, provide a python package called 
reformat (a directory of this name containing an empty __init__.py file will 
do) 
and place in there arbitrary named modules (foo.py files) containing functions 
with the following signature:

def bar(reject, input)
The reject object has a log(self, messages, data = None) method for you to log 
errors into section.rej.log and section.rej files.

> b) Ability to ignore lines altogether that have any problem

reject_log
In case of errors processing input data, a human readable log per rejected 
input 
data line is produced into the reject_log file.
reject_data
In case of errors processing input data, the rejected input line is appended to 
the reject_data file.

> 
> Really, other databases have mechanisms to ignore "problem lines"
> while copying. Does Postgresql allow me to *ignore* lines while
> COPYING?

No.

I await with bated breath your most excellent patch to COPY.


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Steve Crawford

On 01/06/2012 03:55 PM, Phoenix Kiula wrote:

...

In general, when you have data scrubbing issues like this, grep/sed/awk/...
are your friends. Clean it up then import it.

Thanks Steve.

The file has 350 million lines. Sed, Awk etc are a little painful when
the file is 18GB with that many lines.

I'd want Postgresql to ignore the line altogether when something is
missing. Is this an option we can use, or are rules hoisted on us?



I've found grep, sed and friends to be quite effective and proper 
pre-cleaning to have a relatively minor impact on performance. Done 
properly, you will just be piping the data through a very simple 
grep/sed/awk/... into psql. No extra disk-reads, minimal memory use and 
a bit of CPU. And you will be in charge of deciding how suspect data is 
handled.
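A sketch of such a pipeline for the pipe-delimited data in this thread (the
field count, table name, and database name are assumptions based on the
sample lines posted, and the table's column list would need to match):

    (
      echo "copy vl from stdin with delimiter '|' null '';"
      awk -F'|' 'NF == 5 && $1 != ""' /backup/data.txt   # keep only complete rows with an ID
      echo '\.'
    ) | psql -d mydb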


If by "rules hoisted" you mean "will PostgreSQL make arbitrary and 
possibly incorrect assumptions to attempt to force bad data into a 
table" then the answer is "no". In fact, it has become more and more 
picky over time. Trust me, at some point you will thank it for doing so.


For example the following used to "work":
select current_date < 2020-01-01;

But it returned "false" which was probably not what the user wanted. 
(2020-01-01 is the integer 2018 which PostgreSQL interpreted as being 
less than current_date). But it sure looks OK at first glance when you 
really meant '2020-01-01'::date.


In current versions, that statement will throw an error just at 
PostgreSQL does with dates like February 31 and a variety of other 
things that certain other DBMS' deem good.


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Phoenix Kiula
On Fri, Jan 6, 2012 at 6:54 PM, Adrian Klaver  wrote:
>
> Try:
> copy vl from 'data.txt' WITH CSV DELIMITER '|';


Doesn't work. Can't see what the difference in CSV is from a text file.
Same errors are thrown.


> If that doesn't work take a look at pgloader:
> http://pgfoundry.org/projects/pgloader/


Wow, another geeky tool. Hard to find documentation. Archaic
presentation, no simple steps to install and get using. Anyway doesn't
seem to provide the options I need
(http://pgloader.projects.postgresql.org/) --

a) Ability to assign more than one NULL value
b) Ability to ignore lines altogether that have any problem

Really, other databases have mechanisms to ignore "problem lines"
while copying. Does Postgresql allow me to *ignore* lines while
COPYING?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Steve Crawford

On 01/06/2012 03:12 PM, Phoenix Kiula wrote:

...

Sounds like you are using statement pooling - every statement can be
assigned to a different server connection. You may need transaction pooling
or session pooling:

http://pgbouncer.projects.postgresql.org/doc/usage.html



Thanks Steve. YES! I changed it to transaction pooling and now it works.


But Marko is correct. If you were using:
begin;
create temporary...
\copy
commit;

and if your pooler was set to statement then you should have seen an error.

Are you by any chance routing stderr to /dev/null or otherwise hiding 
messages? If you are using "psql ... 2>/dev/null" then everything would 
*look* the same with pooling set to statement or transaction but you 
would be blissfully ignorant of the errors.


Cheers,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Phoenix Kiula
On Fri, Jan 6, 2012 at 6:53 PM, Steve Crawford
 wrote:
> On 01/06/2012 03:42 PM, Phoenix Kiula wrote:
>>
>> On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaver
>>  wrote:
>>
>>> http://www.postgresql.org/docs/9.0/interactive/sql-copy.html
>>>
>>> Search for
>>> NULL
>>
>>
>>
>> Thanks Adrian.
>>
>> Without examples, it's hard to predict syntax. If the value after a
>> pipe is missing altogether, I suppose the missing value is "\n"
>> (newline). But this doesn't work:
>>
>>    copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '\n';
>>
>> None of these work either:
>>
>>    copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \n;
>>    copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \\n;
>>    copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '';
>>
>> The first two give errors, the third one throws the same missing value
>> for column error.
>>
>> The data is stored like this:
>>
>>     123|big string here|189209209|US|2001-01-01
>>     123|big string here|189209209|US|2001-01-01
>>     123|big string here|189209209|US|2001-01-01
>>     123|big string here|189209209|US|2001-01-01
>>
>> But sometimes, the strings are:
>>
>>     |big string here|189209209|US|2001-01-01
>>     |big string here|189209209|US|2001-01-01
>>
>> Or
>>
>>     123|big string here|189209209|US
>>     123|big string here|189209209|US|
>>
>> So you see either the first column, which is the ID in a way, is
>> missing so the "missing character" is probably a blank (''?). In this
>> case I want COPY to just ignore this line.
>>
>> Or the last column is missing, where the missing character can be a
>> newline I suppose?
>>
>> So how do I specify this in the COPY command so that it doesn't croak?
>> If a line's ID is missing, it should ignore the line and go on instead
>> of not doing anything by throwing an error for EVERYTHING!
>>
>> Thanks.
>>
>>
> Missing data is one thing, missing delimiters is another. Try doing a small
> copy of data with just a few lines to see which variants are actually
> causing the error. My money is on the one that has a mismatch between the
> table column count and the data column count. I.e., the row with three
> delimiters instead of four:
>
>
> 23|big string here|189209209|US
>
> When you say "ignore", do you mean that you want PostgreSQL to assume a null
> value for the missing column or to not import that row at all?
>
> In general, when you have data scrubbing issues like this, grep/sed/awk/...
> are your friends. Clean it up then import it.
>
> I suppose you could import all rows into a big text field and process it in
> PostgreSQL but I doubt you will find that to be an optimal solution.



Thanks Steve.

The file has 350 million lines. Sed, Awk etc are a little painful when
the file is 18GB with that many lines.

I'd want Postgresql to ignore the line altogether when something is
missing. Is this an option we can use, or are rules hoisted on us?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Adrian Klaver

On 01/06/2012 03:42 PM, Phoenix Kiula wrote:

On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaver  wrote:


http://www.postgresql.org/docs/9.0/interactive/sql-copy.html

Search for
NULL




Thanks Adrian.

Without examples, it's hard to predict syntax. If the value after a
pipe is missing altogether, I suppose the missing value is "\n"
(newline). But this doesn't work:

copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '\n';

None of these work either:

copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \n;
copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \\n;
copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '';


Try:
copy vl from 'data.txt' WITH CSV DELIMITER '|';
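If the only problem were empty fields, spelling the options out in 9.0's
option-list form would look something like this (a sketch, using the path
from the first message, and assuming the table's columns match the file):

    copy vl from '/backup/data.txt' WITH (FORMAT csv, DELIMITER '|', NULL '');

Rows with a missing column (too few delimiters), however, are still rejected,
so those have to be filtered out before the COPY.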


If that doesn't work take a look at pgloader:
http://pgfoundry.org/projects/pgloader/






--
Adrian Klaver
adrian.kla...@gmail.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Steve Crawford

On 01/06/2012 03:42 PM, Phoenix Kiula wrote:

On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaver  wrote:


http://www.postgresql.org/docs/9.0/interactive/sql-copy.html

Search for
NULL



Thanks Adrian.

Without examples, it's hard to predict syntax. If the value after a
pipe is missing altogether, I suppose the missing value is "\n"
(newline). But this doesn't work:

copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '\n';

None of these work either:

copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \n;
copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \\n;
copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '';

The first two give errors, the third one throws the same missing value
for column error.

The data is stored like this:

 123|big string here|189209209|US|2001-01-01
 123|big string here|189209209|US|2001-01-01
 123|big string here|189209209|US|2001-01-01
 123|big string here|189209209|US|2001-01-01

But sometimes, the strings are:

 |big string here|189209209|US|2001-01-01
 |big string here|189209209|US|2001-01-01

Or

 123|big string here|189209209|US
 123|big string here|189209209|US|

So you see either the first column, which is the ID in a way, is
missing so the "missing character" is probably a blank (''?). In this
case I want COPY to just ignore this line.

Or the last column is missing, where the missing character can be a
newline I suppose?

So how do I specify this in the COPY command so that it doesn't croak?
If a line's ID is missing, it should ignore the line and go on instead
of not doing anything by throwing an error for EVERYTHING!

Thanks.


Missing data is one thing, missing delimiters is another. Try doing a 
small copy of data with just a few lines to see which variants are 
actually causing the error. My money is on the one that has a mismatch 
between the table column count and the data column count. I.e., the row 
with three delimiters instead of four:


23|big string here|189209209|US

When you say "ignore", do you mean that you want PostgreSQL to assume a 
null value for the missing column or to not import that row at all?


In general, when you have data scrubbing issues like this, 
grep/sed/awk/... are your friends. Clean it up then import it.


I suppose you could import all rows into a big text field and process it 
in PostgreSQL but I doubt you will find that to be an optimal solution.


Cheers,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Phoenix Kiula
On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaver  wrote:

> http://www.postgresql.org/docs/9.0/interactive/sql-copy.html
>
> Search for
> NULL



Thanks Adrian.

Without examples, it's hard to predict syntax. If the value after a
pipe is missing altogether, I suppose the missing value is "\n"
(newline). But this doesn't work:

   copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '\n';

None of these work either:

   copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \n;
   copy vl from 'data.txt' WITH DELIMITER AS '|' NULL \\n;
   copy vl from 'data.txt' WITH DELIMITER AS '|' NULL '';

The first two give errors, the third one throws the same missing value
for column error.

The data is stored like this:

123|big string here|189209209|US|2001-01-01
123|big string here|189209209|US|2001-01-01
123|big string here|189209209|US|2001-01-01
123|big string here|189209209|US|2001-01-01

But sometimes, the strings are:

|big string here|189209209|US|2001-01-01
|big string here|189209209|US|2001-01-01

Or

123|big string here|189209209|US
123|big string here|189209209|US|

So you see either the first column, which is the ID in a way, is
missing so the "missing character" is probably a blank (''?). In this
case I want COPY to just ignore this line.

Or the last column is missing, where the missing character can be a
newline I suppose?

So how do I specify this in the COPY command so that it doesn't croak?
If a line's ID is missing, it should ignore the line and go on instead
of not doing anything by throwing an error for EVERYTHING!

Thanks.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Adrian Klaver

On 01/06/2012 03:12 PM, Phoenix Kiula wrote:

On Fri, Jan 6, 2012 at 4:24 PM, Steve Crawford
  wrote:

On 01/06/2012 01:11 PM, Phoenix Kiula wrote:





Thanks Steve. YES! I changed it to transaction pooling and now it works.

Another problem though.

I need to COPY a huge text file into a table, with about 350 million
lines in the file (i.e., 350 million rows in the table).

While copying, some lines do not have data. They are empty values.

How can I specify in COPY command that if data is not found, it should
be ignored? In my temp table definition, I set this column as "NULL"
anyway, so it should be ok if this column was left empty!

What can I do in my COPY command to circumvent this?


http://www.postgresql.org/docs/9.0/interactive/sql-copy.html

Search for
NULL



Thanks.




--
Adrian Klaver
adrian.kla...@gmail.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Phoenix Kiula
On Fri, Jan 6, 2012 at 4:24 PM, Steve Crawford
 wrote:
> On 01/06/2012 01:11 PM, Phoenix Kiula wrote:
>>
>> On Fri, Jan 6, 2012 at 11:46 AM, Tom Lane  wrote:
>>>
>>> Phoenix Kiula  writes:

 Hi. I'm using Postgresql 9.0.5, and the connection is made via
 pgbouncer.
>>>
>>> Perhaps pgbouncer is redirecting the second command to a different
>>> session?
>>>
>>
>>
>> Thanks Tom. I'm in the exact same session in my terminal, and the
>> commands are entered within 2 seconds of each other. With copy/paste,
>> maybe split microseconds of each other.
>>
>> How can I make sure pgbouncer takes it all in the same session? I also
>> tried the two commands within a transaction.
>>
>
> Sounds like you are using statement pooling - every statement can be
> assigned to a different server connection. You may need transaction pooling
> or session pooling:
>
> http://pgbouncer.projects.postgresql.org/doc/usage.html



Thanks Steve. YES! I changed it to transaction pooling and now it works.

Another problem though.

I need to COPY a huge text file into a table, with about 350 million
lines in the file (i.e., 350 million rows in the table).

While copying, some lines do not have data. They are empty values.

How can I specify in COPY command that if data is not found, it should
be ignored? In my temp table definition, I set this column as "NULL"
anyway, so it should be ok if this column was left empty!

What can I do in my COPY command to circumvent this?

Thanks.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Tom Lane
Marko Kreen  writes:
> On Fri, Jan 6, 2012 at 11:24 PM, Steve Crawford
>  wrote:
>> On 01/06/2012 01:11 PM, Phoenix Kiula wrote:
>>> How can I make sure pgbouncer takes it all in the same session? I also
>>> tried the two commands within a transaction.

>> Sounds like you are using statement pooling - every statement can be
>> assigned to a different server connection. You may need transaction pooling
>> or session pooling:
>> http://pgbouncer.projects.postgresql.org/doc/usage.html

> Statement pooling throws error on open transaction.

Yeah, if it still fails within a single transaction, it gets harder to
blame pgbouncer.  But there are not very many other candidates.  I
wondered about a funny setting of search_path, but ISTM that could at
worst result in copying into the wrong table (ie some other table named
"vl"), not failure to find any "vl" at all.

It might be worth turning on statement logging and ensuring that
log_line_prefix includes the process PID.  Then the postmaster log would
provide indisputable evidence whether the CREATE and the COPY are
executed in the same session or not.
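A sketch of the relevant postgresql.conf settings (reload the server
afterwards; the exact prefix format is a matter of taste):

    log_statement = 'all'          # log every statement
    log_line_prefix = '%p %m '     # %p = backend PID, %m = timestamp

Then compare the PID on the CREATE TEMPORARY TABLE line with the PID on the
COPY line.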

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Marko Kreen
On Fri, Jan 6, 2012 at 11:24 PM, Steve Crawford
 wrote:
> On 01/06/2012 01:11 PM, Phoenix Kiula wrote:
>>
>> On Fri, Jan 6, 2012 at 11:46 AM, Tom Lane  wrote:
>>>
>>> Phoenix Kiula  writes:

 Hi. I'm using Postgresql 9.0.5, and the connection is made via
 pgbouncer.
>>>
>>> Perhaps pgbouncer is redirecting the second command to a different
>>> session?
>>>
>>
>>
>> Thanks Tom. I'm in the exact same session in my terminal, and the
>> commands are entered within 2 seconds of each other. With copy/paste,
>> maybe split microseconds of each other.
>>
>> How can I make sure pgbouncer takes it all in the same session? I also
>> tried the two commands within a transaction.
>>
>
> Sounds like you are using statement pooling - every statement can be
> assigned to a different server connection. You may need transaction pooling
> or session pooling:
>
> http://pgbouncer.projects.postgresql.org/doc/usage.html

Statement pooling throws error on open transaction.

-- 
marko

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Steve Crawford

On 01/06/2012 01:11 PM, Phoenix Kiula wrote:

On Fri, Jan 6, 2012 at 11:46 AM, Tom Lane  wrote:

Phoenix Kiula  writes:

Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.

Perhaps pgbouncer is redirecting the second command to a different
session?




Thanks Tom. I'm in the exact same session in my terminal, and the
commands are entered within 2 seconds of each other. With copy/paste,
maybe split microseconds of each other.

How can I make sure pgbouncer takes it all in the same session? I also
tried the two commands within a transaction.



Sounds like you are using statement pooling - every statement can be 
assigned to a different server connection. You may need transaction 
pooling or session pooling:


http://pgbouncer.projects.postgresql.org/doc/usage.html
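In pgbouncer.ini terms that is the pool_mode setting, e.g.:

    [pgbouncer]
    ; statement | transaction | session -- temp tables generally need at least
    ; transaction pooling (created and used inside one transaction) or session pooling
    pool_mode = transaction

followed by a reload or restart of pgbouncer.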

Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Phoenix Kiula
On Fri, Jan 6, 2012 at 11:46 AM, Tom Lane  wrote:
> Phoenix Kiula  writes:
>> Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.
>
> Perhaps pgbouncer is redirecting the second command to a different
> session?
>



Thanks Tom. I'm in the exact same session in my terminal, and the
commands are entered within 2 seconds of each other. With copy/paste,
maybe split microseconds of each other.

How can I make sure pgbouncer takes it all in the same session? I also
tried the two commands within a transaction.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Bosco Rama
Tom Lane wrote:
> Phoenix Kiula  writes:
>> Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.
> 
> Perhaps pgbouncer is redirecting the second command to a different
> session?

This may be OT, but are temp tables also removed when setting a new session
authorization?

Bosco.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Tom Lane
Phoenix Kiula  writes:
> Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.

Perhaps pgbouncer is redirecting the second command to a different
session?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] URGENT: temporary table not recognized?

2012-01-06 Thread Phoenix Kiula
Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.

I create a temporary table, and then want to import data into this
table via a COPY command. Yet, this just created table is not being
recognized. What's up?

From my terminal:



mydb=#
mydb=# create temporary table vl (
  alias varchar(35)
  ,dates timestamp without time zone
  ,referers text null
);
CREATE TABLE
Time: 1.871 ms

mydb=# copy vl from '/backup/data.txt' WITH DELIMITER AS '|';
ERROR:  relation "vl" does not exist

mydb=#



Weird! Welcome any thoughts.

Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Order

2011-05-11 Thread Joshua J. Kugler
On Saturday 07 May 2011, John R Pierce elucidated thus:
> On 05/07/11 6:08 AM, Bob Wilson wrote:
> > Hello
> > This is Bob and I will like to order ( Indexing Table )Do get back
> > to me with the types and cost for the ones you do carry and let me
> > know if there is an extra cost when using visa or master
> > Card.Kindly get back  to me with your name Are you the sales
> > manager or the Owner?
>
> * smallint - $2
> * integer - $4
> * bigint - $8
> * varchar - $1/character
> * boolean - $1 each
> * bytea - $1/byte
> * date - $6
> * timestamp - $8
> * ...

http://www.google.com/search?q=bob+wilson+urgent+order

Makes for some amusing reading.

j

-- 
Joshua Kugler
Part-Time System Admin/Programmer
http://www.eeinternet.com - Fairbanks, AK
PGP Key: http://pgp.mit.edu/  ID 0x73B13B6A

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Order

2011-05-07 Thread John R Pierce

On 05/07/11 6:08 AM, Bob Wilson wrote:

Hello
This is Bob and I will like to order ( Indexing Table )Do get back to
me with the types and cost for the ones you do carry and let me know
if there is an extra cost when using visa or master Card.Kindly get
back  to me with your name Are you the sales manager or the Owner?


   * smallint - $2
   * integer - $4
   * bigint - $8
   * varchar - $1/character
   * boolean - $1 each
   * bytea - $1/byte
   * date - $6
   * timestamp - $8
   * ...


Please send cash, in small bills only in a plain brown wrapper, to

NW3C
10900 Nuckols Rd # 325
Glen Allen, VA 23060-9288


:)




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Urgent Order

2011-05-07 Thread Bob Wilson
Hello
This is Bob and I will like to order ( Indexing Table )Do get back to
me with the types and cost for the ones you do carry and let me know
if there is an extra cost when using visa or master Card.Kindly get
back  to me with your name Are you the sales manager or the Owner?
Regards
Bob Wilson

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent -- High memory usage on PostgreSQL server

2010-12-13 Thread hubert depesz lubaczewski
On Sun, Dec 12, 2010 at 09:49:52PM -0800, savio rodriges wrote:
> Hello,
> 
> We are facing very HIGH memory utilization on postgreSQL server and need help.

which number from all of what's below is making you worried?

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent -- High memory usage on PostgreSQL server

2010-12-12 Thread Jan Kesten
Hello Savio,

> top - 21:43:35 up 55 days,  8:07,  4 users,  load average: 0.05,
> 0.25, 0.17 Tasks: 257 total,   1 running, 256 sleeping,   0 stopped,
> 0 zombie Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,
> 0.0%hi,  0.0%si,  0.0%st Mem:   8165696k total,  7943160k used,
> 222536k free,   282044k buffers Swap:  8385920k total,   112828k
> used,  8273092k free,  4793732k cached

this looks like pretty normal behaviour. Your system does use almost all of
its RAM, but if you look at the buffers and cached figures in your top output,
about 5 GB of your 8 GB are used there. Only the remaining 3 GB are used
by postgres and other applications.

Every "new" linux I know of will use otherwise idle RAM for caching your
filesystem data, for example, so reported memory usage tends towards 100%.
This is "a good thing" (TM), as postgres will run better if there is caching
at the OS level and the instance knows about it:

> effective_cache_size =  — This value tells PostgreSQL's
> optimizer how much memory PostgreSQL has available for caching data
> and helps in determing whether or not it use an index or not. The
> larger the value increases the likely hood of using an index. This
> should be set to the amount of memory allocated to shared_buffers
> plus the amount of OS cache available. Often this is more than 50% of
> the total system memory.

So don't worry - everything is fine. Just keep an eye on the system so it
doesn't start swapping.
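
For reference, a postgresql.conf sketch along the lines of the quoted advice
-- the numbers are only illustrative, derived from the roughly 4.7 GB of OS
cache visible in your top output, not a tuned recommendation:

    shared_buffers       = 1024MB
    # planner hint only, not an allocation; roughly shared_buffers + OS cache
    effective_cache_size = 5GB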

Cheers,
Jan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent -- High memory usage on PostgreSQL server

2010-12-12 Thread Paul McGarry
On Mon, Dec 13, 2010 at 4:49 PM, savio rodriges  wrote:

> Hello,
>
> We are facing very HIGH memory utilization on postgreSQL server and need
> help.
>
> Mem:   8165696k total,  7943160k used,   222536k free,   282044k buffers
> Swap:  8385920k total,   112828k used,  8273092k free,  4793732k cached
>

What makes you say memory usage is high?

You have ~8gb of memory.
Less than 3gb of that is being used by applications and the OS is using the
"spare" 5gb for caching and buffers.

Paul


Re: [GENERAL] Urgent -- High memory usage on PostgreSQL server

2010-12-12 Thread Allan Kamau
On Mon, Dec 13, 2010 at 8:49 AM, savio rodriges  wrote:
> Hello,
>
> We are facing very HIGH memory utilization on postgreSQL server and need help.
>
> Below are details of PostgreSQL server,
>
> ===
> MemTotal:      8165696 kB
> CpuTotal:      8
> ===
> /etc/sysctl.conf
> 
> # Controls the maximum shared segment size, in bytes
> kernel.shmmax = 68719476736
>
> # Controls the maximum number of shared memory segments, in pages
> kernel.shmall = 4294967296
> ===
> top - 21:43:35 up 55 days,  8:07,  4 users,  load average: 0.05, 0.25, 0.17
> Tasks: 257 total,   1 running, 256 sleeping,   0 stopped,   0 zombie
> Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   8165696k total,  7943160k used,   222536k free,   282044k buffers
> Swap:  8385920k total,   112828k used,  8273092k free,  4793732k cached
> ===
> -bash-3.2$ free -m
>             total       used       free     shared    buffers     cached
> Mem:          7974       7756        217          0        275       4681
> -/+ buffers/cache:       2799       5174
> Swap:         8189        110       8079
>
> ===
> -bash-3.2$ cat /proc/meminfo
> MemTotal:      8165696 kB
> MemFree:        222576 kB
> Buffers:        282136 kB
> Cached:        4793748 kB
> SwapCached:      21144 kB
> Active:        3508524 kB
> Inactive:      4004532 kB
> HighTotal:           0 kB
> HighFree:            0 kB
> LowTotal:      8165696 kB
> LowFree:        222576 kB
> SwapTotal:     8385920 kB
> SwapFree:      8273092 kB
> Dirty:             548 kB
> Writeback:         368 kB
> AnonPages:     2415992 kB
> Mapped:         730104 kB
> Slab:           332440 kB
> PageTables:      58632 kB
> NFS_Unstable:        0 kB
> Bounce:              0 kB
> CommitLimit:  12468768 kB
> Committed_AS:  4774104 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed:    267392 kB
> VmallocChunk: 34359470967 kB
> HugePages_Total:     0
> HugePages_Free:      0
> HugePages_Rsvd:      0
> Hugepagesize:     2048 kB
> ===
> processor       : Total 8
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 26
> model name      : Intel(R) Xeon(R) CPU           E5504  @ 2.00GHz
> stepping        : 5
> cpu MHz         : 2000.118
> cache size      : 4096 KB
> physical id     : 1
> siblings        : 4
> core id         : 3
> cpu cores       : 4
> apicid          : 22
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 11
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx rdtscp 
> lm constant_tsc nonstop_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr 
> sse4_1 sse4_2 popcnt lahf_lm
> bogomips        : 4000.10
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 40 bits physical, 48 bits virtual
> power management: [8]
> ===
> -bash-3.2$ ipcs
>
> -- Shared Memory Segments 
> key        shmid      owner      perms      bytes      nattch     status
> 0x0052e2c1 35454978   postgres  600        1142923264 28
>
> -- Semaphore Arrays 
> key        semid      owner      perms      nsems
> 0x0052e2c1 37650432   postgres  600        17
> 0x0052e2c2 37683201   postgres  600        17
> 0x0052e2c3 37715970   postgres  600        17
> 0x0052e2c4 37748739   postgres  600        17
> 0x0052e2c5 37781508   postgres  600        17
> 0x0052e2c6 37814277   postgres  600        17
> 0x0052e2c7 37847046   postgres  600        17
>
> -- Message Queues 
> key        msqid      owner      perms      used-bytes   messages
> ===
> -bash-3.2$ ipcs -l
>
> -- Shared Memory Limits 
> max number of segments = 4096
> max seg size (kbytes) = 67108864
> max total shared memory (kbytes) = 17179869184
> min seg size (bytes) = 1
>
> -- Semaphore Limits 
> max number of arrays = 128
> max semaphores per array = 250
> max semaphores system wide = 32000
> max ops per semop call = 32
> semaphore max value = 32767
>
> -- Messages: Limits 
> max queues system wide = 16
> max size of message (bytes) = 65536
> default max size of queue (bytes) = 65536
> ===
> /var/lib/pgsql/data/postgresql.conf
> ---
> (Non-default values are as follows)
>
> maintenance_work_mem = 480MB
> checkpoint_completion_targe

[GENERAL] Urgent -- High memory usage on PostgreSQL server

2010-12-12 Thread savio rodriges
Hello,

We are facing very HIGH memory utilization on postgreSQL server and need help.

Below are details of PostgreSQL server,

===
MemTotal:  8165696 kB
CpuTotal:  8
===
/etc/sysctl.conf

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
===
top - 21:43:35 up 55 days,  8:07,  4 users,  load average: 0.05, 0.25, 0.17
Tasks: 257 total,   1 running, 256 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8165696k total,  7943160k used,   222536k free,   282044k buffers
Swap:  8385920k total,   112828k used,  8273092k free,  4793732k cached
===
-bash-3.2$ free -m
             total       used       free     shared    buffers     cached
Mem:          7974       7756        217          0        275       4681
-/+ buffers/cache:       2799       5174
Swap:         8189        110       8079

===
-bash-3.2$ cat /proc/meminfo
MemTotal:  8165696 kB
MemFree:222576 kB
Buffers:282136 kB
Cached:4793748 kB
SwapCached:  21144 kB
Active:3508524 kB
Inactive:  4004532 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:  8165696 kB
LowFree:222576 kB
SwapTotal: 8385920 kB
SwapFree:  8273092 kB
Dirty: 548 kB
Writeback: 368 kB
AnonPages: 2415992 kB
Mapped: 730104 kB
Slab:   332440 kB
PageTables:  58632 kB
NFS_Unstable:0 kB
Bounce:  0 kB
CommitLimit:  12468768 kB
Committed_AS:  4774104 kB
VmallocTotal: 34359738367 kB
VmallocUsed:267392 kB
VmallocChunk: 34359470967 kB
HugePages_Total: 0
HugePages_Free:  0
HugePages_Rsvd:  0
Hugepagesize: 2048 kB
===
processor   : Total 8
vendor_id   : GenuineIntel
cpu family  : 6
model   : 26
model name  : Intel(R) Xeon(R) CPU   E5504  @ 2.00GHz
stepping: 5
cpu MHz : 2000.118
cache size  : 4096 KB
physical id : 1
siblings: 4
core id : 3
cpu cores   : 4
apicid  : 22
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx rdtscp lm 
constant_tsc nonstop_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 
sse4_2 popcnt lahf_lm
bogomips: 4000.10
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: [8]
===
-bash-3.2$ ipcs

-- Shared Memory Segments 
key        shmid      owner      perms      bytes      nattch     status
0x0052e2c1 35454978   postgres   600        1142923264 28

-- Semaphore Arrays 
key        semid      owner      perms      nsems
0x0052e2c1 37650432   postgres   600        17
0x0052e2c2 37683201   postgres   600        17
0x0052e2c3 37715970   postgres   600        17
0x0052e2c4 37748739   postgres   600        17
0x0052e2c5 37781508   postgres   600        17
0x0052e2c6 37814277   postgres   600        17
0x0052e2c7 37847046   postgres   600        17

-- Message Queues 
key        msqid      owner      perms      used-bytes   messages
===
-bash-3.2$ ipcs -l

-- Shared Memory Limits 
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 17179869184
min seg size (bytes) = 1

-- Semaphore Limits 
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

-- Messages: Limits 
max queues system wide = 16
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536
===
/var/lib/pgsql/data/postgresql.conf
---
(Non-default values are as follows)

maintenance_work_mem = 480MB
checkpoint_completion_target = 0.7
work_mem = 36MB
wal_buffers = 4MB
checkpoint_segments = 8
shared_buffers = 1024MB
log_duration = on
log_min_duration_statement = 1
effective_cache_size=2048MB
===

Any idea where the issue is and what needs to be changed?

Thanks

Re: [GENERAL] Urgent please: PGPOOL II 2.3.3 hang in ssl mode

2010-05-09 Thread Tom Lane
AI Rumman  writes:
> The function call "pg_catalog.pg_encoding_to_char(d.encoding)" somehow makes
> the Pgpool hang in SSL mode.

pg_encoding_to_char returns type "name", maybe that's somehow confusing
pgpool?  Although I seriously doubt there's any connection to SSL mode.
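
If you want to test whether the "name" result type is what trips up pgpool,
one experiment is to cast it to text in the same query from the report
(whether that actually avoids the hang is unverified):

    SELECT d.datname as "Name",
           r.rolname as "Owner",
           pg_catalog.pg_encoding_to_char(d.encoding)::text as "Encoding"
    FROM pg_catalog.pg_database d
      JOIN pg_catalog.pg_roles r ON d.datdba = r.oid
    ORDER BY 1;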

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Urgent please: PGPOOL II 2.3.3 hang in ssl mode

2010-05-08 Thread AI Rumman
I am using Pgpool II 2.3.3 with Postgresql 8.3.8.

When I use the \l command from a postgresql client, the query works perfectly.

But if I use the command from a pgpool II client which is connected to
postgresql in ssl mode, it hangs.

Again, if I use the command from a pgpool II client in non-ssl mode, it works
fine.

Any help please.


When I use the following query it works :

SELECT d.datname as "Name",
   r.rolname as "Owner",
   d.encoding as "Encoding"
FROM pg_catalog.pg_database d
  JOIN pg_catalog.pg_roles r ON d.datdba = r.oid
ORDER BY 1

But the following query does not work:

SELECT d.datname as "Name",
   r.rolname as "Owner",
   pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding"
FROM pg_catalog.pg_database d
  JOIN pg_catalog.pg_roles r ON d.datdba = r.oid
ORDER BY 1

The function call "pg_catalog.pg_encoding_to_char(d.encoding)" somehow makes
the Pgpool hang in SSL mode.

I found that the following works fine:

postgres=# create table mytab  as select relname from pg_class limit 37;
SELECT
postgres=# select pg_relation_size('mytab');
 pg_relation_size
--
 8192
(1 row)

But, the following hangs:

postgres=# select relname from pg_class limit 38;

Any help please. It's urgent.


Re: [GENERAL] Urgent help needed- alias name in update statement

2010-03-10 Thread Albe Laurenz
Venkat wrote:
> In postgre, when i am trying to give alias name in update 
> statement like below - 
> 
> - 
> update mytable x
> set x.name = 'asdf'
> where x.no = 1   
> ---
> 
> 
> is giving error - mytable is not having col x. 
> 
> We have migrated code from oracle to postgre 8.4. Is there 
> any solution for this. 
> (functions were compiled without any compilation errors - now 
> when we are trying to run these we are getting problems) 

The name is PostgreSQL or Postgres.

I looked at the syntax of the UPDATE statement as specified by
the SQL standard, and it says in Part 2, chapters 14.11 and 14.12,
that PostgreSQL is behaving in the standard-conforming way.

If you use nonstandard SQL extensions of a database vendor,
portability will suffer, which is neither Oracle's nor
PostgreSQL's fault.

I guess you'll have to rewrite those UPDATE statements.
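
For what it's worth, PostgreSQL does accept an alias on the target table; it
is only the SET target column that must stay unqualified. A sketch that 8.4
should accept, using the table and values from the question:

    UPDATE mytable AS x
       SET name = 'asdf'      -- note: the SET column stays unqualified
     WHERE x.no = 1;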

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent help needed- alias name in update statement

2010-03-09 Thread Raymond O'Donnell
On 09/03/2010 13:51, venkatra...@tcs.com wrote:
> Hello,
> 
> In postgre, when i am trying to give alias name in update statement like 
> below -
> 
> -
> update mytable x
> set x.name = 'asdf'
> where x.no = 1 
> ---

Just leave off the "x." :

update mytable
set name = 'asdf'
where no = 1;

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent help needed- alias name in update statement

2010-03-09 Thread Adrian Klaver
On Tuesday 09 March 2010 5:51:31 am venkatra...@tcs.com wrote:
> Hello,
>
> In postgre, when i am trying to give alias name in update statement like
> below -
>
> -
> update mytable x
> set x.name = 'asdf'
> where x.no = 1
> ---
>
> is giving error - mytable is not having col x.
>
> We have migrated code from oracle to postgre 8.4. Is there any solution
> for this.
> (functions were compiled without any compilation errors - now when we are
> trying to run these we are getting problems)
>
> Please help..
>

From here:
http://www.postgresql.org/docs/8.4/interactive/sql-update.html

"column

The name of a column in table. The column name can be qualified with a 
subfield name or array subscript, if needed. Do not include the table's name in 
the specification of a target column — for example, UPDATE tab SET tab.col = 1 
is invalid. "


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Urgent help needed- alias name in update statement

2010-03-09 Thread venkatrao . b
Hello,

In postgre, when i am trying to give alias name in update statement like 
below -

-
update mytable x
set x.name = 'asdf'
where x.no = 1 
---

is giving error - mytable is not having col x.

We have migrated code from oracle to postgre 8.4. Is there any solution 
for this.
(functions were compiled without any compilation errors - now when we are 
trying to run these we are getting problems)

Please help..

Venkat
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you




Re: [GENERAL] Urgent Help required

2009-10-16 Thread Mike Christensen
Hmm would this be a bad time to ask for PostGres 1.0 support?

On Fri, Oct 16, 2009 at 1:55 PM, Jeff Davis  wrote:
> On Fri, 2009-10-16 at 11:26 +0100, Neha Patel wrote:
>> We are running with postgres sql 7.3.2. We were trying to create an
>> index on a big table. The create index command ran for nearly 5 hours
>> at which point we decided to interrupt it. Since this was interrupted,
>> any operations attempted on the table on which the index was being
>> created gives following error in pgsql log:
>
>> LOG: all server processes terminated; reinitializing shared memory and
>> semaphor
>
> 1. Shut down postgresql and make a full filesystem copy of the PGDATA
> directory. This will ensure that anything else you do won't leave you in
> a worse position.
>
> 2. Upgrade to the latest version of postgresql 7.3, which is 7.3.21
>
> 3. Start up again
>
> 4. Try to fix the problem:
>  a. see if there are any indexes on the table
>  b. if so, drop them
>  c. try to get a good logical backup using pg_dump. You may want to
> disable index scans by using "SET enable_indexscan = f;".
>  d. If that doesn't work, you may have catalog corruption. Examine the
> catalogs (documented here:
> http://www.postgresql.org/docs/7.3/static/catalogs.html ), and look
> entries related to your table in pg_class and pg_index, and see if
> anything looks wrong.
>  e. start from a fresh install and restore using the logical backup
>
> 5. Upgrade all data to a recent version of postgresql. You're on a
> really ancient version that has fallen out of official support. This may
> require several upgrade steps, but you should get upgraded to 8.3.8 or
> 8.4.1. A lot of bugs have been fixed, and as long as you are on 7.3, you
> will still be at serious risk.
>
> Regards,
>        Jeff Davis
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help required

2009-10-16 Thread Jeff Davis
On Fri, 2009-10-16 at 11:26 +0100, Neha Patel wrote:
> We are running with postgres sql 7.3.2. We were trying to create an
> index on a big table. The create index command ran for nearly 5 hours
> at which point we decided to interrupt it. Since this was interrupted,
> any operations attempted on the table on which the index was being
> created gives following error in pgsql log:

> LOG: all server processes terminated; reinitializing shared memory and
> semaphor

1. Shut down postgresql and make a full filesystem copy of the PGDATA
directory. This will ensure that anything else you do won't leave you in
a worse position.

2. Upgrade to the latest version of postgresql 7.3, which is 7.3.21

3. Start up again

4. Try to fix the problem:
 a. see if there are any indexes on the table
 b. if so, drop them
 c. try to get a good logical backup using pg_dump. You may want to
disable index scans by using "SET enable_indexscan = f;".
 d. If that doesn't work, you may have catalog corruption. Examine the
catalogs (documented here:
http://www.postgresql.org/docs/7.3/static/catalogs.html ), and look
entries related to your table in pg_class and pg_index, and see if
anything looks wrong.
 e. start from a fresh install and restore using the logical backup

5. Upgrade all data to a recent version of postgresql. You're on a
really ancient version that has fallen out of official support. This may
require several upgrade steps, but you should get upgraded to 8.3.8 or
8.4.1. A lot of bugs have been fixed, and as long as you are on 7.3, you
will still be at serious risk.
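
A hedged sketch of step 4c, with a hypothetical database name and output path
(PGOPTIONS passes the setting to the backend through libpq):

    PGOPTIONS='-c enable_indexscan=off' pg_dump mydb > /backup/mydb.sql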

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help required

2009-10-16 Thread John R Pierce

Neha Patel wrote:


Hi,

 


We are running with postgres sql 7.3.2. We were trying to...



well, right off the bat, that's a -really- old version.   Release notes 
say February 2003.  7.3 was updated to 7.3.21, and we're currently on 
8.4 (while still supporting 8.3, 8.2, 8.1, and 7.4).There were a LOT 
of fixes between 7.3.2 and 7.3.21, see the release notes for each 
version in between...  
http://www.postgresql.org/docs/current/static/release.html






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help required

2009-10-16 Thread Neha Patel
Hi David,

Many thanks for your reply. After a good 10 hours of work we managed to
restore from a backup.

Regards
Neha
 

-Original Message-
From: David Fetter [mailto:da...@fetter.org] 
Sent: 16 October 2009 17:28
To: Neha Patel
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Urgent Help required

On Fri, Oct 16, 2009 at 11:26:40AM +0100, Neha Patel wrote:
> Hi,
>  
> We are running with postgres sql 7.3.2.

Whatever you thought your most urgent priority was, it's actually
getting your database off of a major version of PostgreSQL, 7.3, whose
end-of-life was well over a year ago.

Your second most urgent priority is creating an upgrade strategy and
integrating it into your development and deployment processes.

I hope you have good backups.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help required

2009-10-16 Thread David Fetter
On Fri, Oct 16, 2009 at 11:26:40AM +0100, Neha Patel wrote:
> Hi,
>  
> We are running with postgres sql 7.3.2.

Whatever you thought your most urgent priority was, it's actually
getting your database off of a major version of PostgreSQL, 7.3, whose
end-of-life was well over a year ago.

Your second most urgent priority is creating an upgrade strategy and
integrating it into your development and deployment processes.

I hope you have good backups.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help required

2009-10-16 Thread Grzegorz Jaśkiewicz
On Fri, Oct 16, 2009 at 11:26 AM, Neha Patel  wrote:

>  Hi,
>
>
>
> We are running with postgres sql 7.3.2. We were trying to create an index
> on a big table. The create index command ran for nearly 5 hours at which
> point we decided to interrupt it. Since this was interrupted, any
> operations attempted on the table on which the index was being created
> gives following error in pgsql log:
>
Upgrade to whatever the newest 7.3.X version is now, first. Also, 7.3 is
ancient history in terms of age, so you might want to look at upgrading to
8.3 or newer soon.


>
>
> LOG:  all server processes terminated; reinitializing shared memory and
> semaphor
>
> es
>
>
> LOG:  database system was interrupted at 2009-10-16 10:44:54 BST
>
>
> LOG:  checkpoint record is at 150/71A0C0CC
>
>
> LOG:  redo record is at 150/71A0C0CC; undo record is at 0/0; shutdown TRUE
>
>
> LOG:  next transaction id: 1757299460; next oid: 43508941
>
>
> LOG:  database system was not properly shut down; automatic recovery in
> progress
>
> LOG:  ReadRecord: record with zero length at 150/71A0C10C
>
>
> LOG:  redo is not required
>
>
> LOG:  database system is ready
>
>
>
>
> Any idea what this means and what we need to do to resolve access to this
> table again? We can see the next oid number increases over time. Access to
> all other tables in the database is fine. Any help would be greatly
> appreciated.
>
>
>
> Many Thanks
>
> *Neha.*
>



-- 
GJ


[GENERAL] Urgent Help required

2009-10-16 Thread Neha Patel
Hi,
 
We are running with postgres sql 7.3.2. We were trying to create an
index on a big table. The create index command ran for nearly 5 hours at
which point we decided to interrupt it. Since this was interrupted, any
operations attempted on the table on which the index was being created
gives following error in pgsql log:
 
LOG:  all server processes terminated; reinitializing shared memory and
semaphor
es

LOG:  database system was interrupted at 2009-10-16 10:44:54 BST

LOG:  checkpoint record is at 150/71A0C0CC

LOG:  redo record is at 150/71A0C0CC; undo record is at 0/0; shutdown
TRUE  
LOG:  next transaction id: 1757299460; next oid: 43508941

LOG:  database system was not properly shut down; automatic recovery in
progress
LOG:  ReadRecord: record with zero length at 150/71A0C10C

LOG:  redo is not required

LOG:  database system is ready

 
Any idea what this means and what we need to do to resolve access to
this table again? We can see the next oid number increases over time.
Access to all other tables in the database is fine. Any help would be
greatly appreciated. 
 
Many Thanks
Neha.


Re: [GENERAL] urgent request : PSQLException: FATAL: could not open relation XXX: No such file or directory

2009-02-02 Thread Adrian Klaver

I copied back to list as your chances of getting an answer are greater.

On Sunday 01 February 2009 10:15:04 pm you wrote:
> The Postgres version is PostgreSQL version 8.3.4.
> OS type - Windows XP Proffesional Version 2002
> Our application is a client server Java application which collects data
> from the db and send to Platform to be shown to the user.
> Only our application is touching Postgresdb.
>
> The application was running smoothly.. But at some point of time the
> db(postgres service) stopped and Platform stopped receiving data from db.
> While checking on the problem we found this exception in the server logs.
>
>
>
> Warm regards,
> Preethi K Valsalan, Team Lead
> Sphere Networks FZCo. P.O.Box 341010, Dubai UAE
> Tel: (+9714) 501-5863   Fax: (+9714) 501-5872
> preethi.valsa...@sphere.ae
> www.sphere.ae
>
>

Some more questions:
What does the Postgres logging show about the shutdown?
What does the XP logging show?
Files in the data folder map to objects in the database; is it possible some 
code in the application deleted objects?
Is there an anti-virus program running on this machine?
Do you know which files disappeared?

As to recovery, do you have a backup?



-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] urgent request : PSQLException: FATAL: could not open relation XXX: No such file or directory

2009-02-01 Thread Adrian Klaver
On Sunday 01 February 2009 12:38:21 am Preethi Valsalan wrote:
>  Hi
>
> I am working on an application where Postgresql is used as the db. I have a
> trigger and three functions running on that trigger. The data in db is
> updated by a thread each 1 minute continuously.
>
> I kept my application running for 2 days and on runtime Postgres stopped
> running. But when I restarted it and restarted my application I got the
> error "PSQLException: FATAL: could not open relation XXX: No such file or
> directory" . I found that some of the files in the db(data folder) are
> lost(just dissappeared). Can u explain me why it happened and how can I fix
> this problem, since this is blocking my application.
>

You will need to supply more information to start the troubleshooting process:
Postgres version
OS type and version
What is your application doing?
Are there other application running that touch the db or its data directory?

From this particular post what does "on runtime Postgres stopped running" mean?


Thanks,

-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] urgent request : PSQLException: FATAL: could not open relation XXX: No such file or directory

2009-02-01 Thread Preethi Valsalan
 

 Hi

I am working on an application where PostgreSQL is used as the db. I have a
trigger and three functions running on that trigger. The data in the db is
updated by a thread every minute, continuously.

I kept my application running for 2 days and on runtime Postgres stopped
running. But when I restarted it and restarted my application I got the
error "PSQLException: FATAL: could not open relation XXX: No such file or
directory". I found that some of the files in the db (data folder) are
lost (they just disappeared). Can you explain why this happened and how I can
fix this problem, since it is blocking my application?



Warm regards,

Preethi K Valsalan, Team Lead

Sphere Networks FZCo. P.O.Box 341010, Dubai UAE

Tel: (+9714) 501-5863   Fax: (+9714) 501-5872

  preethi.valsa...@sphere.ae

  www.sphere.ae 



 

 


Re: [GENERAL] Urgent - Grant

2008-11-18 Thread Tom Lane
"A. Kretschmer" <[EMAIL PROTECTED]> writes:
> On Tue, 18.11.2008 at 11:25:16 -0300, Gustavo Rosso wrote the following:
>> banco=> create table tabla (x   integer);
>> NOTICE:  CREATE TABLE / UNIQUE will create implicit index 
>> "tabla_oid_idx" for table "tabla"
>> *ERROR:  must be owner of relation tabla*

> Which version?
> Apparently very old, since OIDs have been deprecated for a long time.

There has never been any released version of PostgreSQL that
automatically created a unique index on OID.  What it looks like to me
is that Gustavo is running a version that contains some broken
third-party patches.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent - Grant

2008-11-18 Thread Stephane Bortzmeyer
On Tue, Nov 18, 2008 at 11:46:07AM -0300,
 Gustavo Rosso <[EMAIL PROTECTED]> wrote 
 a message of 68 lines which said:

> PostgreSQL 7.4 informix1.8 on i686-pc-linux-gnu, compiled by GCC gcc  
> (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)
> Is it old version?

It is still in Debian "stable" (which also has 8.1) but it will no
longer be present in the next "stable", called "lenny", which will be
released in one or two months.

So, you may prepare to migrate.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent - Grant

2008-11-18 Thread Gustavo Rosso

Andreas, my version is:
select version();
PostgreSQL 7.4 informix1.8 on i686-pc-linux-gnu, compiled by GCC gcc 
(GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)

Is it an old version?
Could that be the reason for my problem?


A. Kretschmer escribió:

On Tue, 18.11.2008 at 11:25:16 -0300, Gustavo Rosso wrote the following:

People of world, help help please.
I created a DB with super-user postgres, I give all privileges to DB 
(banco is my DB)

GRANT ALL ON DATABASE banco TO PUBLIC;
GRANT CREATE ON DATABASE banco TO PUBLIC

But other users can't create tables in this DB
ERROR:
banco=> create table tabla (x   integer);
NOTICE:  CREATE TABLE / UNIQUE will create implicit index 
"tabla_oid_idx" for table "tabla"

*ERROR:  must be owner of relation tabla*
banco=>



Which version?

Apparently very old, since OIDs have been deprecated for a long time.


I can't reproduce your problem:


test=# create user foo;
CREATE ROLE
test=# GRANT ALL ON DATABASE test to public;
GRANT
test=# GRANT CREATE ON DATABASE test to public;
GRANT
test=# \q
[EMAIL PROTECTED]:~$ psql-dev -U foo test
psql (8.4devel)
Type "help" for help.

test=> select current_user;
 current_user
--
 foo
(1 row)

test=> create table tabla (x   integer);
CREATE TABLE
test=> create table tablb (x   integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"tablb_pkey" for table "tablb"
CREATE TABLE
test=>  




Regards, Andreas
  


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent - Grant

2008-11-18 Thread A. Kretschmer
On Tue, 18.11.2008 at 11:25:16 -0300, Gustavo Rosso wrote the following:
> People of world, help help please.
> I created a DB with super-user postgres, I give all privileges to DB 
> (banco is my DB)
> GRANT ALL ON DATABASE banco TO PUBLIC;
> GRANT CREATE ON DATABASE banco TO PUBLIC
> 
> But other users can't create tables in this DB
> ERROR:
> banco=> create table tabla (x   integer);
> NOTICE:  CREATE TABLE / UNIQUE will create implicit index 
> "tabla_oid_idx" for table "tabla"
> *ERROR:  must be owner of relation tabla*
> banco=>

Which version?

Apparently very old, since OIDs have been deprecated for a long time.


I can't reproduce your problem:


test=# create user foo;
CREATE ROLE
test=# GRANT ALL ON DATABASE test to public;
GRANT
test=# GRANT CREATE ON DATABASE test to public;
GRANT
test=# \q
[EMAIL PROTECTED]:~$ psql-dev -U foo test
psql (8.4devel)
Type "help" for help.

test=> select current_user;
 current_user
--
 foo
(1 row)

test=> create table tabla (x   integer);
CREATE TABLE
test=> create table tablb (x   integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"tablb_pkey" for table "tablb"
CREATE TABLE
test=>  



Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Urgent - Grant

2008-11-18 Thread Gustavo Rosso

People of world, help help please.
I created a DB with super-user postgres, I give all privileges to DB 
(banco is my DB)

GRANT ALL ON DATABASE banco TO PUBLIC;
GRANT CREATE ON DATABASE banco TO PUBLIC

But other users can't create tables in this DB
ERROR:
banco=> create table tabla (x   integer);
NOTICE:  CREATE TABLE / UNIQUE will create implicit index 
"tabla_oid_idx" for table "tabla"

*ERROR:  must be owner of relation tabla*
banco=>

But they can use SELECT, UPDATE and DELETE.
What happened? What must I do?

Thanks, thanks...

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Urgent] Regexp_replace question

2007-09-25 Thread Phoenix Kiula
On 25/09/2007, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> > How can I remove characters that form a part of regular expressions?
>
> Why do you want to do that?


Because these values were inserted into the DB due to a faulty
application. So cleansing was called for.

I just ended up doing it with replace instead of regexp_replace, one
character at a time.

Thanks!

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] [Urgent] Regexp_replace question

2007-09-24 Thread Michael Fuhr
On Tue, Sep 25, 2007 at 01:36:26PM +0800, Phoenix Kiula wrote:
> How can I remove characters that form a part of regular expressions?

Why do you want to do that?

> I would like to remove all instances of the following characters:
> 
> [
> ]
> \
> +

test=> select id, t, regexp_replace(t, e'[[\\]+]', '', 'g') from foo;
 id | t  | regexp_replace 
++
  1 | foo[]+\bar | foobar
(1 row)

test=> select id, t, translate(t, e'[]\\+', '') from foo;
 id | t  | translate 
++---
  1 | foo[]+\bar | foobar
(1 row)

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] [Urgent] Regexp_replace question

2007-09-24 Thread Phoenix Kiula
Hi,

How can I remove characters that form a part of regular expressions? I
would like to remove all instances of the following characters:

[
]
\
+

Given that these all mean something in regexp, I am trying to prefix
them with a backslash, but it doesn't work. I tried the following:

update TABLE
set COLUMN = regexp_replace(COLUMN, E'[\\\[\]\+]+', '')
;

This did not work so I tried the naked characters separated by a pipe,
as follows:

update TABLE
set COLUMN= regexp_replace(COLUMN, E'[\|[|]|+]+', '')
;

But this of course did not work either. I've read and re-read the
pattern matching doc page ( http://snipurl.com/pg_regexp ) but it
doesn't seem to offer much help.

Any thoughts on what I am doing wrong?

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Anton Melser
On 31/08/2007, Jeff Davis <[EMAIL PROTECTED]> wrote:
> On Fri, 2007-08-31 at 22:34 +0800, Phoenix Kiula wrote:
> > On 31/08/2007, Josh Tolley <[EMAIL PROTECTED]> wrote:
> > > On 8/31/07, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:
> > > > Phoenix Kiula írta:
> >
> > > In addition to what others have already said, when things calm down
> > > you should consider implementing some sort of monitoring system that
> > > is configured to start screaming before you run into problems like
> > > this. At my place of work, we've set up Nagios to monitor the space
> > > left on various partitions, and email us when a partition gets above
> > > 90% full.
> >
> >
> >
> > Wow, Nagios seems like a superb tool. Thanks for the recommendation!
> >
>
> You might also consider OpenNMS.

I spent about 3 hours trying to get it running and said - I'm at eval
stage, and nagios/centreon is installed and working... (even if not as
theoretically nice)... there are lots of very promising systems out
there (hyperic, zenoss, etc) but if it ain't an apt-get or yum away
then... why not just go with what *is* there? Surely it must be being
used by more people, if not, why aren't the others in the repos?
Random ramblings!
Cheers
Anton


-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Jeff Davis
On Fri, 2007-08-31 at 22:34 +0800, Phoenix Kiula wrote:
> On 31/08/2007, Josh Tolley <[EMAIL PROTECTED]> wrote:
> > On 8/31/07, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:
> > > Phoenix Kiula írta:
> 
> > In addition to what others have already said, when things calm down
> > you should consider implementing some sort of monitoring system that
> > is configured to start screaming before you run into problems like
> > this. At my place of work, we've set up Nagios to monitor the space
> > left on various partitions, and email us when a partition gets above
> > 90% full.
> 
> 
> 
> Wow, Nagios seems like a superb tool. Thanks for the recommendation!
> 

You might also consider OpenNMS.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Erik Jones


On Aug 31, 2007, at 8:35 AM, Phoenix Kiula wrote:

Thanks everyone. Yes, /var was full because of the backups that're  
going there.


Database is back working.

It was my backup script. It is set to save a daily backup to the /var/
folder, which is not clever. I'll change it to be in the "backup"
folder which is a mounted one.

On that note, is it recommended to store the data of the database on a
different hard disk than the one on which the database is running? How
can I change the data folder for a live database?

Many thanks!


The data directory is where the database is "running".  If you're  
referring to where the postgres binaries are, it doesn't matter as  
they are loaded into memory when the server starts.  As far as moving  
the data directory goes, you can't move it for a running database.   
All of the options to move a server's data directory involve, at some  
point, shutting down the db.  Alternatively, if you're running out of  
space on the disk currently holding the data, you can add another  
drive in a new tablespace.
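
As a sketch of that last option (assuming you are on 8.0 or later; the
tablespace name, table name and mount point are hypothetical, and the
directory must exist, be empty, and be owned by the postgres OS user):

    CREATE TABLESPACE bigdisk LOCATION '/mnt/newdisk/pgdata';
    ALTER TABLE big_table SET TABLESPACE bigdisk;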


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Phoenix Kiula
On 31/08/2007, Josh Tolley <[EMAIL PROTECTED]> wrote:
> On 8/31/07, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:
> > Phoenix Kiula írta:

> In addition to what others have already said, when things calm down
> you should consider implementing some sort of monitoring system that
> is configured to start screaming before you run into problems like
> this. At my place of work, we've set up Nagios to monitor the space
> left on various partitions, and email us when a partition gets above
> 90% full.



Wow, Nagios seems like a superb tool. Thanks for the recommendation!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Phoenix Kiula
On 31/08/2007, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula írta:
> > I am getting this message when I start the DB:
> >
> >
> > psql: FATAL:  could not access status of transaction 0
> > DETAIL:  Could not write to file "pg_subtrans/01F8" at offset 221184:
> > No space left on device.
> >
> >
> > What is this about and how do I solve this? A "df -h" on my system shows 
> > this:
> >
> >
> > FilesystemTypeSize  Used Avail Use% Mounted on
> > ...
> > /dev/sda2 ext39.9G  9.5G 0 100% /var
> >
>
> This is the problem. Free up some space under /var or move either
> the whole partition or PostgreSQL's data directory to a new disk.
> The data directory lives under /var/lib/postgresql (mainstream) or
> /var/lib/pgsql (RedHat speciality).
>



Thanks everyone. Yes, /var was full because of the backups that're going there.

Database is back working.

It was my backup script. It is set to save a daily backup to the /var/
folder, which is not clever. I'll change it to be in the "backup"
folder which is a mounted one.

On that note, is it recommended to store the data of the database on a
different hard disk than the one on which the database is running? How
can I change the data folder for a live database?

Many thanks!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Josh Tolley
On 8/31/07, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula írta:
> > I am getting this message when I start the DB:
> >
> >
> > psql: FATAL:  could not access status of transaction 0
> > DETAIL:  Could not write to file "pg_subtrans/01F8" at offset 221184:
> > No space left on device.
> >
> >
> > What is this about and how do I solve this? A "df -h" on my system shows 
> > this:
> >
> >
> > FilesystemTypeSize  Used Avail Use% Mounted on
> > ...
> > /dev/sda2 ext39.9G  9.5G 0 100% /var
> >

In addition to what others have already said, when things calm down
you should consider implementing some sort of monitoring system that
is configured to start screaming before you run into problems like
this. At my place of work, we've set up Nagios to monitor the space
left on various partitions, and email us when a partition gets above
90% full.
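
For example, a sketch of the stock check_disk plugin doing exactly that (the
plugin path varies by install; -w and -c take minimum free-space percentages,
so this warns below 10% free and goes critical below 5%):

    /usr/lib/nagios/plugins/check_disk -w 10% -c 5% -p /var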

- eggyknap

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Zoltan Boszormenyi

Phoenix Kiula írta:

I am getting this message when I start the DB:


psql: FATAL:  could not access status of transaction 0
DETAIL:  Could not write to file "pg_subtrans/01F8" at offset 221184:
No space left on device.


What is this about and how do I solve this? A "df -h" on my system shows this:


FilesystemTypeSize  Used Avail Use% Mounted on
...
/dev/sda2 ext39.9G  9.5G 0 100% /var
  


This is the problem. Free up some space under /var or move either
the whole partition or PostgreSQL's data directory to a new disk.
The data directory lives under /var/lib/postgresql (mainstream) or
/var/lib/pgsql (RedHat speciality).

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Merlin Moncure
On 8/31/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> I am getting this message when I start the DB:
>
>
> psql: FATAL:  could not access status of transaction 0
> DETAIL:  Could not write to file "pg_subtrans/01F8" at offset 221184:
> No space left on device.
>
>
> What is this about and how do I solve this? A "df -h" on my system shows this:

You should take the database down if it is not already and immediately
take a file system backup of the database and move it to a secure
location.  You may temporarily make some space by symlinking database
folders to partitions that have space (/home)...such as pg_xlog, or
folders inside the database proper.

After having freed up at least a few 100 mb of space, start the
database and make sure it comes up properly.  If it does, take a
proper backup and investigate a long term solution to the storage
problem...buy a drive :-)
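
A hedged sketch of the pg_xlog symlink idea -- only with the server stopped,
and only after the filesystem backup above (paths are hypothetical):

    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data/pg_xlog /home/pg_xlog
    ln -s /home/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start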

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Bruce McAlister
Looks like you're out of disk space on:

/dev/sda2 ext39.9G  9.5G 0 100% /var


is this where your database resides?



Phoenix Kiula wrote:
> I am getting this message when I start the DB:
>
>
> psql: FATAL:  could not access status of transaction 0
> DETAIL:  Could not write to file "pg_subtrans/01F8" at offset 221184:
> No space left on device.
>
>
> What is this about and how do I solve this? A "df -h" on my system shows this:
>
>
> FilesystemTypeSize  Used Avail Use% Mounted on
> /dev/sda5 ext39.9G  2.5G  6.9G  27% /
> /dev/sda1 ext3 99M   17M   78M  18% /boot
> none tmpfs2.0G 0  2.0G   0% /dev/shm
> /dev/sda7 ext3197G   17G  171G   9% /home
> /dev/sda8 ext3   1012M   34M  927M   4% /tmp
> /dev/sda3 ext39.9G  4.4G  5.0G  47% /usr
> /dev/sda2 ext39.9G  9.5G 0 100% /var
> /tmp  none   1012M   34M  927M   4% /var/tmp
>
>
> Please help!
>
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
>
>
>   

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Tommy Gildseth

Phoenix Kiula wrote:

I am getting this message when I start the DB:


psql: FATAL:  could not access status of transaction 0
DETAIL:  Could not write to file "pg_subtrans/01F8" at offset 221184:
No space left on device.


What is this about and how do I solve this? A "df -h" on my system shows this:

  



/dev/sda2 ext39.9G  9.5G 0 100% /var
  


Well, the error message is pretty clear, and assuming you don't keep 
your database in any non-standard location, your /var partition is indeed 
full.


--
Tommy Gildseth


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] URGENT: Whole DB down ("no space left on device")

2007-08-31 Thread Phoenix Kiula
I am getting this message when I start the DB:


psql: FATAL:  could not access status of transaction 0
DETAIL:  Could not write to file "pg_subtrans/01F8" at offset 221184:
No space left on device.


What is this about and how do I solve this? A "df -h" on my system shows this:


FilesystemTypeSize  Used Avail Use% Mounted on
/dev/sda5 ext39.9G  2.5G  6.9G  27% /
/dev/sda1 ext3 99M   17M   78M  18% /boot
none tmpfs2.0G 0  2.0G   0% /dev/shm
/dev/sda7 ext3197G   17G  171G   9% /home
/dev/sda8 ext3   1012M   34M  927M   4% /tmp
/dev/sda3 ext39.9G  4.4G  5.0G  47% /usr
/dev/sda2 ext39.9G  9.5G 0 100% /var
/tmp  none   1012M   34M  927M   4% /var/tmp


Please help!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Urgent job opening for PostgreSQL DBA.

2007-04-01 Thread Poornima
Hi,

We have an excellent and urgent job opening for a PostgreSQL DBA with one
of our esteemed clients in Pune/Bangalore.

Skillset   : PostgreSQL DBA

Experience : 5 - 7 yrs

Location : Pune/ Bangalore.

If interested, please send me your updated resume  in MS word format
along with the CTC details:

Current CTC :
Expected CTC  :
Notice Period   :

Please send your Academic Details :

BE   :  %,  University
HSC :  %,  Board
SSC :  %,  Board

Humble Request: Please give me references of your friends or
colleagues who would be interested in the above position since there
are few people working as PostgreSQL DBA.

Please revert back ASAP.

Thanks & Regards,
Poornima
---
Staffing Associate
E-Mail: [EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] urgent: upgraded to 8.2, getting kernel panics

2007-02-23 Thread Devrim GUNDUZ

On Fri, 2007-02-23 at 17:14 -0500, Merlin Moncure wrote:

> BUG: spinlock recursion CPU0 postmaster...not tainted.



> Has anybody seen any problem like this or have any suggestions about
> possible resolution...should I be posting to the LKML?  

AFAIR (+ some quick Googling), this is related to a problem in the kernel.
You may need to update to a newer Fedora release since FC4 is not
supported anymore :(.

Even if you report it to the LKML, they will probably suggest using a newer
kernel. However, I suspect the system will just panic again under high load
before it lets you finish compiling a new kernel... So...

If you have free space, install a newer Fedora release on this system,
mount the existing $PGDATA and see if that fixes the problem...
-- 
Devrim GÜNDÜZ
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/






[GENERAL] urgent: upgraded to 8.2, getting kernel panics

2007-02-23 Thread Merlin Moncure

Ok,

This may be the wrong place to look for answers to this, but I figured it
couldn't hurt...so here goes:

On friday we upgraded a critical backend server to postgresql 8.2
running on fedora core 4.  Since then we have received three kernel
panics during periods of moderate to high load (twice during the
pg_dump backup run).

Platform is IBM x360 series running SCSI, software raid on the backplane.

After the first crash we yum updated the system, which obviously did
not fix the problem.  I was leaning towards a hardware problem until this
last time, when I was able to catch the following off the terminal:

BUG: spinlock recursion CPU0 postmaster...not tainted.
bunch of other stuff ending in:
Kernel Panic: not syncing: Bad locking

One of the other developers snapped a picture of the kernel panic with
his digital camera and is going to send over the pictures when he gets
home this evening.

Has anybody seen any problem like this or have any suggestions about
possible resolution...should I be posting to the LKML?  Any
suggestions are welcome and appreciated.

At this juncture we are going to downgrade the postmaster back to 8.1
and see if that fixes the panics.  If it doesn't, this discussion is
over, but if it does, we are very keen to find a fix for this
issue... we have about eight weeks of development on hold
until we can put an 8.2 server into production.  Management has already
authorized a new server, but they want a 100% guarantee that this will
fix the problem.

thanks in advance,
merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT - startup process (PID 29541) was terminated by signal 6

2006-10-07 Thread Bruce Momjian
Emanuele Rocca wrote:
-- Start of PGP signed section.
> Hello Tom,
> 
> * Tom Lane <[EMAIL PROTECTED]>, [2006-10-06 15:16 -0400]:
> >  Um, were you running with full_page_writes off?  Bad idea in 8.1 :-( ...
> 
> The manual [1] says that full_page_writes is ignored and always treated
> as if it was set to on. Is it wrong?

Well, early versions of 8.1.X did honor full_page_writes, but 8.1.4
disabled it.
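
For anyone who wants to check what their own server reports, the setting can
be inspected from psql.  A quick illustration (on releases where the
parameter is ignored, the displayed value simply has no effect):

  -- Show the current (possibly ignored) setting:
  SHOW full_page_writes;
  -- Or look it up in the view of run-time settings:
  SELECT name, setting FROM pg_settings WHERE name = 'full_page_writes';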

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] URGENT - startup process (PID 29541) was terminated by signal 6

2006-10-07 Thread Emanuele Rocca
Hello Tom,

* Tom Lane <[EMAIL PROTECTED]>, [2006-10-06 15:16 -0400]:
>  Um, were you running with full_page_writes off?  Bad idea in 8.1 :-( ...

The manual [1] says that full_page_writes is ignored and always treated
as if it was set to on. Is it wrong?

ciao,
ema

[1] 
http://www.postgresql.org/docs/8.1/static/runtime-config-wal.html#GUC-FULL-PAGE-WRITES


signature.asc
Description: Digital signature


Re: [GENERAL] URGENT - startup process (PID 29541) was terminated by signal 6

2006-10-06 Thread Tom Lane
andy rost <[EMAIL PROTECTED]> writes:
> Our Opteron DB server had a problem with its RAID controller requiring 
> an immediate shutdown of our Postgres server (8.1.3 on FreeBSD 6.0 
> release number 10). We used kill -QUIT on the postmaster PID.

> 2006-10-06 12:32:40 CDT PANIC:  heap_clean_redo: no block
> ...
> 2006-10-06 12:08:48 CDT PANIC:  right sibling is not next child in 
> "winddir_idxu"

Um, were you running with full_page_writes off?  Bad idea in 8.1 :-( ...
especially on hardware that turns out to not be 100% reliable.

I think your only hope of restarting the database is to use
pg_resetxlog.  This is likely to leave you with some database
corruption, in the form of partially applied recent transactions.
I'd recommend a dump and reload, or at least REINDEXing all the
indexes, to forestall problems from that.  While you are at it,
update to 8.1.4.
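
For the REINDEX route, a minimal sketch: the index name comes from the PANIC
message above, while the table name is only a placeholder.

  -- Rebuild the index the PANIC complained about:
  REINDEX INDEX winddir_idxu;
  -- Repeat per table to rebuild all of a table's indexes:
  REINDEX TABLE some_table;   -- hypothetical table name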

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] URGENT - startup process (PID 29541) was terminated by signal 6

2006-10-06 Thread andy rost
Our Opteron DB server had a problem with its RAID controller requiring 
an immediate shutdown of our Postgres server (8.1.3 on FreeBSD 6.0 
release number 10). We used kill -QUIT on the postmaster PID.


After repairing and rebooting the server we tried to start Postgres and 
get the following:


2006-10-06 12:32:40 CDT FATAL:  the database system is starting up
2006-10-06 12:32:40 CDT FATAL:  the database system is starting up
2006-10-06 12:32:40 CDT PANIC:  heap_clean_redo: no block
2006-10-06 12:32:40 CDT LOG:  startup process (PID 29541) was terminated 
by signal 6
2006-10-06 12:32:40 CDT LOG:  aborting startup due to startup process 
failure



At shutdown we had the following:

2006-10-06 12:08:48 CDT PANIC:  right sibling is not next child in 
"winddir_idxu"

2006-10-06 12:08:48 CDT STATEMENT:  VACUUM ANALYZE VERBOSE
2006-10-06 12:08:49 CDT LOG:  server process (PID 91933) was terminated 
by signal 6

2006-10-06 12:08:49 CDT LOG:  terminating any other active server processes

Anything I can do?


Andrew Rost
National Operational Hydrologic Remote Sensing Center (NOHRSC)
National Weather Service, NOAA
1735 Lake Dr. West, Chanhassen, MN 55317-8582
Voice: (952)361-6610 x 234
Fax: (952)361-6634
[EMAIL PROTECTED]
http://www.nohrsc.noaa.gov



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Reimer

ok,

I fixed it using the dd command and pg_dump is not complaining anymore, 
but can I trust my database now?


Thanks in advance!

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "Reimer" <[EMAIL PROTECTED]>
Cc: 
Sent: Friday, March 24, 2006 1:11 PM
Subject: Re: [GENERAL] URGENT! could not access status of transaction



"Reimer" <[EMAIL PROTECTED]> writes:

pg_dump: ERROR:  could not access status of transaction 1768711534
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado (file or directory not found)



What is the best way to fix it? pg_resetxlog?


resetxlog won't help: you've got corrupt data in that table.  See the
archives for previous threads about finding and removing corrupt rows.

I concur with the nearby suggestion that you should be running something
newer than 7.4.6, but even with an update you'll still be needing to
clean up this corruption manually :-(

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Tom Lane
"Reimer" <[EMAIL PROTECTED]> writes:
> pg_dump: ERROR:  could not access status of transaction 1768711534
> DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado

> What is the best way to fix it? pg_resetxlog?

resetxlog won't help: you've got corrupt data in that table.  See the
archives for previous threads about finding and removing corrupt rows.

I concur with the nearby suggestion that you should be running something
newer than 7.4.6, but even with an update you'll still be needing to
clean up this corruption manually :-(
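
A commonly used narrowing technique is to scan the table in slices until a
slice reproduces the error, which brackets the damaged tuple(s).  A rough
sketch (the table name is from this thread, every number and the ctid are
invented, and a file-level backup should be taken first):

  -- Scan the table in slices; the slice that errors out contains the bad row.
  SELECT count(*) FROM (SELECT * FROM tt_fin LIMIT 100000 OFFSET 0) AS s;
  SELECT count(*) FROM (SELECT * FROM tt_fin LIMIT 100000 OFFSET 100000) AS s;
  -- Keep halving the failing slice to pin down the damaged row(s).  When the
  -- damage is in the row data rather than its header, people often remove
  -- the row directly by its ctid:
  DELETE FROM tt_fin WHERE ctid = '(1234,5)';   -- hypothetical ctid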

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 10:51:25AM -0300, Reimer wrote:
> I forgot to mention postgreSQL version: 7.4.6

Had you been running the latest 7.4 release, this probably wouldn't have
happened. There have been a number of data-loss bugs fixed since 7.4.6.

> Thanks!
>   - Original Message - 
>   From: Reimer 
>   To: pgsql-general@postgresql.org 
>   Sent: Friday, March 24, 2006 10:41 AM
>   Subject: [GENERAL] URGENT! could not access status of transaction
> 
> 
>   Hello,
> 
>   We had some disk problems here and now pg_dump is complaining with:
> 
>   pg_dump: ERROR:  could not access status of transaction 1768711534
>   DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
>   pg_dump: SQL command to dump the contents of table "tt_fin" failed: PQendcopy() failed.
>   pg_dump: Error message from server: ERROR:  could not access status of transaction 1768711534
>   DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
>   pg_dump: The command was: COPY "REIMER".tt_fin (codfil, sequen, tipger, tipnum, codusu, filger, filnum, filusu, seqger, seqnum, datope, vlrmov, seqtrn, codctr, codcus, numdoc, observ, flgfin, chvtrn, atu_em) TO stdout;
> 
>   I know it is something related to the hardware and not to PostgreSQL, but I don't want to restore the whole database.
> 
>   What is the best way to fix it? pg_resetxlog?
> 
>   Thanks in advance!
> 
>   Reimer

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Reimer



I forgot to mention the PostgreSQL version: 7.4.6

Thanks!

  - Original Message - 
  From: Reimer
  To: pgsql-general@postgresql.org
  Sent: Friday, March 24, 2006 10:41 AM
  Subject: [GENERAL] URGENT! could not access status of transaction

  Hello,

  We had some disk problems here and now pg_dump is complaining with:

  pg_dump: ERROR:  could not access status of transaction 1768711534
  DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
  pg_dump: SQL command to dump the contents of table "tt_fin" failed: PQendcopy() failed.
  pg_dump: Error message from server: ERROR:  could not access status of transaction 1768711534
  DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
  pg_dump: The command was: COPY "REIMER".tt_fin (codfil, sequen, tipger, tipnum, codusu, filger, filnum, filusu, seqger, seqnum, datope, vlrmov, seqtrn, codctr, codcus, numdoc, observ, flgfin, chvtrn, atu_em) TO stdout;

  I know it is something related to the hardware and not to PostgreSQL, but I don't want to restore the whole database.

  What is the best way to fix it? pg_resetxlog?

  Thanks in advance!

  Reimer


[GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Reimer



Hello,

We had some disk problems here and now pg_dump is complaining with:

pg_dump: ERROR:  could not access status of transaction 1768711534
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
pg_dump: SQL command to dump the contents of table "tt_fin" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR:  could not access status of transaction 1768711534
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
pg_dump: The command was: COPY "REIMER".tt_fin (codfil, sequen, tipger, tipnum, codusu, filger, filnum, filusu, seqger, seqnum, datope, vlrmov, seqtrn, codctr, codcus, numdoc, observ, flgfin, chvtrn, atu_em) TO stdout;

I know it is something related to the hardware and not to PostgreSQL, but I don't want to restore the whole database.

What is the best way to fix it? pg_resetxlog?

Thanks in advance!

Reimer


Re: [GENERAL] Urgent !!! Please Help Me

2006-03-20 Thread Michelle Konzack
On 2006-03-13 23:58:40, r irussel wrote:
> Hello Every body:
>
>   I have implemented psql version 7.4.2 on Debian linux version 3.2

Where does this PostgreSQL version come from?

And there is NO version 3.2 of Debian GNU/Linux.
Only 3.0 (Woody), 3.1 (Sarge) and maybe 4.0 (Etch).

You should upgrade to at least Sarge 3.1 and PostgreSQL 7.4.7-6sarge1.

Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSM LinuxMichi
0033/3/8845235667100 Strasbourg/France   IRC #Debian (irc.icq.com)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT!!! SELECT statement please help

2006-03-15 Thread Guy Rouillier
Why have you asked the same question 3 times in five minutes?
Additional responses below.

[EMAIL PROTECTED] wrote:
> hi all,
> 
> i have a web based java application with a postgres db.
> 
> now i am trying to generate a temp table which contains all hour
> records for a selected date range. eg. if the user selects 2006-03-14
> as from and 2006-03-14 as to, the system should insert 24 hour
> records with their individual time stamps (eg 2006-03-14 12:00:00).   
> 
> now currently, i have a select statement which searches another
> table, finds data and then inserts them. but the problem is that if
> there is no data for an hour, it does not insert anything.  
> 
> now i can think of a couple of solutions,
> 1. insert additional records using a second statement, which compares
> the table and inserts any missing records 2. insert data as usual and
> then using a second statement, compare the content and insert any
> missing records.   
> 
> my current implementation is:
> " INSERT INTO temp_table (edit_time,edit_time_count) " +
> " SELECT to_timestamp(to_char (last_edit_timestamp,'-MM-DD
>HH24:00:00'), '-MM-DD HH24:00:00')AS
> edit_time, " + " count(to_char (last_edit_timestamp,'-MM-DD
> HH24:00:00')) as edit_time_count " + " FROM " + tableName + " " +
> sqlWhereStr + " GROUP BY to_char (last_edit_timestamp,'-MM-DD
> HH24:00:00') " + " ORDER BY to_char (last_edit_timestamp,'-MM-DD
> HH24:00:00'); ";
> 
> the problem is that if the select statement does not return anything
> for a particular hour record, that record is not inserted. now i
> would like to have a table with all records including hours with 0
> count.   
> 
> can somebody please help me urgently. i would really appreciate it.
> thanks. 

Don't rely on your SELECT to provide the timestamps.  You know what
dates you are working with, and you know the hour range (1-24).  So
prefill the table with 24 rows for each day in the date range.  Then use
your select statement to update any rows for which you have data.
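
A rough sketch of that prefill, assuming the column names from the original
post and a server recent enough to have generate_series (built in since
8.0); the date is only an example:

  -- Prefill one zero-count row per hour of the chosen day:
  INSERT INTO temp_table (edit_time, edit_time_count)
  SELECT timestamp '2006-03-14 00:00:00' + (h * interval '1 hour'), 0
  FROM generate_series(0, 23) AS h;

After that, the existing GROUP BY query can drive an UPDATE of the matching
rows instead of an INSERT.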

-- 
Guy Rouillier


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT!!! SELECT statement please help

2006-03-15 Thread Ben
Have you considered using a stored procedure? It seems like it might easily 
give you the logic you're after.


http://www.postgresql.org/docs/8.1/static/xplang.html
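
A minimal PL/pgSQL sketch of that idea (table and column names follow the
original post; everything else is an assumption, and the plpgsql language
must already be installed in the database):

  CREATE OR REPLACE FUNCTION fill_hours(p_day date) RETURNS void AS $$
  DECLARE
      h integer;
  BEGIN
      FOR h IN 0..23 LOOP
          -- Add a zero-count row for any hour that is still missing.
          IF NOT EXISTS (SELECT 1 FROM temp_table
                         WHERE edit_time = p_day + (h * interval '1 hour')) THEN
              INSERT INTO temp_table (edit_time, edit_time_count)
              VALUES (p_day + (h * interval '1 hour'), 0);
          END IF;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;

  -- Usage: SELECT fill_hours(date '2006-03-14');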

On Sun, 12 Mar 2006, [EMAIL PROTECTED] wrote:


hi all,

i have a web based java application with a postgres db.

now i am trying to generate a temp table which contains all hour
records for a selected date range. eg. if the user selects 2006-03-14
as from and 2006-03-14 as to, the system should insert 24 hour records
with their individual time stamps (eg 2006-03-14 12:00:00).

now currently, i have a select statement which searches another table,
finds data and then inserts them. but the problem is that if there is
no data for an hour, it does not insert anything.

now i can think of a couple of solutions,
1. insert additional records using a second statement, which compares
the table and inserts any missing records
2. insert data as usual and then using a second statement, compare the
content and insert any missing records.

my current implementation is:
" INSERT INTO temp_table (edit_time,edit_time_count) " +
" SELECT to_timestamp(to_char (last_edit_timestamp,'-MM-DD
HH24:00:00'),
  '-MM-DD HH24:00:00')AS edit_time, " +
" count(to_char (last_edit_timestamp,'-MM-DD HH24:00:00')) as
edit_time_count " +
" FROM " + tableName + " " + sqlWhereStr +
" GROUP BY to_char (last_edit_timestamp,'-MM-DD HH24:00:00') " +
" ORDER BY to_char (last_edit_timestamp,'-MM-DD HH24:00:00'); ";

the problem is that if the select statement does not return anything
for a particular hour record, that record is not inserted. now i would
like to have a table with all records including hours with 0 count.

can somebody please help me urgently. i would really appreciate it.
thanks.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] URGENT!!! SELECT statement please help

2006-03-15 Thread sconeek
Hi all,

I have a web-based Java application with a Postgres DB.

Now I am trying to generate a temp table which contains all hour
records for a selected date range. E.g., if the user selects 2006-03-14
as the from date and 2006-03-14 as the to date, the system should insert
24 hourly records with their individual timestamps (e.g. 2006-03-14 12:00:00).

Currently, I have a SELECT statement which searches another table,
finds data and then inserts it. The problem is that if there is
no data for an hour, it does not insert anything.

I can think of a couple of solutions:
1. insert additional records using a second statement, which compares
the table and inserts any missing records
2. insert data as usual and then, using a second statement, compare the
content and insert any missing records.

My current implementation is:
" INSERT INTO temp_table (edit_time,edit_time_count) " +
" SELECT to_timestamp(to_char (last_edit_timestamp,'-MM-DD
HH24:00:00'),
   '-MM-DD HH24:00:00')AS edit_time, " +
" count(to_char (last_edit_timestamp,'-MM-DD HH24:00:00')) as
edit_time_count " +
" FROM " + tableName + " " + sqlWhereStr +
" GROUP BY to_char (last_edit_timestamp,'-MM-DD HH24:00:00') " +
" ORDER BY to_char (last_edit_timestamp,'-MM-DD HH24:00:00'); ";

The problem is that if the SELECT statement does not return anything
for a particular hour record, that record is not inserted. I would
like to have a table with all records, including hours with 0 count.

Can somebody please help me urgently? I would really appreciate it.
Thanks.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Urgent !!! Please Help Me

2006-03-14 Thread Richard Huxton

r irussel wrote:

Hello everybody,

I have implemented psql version 7.4.2 on Debian Linux version 3.2
and it was running very well. At the moment it crashes when
executing a single SELECT statement. Details are as follows:


You really should be running something later than 7.4.2 - there are a 
lot of bug-fixes between that and 7.4.12



DB dump size before this problem arose: 95 MB. DB ENCODING is "EUC_JP".

After some investigation the problem was narrowed down:

-- A table namely t60 has 411120 rows.
-- There is no index on the t60 table.
-- t60 has columns named c1, c2, c3.
-- The statement "SELECT c1, c3 FROM t60" returns all rows of the
specified columns.

But executing the statement "SELECT c2 FROM t60" causes errors like:

1. "Invalid memory alloc request size 82127290"
2. "ERROR:  out of memory  DETAIL:  Failed on request of size 875573295"


It looks like column c2 on t60 has been corrupted on at least one row. 
Have you experienced any crashes on this database?


Anyway - the simplest thing to do is to copy the good rows into another 
table and skip only the damaged data.


1. CREATE TABLE new_t60 AS SELECT * FROM t60 LIMIT 0;
2. INSERT INTO new_t60 SELECT * FROM t60 WHERE c1 BETWEEN ??? AND ???
   Adjusting the ??? will let you work around the problem row(s). 
Column c1 might not be the best choice - your primary key is what you 
want to use.

3. Dump the new table's data.
4. Drop table t60 (remove any foreign-key references and turn off 
triggers first)

5. Recreate table t60
6. Restore new_t60's data into t60 and restore foreign-key references etc.

Then, you can dump the new table and restore it with just the damaged 
row(s) needing to be replaced/repaired.
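
An illustrative pass at step 2 above, assuming c1 is an integer key (the
ranges are invented and would be adjusted by hand):

  INSERT INTO new_t60 SELECT * FROM t60 WHERE c1 BETWEEN 1 AND 200000;
  INSERT INTO new_t60 SELECT * FROM t60 WHERE c1 BETWEEN 200001 AND 411120;
  -- If one of these errors out, halve that range and retry; every range that
  -- succeeds gets copied, and whatever cannot be copied brackets the damaged
  -- row(s).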


Then, you need to upgrade to 7.4.12 and also figure out why this 
happened. Have you had crashes? Is your RAM good? Are your disks syncing 
when they say they are?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Urgent !!! Please Help Me

2006-03-14 Thread r irussel
Hello everybody,

I have implemented psql version 7.4.2 on Debian Linux version 3.2 and it was
running very well. At the moment it crashes when executing a single SELECT
statement. Details are as follows:

DB dump size before this problem arose: 95 MB. DB ENCODING is "EUC_JP".

After some investigation the problem was narrowed down:

-- A table namely t60 has 411120 rows.
-- There is no index on the t60 table.
-- t60 has columns named c1, c2, c3.
-- The statement "SELECT c1, c3 FROM t60" returns all rows of the specified columns.

But executing the statement "SELECT c2 FROM t60" causes errors like:

1. "Invalid memory alloc request size 82127290"
2. "ERROR:  out of memory  DETAIL:  Failed on request of size 875573295"

I then created a B-tree index on t60, and when executing "REINDEX t60" I get the error
"ERROR:  index row requires 336336 bytes, maximum size is 8191"

I have also tried to make a full dump of the DB, but it failed; I did
successfully take schema-wise dumps, except for the schema that holds the t60
table.

Then I copied the whole pgsql folder where pgsql is installed to another
computer with the same configuration and started the pgsql server on that
machine successfully, and did all the operations again on that DB as mentioned
above, but got the same errors again.

Now I have restored the DB from a previous backup dump. But my client says
that they have entered a lot of data stored in the t60 table and there is no
way to re-enter that data again.

I will be glad if anybody can let me know ASAP:

1. Why is the problem happening?
2. What is the solution to restore the full database?

Regards,
R.I. Russell


Re: [GENERAL] Urgent : Postgresql installation error somebody help

2005-10-15 Thread Magnus Hagander
> Hi all,
> I am trying to create an installable for my metalscan java 
> application. Here are the specs:- I have created the 
> installable using GKsetup(Gkware.com) a third party tool. I 
> am using Postgresql 8.0.2 as my database for my Java 
> application(metalscan). When the setup executable is double 
> clicked on a laptop computer (having Windows2003 server) In 
> the first part of my installation I am installing my Java 
> application into the target computer and when this gets 
> complete my database installation will start.During the 
> database installation I am actually invoking a java program 
> which executes some command line commands. I am using the 
> msiexec command(msiexec /i postgresql-8.0-int.msi /qr 
> INTERNALLAUNCH=1 ADDLOCAL=server,psql,docs,pgadmin 
> SERVICEDOMAIN="+machineName+" [EMAIL PROTECTED] 
> SUPERPASSWORD=postgres SERVICEACCOUNT=postgres1 
> BASEDIR=C:\\postgres") to start the silent mode installation 
> of postgres. Before this step I am using the net user /add 
> command to create a user account in the windows2003 computer. 
> Then using the NtRights.exe utility to grant the 
> "logonasservice " right to the user I have created(username 
> -postgres1). After this I am invoking msiexec as mentioned 
> above. When I start the database installation in the silent 
> mode using msiexec after the message "starting services" the 
> following error appears and the  Postgresql installation is 
> rolled back.- 
> 
> Service 'PostgreSQL Database Server 8.0' (pgsql-8.0) failed 
> to start. Verify that you have sufficient privileges to start 
> system services.
> 
> Please note that I have logged in as the administrator of the 
> laptop computer. We tried the same installation program in 
> our development network and the installation is working fine. 
> Please tell me as to why I am getting this error and how I 
> can solve this problem. 

Do you get anything in the eventlog, or in the postgresql log directory?

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

