On 2012-03-22, Martin Gregorie mar...@gregorie.org wrote:
That's fairly standard. A good CSV parser only requires a field to be
quoted if it contains commas or quotes.
quotes, commas, or line breaks
copy ( values (2,'comma, etc'),(3,'and quote.'),(1,'line
break') ) to stdout with csv;
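The COPY output above is exactly what a standards-conformant CSV reader expects: only the fields containing a comma or a line break get quoted. A minimal Python sketch, assuming that output has been captured as a string:

```python
import csv
import io

# CSV text as PostgreSQL's COPY ... WITH CSV would emit it: the field
# with an embedded comma and the field with an embedded newline are
# quoted, the plain fields are not.
data = '2,"comma, etc"\n3,and quote.\n1,"line\nbreak"\n'

rows = list(csv.reader(io.StringIO(data)))
print(rows)
# Each embedded comma and newline survives the round trip intact.
```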
On Thu, 2012-03-22 at 09:32 +, Arvind Singh wrote:
Help needed in parsing PostgreSQL CSV Log
Hello friends,
I am working on a section of an application which needs to parse CSV logs
generated by the PostgreSQL server.
- The logs are stored in C:\Program Files\PostgreSQL\9.0\data\pg_log
- The
Arvind Singh wrote:
Help needed in parsing PostgreSQL CSV Log
[...]
**However, the main problem is that the log format is not readable**
A Sample Log data line
2012-03-21 11:59:20.640 IST,postgres,stock_apals,3276,localhost:1639,4f697540.ccc,10,idle,2012-03-21 11:59:20
Nick wrote:
I have a pretty well tuned setup, with appropriate indexes and 16GB of
available RAM. Should this be taking this long? I forced it to not use
a sequential scan and that only knocked a second off the plan.
QUERY
PLAN
On 31 Jan 2012, at 4:55, Nick wrote:
I have a pretty well tuned setup, with appropriate indexes and 16GB of
available RAM. Should this be taking this long? I forced it to not use
a sequential scan and that only knocked a second off the plan.
Quoth David Johnston pol...@yahoo.com:
A) SELECT user_id, CASE WHEN course_name = 'Maths' THEN completed ELSE false
END math_cmp, CASE WHEN course_name = 'English' THEN completed ELSE false
END AS english_cmp FROM applications
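The CASE expressions in query (A) turn one row per course into a pair of boolean columns. A hedged Python sketch of the same row-by-row logic, with sample data invented for illustration:

```python
# Each applications row is (user_id, course_name, completed); the pivot
# yields (math_cmp, english_cmp) per row, defaulting to False, exactly
# as the two CASE expressions in query (A) do.
applications = [
    (1, 'Maths', True),
    (1, 'English', False),
    (2, 'English', True),
]

pivoted = [
    (user_id,
     completed if course_name == 'Maths' else False,    # math_cmp
     completed if course_name == 'English' else False)  # english_cmp
    for user_id, course_name, completed in applications
]
print(pivoted)
```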
a) Expand to multiple columns and store either the default
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Sebastian Tennant
Sent: Thursday, January 26, 2012 6:55 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Help needed creating a view
Hi list,
Given an 'applications'
, January 26, 2012 8:50 PM
Subject: Re: [GENERAL] Help needed creating a view
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Sebastian Tennant
Sent: Thursday, January 26, 2012 6:55 AM
To: pgsql-general@postgresql.org
On Sun, Jan 15, 2012 at 12:17 PM, plasmasoftware net
administra...@plasmasoftware.net wrote:
select j.kode_barang kode_barang, j.isi_Satuan from J_master_barang j order
by j.kode_barang asc
displays:
kode  isi
01    24
01B   12
01C   1
i
On 4/01/2012 11:07 PM, bbo...@free.fr wrote:
so i tried to copy the old 9.0 tree to a machine with a still working 9.0
postgres, but it stops with
Starting PostgreSQL 9.0 database server: mainError: could not exec
/usr/lib/postgresql/9.0/bin/pg_ctl /usr/lib/postgresql/9.0/bin/pg_ctl
start -D
On Wed, 4 Jan 2012 15:50:25 +0100, Bruno Boettcher wrote:
Hello!
just made a stupid move... upgraded a working system and without
checking if the backup was ok
so i end up with a debian system having upgraded to 9.1 without
converting the database, and a scrambled backup which is
On Wed, Jan 04, 2012 at 10:06:53AM -0800, Adrian Klaver wrote:
Hello!
So when you are running pg_ctlcluster 9.0 main start what user are you
running
as?
tried as root...
Have you tried to directly start the 9.0 cluster as the postgres user?:
just tried, same error
postgres@agenda:~$
On Wednesday, January 04, 2012 11:42:01 pm Bruno Boettcher wrote:
On Wed, Jan 04, 2012 at 10:06:53AM -0800, Adrian Klaver wrote:
Hello!
ok, got it, from your lines i saw that the binaries of the server were
removed
so i copied them over from the other server, and got the server
On Wednesday, January 04, 2012 6:50:25 am Bruno Boettcher wrote:
Hello!
just made a stupid move... upgraded a working system and without
checking if the backup was ok
so i end up with a debian system having upgraded to 9.1 without
converting the database, and a scrambled backup
On Wednesday, January 04, 2012 9:46:42 am you wrote:
On Wed, Jan 04, 2012 at 07:41:32AM -0800, Adrian Klaver wrote:
Hello!
Define scrambled backup.
I am CCing list so more eyes can see this.
well disks on both side had block loss, without me noticing
so the backups were on the
The first and most obvious check will be the pg_hba.conf file for the allowed
hosts ... these files are specific to the instance.
From: Jacques Lamothe jlamo...@allconnect.com
To: pgsql-general@postgresql.org pgsql-general@postgresql.org
Sent: Wednesday, 21
On 12/20/11 1:02 PM, Jacques Lamothe wrote:
Hi, I have 2 cluster databases, running on the same host, Linux with
Red Hat. My first database port is set to default but my second database
port is set to 5436 in the postgresql.conf file. While everything is
ok with local connections, I cannot
Hi Ray,
Have you got any luck to get around this issue?
I am having the same issue. I just installed PostgreSQL 9.1 with Stack
Builder 3.0.0.
Every time I was trying to install additional software I received the error
message popped out saying ...http://www.postgresql.org/application-v2.xml
On 12/11/11 20:51, alextc wrote:
Hi Ray,
Have you got any luck to get around this issue?
I am having the same issue. I just installed PostgreSQL 9.1 with Stack
Builder 3.0.0.
Every time I was trying to install additional software I received the error
message popped out saying
Allan Kamau wrote:
#COPY a.t(raw_data)FROM '/data/tmp/t.txt' WITH FORMAT text;
yields ERROR: syntax error at or near FORMAT
You'll have to use the syntax as documented:
COPY ... FROM ... WITH (FORMAT 'text');
Yours,
Laurenz Albe
--
Sent via pgsql-general mailing list
On 24/10/2011 20:23, Allan Kamau wrote:
Hi,
I have a tab delimited file with over a thousand fields (columns)
which I would like to import into postgreSQL.
I have opted to import the entire record (line) of this file into a
single field in a table, one table record per file line. Later
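The plan described above (one raw line per table row, fields split out later) can be prototyped outside the database. A small sketch, with the sample lines invented as stand-ins for the real file:

```python
# Each line of a tab-delimited file becomes one raw record; individual
# fields are only split out later, mirroring the single-column COPY plan.
lines = ["a\tb\tc", "1\t2\t3"]   # stand-in for the file's contents

raw_table = [(i + 1, line) for i, line in enumerate(lines)]  # (id, raw_data)

# Later: split a chosen record into its many fields on demand,
# like split_part() would do server-side.
fields = raw_table[1][1].split("\t")
print(fields)
```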
On Mon, Oct 24, 2011 at 11:29 PM, Raymond O'Donnell r...@iol.ie wrote:
On 24/10/2011 20:23, Allan Kamau wrote:
Hi,
I have a tab delimited file with over a thousand fields (columns)
which I would like to import into postgreSQL.
I have opted to import the entire record (line) of this file into
On 24/10/2011 22:39, Allan Kamau wrote:
On Mon, Oct 24, 2011 at 11:29 PM, Raymond O'Donnell r...@iol.ie wrote:
On 24/10/2011 20:23, Allan Kamau wrote:
Hi,
I have a tab delimited file with over a thousand fields (columns)
which I would like to import into postgreSQL.
I have opted to import
Hello,
2.Is there any enterprise version available with all features?
We just completed migrating one of our products to PostgreSQL and load
testing it. My suggestion- if your product uses stored procedures/packages
heavily, have a look at EnterpriseDB. Otherwise, try plain simple
PostgreSQL.
On Mon, Oct 10, 2011 at 12:57:42PM +0100, Sarma Chavali wrote:
Could you please help us to find answers to the following questions?
1.What version of PostgreSQL is stable at the moment for production?
http://www.postgresql.org/ - shows latest release 9.1.1.
2.Is there any enterprise
On 10 October 2011, 16:50, hubert depesz lubaczewski wrote:
On Mon, Oct 10, 2011 at 12:57:42PM +0100, Sarma Chavali wrote:
Could you please help us to find answers to the following questions?
1.What version of PostgreSQL is stable at the moment for production?
http://www.postgresql.org/ -
On 10/10/11 19:57, Sarma Chavali wrote:
Hi Guys,
We are new to PostgreSQL world.
But our company is planning to migrate one of the existing
applications to PostgreSQL from Oracle.
Could you please help us to find answers to the following questions?
1.What version of
On Mon, Oct 10, 2011 at 8:25 PM, Craig Ringer ring...@ringerc.id.au wrote:
On 10/10/11 19:57, Sarma Chavali wrote:
Hi Guys,
We are new to PostgreSQL world.
But our company is planning to migrate one of the existing
applications to PostgreSQL from Oracle.
Could you please help us
Hi Guys,
We are new to PostgreSQL world.
But our company is planning to migrate one of the existing
applications to PostgreSQL from Oracle.
2.Is there any enterprise version available with all features?
The free PostgreSQL comes with all available features; it's not a lite
Siva,
in addition to what others said, please note that underscore matches any
character. to change it use escape char.
http://www.postgresql.org/docs/9.1/static/functions-matching.html#FUNCTIONS-LIKE
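Because _ and % are wildcards in LIKE, a user-supplied search string has to be escaped before it is used as a pattern. A minimal sketch (the helper name is mine, and it assumes the default backslash escape character):

```python
def escape_like(s: str) -> str:
    """Escape LIKE wildcards so the string matches only itself."""
    return (s.replace('\\', '\\\\')   # escape the escape char first
             .replace('%', '\\%')
             .replace('_', '\\_'))

# Prefix search on a literal value that happens to contain wildcards:
pattern = escape_like('50%_off') + '%'
print(pattern)   # 50\%\_off%
```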
2011/9/28 Siva Palanisamy siv...@hcl.com
Hi All,
I am trying to retrieve
On Wed, 2011-09-28 at 12:33 +0530, Siva Palanisamy wrote:
Hi All,
I am trying to retrieve the contact names based on the keyed search
string. It performs well for English alphabets and behaves
strangely for special characters such as _, /, \, %
The % character is used by SQL as the
On 11/07/2011 3:44 PM, Jignesh Ramavat wrote:
*BUT IF i want result in following format then?*
<chapter number="1" current_date="2011-07-11">
  <document doc_name="Billing"/>
  <document doc_name="EManager"/>
  <document doc_name="Immunization"/>
  <document doc_name="NueMD"/>
  <document doc_name="NueMDSched"/>
  <document
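The output being asked for (a chapter element wrapping one document element per row) can also be generated client-side. A hedged Python sketch using xml.etree, with element and attribute names taken from the sample:

```python
import xml.etree.ElementTree as ET

# Document names as they would come back from the query's result set.
docs = ['Billing', 'EManager', 'Immunization', 'NueMD', 'NueMDSched']

chapter = ET.Element('chapter', number='1', current_date='2011-07-11')
for name in docs:
    ET.SubElement(chapter, 'document', doc_name=name)

print(ET.tostring(chapter, encoding='unicode'))
```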
If #1 was solved by using the raid approach, what happens if one of
the disks containing one of my table spaces crashes.
if you are using raid, your tablespaces are on raid volumes comprised of
2 or more drives, any one of those drives may fail, and the full data is
still available. if you
On 06/23/2011 09:37 AM, Natusch, Paul wrote:
I have an application for which data is being written to many disks
simultaneously. I would like to use a postgres table space on each
disk. If one of the disks crashes it is tolerable to lose that data,
however, I must continue to write to the
On 06/23/2011 09:37 PM, Natusch, Paul wrote:
I have an application for which data is being written to many disks
simultaneously. I would like to use a postgres table space on each disk.
If one of the disks crashes it is tolerable to lose that data, however,
I must continue to write to the other
On Tue, Jun 21, 2011 at 1:07 AM, Vikram Vaswani
vikram.vasw...@loudcloudsystems.com wrote:
So my first question is, I'd like to know if PostgreSQL has similar issues
when running in a clustered scenario.
Postgres has nothing quite like the MySQL cluster mode with NDB. You
will have to
On 06/21/2011 01:25 PM, David Fetter wrote:
Dynamically generated tables are generally a problem at the design
level. Neither PostgreSQL nor any other engine will solve that.
It depends a bit on what the OP means by dynamically generated tables.
I'm not entirely sure what you mean by a
On 06/21/2011 10:00 AM, Vick Khera wrote:
Postgres has nothing quite like the MySQL cluster mode with NDB. You
will have to re-think your solution if you want to use postgres to
distribute your queries and data across multiple servers.
The closest thing to a NDB cluster in PostgreSQL is
On 22/06/11 10:00, Greg Smith wrote:
On 06/21/2011 10:00 AM, Vick Khera wrote:
Postgres has nothing quite like the MySQL cluster mode with NDB. You
will have to re-think your solution if you want to use postgres to
distribute your queries and data across multiple servers.
The closest
On Tue, Jun 21, 2011 at 05:07:10AM +, Vikram Vaswani wrote:
Hello
I'm new to PostgreSQL, coming at it from a MySQL background. I'm
currently looking at switching one of our applications (which
currently uses MySQL) over to PostgreSQL and had some questions.
We're considering the
On Tue, 2011-05-03 at 01:14 -0400, Tom Lane wrote:
Craig Ringer cr...@postnewspapers.com.au writes:
This message is very weird: could not read from file pg_clog/02CD at
offset 73728: Success.
Probably indicates an attempted read from beyond EOF. The main
relation-access code paths have
On 02/05/11 03:32, Iztok Stotl wrote:
My database crashed and server won't start ...
--
LOG: database system was interrupted while in recovery at 2011-05-01
19:31:37 CEST
HINT: This probably means that some data is corrupted and you will
Craig Ringer cr...@postnewspapers.com.au writes:
This message is very weird: could not read from file pg_clog/02CD at
offset 73728: Success.
Probably indicates an attempted read from beyond EOF. The main
relation-access code paths have been fixed to give a more intelligible
error message about
On Mon, Apr 25, 2011 at 8:50 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Tuesday, April 26, 2011, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
Sorry, spoke too soon.
I can COPY individual chunks to files. Did that by year, and at least
the dumping
On Tue, Apr 26, 2011 at 3:24 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Apr 25, 2011 at 8:50 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Tuesday, April 26, 2011, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
Sorry, spoke too soon.
I
Dne 26.4.2011 04:50, Phoenix Kiula napsal(a):
Tomas, the line where it crashed, here are the 10 or so lines around it:
head -15272350 /backup/links/links_all.txt | tail -20
No, those lines are before the one that causes problems - line number is
15272357, and you've printed just 15272350
Dne 26.4.2011 14:41, Phoenix Kiula napsal(a):
On Tue, Apr 26, 2011 at 3:24 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
Are you sure you're getting all the data out of the source (broken)
database you think you are? Are you sure those rows are in the dump?
Actually I am not. Some
On Fri, Apr 22, 2011 at 8:35 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the
On Mon, Apr 25, 2011 at 9:19 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Fri, Apr 22, 2011 at 8:35 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error.
On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
If I COPY each individual file back into the table, it works. Slowly,
but seems to work. I tried to combine all the files into one go, then
truncate the table, and pull it all in in one go (130 million rows or
so) but this time it gave the same
Dne 25.4.2011 19:31, Alban Hertroys napsal(a):
On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
If I COPY each individual file back into the table, it works. Slowly,
but seems to work. I tried to combine all the files into one go, then
truncate the table, and pull it all in in one go (130
On Tue, Apr 26, 2011 at 1:56 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 19:31, Alban Hertroys napsal(a):
On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
If I COPY each individual file back into the table, it works. Slowly,
but seems to work. I tried to combine all the files into one
Dne 25.4.2011 20:40, Phoenix Kiula napsal(a):
I did a COPY FROM and populated the entire table. In my hard disk, the
space consumption went up by 64GB.
So you have dumped the table piece by piece, it worked, and now you have
a complete copy of the table? All the rows?
Yet, when I do a
Phoenix Kiula phoenix.ki...@gmail.com writes:
I did a COPY FROM and populated the entire table. In my hard disk, the
space consumption went up by 64GB.
Yet, when I do a SELECT * FROM mytable LIMIT 1 the entire DB
crashes. There is no visible record.
There should certainly be a visible record
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
Sorry, spoke too soon.
I can COPY individual chunks to files. Did that by year, and at least
the dumping worked.
Now I need to pull the data in at the destination server.
If I COPY each individual file back into the table, it works. Slowly,
On Tuesday, April 26, 2011, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
Sorry, spoke too soon.
I can COPY individual chunks to files. Did that by year, and at least
the dumping worked.
Now I need to pull the data in at the destination server.
If I COPY
On Tuesday, April 26, 2011, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
Sorry, spoke too soon.
I can COPY individual chunks to files. Did that by year, and at least
the dumping worked.
Now I need to pull the data in at the destination server.
If I COPY
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data, and at some point it finds out it
needs to allocate
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data, and
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data,
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's the output:
http://www.heypasteit.com/clip/WNR
Also including below, but because this will wrap etc, you can look at
the link above.
Thanks
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's the output:
http://www.heypasteit.com/clip/WNR
Also including below, but
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's
On a fast network it should only take a few minutes. Now rsyncing
live 2.4 TB databases, that takes time. :) Your raptors, if they're
working properly, should be able to transfer at around 80 to
100Megabytes a second. 10 to 15 seconds a gig. 30 minutes or so via
gig ethernet. I'd run
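The back-of-the-envelope numbers above are easy to check. A quick sketch (the rates are the poster's estimates, not measurements):

```python
# Rough transfer-time arithmetic: at ~100 MB/s, one GiB moves in about
# 10 seconds, and a 2.4 TB database takes on the order of 7 hours.
rate_mb_s = 100
gib_seconds = 1024 / rate_mb_s                       # seconds per GiB
tb_hours = (2.4 * 1024 * 1024 / rate_mb_s) / 3600    # hours for 2.4 TB

print(round(gib_seconds, 1), round(tb_hours, 1))
```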
Dne 20.4.2011 12:56, Phoenix Kiula napsal(a):
On a fast network it should only take a few minutes. Now rsyncing
live 2.4 TB databases, that takes time. :) Your raptors, if they're
working properly, should be able to transfer at around 80 to
100Megabytes a second. 10 to 15 seconds a gig. 30
Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
There's a very nice guide on how to do that
http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html
It sure seems like the problem you have (invalid alloc request etc.).
The really annoying part is locating the block, as you
On Thu, Apr 21, 2011 at 7:27 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
There's a very nice guide on how to do that
http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html
It sure seems like the problem you have (invalid alloc
On Thu, Apr 21, 2011 at 11:49 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Thu, Apr 21, 2011 at 7:27 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
There's a very nice guide on how to do that
Phoenix,
how large (in total) is this database)?
can you copy (cp -a) the data directory somewhere? I would do this
just in case :-)
regarding the manual recovery process:
1. you'll have to isolate corrupted table.
you can do this by dumping all tables one-by-one (pg_dump -t TABLE)
until you
Thanks Filip.
I know which table it is. It's my largest table with over 125 million rows.
All the others are less than 100,000 rows. Most are in fact less than 25,000.
Now, which specific part of the table is corrupted -- if it is row
data, then can I dump specific parts of that table? How?
Thanks Filip.
I know which table it is. It's my largest table with over 125 million
rows.
All the others are less than 100,000 rows. Most are in fact less than
25,000.
Now, which specific part of the table is corrupted -- if it is row
data, then can I dump specific parts of that table?
2011/4/18 Phoenix Kiula phoenix.ki...@gmail.com:
Thanks Filip.
I know which table it is. It's my largest table with over 125 million rows.
All the others are less than 100,000 rows. Most are in fact less than 25,000.
Now, which specific part of the table is corrupted -- if it is row
data,
On Mon, Apr 18, 2011 at 11:02 PM, t...@fuzzy.cz wrote:
Thanks Filip.
I know which table it is. It's my largest table with over 125 million
rows.
All the others are less than 100,000 rows. Most are in fact less than
25,000.
Now, which specific part of the table is corrupted -- if it is
Dne 18.4.2011 20:27, Phoenix Kiula napsal(a):
What am I to do now? Even reindex is not working. I can try to drop
indexes and create them again. Will that help?
It might help, but as someone already pointed out, you're running a
version that's 3 years old. So do a hot file backup (stop the
On Mon, Apr 18, 2011 at 5:44 PM, Tomas Vondra t...@fuzzy.cz wrote:
Still, do the file backup as described in the previous posts. You could
even do an online backup using pg_backup_start/pg_backup_stop etc.
As soon as you have a working file system backup, get the tw_cli
utility for the 3ware
On Tue, Apr 19, 2011 at 8:35 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Apr 18, 2011 at 5:44 PM, Tomas Vondra t...@fuzzy.cz wrote:
Still, do the file backup as described in the previous posts. You could
even do an online backup using pg_backup_start/pg_backup_stop etc.
As soon
On Mon, Apr 18, 2011 at 8:52 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Tue, Apr 19, 2011 at 8:35 AM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Mon, Apr 18, 2011 at 5:44 PM, Tomas Vondra t...@fuzzy.cz wrote:
Still, do the file backup as described in the previous posts. You
System logs maybe? Something about a process getting killed? Have
you tried turning up the verbosity of the pg logs?
Syslog has to be compiled with PG? How do I enable it? Where should I
look for it?
The documentation, whenever it mentions syslog, always just assumes
the expression If
On Mon, Apr 18, 2011 at 9:23 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
System logs maybe? Something about a process getting killed? Have
you tried turning up the verbosity of the pg logs?
Syslog has to be compiled with PG? How do I enable it? Where should I
look for it?
The
On 04/15/2011 01:15 PM, Edison So wrote:
I have a DELL server running Windows server 2003 and Postgres 8.1.
I used pg_dump to back up a database test:
pg_dump -v -h localhost -p 5432 -U postgres -F c -b -f
D:/db_dump/backup.bak test
The backup was showing the following error.
.
pg_dump: dumping
Hello Adrian,
Thank you for the reply.
I will definitely give it a try on Monday.
I am trying to use pg_dump command to backup up each table in the database
and restore them one by one using the -t option. It is going to be a
painful process because 8.1 pg_dump does not have the exclude option
Naturally a boolean can only have two values,
really?
pasman
Le 15/02/2011 15:49, Luca Ferrari a écrit :
Hello,
I've got a doubt about partial indexes and the path chosen by the optimizer.
Consider this simple scenario:
CREATE TABLE p( pk serial NOT NULL , val2 text, val1 text, b boolean, PRIMARY
KEY (pk) );
INSERT INTO p(pk, val1, val2, b) VALUES(
On 16/02/11 01:49, Luca Ferrari wrote:
Hello,
I've got a doubt about partial indexes and the path chosen by the optimizer.
Consider this simple scenario:
CREATE TABLE p( pk serial NOT NULL , val2 text, val1 text, b boolean, PRIMARY
KEY (pk) );
INSERT INTO p(pk, val1, val2, b) VALUES(
Guillaume Lelarge guilla...@lelarge.info writes:
Le 15/02/2011 15:49, Luca Ferrari a écrit :
So a sequential scan. I know that the optimizer will not consider an index
if
it is not filtering, but I don't understand exactly why in this case.
Accessing a page in an index is way costlier than
Luca Ferrari wrote:
Hello,
I've got a doubt about partial indexes and the path chosen by the optimizer.
Consider this simple scenario:
CREATE TABLE p( pk serial NOT NULL , val2 text, val1 text, b boolean, PRIMARY
KEY (pk) );
INSERT INTO p(pk, val1, val2, b) VALUES(
Peter Eisentraut wrote:
On tis, 2011-01-18 at 10:33 +1100, raf wrote:
p.s. if anyone in debian locale land is listening,
'E' does not sort before ','. what were you thinking? :-)
What is actually happening is that the punctuation is sorted in a second
pass after the letters. Which is
raf wrote:
Peter Eisentraut wrote:
On tis, 2011-01-18 at 10:33 +1100, raf wrote:
p.s. if anyone in debian locale land is listening,
'E' does not sort before ','. what were you thinking? :-)
What is actually happening is that the punctuation is sorted in a second
pass after the
raf r...@raf.org writes:
the behaviour i expect (and see on macosx-10.6.6) is:
id | name
----+---------------
  4 | CLARK
  2 | CLARK, PETER
  3 | CLARKE
  1 | CLARKE, DAVID
the behaviour i don't expect but see anyway (on debian-5.0) is:
id | name
On Mon, Jan 17, 2011 at 02:19:14PM -0500, Tom Lane wrote:
No, not particularly. Sort order is determined by lc_collate
not lc_messages. Unfortunately it's entirely possible that OSX
will give you a different sort order than Linux even for similarly
named lc_collate settings. About the only
Tom Lane wrote:
raf r...@raf.org writes:
the behaviour i expect (and see on macosx-10.6.6) is:
id | name
----+---------------
  4 | CLARK
  2 | CLARK, PETER
  3 | CLARKE
  1 | CLARKE, DAVID
the behaviour i don't expect but see anyway (on debian-5.0) is:
On tis, 2011-01-18 at 10:33 +1100, raf wrote:
p.s. if anyone in debian locale land is listening,
'E' does not sort before ','. what were you thinking? :-)
What is actually happening is that the punctuation is sorted in a second
pass after the letters. Which is both correct according to the
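The second-pass behaviour described above can be imitated with a sort key that compares letters first and falls back to the full string only as a tie-break. A simplified sketch (real glibc collation is considerably more involved):

```python
names = ['CLARKE, DAVID', 'CLARK, PETER', 'CLARKE', 'CLARK']

def collate_punct_last(s):
    # Primary key: alphanumerics only, so punctuation is ignored on the
    # first pass; secondary key: the full string, for the second pass.
    return (''.join(ch for ch in s if ch.isalnum()), s)

print(sorted(names, key=collate_punct_last))
```

Under this key 'CLARK, PETER' sorts after 'CLARKE', because its comma is invisible on the first pass, which is the ordering raf found surprising.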
Michael,
I'm new to PostgreSQL, but have worked with other databases. I'm trying to
write a trigger to default a timestamp column to a fixed interval before
another. The test setup is as follows:
create table test
( date1 timestamp,
date2 timestamp
);
create or replace function
Le 27/12/2010 18:57, Michael Satterwhite a écrit :
I'm new to PostgreSQL, but have worked with other databases. I'm trying to
write a trigger to default a timestamp column to a fixed interval before
another. The test setup is as follows:
create table test
( date1 timestamp,
On Mon, Dec 27, 2010 at 9:57 AM, Michael Satterwhite
mich...@weblore.com wrote:
CREATE TRIGGER t_listing_startdate before insert or update on test
for each row execute procedure t_listing_startdate();
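The effect the trigger is after, defaulting date1 to a fixed interval before date2 when it is not supplied, is plain timestamp arithmetic. A Python sketch of that rule, with the interval (one week) chosen purely for illustration since the post does not give the value:

```python
from datetime import datetime, timedelta

INTERVAL = timedelta(weeks=1)   # illustrative; not from the original post

def default_date1(date1, date2):
    # Mirror of the BEFORE INSERT trigger's logic: if date1 was not
    # supplied, set it to a fixed interval before date2.
    return date1 if date1 is not None else date2 - INTERVAL

d2 = datetime(2010, 12, 27, 12, 0)
print(default_date1(None, d2))
```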
Now that you've created a trigger function, you need to attached to your table:
On Mon, Dec 27, 2010 at 1:14 PM, Michael Satterwhite
mich...@weblore.com wrote:
I've *GOT* to be missing something in this post. You start by quoting the
Create Trigger that attaches the trigger to the table. Then you tell me that
I've got to do what you showed that I did.
Oops, you're right,
Michael,
I'm new to PostgreSQL, but have worked with other databases. I'm trying
to write a trigger to default a timestamp column to a fixed interval
before another. The test setup is as follows:
Try this pg_dump of a working example:
CREATE FUNCTION t_listing_startdate() RETURNS trigger
Le 27/12/2010 22:16, Michael Satterwhite a écrit :
On Monday, December 27, 2010 12:58:40 pm Guillaume Lelarge wrote:
Le 27/12/2010 18:57, Michael Satterwhite a écrit :
I'm new to PostgreSQL, but have worked with other databases. I'm trying
to write a trigger to default a timestamp column to a