On 17/03/2009 15:04, Ivano Luberti wrote:
Thanks but it keeps on not finding the file: the warning has disappeared
ERROR: could not open file c:\temp\anagraficaANIDIs.csv for reading:
No such file or directory
You haven't said whether the file is on the same machine as the server -
is
I'm sorry, you are right, that is the problem.
I had interpreted that as the file should reside on the same machine
where pgAdmin (or another client) runs, not the server.
Thank you again
Raymond O'Donnell ha scritto:
On 17/03/2009 15:04, Ivano Luberti wrote:
Thanks but it keeps on not
On 17/03/2009 15:28, Ivano Luberti wrote:
I'm sorry, you are right, that is the problem.
I had interpreted that as the file should reside on the same machine
where pgAdmin (or another client) runs, not the server.
Thank you again
You're welcome! That actually cost me a half-hour or so of
On Wed, 2009-02-18 at 11:56 -0700, Bill Todd wrote:
If the COPY command fails does it identify the offending row?
Yes, it tries to identify the failing row in the error message.
After reading the manual and the wiki I assume that there is no way to
tell copy to start with the Nth record in
On Wednesday 18 February 2009 10:56:45 am Bill Todd wrote:
If the COPY command fails does it identify the offending row?
After reading the manual and the wiki I assume that there is no way to
tell copy to start with the Nth record in the input file. Is that
correct? It seems like such an
Adrian Klaver wrote:
On Wednesday 18 February 2009 10:56:45 am Bill Todd wrote:
If the COPY command fails does it identify the offending row?
After reading the manual and the wiki I assume that there is no way to
tell copy to start with the Nth record in the input file. Is that
correct? It
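Indeed, COPY has no built-in way to start at the Nth record (HEADER only skips one line). On releases much newer than this thread (9.3+), the skipping can be delegated to an external program; a sketch with hypothetical table and path names:

```sql
-- Hypothetical: resume a failed load at record 1001 by letting tail
-- skip the first 1000 lines. Requires PostgreSQL 9.3+ and superuser,
-- because FROM PROGRAM runs on the server.
COPY mytable FROM PROGRAM 'tail -n +1001 /tmp/input.csv' WITH (FORMAT csv);
```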
Bill Todd wrote:
Thanks for the suggestion but pgloader appears to be a Linux only
solution and my environment is Windows. The other problem is that
there is no documentation that I could find (other than a PDF made
from slides).
Bill
Bill,
pgloader is a Python app; it should work on win32.
On Wednesday 18 February 2009 2:00:19 pm Tony Caduto wrote:
Bill Todd wrote:
Thanks for the suggestion but pgloader appears to be a Linux only
solution and my environment is Windows. The other problem is that
there is no documentation that I could find (other than a PDF made
from slides).
On Wed, Feb 11, 2009 at 11:22 AM, SHARMILA JOTHIRAJAH
sharmi...@yahoo.com wrote:
Hi,
A question about the Postgresql's COPY command.
This is the syntax of this command from the manual
COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ]
...
From: Scott Marlowe scott.marl...@gmail.com
Subject: Re: [GENERAL] COPY command question
To: sharmi...@yahoo.com
Cc: General postgres mailing list pgsql-general@postgresql.org
Date: Thursday, February 12, 2009, 1:35 PM
On Wed, Feb 11, 2009 at 11:22 AM, SHARMILA JOTHIRAJAH
sharmi...@yahoo.com wrote
On Wed, Feb 11, 2009 at 10:22:23AM -0800, Sharmila Jothirajah wrote:
I want to migrate my tables from Oracle to Postgres.
The COPY FROM command can take input from 'file' or 'STDIN'.
Is it possible for the COPY command to take its input from a
java program(which contains the oracle resultset)
Yes, it should work perfectly as suggested by Sam;
check this for JDBC support:
http://kato.iki.fi/sw/db/postgresql/jdbc/copy/
Sam Mason wrote:
On Wed, Feb 11, 2009 at 10:22:23AM -0800, Sharmila Jothirajah wrote:
I want to migrate my tables from Oracle to Postgres.
The COPY FROM command can
On Tue, Jan 6, 2009 at 11:41 AM, Pedro Doria Meunier
pdo...@netmadeira.com wrote:
Hi All,
This is a bit embarrassing ... but ...
I have a partial set of data that I want to restore via COPY ... FROM command
I have created a public folder for the effect and chown'ed both the folder and
the
Pedro Doria Meunier pdo...@netmadeira.com writes:
All I'm getting is a Permission denied upon issuing the COPY command from
within psql interactive terminal!
Since you didn't show what you did or what the error was, we're just
guessing ... but I'm going to guess that you should use \copy not
Hi Scott
Txs for replying.
Anyway I've found the problem (silly me... (blush) )
It had to do (of course) with the forest perms in the folder tree ...
As soon as I moved the file into the data/ folder and executed the COPY ...
FROM feeding it the file from that location everything worked as
Pedro Doria Meunier wrote:
I have created a public folder for the effect and chown'ed both the folder
and
the file to be fed into COPY to a+rw ...
The server user (usually via the group or other permissions blocks)
must also have at least execute ('x') permissions on every directory
between
Adrian Klaver wrote:
On Sunday 21 December 2008 1:49:18 am Herouth Maoz wrote:
Adrian Klaver wrote:
Snip
Are you sure the problem is not in $datefield = * . That the script
that formats the data file is not correctly adding * to the right file.
Seems almost like sometimes the
On Tuesday 23 December 2008 6:43:56 am Herouth Maoz wrote:
Well, every time this happens, I re-run the procedure, with all the
lines in the data files up to the given table deleted. And it works.
Then I restore the original data file. And the next day it works. It
only happens once in a
Adrian Klaver wrote:
Snip
Are you sure the problem is not in $datefield = * . That the script that
formats the data file is not correctly adding * to the right file. Seems
almost like sometimes the second CMD is being run against the table that the
first CMD should be run on. In other
(Sorry for the forward, I forgot to CC the list)
On Wed, Dec 17, 2008 at 9:38 AM, Herouth Maoz hero...@unicell.co.il wrote:
and for non-transaction tables (ones that have records that might
change but don't accumulate based on time) it's DELETE without WHERE.
In that case, you are better off
On Sunday 21 December 2008 1:49:18 am Herouth Maoz wrote:
Adrian Klaver wrote:
Snip
Are you sure the problem is not in $datefield = * . That the script
that formats the data file is not correctly adding * to the right file.
Seems almost like sometimes the second CMD is being run against
On Wednesday 17 December 2008 12:38:40 am Herouth Maoz wrote:
I have a strange situation that occurs every now and again.
We have a reports system that gathers all the data from our various
production systems during the night, where we can run heavy reports on
it without loading the
Joshua D. Drake wrote:
On Fri, 2008-12-05 at 12:00 -0700, Bill Todd wrote:
Joshua D. Drake wrote:
On Thu, 2008-12-04 at 19:35 -0700, Bill Todd wrote:
null as IS NULL results in the following error.
ERROR: syntax error at or near "is"
LINE 5: null as is null
Bill Todd [EMAIL PROTECTED] writes:
I am beginning to suspect this is impossible.
That's correct: see the COPY reference page. A quoted value is never
considered to match the NULL string.
regards, tom lane
--
Sent via pgsql-general mailing list
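As the COPY reference page says, a quoted value never matches the NULL string. On releases newer than this thread (9.4+), the FORCE_NULL option was added for exactly this case; a sketch against the thread's table, with hypothetical column names:

```sql
-- FORCE_NULL converts quoted empty strings to NULL for the listed
-- columns (PostgreSQL 9.4+; column names here are hypothetical).
COPY billing.contact FROM 'c:/export/contact.csv'
    WITH (FORMAT csv, NULL '', FORCE_NULL (phone, fax));
```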
On Thu, 2008-12-04 at 19:35 -0700, Bill Todd wrote:
Using 8.3.3 I am trying to import a CSV file using the following copy
command.
copy billing.contact from 'c:/export/contact.csv'
with delimiter as ','
null as ''
csv quote as '';
The following record causes an error because the
On 27/11/2008 20:52, Bill Todd wrote:
Substituting the input parameter for the literal path does not work and
neither does using PREPARE/EXECUTE. How can I pass the file path as a
parameter?
You could write a pl/pgsql function which constructs the query as a
string and then runs it with
Raymond O'Donnell wrote:
On 27/11/2008 20:52, Bill Todd wrote:
Substituting the input parameter for the literal path does not work and
neither does using PREPARE/EXECUTE. How can I pass the file path as a
parameter?
You could write a pl/pgsql function which constructs the query as a
Raymond O'Donnell wrote:
On 27/11/2008 20:52, Bill Todd wrote:
Substituting the input parameter for the literal path does not work and
neither does using PREPARE/EXECUTE. How can I pass the file path as a
parameter?
You could write a pl/pgsql function which constructs the query as a
On 27/11/2008 23:09, Bill Todd wrote:
Raymond O'Donnell wrote:
You could write a pl/pgsql function which constructs the query as a
string and then runs it with EXECUTE.
According to the PostgreSQL help file EXECUTE is used to execute a
prepared statement. I tried that but when I call
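For reference, a minimal sketch of the pl/pgsql approach suggested above: build the COPY statement as a string and run it with pl/pgsql's EXECUTE (which is distinct from the SQL-level EXECUTE of a prepared statement). Table and path are from the thread; the function name is hypothetical.

```sql
CREATE OR REPLACE FUNCTION copy_contact(path text) RETURNS void AS $$
BEGIN
    -- quote_literal() safely embeds the caller-supplied file path
    EXECUTE 'COPY billing.contact FROM ' || quote_literal(path)
            || ' WITH CSV';
END;
$$ LANGUAGE plpgsql;

-- SELECT copy_contact('c:/export/contact.csv');
```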
Is that the *first* error message you got?
Yes it is.
In fact I made a mistake in the first email, so instead:
INSERT INTO A ( Col1, Col2 )
VALUES (2, '-- any text' );
please change with:
INSERT INTO A ( Col1, Col2 )
VALUES (1, '-- any text' );
However I
Sabin Coanda wrote:
Hi,
I have PostgreSQL 8.3.5, compiled by Visual C++ build 1400 on Windows OS.
I try to use the COPY command to optimize the backup/restore performance,
but I found a problem. I reproduce it below.
I can't reproduce it here on 8.3 on linux.
I backup the database
Sorry, my fault: I ran the script in the query window of pgAdmin, not in
the system console. I checked it again in the system console and it works
well.
Thanks,
Sabin
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
Sabin Coanda [EMAIL PROTECTED] writes:
I backup the database plain with the command:
pg_dump.exe -U postgres -F p -v -f backup_plain.sql DemoDB
I create a new database, and I run the script. But it raises the error:
ERROR: syntax error at or near "1"
LINE 49: 1 -- any text
I look for
Abraham, Danny wrote:
String in DB:
D:\Program Files\BMC Software\CONTROL-D\wa/reports
In the output files \| are duplicated: The string in the output text
file is
D:\\Program Files\\BMC Software\\CONTROL-D\\wa/reports
On Wed, 5 Nov 2008 05:06:36 -0600
Abraham, Danny [EMAIL PROTECTED] wrote:
Hi,
String in DB:
D:\Program Files\BMC Software\CONTROL-D\wa/reports
In the output files \| are duplicated: The string in the output
text file is
D:\\Program Files\\BMC Software\\CONTROL-D\\wa/reports
On Wed, 13 Aug 2008 16:32:18 -0500
ries van Twisk [EMAIL PROTECTED] wrote:
On Aug 13, 2008, at 4:25 PM, Ivan Sergio Borgonovo wrote:
I need to write an import function with enough isolation from
apache daemon.
Code has no input other than csv files and a signal about when to
start the
On Aug 13, 2008, at 4:25 PM, Ivan Sergio Borgonovo wrote:
I need to write an import function with enough isolation from apache
daemon.
Code has no input other than csv files and a signal about when to
start the import.
The sql code that will be executed will be static.
I may end up writing a
Abraham, Danny wrote:
I am loading a huge file using C, STDIN
Using C?
Have you written a C program using libpq to load some data, which it
reads from its stdin?
Or do you mean COPY FROM STDIN ?
Something else?
Perhaps if you provided a clearer and more complete explanation of your
problem
David Wilson wrote:
On Mon, Jul 28, 2008 at 1:24 AM, Klint Gore [EMAIL PROTECTED] wrote:
Try just a single \
e.g.
ge.xls,application/vnd.ms-excel,71168,\320\317\021\340\241[snip]
Thanks- I did try that, and it at least gave the expected output from
select, but is there a way to verify that
Klint Gore [EMAIL PROTECTED] writes:
David Wilson wrote:
I'm not certain how to check the actual byte width of a column within a
row,
select length(bytea_field) from table
If you want the actual on-disk footprint, use pg_column_size()
regards, tom lane
--
Sent via
Tom Lane wrote:
Klint Gore [EMAIL PROTECTED] writes:
David Wilson wrote:
I'm not certain how to check the actual byte width of a column within a
row,
select length(bytea_field) from table
If you want the actual on-disk footprint, use pg_column_size()
Size on disk would have the
David Wilson wrote:
My application is adding a bytea column to a table into which data is
dumped in approximately 4k row batches, one batch approximately every
10 seconds. To this point, those dumps have used copy from stdin;
however, I'm having some difficulty getting bytea encodings to work
On Mon, Jul 28, 2008 at 1:24 AM, Klint Gore [EMAIL PROTECTED] wrote:
Try just a single \
e.g.
ge.xls,application/vnd.ms-excel,71168,\320\317\021\340\241[snip]
Thanks- I did try that, and it at least gave the expected output from
select, but is there a way to verify that it's actually handling
On 6:01 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:
to this:
psql -h $SOURCE_HOST ... -c "copy binary $SOURCE_SCHEMA.$SOURCE_TABLE to stdout" |\
psql ... -c "copy binary $TARGET_SCHEMA.$TARGET_TABLE from stdin"
http://www.postgresql.org/docs/8.3/interactive/sql-copy.html
The
On 4:05 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:
We will now be adding 8.3.x databases to the mix, and will need to
copy between 7.4.x and 8.3.x in both directions. The datatypes we use
I believe it should work.
Also, one feature I believe started in the 8.X line (8.2?), is the
Francisco Reyes wrote:
On 4:05 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:
What if we do a binary copy instead?
What do you mean by a binary copy?
pg_dump -Fc?
No, I mean changing this:
psql -h $SOURCE_HOST ... -c "copy $SOURCE_SCHEMA.$SOURCE_TABLE to stdout" |\
psql ... -c
We're using a statement like this to dump between 500K and 5
million rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn '0')
TO '/dev/shm/SomeFile.csv'
Upon first run, this operation can take several minutes. Upon
second run, it will be complete in generally well
We're using a statement like this to dump between 500K and 5 million
rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn '0')
TO '/dev/shm/SomeFile.csv'
Upon first run, this operation can take several minutes. Upon second
run, it will be complete in generally well under a
On Mon, May 5, 2008 at 6:18 AM, Hans Zaunere [EMAIL PROTECTED] wrote:
We're using a statement like this to dump between 500K and 5 million
rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn '0')
TO '/dev/shm/SomeFile.csv'
Upon first run, this operation can take
On Sun, May 4, 2008 at 5:11 PM, Hans Zaunere [EMAIL PROTECTED] wrote:
Hello,
We're using a statement like this to dump between 500K and 5 million rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn '0')
TO '/dev/shm/SomeFile.csv'
Wait, are you really creating a .csv file in shared
On Mon, May 5, 2008 at 9:03 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Sun, May 4, 2008 at 5:11 PM, Hans Zaunere [EMAIL PROTECTED] wrote:
Hello,
We're using a statement like this to dump between 500K and 5 million
rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn
We're using a statement like this to dump between 500K and 5
million rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn '0')
TO '/dev/shm/SomeFile.csv'
Upon first run, this operation can take several minutes. Upon
second run, it will be complete in
Hans Zaunere [EMAIL PROTECTED] writes:
We're using a statement like this to dump between 500K and 5 million rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn '0')
TO '/dev/shm/SomeFile.csv'
Upon first run, this operation can take several minutes. Upon second run,
it will be
On Sun, May 4, 2008 at 5:11 PM, Hans Zaunere [EMAIL PROTECTED] wrote:
Hello,
We're using a statement like this to dump between 500K and 5 million rows.
COPY(SELECT SomeID FROM SomeTable WHERE SomeColumn '0')
TO '/dev/shm/SomeFile.csv'
Upon first run, this operation can take several
kevin kempter wrote:
Any thoughts on what I'm doing wrong?
I suspect that pg_dump is going to do a better job than using psql to
generate the input for the remote load. pg_dump can dump single tables
and can use COPY style data formatting.
As for why your current command isn't working ... You
kevin kempter wrote:
Hi List;
I want to run a copy (based on a select) to STDOUT and pipe it to a
psql copy from STDIN on a different host.
here's what I have:
1) a .sql file that looks like this:
copy (
select
cust_id,
cust_name,
last_update_dt
from sl_cust
)
to STDOUT
with
blackwater dev [EMAIL PROTECTED] writes:
I have data that I'm running through pg_escape_string in php and then adding
to stdin for a copy command. The problem is O'reilly is being changed to
O''Reilly in the string and then in the db.
pg_escape_string is designed to produce a string properly
blackwater dev wrote:
I have data that I'm running through pg_escape_string in php and then adding
to stdin for a copy command. The problem is O'reilly is being changed to
O''Reilly in the string and then in the db. I saw with the copy command I
can specify the escape but it isn't working for
On Monday 11 February 2008, Klint Gore wrote:
Is there any way to make copy work with fixed width files?
I'll try to see about implementing this in pgloader, shouldn't be complex. But
we have some other things on the TODO (which could get formalized by now...).
So at the moment the
Klint Gore [EMAIL PROTECTED] writes:
Is there any way to make copy work with fixed width files?
I'd suggest using a simple sed script to convert the data into the
format COPY understands.
regards, tom lane
---(end of
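An alternative to preprocessing with sed, sketched in plain SQL: load each fixed-width line whole into a one-column staging table, then split it with substr(). The column layout (id in positions 1-5, code 6-8, qty 9-12) and the table names are hypothetical.

```sql
CREATE TEMP TABLE raw_lines (line text);
-- COPY raw_lines FROM '/tmp/fixed.txt';
INSERT INTO target (id, code, qty)
SELECT substr(line, 1, 5)::int,
       substr(line, 6, 3),
       substr(line, 9, 4)::int
FROM raw_lines;
```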
On Wednesday 26 December 2007 12:58:34 Reg Me Please wrote:
Hi all.
I have this composite type:
create type ct as (
ct1 text,
ct2 int
);
Then I have this table
create table atable (
somedata numeric,
otherdata text,
compo ct
);
when I try to COPY data to that table
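For the composite column above, each COPY field has to be a complete composite literal; a sketch with hypothetical values (the data line is tab-separated in the default text format):

```sql
-- A COPY text-format data line for atable would look like:
--   1.5	some text	(abc,42)
-- The same row written as an INSERT:
INSERT INTO atable VALUES (1.5, 'some text', ROW('abc', 42)::ct);
```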
On Wednesday 26 December 2007 14:31:04 Reg Me Please wrote:
On Wednesday 26 December 2007 12:58:34 Reg Me Please wrote:
Hi all.
I have this composite type:
create type ct as (
ct1 text,
ct2 int
);
Then I have this table
create table atable (
somedata
I expect that it is not quite as easy as that.
My advice (as a non-expert) would be to install the same version of pg
onto the target machine, and use etl
(http://en.wikipedia.org/wiki/Extract,_transform,_load) to transfer the
data. Basically you just need a small script (I like PHP myself,
--- On Mon, 12/24/07, Alex Vinogradovs [EMAIL PROTECTED] wrote:
P.S. datafiles are 85GB in size, I couldn't really dump
and restore...
Don't forget the steps of compressing and uncompressing between dump and
restore. ;) If the file is still too big, you can always use tar to split the
file
Sorry guys, was my mistake... I found one file missing in global
tablespace. Copying it there fixed the problem.
Thanks everyone!
On Mon, 2007-12-24 at 14:07 -0800, Richard Broersma Jr wrote:
--- On Mon, 12/24/07, Alex Vinogradovs [EMAIL PROTECTED] wrote:
P.S. datafiles are 85GB in size,
On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:
We have a large table (about 9,000,000 rows and total size is about 2.8 GB)
which is exported to a binary file.
How was it exported? With COPY tablename TO 'filename' WITH BINARY?
The BINARY key word causes all data to be stored/read as binary
Reg Me Please [EMAIL PROTECTED] writes:
In order to speed up the COPY ... FROM ... command, I've
disabled everything (primary key, not null, references, default and indexes)
in the table definition before doing the actual COPY.
Later I can restore them with ALTER TABLE ... and CREATE INDEX ...
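The drop-then-restore pattern being described can be sketched as follows (index and column names are hypothetical):

```sql
DROP INDEX IF EXISTS atable_idx;      -- and similarly for constraints
COPY atable FROM '/tmp/data.txt';
CREATE INDEX atable_idx ON atable (somedata);
ANALYZE atable;                       -- refresh planner statistics
```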
On Thursday 13 December 2007 19:56:02 Tom Lane wrote:
Reg Me Please [EMAIL PROTECTED] writes:
In order to speed up the COPY ... FROM ... command, I've
disabled everything (primary key, not null, references, default and
indexes) in the table definition before doing the actual COPY.
On Dec 13, 2007 4:31 PM, Reg Me Please [EMAIL PROTECTED] wrote:
On Thursday 13 December 2007 19:56:02 Tom Lane wrote:
Reg Me Please [EMAIL PROTECTED] writes:
In order to speed up the COPY ... FROM ... command, I've
disabled everything (primary key, not null, references, default and
Lew wrote:
Try eliminating the double quotes in the CSV file. Wannabe NULL
would then be ,, (consecutive commas)
in the CSV. From the docs, you don't even need the NULL AS
clause in your COPY statement.
Ivan Sergio Borgonovo wrote:
Exactly what I did because fortunately there weren't too
On Tue, 27 Nov 2007 21:12:00 -0500
Lew [EMAIL PROTECTED] wrote:
Lew wrote:
Try eliminating the double quotes in the CSV file. Wannabe
NULL would then be ,, (consecutive commas)
in the CSV. From the docs, you don't even need the NULL AS
clause in your COPY statement.
Ivan Sergio
On Thursday 29 November 2007 2:40 pm, Ivan Sergio Borgonovo wrote:
On Tue, 27 Nov 2007 21:12:00 -0500
Lew [EMAIL PROTECTED] wrote:
Lew wrote:
Try eliminating the double quotes in the CSV file. Wannabe
NULL would then be ,, (consecutive commas)
in the CSV. From the docs, you don't
Ivan Sergio Borgonovo wrote:
I'd expect this:
\copy tablename from 'filename.csv' WITH NULL as '' CSV HEADER
would import as NULL.
The input file is UTF-8 (not Unicode/UTF-16).
I checked the hexdump and the wannabe NULLs are actually
2c 22 22 2c - ,"",
all fields are varchar that admit NULL
On Sun, 25 Nov 2007 13:22:48 -0500
Lew [EMAIL PROTECTED] wrote:
I went to the docs for COPY and they say,
The default is \N (backslash-N) in text mode, and an empty value
with no quotes in CSV mode.
That "with no quotes" phrase caught my attention.
Try eliminating the double quotes in the
if you dump/undump using a pipe there is no temp file ...
Abandoned schrieb:
On Nov 2, 3:49 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
maybe pg_dump | pg_undump could work?
I mean pg_dump with the appropriate parameters and undump directly to
the other database?
This may be one of the
maybe pg_dump | pg_undump could work?
I mean pg_dump with the appropriate parameters and undump directly to
the other database?
This may be one of the fastest ways to do it, I think.
Abandoned schrieb:
Hi..
I want to copy my database..
I have a database which is named db01 and I want to copy it
On Nov 2, 3:49 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
maybe pg_dump | pg_undump could work?
I mean pg_dump with the appropriate parameters and undump directly to
the other database?
This may be one of the fastest ways to do it, I think.
Abandoned schrieb:
Hi..
I want to copy my
On Nov 2, 5:30 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
if you dump/undump using a pipe there is no temp file ...
Abandoned schrieb:
On Nov 2, 3:49 pm, Sascha Bohnenkamp [EMAIL PROTECTED] wrote:
maybe pg_dump | pg_undump could work?
I mean pg_dump with the appropriate parameters
On Sat, Nov 03, 2007 at 01:42:09PM -, Abandoned wrote:
I tried pg_dump but it is very slow. Is there any faster way to
copy the database?
Have you tried CREATE DATABASE .. TEMPLATE ? (See manual for syntax)
Have a nice day,
--
Martijn van Oosterhout [EMAIL PROTECTED]
On 11/3/07, Abandoned [EMAIL PROTECTED] wrote:
I tried pg_dump but it is very slow. Is there any faster way to
copy the database?
Assuming it's all happening on the same db server, yes:
psql template1
create database newdb template olddb
---(end of
Tom Lane wrote:
Rainer Bauer [EMAIL PROTECTED] writes:
Wouldn't it be possible to copy the database folder and somehow instruct the
postmaster to include the copied data after a restart?
See CREATE DATABASE's TEMPLATE option. It's a bit crude but I think
it'll help.
Thanks, Tom. Works like a
Hi,
Anyone have comparisons/benchmarks to give some
idea of the potential performance gains?
Say compared to doing the stuff here:
http://www.postgresql.org/docs/8.2/static/populate.html
Regards,
Link.
At 09:35 AM 11/5/2007, Toru SHIMOGAKI wrote:
Dimitri, thank you for your quoting. I'm a
On Sun, 04-11-2007 at 02:16 +0100, Rainer Bauer wrote:
Abandoned wrote:
I tried pg_dump but it is very slow. Is there any faster way to
copy the database?
Actually, I was looking for something along the same line.
I often want to test some functionality in my program based on the
On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:
Hi all.
I'd like to know whether the indexes on a table are updated or not during
a COPY ... FROM request.
That is, should I drop all indexes during a COPY ... FROM in order to gain
the maximum speed to load data?
Thanks.
Although
On Sunday 04 November 2007 14:59:10 Josh Tolley wrote:
On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:
Hi all.
I'd like to know whether the indexes on a table are updated or not during
a COPY ... FROM request.
That is, should I drop all indexes during a COPY ... FROM in order
On Nov 4, 2007, at 9:15 AM, Reg Me Please wrote:
On Sunday 04 November 2007 14:59:10 Josh Tolley wrote:
On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:
Hi all.
I'd like to know whether the indexes on a table are updated or
not during
a COPY ... FROM request.
That is, should I
On Sunday 04 November 2007 16:21:41 Erik Jones wrote:
On Nov 4, 2007, at 9:15 AM, Reg Me Please wrote:
On Sunday 04 November 2007 14:59:10 Josh Tolley wrote:
On 11/4/07, Reg Me Please [EMAIL PROTECTED] wrote:
Hi all.
I'd like to know whether the indexes on a table are updated
Josh Tolley [EMAIL PROTECTED] writes:
Although questions of which is faster often depend very heavily on
the data involved, the database schema, the hardware, etc., typically
people find it best to drop all indexes during a large import and
recreate them afterward.
See also the extensive
Hi,
On Sunday 04 November 2007 11:22:19, Reg Me Please wrote:
That is, should I drop all indexes during a COPY ... FROM in order to
gain the maximum speed to load data?
When looking for a way to speed up data loading, you may want to consider
pg_bulkload, a project which optimizes
Dimitri, thank you for your quoting. I'm a pg_bulkload author.
pg_bulkload is optimized especially for appending data to table with indexes.
If you use it, you don't need to drop index before loading data. But you have to
consider conditions carefully as Dimitri said below. See also pg_bulkload
Abandoned wrote:
I tried pg_dump but it is very slow. Is there any faster way to
copy the database?
Actually, I was looking for something along the same line.
I often want to test some functionality in my program based on the same
dataset. However, dump/restore takes too long to be of any use.
Rainer Bauer [EMAIL PROTECTED] writes:
Wouldn't it be possible to copy the database folder and somehow instruct the
postmaster to include the copied data after a restart?
See CREATE DATABASE's TEMPLATE option. It's a bit crude but I think
it'll help.
regards, tom lane
On Monday 01 October 2007 05:20:52 pere roca wrote:
Hi everybody,
I want to enter a .CSV file using the COPY command and plpgsql. It enters
lat, lon and some data. In the CSV data there is no field (such as
user_name or current_time) that allows distinguishing future queries for
different users
On Thu, 30 Aug 2007, at 14:59:06 +0800, Ow Mun Heng wrote:
Is there a way to do a dump of a database using a select statement?
A complete database or just a simple table?
eg: \copy trd to 'file' select * from table limit 10
Since 8.2 you can use COPY (select * from table) TO
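A sketch of the 8.2+ form being referred to, using the thread's table name:

```sql
COPY (SELECT * FROM trd LIMIT 10) TO '/tmp/trd_sample.txt';
-- or client-side, writing to a file the client can reach:
-- \copy (SELECT * FROM trd LIMIT 10) TO 'trd_sample.txt'
```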
On Thu, 2007-08-30 at 09:14 +0200, A. Kretschmer wrote:
On Thu, 30 Aug 2007, at 14:59:06 +0800, Ow Mun Heng wrote:
Is there a way to do a dump of a database using a select statement?
A complete database or just a simple table?
a simple table.. couple million records, want some
Thanks again guys =)
I've managed to use temp table to load the data and create new table/s
Now, how do I convert a text field with 'YY/MM/DD' to date field 'DD/MM/YY'?
On 13/08/07, Tom Lane [EMAIL PROTECTED] wrote:
Paul Lambert [EMAIL PROTECTED] writes:
novice wrote:
db5= \copy maintenance
On Tue, Aug 14, 2007 at 10:50:33AM +0800, Ow Mun Heng wrote:
Hi,
Writing a script to pull data from SQL server into a flat-file (or just
piped in directly to PG using Perl DBI)
Just wondering if the copy command is able to do a replace if there are
existing data in the Db already. (This
On 8/12/07, novice [EMAIL PROTECTED] wrote:
I resolved it by doing this - is there another more efficient method?
And yes, the text file I am working with doesn't have any TABs
5162 OK SM 06/12/04 06:12
substr(data, 30, 2)||'-'||substr(data, 27,
2)||'-20'||substr(data, 24,
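The same reformatting can be done with the date functions instead of substr(), assuming the source text really is 'YY/MM/DD':

```sql
SELECT to_char(to_date('06/12/04', 'YY/MM/DD'), 'DD/MM/YY');
-- 04/12/06
```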
novice [EMAIL PROTECTED] writes:
I'm having trouble loading the date field. Should I convert it first
or should I be using a text processor before loading the data in?
3665 OK SM 07/07/13 06:09
5162 OK SM 07/02/12 06:10
3665 OK SM 07/06/19 06:10
What
I'm using pg version 8.2.4. What is the best method to load this data?
I have just a little over 55,000 entries.
db5= \copy maintenance FROM test.txt
ERROR: invalid input syntax for integer: 3665 OK SM
07/07/13 06:09
CONTEXT: COPY maintenance, line 1, column maintenance_id: 3665