line/putline or file transmission.
Thanks also Enrico, your idea is good.
Martijn van Oosterhout wrote:
On Wed, Jun 27, 2007 at 02:54:05PM -0400, Jaime Silvela wrote:
The problem is that the getline/readline interface, which does exist for
Ruby, doesn't seem to work so well, and anyway the best solution may be a
network mount, as you suggest, Erik.
Thanks,
Jaime
Martijn van Oosterhout wrote:
On Wed, Jun 27, 2007 at 10:32:32AM -0400, Jaime Silvela wrote:
I've been looking around for this functionality:
Is it possible to use COPY with a file that doesn't reside in the DB's
filesystem? I know there is sort of a solution in using COPY FROM STDIN /
COPY TO STDOUT; however, that depends on calling the psql command, which
breaks the flow of control of
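For the archives: psql's \copy meta-command covers exactly this case. It reads the file on the client and streams it over the connection as COPY ... FROM STDIN, so the file never has to reside on the database server (table name and path here are made up):

\copy mytable from '/local/path/data.csv' with csv

The client drivers expose the same protocol-level hooks (libpq's putline family, mentioned above), so the stream can also be fed from inside a program without shelling out to psql.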
Tom Lane wrote:
Jaime Silvela <[EMAIL PROTECTED]> writes:
A long time ago I wrote to the list about a problem I was having with
COPY losing rows from an import file: the number of imported rows was
not equal to the number of rows in the file, and two consecutive imports
from the same file would get different row counts. Several people tried
to reproduce
I was reading an interview with Chris Date the other day, which got me
thinking about a problem I'm currently having:
I have an application that keeps information in 6 denormalized tables,
D1 through D6. To tie things together, all these tables have a common
column, let's call it obj_id.
There
I know you've probably discussed this in many places, but I have a crash
right now I need to recover from, and I'm not finding documentation that
fast.
Where should I go?
Here are the details on starting, after a kill -9 of a process brought
Postgres down.
Is there a page (or pages) with information
The server was generally restarted with a script that would
call postmaster with the "-i" option. The conf file never allowed TCP/IP
connections, which didn't become apparent until I tried to restart using
pg_ctl with no options.
Thank you, and sorry again
Jaime
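For anyone who hits the same restart problem: -i only switches on TCP/IP listening. On 8.x the same setting lives in postgresql.conf, which pg_ctl picks up with no extra switches; alternatively the old flag can be passed through. A sketch (the data directory path is hypothetical):

# postgresql.conf -- the 8.x equivalent of postmaster's -i switch
listen_addresses = '*'    # or 'localhost', or a comma-separated list

# or pass the switch through pg_ctl when starting:
pg_ctl -D /path/to/data -o "-i" start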
Apologies for the duplication - I've been having email problems.
Jaime Silvela wrote:
I know you've probably discussed this in many places, but I have a crash
right now I need to recover from, and I'm not finding documentation that
fast.
Where should I go?
Below you can see the log on starting, after a kill -9 of a process
brought Postgres down.
After letting postgres run for a
SELECT field-1, .. field-n, 3 FROM ;
Thank you,
Jaime
Klint Gore wrote:
On Tue, 03 Apr 2007 12:45:54 -0400, Jaime Silvela <[EMAIL PROTECTED]> wrote:
I'd like to be able to do something like
COPY mytable (field-1, .. field-n, id = my_id) FROM file;
How do you get my_id? Can
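One route, sketched below with an invented two-column layout and 42 standing in for my_id: COPY into a temp table shaped like the file, then add the constant while moving the rows over (assumes mytable already exists with those columns plus id):

CREATE TEMP TABLE mytable_in (field_1 text, field_2 numeric);
COPY mytable_in FROM '/tmp/upload.csv' WITH CSV;
INSERT INTO mytable (field_1, field_2, id)
SELECT field_1, field_2, 42 FROM mytable_in;  -- 42 = my_id for this load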
I agree that with a temp table, the portfolio_id could be cleanly
inserted as you suggest, from the temp table into the staging table. The
staging table would need a portfolio_id, since it could house data from
several different spreadsheets at the same time. In fact, the staging
table could be
That's sort of what I have already, and my problem is that the
portfolio_id field does not exist in the CSV files. I'd like to be able
to assign a portfolio_id, for the current file's entries. Another person
in the list suggested dynamically adding a column with the portfolio_id
to the file, an
spreadsheet for portfolio_id = 9.
Any ideas on this?
Thanks
Jaime
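The same temp-table trick stamps each upload on its way into the shared staging table; the column names and the portfolio number below are invented:

CREATE TEMP TABLE upload_tmp (col_a text, col_b numeric);
COPY upload_tmp FROM '/tmp/portfolio9.csv' WITH CSV;
INSERT INTO staging (portfolio_id, col_a, col_b)
SELECT 9, col_a, col_b FROM upload_tmp;  -- 9 = this spreadsheet's portfolio_id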
brian wrote:
Jaime Silvela wrote:
I've written a web application where users can upload spreadsheets,
instead of having to key in forms. The spreadsheets get parsed and
INSERTED into a table, and with the INSERT gets added an identifier so
that I can always trace back what a particular row in the table
corresponds to.
I'd like
I have a similar situation. Here's what I do.
I have a stand-alone comment table:
Comments
    id
    timestamp
    text
Then I have individual product tables to tie a table to a comment:
Table_A_Comment
    id
    id_ref_a references tableA
    id_comment references Comments
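Spelled out as DDL, that layout would look roughly like this (types are assumed, and the timestamp/text fields are renamed to avoid the keywords):

CREATE TABLE comments (
    id serial PRIMARY KEY,
    created_at timestamp DEFAULT now(),
    body text
);

CREATE TABLE table_a_comment (
    id serial PRIMARY KEY,
    id_ref_a int REFERENCES table_a,  -- assumes table_a exists with a primary key
    id_comment int REFERENCES comments
);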
evaluating things.
Jaime Silvela wrote:
In case anyone is interested, I was able to solve this, more or less.
Here's my new "Latest value" query:
select obj_id, val_type_id, (max(row(observation_date, val))).val
from measurements
group by obj_id, val_type_id
It was only necessary
create function dtval_larger(dtval, dtval) returns dtval as $$
select case when $1.dt > $2.dt then $1 else $2 end
$$ language sql;
create aggregate max (
    sfunc = dtval_larger,
    basetype = dtval,
    stype = dtval
);
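The composite type that the function and aggregate above rely on would be declared along these lines (field names inferred from $1.dt and the .val access in the query):

create type dtval as (dt date, val numeric);

-- then the latest-value query, with the row cast into the type:
select obj_id, val_type_id,
       (max(row(observation_date, val)::dtval)).val
from measurements
group by obj_id, val_type_id;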
I think the problem is that the restore window looks for snapshots with
the extension "backup", as made with the default options of the pgAdmin
Backup function.
If you're using "Plain", the backups are .sql files, which you can
execute with psql or from a query window in pgAdmin.
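For example (database and file names made up):

psql -d mydb -f backup.sql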
I completed another migration from 8.1.3 to 8.2.3, and again hit the problem
with "unexpected data beyond EOF", exactly twice, like before, but in two
tables different from the last time.
The kernel bug hypothesis seems a strong one. I told Unix Ops about the possible bug, and one of the guys said 2.6.
The problem I'm trying to solve is pretty standard. I have a table that
records measurements of different types at different times.
CREATE TABLE measurements (
    obj_id int4,
    val_type_id int4 references lu_val_type(val_type_id),
    val numeric,
    observation_date date
);
I want a query as simple as possible
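For what it's worth, the usual PostgreSQL idiom for "latest row per group" over a table like this is DISTINCT ON, e.g.:

SELECT DISTINCT ON (obj_id, val_type_id)
       obj_id, val_type_id, val, observation_date
FROM measurements
ORDER BY obj_id, val_type_id, observation_date DESC;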
there, and will
let you know if I find anything on my part.
Thanks again
Jaime
Tom Lane wrote:
Jaime Silvela <[EMAIL PROTECTED]> writes:
Here's the uname -a:
Linux wdlbc22r06 2.6.5-7.244-bigsmp #1 SMP Mon Dec 12 18:32:25 UTC 2005
i686 i686 i386 GNU/Linux
The previous thread
Status: cfg=no, avail=yes, need=no, active=unknown
We're running in 32-bit mode for compatibility with some libraries.
How can I determine whether this is due to a buggy kernel?
Tom Lane wrote:
Jaime Silvela <[EMAIL PROTECTED]> writes:
The kernel is Linux 2.6.5
2.6.5.what (
that this happens on the restore part, not the dump part then?
Thanks
Jaime
Tom Lane wrote:
Jaime Silvela <[EMAIL PROTECTED]> writes:
I'm doing dry runs to migrate my database from 8.1.3 to 8.2.3, with
pg_dumpall | pg_restore, using the executables from 8.2.3.
I'm seeing these messages
ERROR: unexpected data beyond EOF in block 4506 of relation
"coverage_test_val"
HINT: This has been seen to occur with buggy kernels; consider updating your system.
Correction: my server is running 8.1.3
Jaime Silvela wrote:
Just bringing back to life a message I sent last July.
The problem I was having was that when importing very large data sets,
COPY seemed to drop some data. I built a script to use INSERTs, and same
problem. My server runs 8.1.3 on Linux. Several people investigated,
Reece Hart was unable to reproduce
I copied the files over, restarted, and everything's fine.
Thanks. Yes, I confirm it was EST5EDT; I ran your regression to verify.
I looked for those timezone files, and they're missing on my production
installation. Probably the upgrade from 7.* to 8.1 was a quick&dirty one.
I don't even have a 'timezone' folder in the share directory. Would it
I'm running a production database on Linux (select version() =
"PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.3")
I read that all 8.1.* versions are DST-compliant, and sure enough, my
development server, which runs 8.1.0, switched fine, as did my 8.2.3
database at home.
Th
should have in fact pointed to "doclib.staging_document"
Silly mistake, but it seems that ERROR messages don't specify the schema
of the tables involved. It would be a time-saver if they did.
Thanks
Jaime Silvela wrote:
I'm finding a strange sort of 'zombie' table behavior.
I try t
I'm finding a strange sort of 'zombie' table behavior.
I try to delete a row from a table that used to be referenced by another
table that has now been dropped.
I do
delete from staging_deal where staging_deal_id = 1
and get
ERROR: update or delete on "staging_deal" violates foreign key
constraint
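When the message doesn't name the schema, the system catalogs will. A sketch that lists every foreign key pointing at the table (regclass prints schema-qualified names for anything off the search_path):

SELECT conname,
       conrelid::regclass  AS referencing_table,
       confrelid::regclass AS referenced_table
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'staging_deal'::regclass;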