I am trying to insert data from 2 columns in tableB (colX and colY) into the
same two columns of tableA, with a join-like WHERE clause. Is this possible?
For example:
INSERT INTO tableA (colX, colY)
(SELECT colX, colY
FROM tableB
WHERE
tableA.blockname = tableB.block_name
AND
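A correlated INSERT ... SELECT along these lines should do it; a sketch using the names from the snippet (the join condition is assumed from the WHERE clause above):

```sql
-- Copy matching rows from tableB into tableA,
-- pairing rows on blockname/block_name.
INSERT INTO tableA (colX, colY)
SELECT b.colX, b.colY
FROM tableB b
JOIN tableA a ON a.blockname = b.block_name;
```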
On Apr 24, 2013, at 6:14 AM, Bill Moran wmo...@potentialtech.com wrote:
Write your own client that uses the copy interface to
load a file from wherever and send it to the server.
Or just use the one built in to psql, as Jasen suggested.
I am using COPY TO to write data out from the db
I would like to run the COPY command as a user other than postgres. I find it
a bit of a pain (or at least requiring an extra step or two) to have the
postgres user own the files that I am creating with COPY TO. Here is a simple
example where the location '/some/path/to/file/file.csv' is owned
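One way around the ownership issue is psql's client-side \copy, which writes the file as the OS user running psql rather than the server's postgres user. A sketch (the query is a placeholder):

```sql
-- Run from psql as any user; the file is created by the psql client,
-- so it is owned by the invoking OS user, not postgres.
\copy (SELECT * FROM some_table) TO '/some/path/to/file/file.csv' WITH CSV HEADER
```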
I am trying to perform a join between two tables where I need to join year in
table 1 with the year component of a timestamp in table 2.
Something like this:
table1.year = table2.timestamp
where timestamp has the format: 2009-01-01 00:00:00
I've tried
date_trunc('year', table2.timestamp) =
On Apr 11, 2013, at 10:55 AM, Ryan Kelly rpkell...@gmail.com wrote:
You want date_part, not date_trunc.
-Ryan
Thanks Ryan. It looks like EXTRACT(YEAR FROM table2.time2) works as well.
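Putting the suggestions together, the join condition might look like this (a sketch; table and column names follow the thread):

```sql
-- Join an integer year column against the year component of a timestamp.
SELECT *
FROM table1 t1
JOIN table2 t2 ON t1.year = EXTRACT(YEAR FROM t2.time2);
```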
On Mar 14, 2013, at 10:27 AM, Kevin Grittner kgri...@ymail.com wrote:
I didn't understand your description of what you are trying to do,
and the example has so many columns and cases that it would take a
long time to understand it. Can you distill this down to just a
few columns and
I have a fairly large table with two columns that I need to de-normalize (235
million rows). There has got to be a better (i.e. faster) approach than what I
am doing. I am using a MAX CASE on each of the 24 variables (column names
variable and value) that I want to unstack. Any suggestions?
On Feb 20, 2013, at 5:42 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On 02/19/2013 07:04 PM, NiRE wrote:
All of a sudden my MacBook Pro force quit, and when I restarted it, it is
run by the PostgreSQL user - I do not know the password and cannot
change it. I cannot find out online how to
I am trying to write a query that grabs one particular day from a timestamp
column. The data are ordered in 15 minute chunks like this:
2010-07-07 12:45:00
2010-07-07 13:00:00
2010-07-07 13:15:00
2010-07-07 13:30:00
etc…
If I wanted all records from July 7th 2010, I would expect 4 x 24 = 96
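For pulling one day out of a timestamp column, either a cast or a half-open range works; the range form can use a plain index on the column. A sketch (the table name is a placeholder):

```sql
-- Cast form (simple, but usually cannot use an index on time2):
SELECT * FROM readings WHERE time2::date = '2010-07-07';

-- Half-open range form (index-friendly, same 96 rows):
SELECT * FROM readings
WHERE time2 >= '2010-07-07' AND time2 < '2010-07-08';
```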
the mean of the first and last four of each 15 minute records.
Perhaps date_trunc only works for the timestamp?
On Feb 4, 2013, at 8:50 AM, Misa Simic misa.si...@gmail.com wrote:
WHERE derived_tsoil_fifteen_min_stacked.time2::date = '2010-07-07'::date
On Monday, February 4, 2013, Kirk Wythers
I am looking for suggestions on aggregation techniques using a timestamp
column. In my case I have tried:
date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2),
but date_trunc only seems to aggregate the timestamp. I thought I could use
AVG(derived_tsoil_fifteen_min_stacked.value)
in
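The usual pattern is to group by the truncated timestamp and aggregate within each bin; a sketch using the column names from the thread:

```sql
-- Average 15-minute values into hourly bins.
SELECT date_trunc('hour', time2) AS hour,
       AVG(value) AS mean_value
FROM derived_tsoil_fifteen_min_stacked
GROUP BY 1
ORDER BY 1;
```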
of these ideas for your use case?
Cheers
Brent Wood
Programme leader: Environmental Information Delivery
NIWA
DDI: +64 (4) 3860529
From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org]
on behalf of Kirk Wythers [wythe...@umn.edu
On Feb 4, 2013, at 3:26 PM, Jason Dusek jason.du...@gmail.com wrote:
2013/2/4 Kirk Wythers wythe...@umn.edu:
I am looking for suggestions on aggregation techniques using a timestamp
column. In my case I have tried:
date_trunc('hour', derived_tsoil_fifteen_min_stacked.time2
, 2013, Kirk Wythers wrote:
Hi Brent,
Nice to hear from you. I hope your world is good.
On Feb 4, 2013, at 2:14 PM, Brent Wood brent.w...@niwa.co.nz wrote:
Hi Kirk,
We have a (near) real time data database for instrument observations from
our research vessels. All observations
I am trying to come up with an approach that uses date_trunc to aggregate 15
minute time series data to hourly bins. My current query, which utilizes a view,
performs a join, after which I use a series of WHERE statements to specify
which of the 15 minute records I want to look at.
I think
On Jan 18, 2013, at 8:10 AM, Igor Neyman iney...@perceptron.com wrote:
Yes, my mistake, I meant to ask about fifteen_min_stacked_view definition,
and Postgres parameters from postgresql.conf configuration file, at least
those - modified from default setting and related to resource
fifteen_min_stacked_propper is so
that variable names in fifteen_min_stacked_propper.variable can be used to join
on data_key.variable_channel.
Does that make sense?
Kirk
-Original Message-
From: Kirk Wythers [mailto:kwyth...@umn.edu]
Sent: Friday, January 18, 2013 10:50 AM
Not enough information:
Postgres version?
OS?
Some Postgres configuration parameters, specifically related to RESOURCE
USAGE and QUERY TUNING?
Table structures (including indexes) for: fifteen_min_stacked_propper,
fifteen_min, and data_key?
View definition for
= PostgreSQL 9.1.4
Does that help?
-Original Message-
From: Kirk Wythers [mailto:kwyth...@umn.edu]
Sent: Thursday, January 17, 2013 3:59 PM
To: Igor Neyman
Cc: Kirk Wythers; pgsql-general@postgresql.org
Subject: Re: [GENERAL] speeding up a join query that utilizes a view
Not enough
I am looking for advice on a performance problem. I'm pretty sure that the
culprit of my slow performance is a view that is several hundred million
records in size. Because it is a view, I can only index the underlying table,
but because the view generates an un-pivoted version of the
Can anyone see what I'm missing? I am trying to run a psql copy from command
through a bash script to load a bunch of csv files into the same table. I'm
getting an error about the file "infile" not existing.
#!/bin/sh
for infile in /path_to_files/*.csv
do
cat infile | psql dbname -c \copy
On Jan 11, 2013, at 12:18 PM, Szymon Guz mabew...@gmail.com wrote:
On 11 January 2013 19:13, Kirk Wythers kwyth...@umn.edu wrote:
Can anyone see what I'm missing? I am trying to run a psql copy from
command through a bash script to load a bunch of csv files into the same
table. I'm
I have a column of type TIMESTAMP, I'd like to query all records from 2011. If
it were text I could use a partial such as:
WHERE
text ~ '2011'
There must be a simple way to pull the year part out of a timestamp format.
Thanks in advance.
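Two common options (a sketch; ts stands in for the timestamp column): EXTRACT for readability, or a half-open range that can use an index on the column:

```sql
-- Readable, but typically not index-friendly:
SELECT * FROM t WHERE EXTRACT(YEAR FROM ts) = 2011;

-- Index-friendly half-open range covering 2011:
SELECT * FROM t
WHERE ts >= '2011-01-01' AND ts < '2012-01-01';
```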
On Jan 8, 2013, at 6:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The OP didn't
suggest how many years his data covers, but it's quite possible that
pulling a full year's worth of data will read enough of the table that
there's no point in worrying about whether an index could be used
anyway.
I am trying to re-cast a column as a timestamp
ALTER TABLE sixty_min ALTER COLUMN time2 TYPE timestamp;
ERROR: column time2 cannot be cast to type timestamp without time zone
The column time2 is currently a varchar. I actually do not want to mess with
time zones, but it seems that postgres
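Since the column is varchar, ALTER TYPE needs an explicit USING clause to tell Postgres how to convert the existing values; a sketch based on the statement above:

```sql
-- Convert the stored strings explicitly during the type change.
ALTER TABLE sixty_min
  ALTER COLUMN time2 TYPE timestamp
  USING time2::timestamp without time zone;
```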
I am trying to follow the example given below, but in reverse. I have something
like table2, that I am trying to un-pivot to look like table1… And I seem to
be having a booger of a time.
My attempt:
SELECT * FROM crosstab(
'SELECT rowid, key1,key1
FROM test WHERE key1=
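crosstab goes the other direction (long to wide); to un-pivot wide columns back to rows, a UNION ALL over the value columns is the usual trick. A sketch against the test table (the second column name is an assumption):

```sql
-- Un-pivot wide columns into (rowid, key, value) rows.
SELECT rowid, 'key1' AS key, key1 AS value FROM test
UNION ALL
SELECT rowid, 'key2', key2 FROM test;
```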
I have been given an interesting problem to solve in a postgres db. I was given
two tables
t1:
site  treatment  variable  id (pk)
----------------------------------
A X BLUE
I have been using COPY FROM to do a mass import of records from CSV files into
a new database. I have discovered, however, a small number of records (a few
thousand) in one of the files that contain new data that needs to be added to
the database, but on rows that have a primary key and have
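With no built-in upsert in this era of Postgres, one common approach is to COPY into a staging table and reconcile from there. A sketch; the staging/target tables and the id/val columns are placeholders:

```sql
-- Load the CSV into a staging table shaped like the target.
CREATE TEMP TABLE staging (LIKE target INCLUDING ALL);
\copy staging FROM 'file.csv' WITH CSV HEADER

-- Update rows whose primary key already exists.
UPDATE target t
SET    val = s.val
FROM   staging s
WHERE  t.id = s.id;

-- Insert only the rows with new primary keys.
INSERT INTO target
SELECT s.*
FROM   staging s
WHERE  NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
```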
On Dec 27, 2012, at 10:39 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
No. Some questions though.
Thanks for the reply Adrian.
What version of Postgres?
9.1
Is that the actual UPDATE statement, I see no SET?
I was reading the docs but obviously don't understand the syntax of the
I seem to be dealing with a data type issue when I try and import data into a
new and empty database.
Error Message: ERROR: date/time field value out of range: 1332471600:00:00
LINE 1: ...tc_avg25) values ('2012-03-22 21:00:00_B4WARM_A','133247160…
Here are the first two rows in the files I
I am using version 9.1 and have a large number of files to insert. I am trying
to use a simple COPY FROM command but have a couple questions.
1. There are a small number of instances where there are duplicate records that
are being caught by the primary key (as it is supposed to do). However,
Thanks for the reply Steve. These suggestions are new to me, so I'd like to
rephrase them back to you in order to make sure I understand the bits and
details.
On Dec 19, 2012, at 10:48 AM, Steve Crawford scrawf...@pinpointresearch.com
wrote:
I suppose you could use a trigger to check
17, 2012, at 4:05 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On 12/17/2012 01:53 PM, Kirk Wythers wrote:
I seem to be dealing with a data type issue when I try and import data
into a new and empty database.
Error Message: ERROR: date/time field value out of range:
1332471600:00:00
I am trying to move databases to another machine (and update from 8.2
to 8.4 along the way). I first tried pg_dumpall, but I found that one
of the databases did not restore any data, just an empty db with no
tables. Since then I have tried pg_dump with the following:
bash-3.2$
On Oct 16, 2009, at 4:51 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Fri, Oct 16, 2009 at 11:25 AM, Kirk Wythers kwyth...@umn.edu
wrote:
Any ideas what the problem could be here?
Use the pg_dump from the target (i.e. newer) pgsql. I.e. if going
from 8.3.8 to 8.4.1, use
I need some help with rewriting a query. I have a fairly complicated
query (for me anyway) that dumps daily climate data, filling in
missing data with monthly averages (one line per day).
I want to output monthly averages (one line per month). I am having a
hard time wrapping my head
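Going from one line per day to one line per month is the same aggregate pattern; a sketch (the table name and exact columns are assumptions drawn from the thread):

```sql
-- Collapse daily climate rows into monthly means.
SELECT date_trunc('month', obs_date) AS month,
       AVG(precip) AS precip,
       AVG(tmin)   AS tmin,
       AVG(tmax)   AS tmax
FROM daily_climate
GROUP BY 1
ORDER BY 1;
```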
Here is a more elaborate version, I'm trying to add 'avgsol' to
your original FROM clause:
SELECT CASE WHEN w.station_id = site_near.station_id THEN
w.obs_id ELSE
s.obs_id END AS obs_id, site_near.station_id, site_near.longname,
w.year, w.doy, w.precip, w.tmin, w.tmax,
--replace missing
On May 9, 2007, at 10:41 AM, Erik Jones wrote:
On May 9, 2007, at 10:32 AM, Kirk Wythers wrote:
Here is a more elaborate version, I'm trying to add 'avgsol' to
your original FROM clause:
SELECT CASE WHEN w.station_id = site_near.station_id THEN
w.obs_id ELSE
s.obs_id END AS obs_id
On May 8, 2007, at 2:02 AM, Albe Laurenz wrote:
Kirk Wythers wrote:
I am struggling to get a CASE WHEN statement to work within another
CASE WHEN. Here is my original code:
SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id
ELSE
s.obs_id END AS obs_id
I am struggling to get a CASE WHEN statement to work within another
CASE WHEN. Here is my original code:
SELECT CASE WHEN w.station_id = site_near.station_id THEN w.obs_id ELSE
s.obs_id END AS obs_id, site_near.station_id, site_near.longname,
w.year, w.doy, w.precip, w.tmin, w.tmax,
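Nesting is allowed as long as each inner CASE is a complete expression in its own right. A minimal sketch of the shape, reusing the names above (the inner condition is an assumption for illustration):

```sql
-- One CASE expression nested inside a branch of another.
SELECT CASE WHEN w.station_id = site_near.station_id
            THEN w.obs_id
            ELSE CASE WHEN s.obs_id IS NULL
                      THEN w.obs_id
                      ELSE s.obs_id
                 END
       END AS obs_id
FROM ...
```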
I have a database (pg 8.1.0 on OS X) where a three table inner-join
gives the following errors:
psql(606) malloc: *** vm_allocate(size=8421376) failed (error code=3)
psql(606) malloc: *** error: can't allocate region
psql(606) malloc: *** set a breakpoint in szone_error to debug
out of memory
On Dec 11, 2006, at 1:43 PM, Tom Lane wrote:
Kirk Wythers [EMAIL PROTECTED] writes:
I have an database (pg 8.1.0 on OS X) where a three table inner-join
gives the following errors:
psql(606) malloc: *** vm_allocate(size=8421376) failed (error code=3)
psql(606) malloc: *** error: can't