Hopefully I'm understanding your question correctly. If so, maybe
this will do what you are wanting.
First, a couple of questions. Do you have this data in a table
already, and are looking to extract information based on the dates?
Or, are you basically wanting something like a for loop so
Alexander Korobov [EMAIL PROTECTED] writes:
We are having a strange problem on a production system with very slow
insert/delete commands and huge CPU and disk-write activity spikes in
PostgreSQL 7.2.4.
The first thing you should consider, if you are concerned about
performance, is adopting a less
This might be helpful,
select current_date + s.t as dates from generate_series(0,5) as s(t);
   dates
------------
 2005-06-28
 2005-06-29
 2005-06-30
 2005-07-01
 2005-07-02
 2005-07-03
(6 rows)
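The same six-day series is easy to sanity-check outside the database; a minimal Python sketch (not from the thread, starting at 2005-06-28 to match the output above):

```python
from datetime import date, timedelta

def date_series(start, days):
    """Mirror generate_series(0, days-1): the start date plus each offset."""
    return [start + timedelta(days=n) for n in range(days)]

for d in date_series(date(2005, 6, 28), 6):
    print(d.isoformat())
```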
with regards,
S.Gnanavel
-Original Message-
From: [EMAIL PROTECTED]
Sent: 27 Jun 2005
Magnus Hagander [EMAIL PROTECTED] writes:
My editor strips empty lines at the end of a text file and
sets EOF right after the last visible character, so it drops
the CRLF of the last line; even though it was a comment,
postmaster complains about a syntax error and goes on strike.
Hmm.
Jeff Gold [EMAIL PROTECTED] writes:
Tom Lane wrote:
TRUNCATE and CLUSTER both rebuild indexes, so they'd also trigger the
leak.
Sorry to bug you again, but I have two quick followup questions: (1) is
the leak you discovered fixed on the 8.0 branch? and (2) would closing
the database
Hello.
I created a Windows XP schedule for backup, following your
instructions. Now I have a .bat file with this script:
cd D:\Program Files\PostgreSQL\8.0\bin
pg_dumpall > D:\MYDATABASE_DUMPALL -U postgres
pg_dumpall > D:\MYDATABASE_SHEMA -U postgres -s
pg_dumpall > D:\MYDATABASE_GLOBALS -U
I've been using postgres off and on since about 1997/98. While I have
my personal theories about tuning, I like to make sure I stay
current. I am about to start a rather thorough, application specific
evaluation of postgresql 8, running on a Linux server (most likely
the newly release
On Mon, Jun 27, 2005 at 10:30:38AM -0700, [EMAIL PROTECTED] wrote:
I'd like to make a query that would return a list of every trunc'd
TIMESTAMPs between two dates. For example, I'd want to get a list of
every date_trunc('hour',whatever) between 6-1-2005 and 6-10-2005 and
get a list that
Hi,
here's the same for minutes. Just change the interval to 'hour' and
the series count to '24':
select
current_date || ' ' || mytimequery.mytime
as dates
from
(select
(TIME '00:00:00' + myintervalquery.myinterval)::time as mytime
from
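For the original question (every hour-truncated timestamp between two dates), the same series idea can be checked outside the database; a Python sketch under the assumption that both endpoints are inclusive:

```python
from datetime import datetime, timedelta

def hourly_series(start, end):
    """Every hour-truncated timestamp from start to end, inclusive."""
    out = []
    t = start.replace(minute=0, second=0, microsecond=0)
    while t <= end:
        out.append(t)
        t += timedelta(hours=1)
    return out

# 2005-06-01 through 2005-06-10: nine full days of hours plus the
# final midnight.
hours = hourly_series(datetime(2005, 6, 1), datetime(2005, 6, 10))
```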
When I execute the query, I get an error message.
test= SELECT to_timestamp('00:00:05.601 SAMST Tue Jun 28 2005',
'HH24:MI:SS.MS TZ Dy Mon DD ');
ERROR: TZ/tz not supported
How can I convert '00:00:05.601 SAMST Tue Jun 28 2005' (varchar type)
to timestamp with time zone?
On Mon, Jun 27, 2005 at 18:46:58 +0300,
Catalin Constantin [EMAIL PROTECTED] wrote:
Hello,
I have a pretty big database with about 200,000 rows in the main
table, plus some other tables with FKs to it.
I have to calculate some numbers for each entry at a certain
Hi.
I can't find pgpass.conf file. It should be in Application Data
subdirectory, but there is no PostgreSQL subdirectory in Application Data
directory (!?). I couldn't find pgpass.conf even by searching the hard
disk..
Regards,
Zlatko
- Original Message -
From: Magnus Hagander
this is my schema for the table with the issue !
# \d url_importance
Table "public.url_importance"
 Column | Type   | Modifiers
--------+--------+-----------
 url_id | bigint | default
That's because they don't exist. You need to create them. I did it on
WinXP and it works fine.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Zlatko Matic
Sent: Tuesday, June 28, 2005 9:08 AM
To: Magnus Hagander; Andreas; pgsql-general@postgresql.org
Zlatko Matic schrieb:
I can't find pgpass.conf file. It should be in Application Data
subdirectory, but there is no PostgreSQL subdirectory in Application
Data directory (!?). I couldn't find pgpass.conf even by searching the
hard disk..
You have to create the subdirectory and the file yourself.
Zlatko Matic [EMAIL PROTECTED] writes:
Hi.
I can't find pgpass.conf file. It should be in Application Data
subdirectory, but there is no PostgreSQL subdirectory in Application
Data directory (!?). I couldn't find pgpass.conf even by searching the
hard disk..
I'm pretty sure it's not created
I'm using PostgreSQL with Drupal. I have one small problem. I would
like to keep the Drupal tables available only to a small group of users
(apache, root and myself). I've set these users up in a group.
The problem is that every time I want to add a new Drupal module to the
database, I need
Sergey Levchenko [EMAIL PROTECTED] writes:
How can I convert '00:00:05.601 SAMST Tue Jun 28 2005' (varchar type)
to timestamp with time zone?
Just casting it would work, except that SAMST is not one of the time
zone abbreviations known to Postgres. If you're desperate you could
add an entry
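One workaround outside the database is to split off the unrecognized abbreviation and attach the offset yourself; a Python sketch, where mapping SAMST to UTC+5 (Samara summer time) is an assumption you should verify:

```python
from datetime import datetime, timedelta, timezone

# Offsets for abbreviations the server doesn't recognize.
# SAMST -> UTC+5 is an assumption (Samara summer time); verify it.
TZ_OFFSETS = {"SAMST": timezone(timedelta(hours=5), "SAMST")}

def parse_with_abbrev(raw):
    """Split out the abbreviation, parse the rest, re-attach the offset."""
    time_part, abbrev, rest = raw.split(" ", 2)
    naive = datetime.strptime(time_part + " " + rest,
                              "%H:%M:%S.%f %a %b %d %Y")
    return naive.replace(tzinfo=TZ_OFFSETS[abbrev])

ts = parse_with_abbrev("00:00:05.601 SAMST Tue Jun 28 2005")
```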
On Jun 27, 2005, at 8:42 PM, Bruno Wolff III wrote:
Google is your friend. There are places that sell very well kept
zipcode databases for under $50.
The US government gives it away for free. Look for tiger.
That is stale data.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
I would appreciate an example.
Thanks.
- Original Message -
From: Relyea, Mike [EMAIL PROTECTED]
To: Zlatko Matic [EMAIL PROTECTED];
pgsql-general@postgresql.org
Sent: Tuesday, June 28, 2005 3:56 PM
Subject: Re: [GENERAL] automating backup ?
That's because they don't exist. You
* Wayne Johnson ([EMAIL PROTECTED]) wrote:
Is there a way to do this automatically? Say, to make all new objects
accessible (or even owned) by a group? Something like the sticky bit in
a directory on UNIX.
8.1 is expected to have Roles support in it, which merges users and
groups into one
I'm trying to test a feature I see in the 8.1devel documentation. I
figured I'd checkout a cvs working copy. Following the doc I:
cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login
but that just hangs and eventually times out. I looked at CVSup, but I
found no binaries for CVSup at
1) Create the directory %APPDATA%\postgresql
in my case it's C:\Documents and
Settings\Administrator\Application Data\postgresql
2) Create the file %APPDATA%\postgresql\pgpass.conf
I created it with Notepad
3) Put the necessary information into %APPDATA%\postgresql\pgpass.conf
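For reference, each pgpass.conf line uses the colon-separated format from the manual; the concrete values below are placeholders, not from this thread:

```
hostname:port:database:username:password
localhost:5432:mydb:postgres:secret
```

A `*` in a field matches any value, so the port field can be left as a wildcard.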
-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
Sent: Monday, June 27, 2005 6:55 PM
To: Dann Corbit
Cc: Ben-Nes Yonatan; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Populating huge tables each day
On Mon, Jun 27, 2005 at 01:05:42PM -0700, Dann Corbit wrote:
Will it be possible to use the out params to return more than one row?
will the params act as a composite type so they can be used in a set
returning function?
Thanks,
Tony
---(end of broadcast)---
TIP 8: explain analyze is your friend
Now I have pgpass.conf file in D:\Documents and Settings\Zlatko\Application
Data\postgresql
content of pgpass.conf is:
localhost:*:MONITORINGZ:postgres:tralalala
content of backup_script.bat is:
cd D:\Program Files\PostgreSQL\8.0\bin
pg_dumpall > D:\MONITORINGZ_DUMPALL -U postgres
still prompts
Thanks for the replies! I've adopted the generate_series method, it's
absolutely perfect. I didn't have the dates in a table yet, I needed a
method to generate them from scratch, and this will do nicely.
Thanks again, and hopefully I'll be able to contribute back someday!
On Tue, Jun 28, 2005 at 10:36:58AM -0700, Dann Corbit wrote:
Nope, truncate is undoubtedly faster. But it also means you would have
downtime as you mentioned. If it were me, I'd probably make the
trade-off of using a delete inside a transaction.
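The appeal of a delete inside a transaction is that the swap is all-or-nothing: if the reload fails, a rollback leaves the old rows untouched. A sketch using Python's stdlib sqlite3 standing in for Postgres; the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE daily_load (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO daily_load (val) VALUES (?)", [("old",)] * 3)

# Replace the table's contents atomically: if the bulk insert fails,
# ROLLBACK restores yesterday's rows.
try:
    conn.execute("BEGIN")
    conn.execute("DELETE FROM daily_load")
    conn.executemany("INSERT INTO daily_load (val) VALUES (?)",
                     [("new",)] * 5)
    conn.execute("COMMIT")
except Exception:
    conn.execute("ROLLBACK")
    raise

count = conn.execute("SELECT count(*) FROM daily_load").fetchone()[0]
```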
For every record in a bulk-loaded table?
On Tue, 2005-06-28 at 17:57 +, Matt Miller wrote:
Following the doc I:
cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login
but that just hangs and eventually times out.
...
What am I doing wrong?
I had a problem on my end. CVS checkout is now working.
Matt Miller [EMAIL PROTECTED] writes:
I'm trying to test a feature I see in the 8.1devel documentation. I
figured I'd checkout a cvs working copy. Following the doc I:
cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login
but that just hangs and eventually times out.
We were having
On Tue, 2005-06-28 at 18:35 -0400, Tom Lane wrote:
Matt Miller [EMAIL PROTECTED] writes:
I'm trying to test a feature I see in the 8.1devel documentation. I
figured I'd checkout a cvs working copy. Following the doc I:
cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot login
but
I've come into a situation where I will often need to merge two
primary keys, with numerous foreign keys hanging off of them. For
instance:
CREATE TABLE people (
peopleid SERIAL PRIMARY KEY,
firstname TEXT NOT NULL,
lastname TEXT NOT NULL
);
CREATE TABLE users (
username TEXT
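One common approach (not spelled out in the truncated message above) is to repoint every referencing row at the surviving id and then delete the loser, all in one transaction. A sketch with stdlib sqlite3 and made-up data, mirroring the people/users tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (peopleid  INTEGER PRIMARY KEY,
                     firstname TEXT NOT NULL,
                     lastname  TEXT NOT NULL);
CREATE TABLE users  (username TEXT PRIMARY KEY,
                     peopleid INTEGER REFERENCES people);
INSERT INTO people VALUES (1, 'Ann', 'Smith'), (2, 'Ann', 'Smith');
INSERT INTO users  VALUES ('ann', 1), ('asmith', 2);
""")

def merge_people(conn, keep, drop):
    """Repoint foreign keys from drop to keep, then remove drop."""
    with conn:  # one transaction: both steps succeed or neither does
        conn.execute("UPDATE users SET peopleid = ? WHERE peopleid = ?",
                     (keep, drop))
        conn.execute("DELETE FROM people WHERE peopleid = ?", (drop,))

merge_people(conn, 1, 2)
```

With more referencing tables, the UPDATE step repeats once per foreign key, which is why people often wrap it in a function.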
Zlatko Matic schrieb:
Now I have pgpass.conf file in D:\Documents and
Settings\Zlatko\Application Data\postgresql
content of pgpass.conf is:
localhost:*:MONITORINGZ:postgres:tralalala
content of backup_script.bat is:
cd D:\Program Files\PostgreSQL\8.0\bin
pg_dumpall > D:\MONITORINGZ_DUMPALL -U
Is there a way to create a variable specific to the current working
connection? Like a connection context or some such? I'm trying to take a
variable in a query, and allow it to be used by a rule.
Thanks,
Steve
I would like to write a postgres extension type which represents a btree of data
and allows me to access and modify elements within that logical btree. Assume
the type is named btree_extension, and I have the table:
CREATE TABLE example (
a TEXT,
b TEXT,
c
On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
I would like to write a postgres extension type which represents a btree of
data and allows me to access and modify elements within that logical btree.
Assume the type is named btree_extension, and I have the table:
CREATE TABLE
Alvaro Herrera [EMAIL PROTECTED] writes:
On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
If, for a given row, the value of c is, say, approximately 2^30 bytes
large, then I would expect it to be divided up into 8K chunks in an
external table, and I should be able to fetch
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
If, for a given row, the value of c is, say, approximately 2^30 bytes
large, then I would expect it to be divided up into 8K chunks in an
external table, and I should be
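The chunk arithmetic behind that expectation is simple, taking the 8K figure from the message at face value (PostgreSQL's actual TOAST chunk size is somewhat smaller than a full page):

```python
value_bytes = 2 ** 30      # a roughly 1 GB value, as in the message
chunk_bytes = 8 * 1024     # 8K chunks, per the poster's assumption
chunks = value_bytes // chunk_bytes
```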