On Fri, 6 May 2005 13:34:26 +0530
Ramakrishnan Muralidharan [EMAIL PROTECTED] wrote:
SELECT abc.remote_host , c , abc.request_uri , a.t FROM abc , ( select
remote_host , count(*) as c , max( ts ) as t from abc group by remote_host )
as a
where a.remote_host = abc.remote_host and abc.ts =
On Fri, May 06, 2005 at 09:25:35 +0200,
Vortex [EMAIL PROTECTED] wrote:
Hi!!
Of course the following is possible in various ways, but I wonder
if there is an elegant and simple solution:
A part from apache log:
remote_host  varchar(50)
request_uri  varchar(50)
ts
I am not clear on what makes this work?
I am going live Sunday and have thus far been using SQL_ASCII.
I still have the feeling I should be using something else,
but when I used Unicode my conversion from MSSQL blew up on an encoding error for
a char that wasn't plain ASCII (i.e. French or
how do i get a dump of a postgresql database without the data?
--
regards
kg
http://www.livejournal.com/users/lawgon
tally ho! http://avsap.sourceforge.net
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
how do i get a dump of a postgresql database without the data?
% pg_dump --help
pg_dump dumps a database as a text file or to other formats.
Usage:
pg_dump [OPTION]... [DBNAME]
.
-s, --schema-only        dump only the schema, no data
-philip
On Fri, 2005-05-06 at 04:14, Kenneth Gonsalves wrote:
how do i get a dump of a postgresql database without the data?
pg_dump -s for the schema
pg_dumpall -g for the globals, like usernames and all.
Good Day,
I'm hoping this is the right place to send this. I have a query that's
causing me some grief trying to optimize it. The query cost is fine
until I order the data set. Mind you, after it's been run and cached,
subsequent calls to it are near instant. The Query in question is:
while you weren't looking, Derek Buttineau|Compu-SOLVE wrote:
I'm hoping this is the right place to send this.
The PostgreSQL Performance list, pgsql-performance@postgresql.org
would be more appropriate. I'm copying my followup there, as well.
As for your query, almost all the time is actually
Thanks for the response :)
That's 50-ish ms versus 80-odd seconds.
It seems to me a merge join might be more appropriate here than a
nestloop. What's your work_mem set at? Off-the-cuff numbers put the
dataset in the sub-ten-megabyte range.
Provided it's not already at least that big, and
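As a session-level sketch of the suggestion above (16384 KB is an illustrative value, not a recommendation; in 8.x-era releases work_mem is set in kilobytes):

```sql
-- Affects only the current session; postgresql.conf is untouched.
SET work_mem = 16384;     -- 16 MB, enough to hold the estimated dataset

-- Re-run the problem query under the new setting to compare plans:
EXPLAIN ANALYZE
SELECT 1;  -- placeholder; substitute the actual ordered query here
```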
Hi,
I have a java app that uses hibernate to do queries.
One query on 6.5 million records takes about 15 seconds, while the
same one (taken from the SQL that shows in the console - I configured
hibernate with show_sql) takes about 50 ms when done with pgadmin3.
This is a simple select. Here is
Correct. A static binary is perfectly capable of dynamic-loading shared
objects; therefore -static should not shadow -E. I will forward this to
linker folks. In the meantime, if you could by any chance provide
self-sufficient conversion shared objects in some future release, that
Hi
I have tried with the following example
CREATE TABLE products(
id serial primary key,
description varchar(50)
);
CREATE TABLE vendding(
id serial primary key,
date_ date,
product integer
);
insert into products values ( 1 , 'Test product 1' );
insert into products values (
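Given the two tables above, tying a sale back to its product is a plain join; the foreign key below is an assumption (the original DDL does not declare one):

```sql
-- Hypothetical constraint: vendding.product must reference products.id
ALTER TABLE vendding
  ADD CONSTRAINT vendding_product_fk
  FOREIGN KEY (product) REFERENCES products (id);

-- List each sale with its product description
SELECT v.id, v.date_, p.description
FROM vendding v
JOIN products p ON p.id = v.product;
```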
Christophe Geers wrote:
I got as far as retrieving the first returned result by performing a
SELECT INTO mytype FROM calculate_cost(...)...etc., which is normal
I guess, since a SELECT INTO only returns a single row according to the
manual. However, is there a way to loop / iterate over all of the
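If calculate_cost is a set-returning function, PL/pgSQL can walk every row it yields with a FOR ... IN SELECT loop instead of SELECT INTO. A sketch; the function signature and the 'cost' field are assumptions from the post, not confirmed definitions:

```sql
CREATE OR REPLACE FUNCTION sum_costs() RETURNS numeric AS $$
DECLARE
  rec   mytype;        -- assumed composite type from the original post
  total numeric := 0;
BEGIN
  -- Iterate over every row the set-returning function produces
  FOR rec IN SELECT * FROM calculate_cost() LOOP
    total := total + rec.cost;   -- 'cost' field is hypothetical
  END LOOP;
  RETURN total;
END;
$$ LANGUAGE plpgsql;
```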
Hi, thanks for your answers.
I asked here because I had thought the problem was in
how Postgres manages connections.
Sorry
--- Yasir Malik [EMAIL PROTECTED] wrote:
Hi,
I need to connect to two different Postgres 8.0.0
databases located on the same machine using the
same
PHP script with
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
hi, I have a little problem with a trigger and the ON DELETE CASCADE
statement.
I'm working on a db that represents Users and Messages. A message can be
owned by more than one user. If I delete a user, all his related objects
are deleted too (ON DELETE
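For the many-to-many case described above, the usual approach is to put ON DELETE CASCADE on the join table: deleting a user then removes only his ownership rows, and a message disappears only if you also clean up messages nobody owns. A sketch with assumed table and column names:

```sql
CREATE TABLE users    (id serial PRIMARY KEY, name text);
CREATE TABLE messages (id serial PRIMARY KEY, body text);

-- Join table: ownership rows vanish when either side is deleted
CREATE TABLE message_owners (
  user_id    int REFERENCES users (id)    ON DELETE CASCADE,
  message_id int REFERENCES messages (id) ON DELETE CASCADE,
  PRIMARY KEY (user_id, message_id)
);

-- Optional cleanup: drop messages no longer owned by anyone
DELETE FROM messages m
WHERE NOT EXISTS
  (SELECT 1 FROM message_owners o WHERE o.message_id = m.id);
```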
Ragnar Hafstað wrote:
another possibility (total guess) is a functional index
create index func_id_date on user_url(user_id,(add_date::date));
SELECT ... ORDER BY uu.user_id DESC ,uu.add_date::date DESC LIMIT 7;
I have no idea if this will work, or if the planner will use such
an index.
To make
Hi
create table abc
(
remote_host varchar(50), request_uri varchar(50), ts timestamp
);
SELECT abc.remote_host , c , abc.request_uri , a.t FROM abc , ( select
remote_host , count(*) as c , max( ts ) as t from abc group by remote_host ) as
a
where a.remote_host = abc.remote_host and
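The query above is cut off in the quote; completed under the natural assumption that the final predicate matches each row's timestamp against its group maximum (latest request per remote_host), it would read:

```sql
SELECT abc.remote_host, a.c, abc.request_uri, a.t
FROM abc,
     (SELECT remote_host, count(*) AS c, max(ts) AS t
      FROM abc
      GROUP BY remote_host) AS a
WHERE a.remote_host = abc.remote_host
  AND abc.ts = a.t;   -- assumed completion: keep the row carrying the max timestamp
```

Note that if a host logged two requests with the same maximum timestamp, both rows come back.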
On Apr 30, 2005, at 15:10 , Cosimo Streppone wrote:
1) is it possible to know Pg backend uptime with
SQL queries? Or must I look at postmaster.pid file?
or even something else?
I think that the PID file is your best bet.
2) how can I know the ip addresses/hostnames of clients
connecting
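On both questions, the statistics views and (in later releases) a builtin function can answer via SQL; availability of these columns and of pg_postmaster_start_time() depends on the server version:

```sql
-- Connected clients (client_addr is NULL for Unix-socket connections)
SELECT datname, usename, client_addr
FROM pg_stat_activity;

-- Postmaster uptime, where pg_postmaster_start_time() is provided
SELECT now() - pg_postmaster_start_time() AS uptime;
```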
Hi,
I have written the following trigger assuming the application can pass a NULL
value for mtime, to be handled on the server side by the trigger.
CREATE TABLE TEST3
( ID INT4 NOT NULL,
DDATE TIMESTAMP
)
CREATE OR REPLACE FUNCTION Updatemtime()
RETURNS TRIGGER AS $Updatemtime$
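The function body is cut off above; a completed version might look like the following. This is a sketch: the mtime column is assumed from the description (the TEST3 definition shown actually has DDATE), and the trigger simply stamps the row whenever the application passes NULL:

```sql
CREATE TABLE test3 (
  id    int4 NOT NULL,
  mtime timestamp        -- assumed: the column the trigger maintains
);

CREATE OR REPLACE FUNCTION updatemtime() RETURNS trigger AS $Updatemtime$
BEGIN
  -- If the application supplied NULL, fill in the time server-side
  IF NEW.mtime IS NULL THEN
    NEW.mtime := now();
  END IF;
  RETURN NEW;
END;
$Updatemtime$ LANGUAGE plpgsql;

CREATE TRIGGER trg_updatemtime
  BEFORE INSERT OR UPDATE ON test3
  FOR EACH ROW EXECUTE PROCEDURE updatemtime();
```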
Quoting Mikey [EMAIL PROTECTED]:
What about this as a rule? It seems to work for me.
create or replace rule critter_counter as
on INSERT to critter do (
insert into zoostats
select distinct new.legs
where new.legs not in (select legs from zoostats);
update
Hi,
Your mail just contains the database structure and project spec, and nothing
about the problem. Please let me know what help you are looking for.
Regards,
R.Muralidharan
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Lord Knight of the
Black
Hi,
I do not think there is a performance issue due to a trigger on a larger database.
For improving performance, it is better to write separate triggers for INSERT,
UPDATE, and DELETE rather than combining them in a single trigger.
Periodic VACUUM and REINDEX of the table will improve the
Someone else suggested using coalesce to me as well. Isn't your function
equivalent to mine? If so, I think the most elegant of these solutions is
closer to the one I proposed (with Stephen's modification). Unfortunately,
we've still got the concurrency problems that Stephen pointed out.
I
I believe you can add partial unique indexes to cover the case where a
column is null, but if you have multiple nullable columns you need to
worry about you end up with a bunch of indexes.
Hmmm. I hadn't thought of that, thanks. Yes, the indexes would be unnecessary
though. Speaking of, should
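The partial-unique-index idea mentioned above, sketched for a single nullable column (table and column names are illustrative):

```sql
-- Enforce uniqueness of (a, b) only where b is present;
-- rows with b IS NULL are exempt and may repeat.
CREATE UNIQUE INDEX t_a_b_uniq ON t (a, b) WHERE b IS NOT NULL;

-- The complementary case needs its own partial index, which is
-- where the "bunch of indexes" concern comes from:
CREATE UNIQUE INDEX t_a_uniq ON t (a) WHERE b IS NULL;
```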
The purpose of using static linking is to reduce dependencies on
shared libraries (dependencies on different types and versions of Linux), so
an instance of PostgreSQL, say built on SuSE 9.0, would still work on
Mandrake 10.1. Yes, it gets a bit bulky and has a number of disadvantages
over
CREATE OR REPLACE FUNCTION mean_and_variancepop(var_table varchar,
var_col varchar) RETURNS float8[] AS $$
DECLARE
  a float;
  b float;
BEGIN
  a := 1.02;
  b := 3.05;
  RETURN ARRAY[a, b];
END;
$$ LANGUAGE plpgsql;
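Called from SQL, the returned array can be used whole or unpacked by index (note the extra parentheses around the function call when subscripting; the table and column arguments are ignored by this stub):

```sql
SELECT mean_and_variancepop('some_table', 'some_col');       -- {1.02,3.05}
SELECT (mean_and_variancepop('some_table', 'some_col'))[1];  -- 1.02
```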
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]