Hi!!
Of course the following is possible in various ways, but I wonder
if there is an elegant and simple solution:
A part of an apache log:
remote_host  varchar(50)
request_uri  varchar(50)
ts           timestamp with time zone
Now I want to see which host has made how many requests. This
is
On Fri, 6 May 2005 13:34:26 +0530
"Ramakrishnan Muralidharan" <[EMAIL PROTECTED]> wrote:
> SELECT abc.remote_host , c , abc.request_uri , a.t
> FROM abc ,
>   ( select remote_host , count(*) as c , max( ts ) as t from abc group by remote_host ) as a
> where a.remote_host = abc.remote_host and ab
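A guess at the complete statement, assuming the truncated condition pairs each
host with its most recent request (abc.ts = a.t):

    SELECT abc.remote_host, a.c, abc.request_uri, a.t
    FROM abc,
         ( SELECT remote_host, count(*) AS c, max(ts) AS t
           FROM abc
           GROUP BY remote_host ) AS a
    WHERE a.remote_host = abc.remote_host
      AND abc.ts = a.t;

This yields one row per host (more if two requests share the newest timestamp)
with its total request count and the URI of the latest request.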
On Fri, May 06, 2005 at 09:25:35 +0200,
Vortex <[EMAIL PROTECTED]> wrote:
> Hi!!
>
> Of course the following is possible in various ways but i wonder
> if there is an elegant and simple solution:
>
> A part from apache log:
> remote_host  varchar(50)
> request_uri  varchar(50)
> ts
I am not clear on what makes this work.
I am going live Sunday and have thus far been using SQL_ASCII.
I still have the feeling I should be using something else,
but when I used Unicode my conversion from MSSQL blew up with an encoding error for
a char that wasn't plain ASCII (i.e. French or
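If the worry is going live on SQL_ASCII, note that SQL_ASCII performs no
encoding validation at all, which is why the import succeeded there while
Unicode rejected bytes that are not valid UTF-8 (such as Latin-1 accented
French characters). A hedged sketch of creating a Unicode target instead
('mydb' is a placeholder; 'UNICODE' is the 8.0-era spelling of UTF8):

    CREATE DATABASE mydb WITH ENCODING 'UNICODE';

The MSSQL dump would then need converting to UTF-8 (e.g. with iconv) before
loading.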
how do I get a dump of a PostgreSQL database without the data?
--
regards
kg
http://www.livejournal.com/users/lawgon
tally ho! http://avsap.sourceforge.net
how do I get a dump of a PostgreSQL database without the data?
% pg_dump --help
pg_dump dumps a database as a text file or to other formats.
Usage:
pg_dump [OPTION]... [DBNAME]
...
  -s, --schema-only        dump only the schema, no data
-philip
On Fri, 2005-05-06 at 04:14, Kenneth Gonsalves wrote:
> how do i get a dump of a postgresql database without the data?
pg_dump -s for the schema
pg_dumpall -g for the globals, like usernames and all.
Good Day,
I'm hoping this is the right place to send this. I have a query that's
causing me some grief trying to optimize it. The query cost is fine
until I order the data set. Mind you, after it's been run and cached,
subsequent calls to it are near-instant. The query in question is:
select
while you weren't looking, Derek Buttineau|Compu-SOLVE wrote:
> I'm hoping this is the right place to send this.
The PostgreSQL Performance list, pgsql-performance@postgresql.org,
would be more appropriate. I'm copying my followup there as well.
As for your query, almost all the time is actually
Thanks for the response :)
That's 50-ish ms versus 80-odd seconds.
It seems to me a merge join might be more appropriate here than a
nestloop. What's your work_mem set at? Off-the-cuff numbers show the
dataset weighing in at under ten mbytes.
Provided it's not already at least that big, and yo
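For reference, a hedged way to experiment per-session (10240 is an
illustrative figure; in this release the value is in kilobytes):

    SHOW work_mem;
    SET work_mem = 10240;  -- ~10 MB, this session only; sorts spill to disk beyond it

Re-running the query after the SET shows whether the planner moves off the
nestloop.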
Hi,
I have a Java app that uses Hibernate to do queries.
One query on 6.5 million records takes about 15 seconds, while the
same one (taken from the SQL that shows in the console - I configured
Hibernate to show_sql) takes about 50 ms when done with pgadmin3.
This is a simple select. Here is the
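One common culprit (an assumption here, since the query itself is cut off) is
that the JDBC driver sends a parameterized prepared statement, which can get a
generic plan, while pgadmin3 sends the literal SQL. A hedged way to reproduce
the driver's side for comparison (table and column names hypothetical):

    PREPARE q(int) AS SELECT * FROM big_table WHERE user_id = $1;
    EXPLAIN ANALYZE EXECUTE q(42);

Comparing that plan against EXPLAIN ANALYZE of the literal query usually shows
where the 15 seconds go.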
Correct. A static binary is perfectly capable of dynamically loading shared
objects; therefore "-static" should not shadow "-E". I will forward this to the
linker folks. In the meantime, if you guys can provide self-sufficient
conversion shared objects by any chance in some future release perhaps, that
Hi
I have tried with the following example
CREATE TABLE products(
id serial primary key,
description varchar(50)
);
CREATE TABLE vendding(
id serial primary key,
date_ date,
product integer
);
insert into products values ( 1 , 'Test product 1' );
insert into products values (
Christophe Geers wrote:
I came as far as getting the first returned result by performing a
SELECT INTO mytype from the function calculate_cost(...), etc., which is normal
I guess, since a SELECT INTO only returns a single row according to the
manual. However, is there a way to loop / iterate over all of the re
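For iterating the whole result set, plpgsql's FOR ... IN ... LOOP visits every
row where SELECT INTO keeps only the first. A minimal self-contained sketch
(the costs table stands in for the real function's output; names hypothetical):

    CREATE TABLE costs (id int, amount numeric);

    CREATE OR REPLACE FUNCTION total_cost() RETURNS numeric AS $$
    DECLARE
        r RECORD;
        total numeric := 0;
    BEGIN
        FOR r IN SELECT id, amount FROM costs LOOP  -- all rows, not just the first
            total := total + r.amount;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;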
Hi, thanks for your answers.
I asked here because I had thought the problem was in
how Postgres manages connections.
Sorry
--- Yasir Malik <[EMAIL PROTECTED]> wrote:
> > Hi,
> > I need to connect to 2 different Postgres 8.0.0
> > databases located on the same machine using the same
> > PHP sc
hi, I have a little problem with a trigger and the ON DELETE CASCADE
clause.
I'm working on a db that represents Users and Messages. A message can be
owned by more than one user. If I delete a user, all his related objects
are deleted too (ON DELETE CASC
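For a many-to-many ownership like this, the cascades usually belong on the
link table's foreign keys, so deleting a user removes only his ownership rows.
A minimal sketch (names hypothetical):

    CREATE TABLE users    (id serial PRIMARY KEY);
    CREATE TABLE messages (id serial PRIMARY KEY);
    CREATE TABLE message_owner (
        user_id    int REFERENCES users(id)    ON DELETE CASCADE,
        message_id int REFERENCES messages(id) ON DELETE CASCADE,
        PRIMARY KEY (user_id, message_id)
    );

Messages left with no owners would then need separate cleanup, e.g. by a
trigger on message_owner.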
Ragnar Hafstað wrote:
another possibility (total guess) is a functional index
create index func_id_date on user_url(user_id,(add_date::date));
SELECT ... ORDER BY uu.user_id DESC ,uu.add_date::date DESC LIMIT 7;
I have no idea if this will work, or if the planner will use such
an index.
To make sur
Hi
create table abc
(
    remote_host varchar(50),
    request_uri varchar(50),
    ts timestamp
);
SELECT abc.remote_host , c , abc.request_uri , a.t
FROM abc ,
  ( select remote_host , count(*) as c , max( ts ) as t from abc group by remote_host ) as a
where a.remote_host = abc.remote_host and abc.t
On Apr 30, 2005, at 15:10, Cosimo Streppone wrote:
1) is it possible to know Pg backend uptime with
SQL queries? Or must I look at postmaster.pid file?
or even something else?
I think that the PID file is your best bet.
2) how can I know the ip addresses/hostnames of clients
connecting to
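For what it's worth, releases after this thread expose both directly; not an
option on 8.0, but worth noting for later:

    SELECT pg_postmaster_start_time();                      -- server start time, hence uptime (8.1+)
    SELECT client_addr, client_port FROM pg_stat_activity;  -- connected clients (8.1+)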
Hi,
I have written the following trigger assuming the application can pass a NULL
value for mtime, which can then be handled on the server side by the trigger.
CREATE TABLE TEST3
( ID INT4 NOT NULL,
  DDATE TIMESTAMP
);
CREATE OR REPLACE FUNCTION Updatemtime()
RETURNS TRIGGER AS $Updatemtime$
DE
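The body is cut off above; a guess at the intended shape, defaulting the
timestamp server-side when the application passes NULL (DDATE taken from the
table definition, since no mtime column appears in it):

    CREATE OR REPLACE FUNCTION Updatemtime() RETURNS TRIGGER AS $Updatemtime$
    BEGIN
        IF NEW.DDATE IS NULL THEN
            NEW.DDATE := CURRENT_TIMESTAMP;  -- fill in when the client sent NULL
        END IF;
        RETURN NEW;
    END;
    $Updatemtime$ LANGUAGE plpgsql;

    CREATE TRIGGER test3_mtime BEFORE INSERT OR UPDATE ON TEST3
        FOR EACH ROW EXECUTE PROCEDURE Updatemtime();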
Quoting Mikey <[EMAIL PROTECTED]>:
> What about this as a rule. It seems to work for me.
>
> create or replace rule critter_counter as
> on INSERT to critter do (
>
> insert into zoostats
> select distinct new.legs
> where new.legs not in (select legs from zoostats);
>
>u
Hi,
Your mail just contains the database structure and project spec, and nothing
about the problem. Please let me know what help you are looking for.
Regards,
R.Muralidharan
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Lord Knight of the
Black Rose
Hi,
I do not think there is a performance issue due to triggers on a larger database.
For improving performance, it is better to write separate triggers for insert,
update, and delete rather than combining them in a single trigger.
Periodic VACUUM and REINDEX of the table will improve the performan
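A sketch of the suggested split (the trigger function names are placeholders
for functions doing the respective work):

    CREATE TRIGGER mytable_ins AFTER INSERT ON mytable
        FOR EACH ROW EXECUTE PROCEDURE on_insert();
    CREATE TRIGGER mytable_upd AFTER UPDATE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE on_update();
    CREATE TRIGGER mytable_del AFTER DELETE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE on_delete();

Each function can then skip the TG_OP checks a combined trigger would need.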
Someone else suggested using coalesce to me as well. Isn't your function
equivalent to mine? If so, I think the most elegant of these solutions is
closer to the one I proposed (with Stephen's modification). Unfortunately,
we've still got the concurrency problems that Stephen pointed out.
I sure
>I believe you can add partial unique indexes to cover the case where a
>column is null, but if you have multiple nullable columns you need to
>worry about you end up with a bunch of indexes.
Hmmm. I hadn't thought of that, thanks. Yes, the indexes would be unnecessary
though. Speaking of, shou
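For the record, the partial-index approach for a single nullable column looks
like this (hypothetical table):

    CREATE TABLE t (a int NOT NULL, b int);
    CREATE UNIQUE INDEX t_a_b     ON t (a, b) WHERE b IS NOT NULL;
    CREATE UNIQUE INDEX t_a_bnull ON t (a)    WHERE b IS NULL;

Together the two indexes enforce uniqueness as if NULL were an ordinary value
in b; with several nullable columns the number of indexes grows quickly, which
is the downside noted above.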
The purpose of using static linking is to reduce dependencies on
shared libraries (dependencies on different types and versions of Linux), so
an instance of PostgreSQL, say built on SuSE 9.0, would still work on
Mandrake 10.1. Yes, it gets a bit bulky and has a number of disadvantages
over dyna
CREATE OR REPLACE FUNCTION mean_and_variancepop(var_table varchar,
    var_col varchar) RETURNS float8[] AS $$
DECLARE
    a float;
    b float;
BEGIN
    a := 1.02;
    b := 3.05;
    RETURN ARRAY[a, b];
END;
$$ LANGUAGE plpgsql;
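Presumably the constants are stubs for the real calculations. Calling it and
unpacking the result would look like:

    SELECT mean_and_variancepop('mytable', 'mycol');               -- {1.02,3.05}
    SELECT (mean_and_variancepop('mytable', 'mycol'))[1] AS mean;  -- first element

Note the body never uses var_table or var_col; computing over a table named at
runtime would need EXECUTE with a query string built from the arguments.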
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] B