Hi,
I'm trying to use substr() and position() functions to extract the full host
name (and later a domain) from a column that holds URLs.
This is what I'm trying, but it clearly doesn't do the job.
= select substr(href, position('://' in href)+3, position('://' in href)+3+position('/' in
Hi,
Thanks, perfect! (though I'll have to look into the regex warning):
= select substring( href from '.*://\([^/]*)' ) as hostname from url where
id < 10;
WARNING: nonstandard use of escape in a string literal at character 29
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
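Two ways to make the warning go away, sketched here (table and column names follow the thread's example):

```sql
-- 1) Use the escape-string syntax the HINT suggests:
select substring(href from E'.*://([^/]*)') as hostname from url;

-- 2) Simpler: the backslash before '(' is unnecessary in a POSIX
--    regex capture group, so drop it entirely:
select substring(href from '.*://([^/]*)') as hostname from url;
```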
So
Ah, I figured out what to look for and found my uniq -c solution:
select substring( href from '.*://([^/]*)' ) as hostname,
       count(substring( href from '.*://([^/]*)' ))
from url
where id < 10
group by hostname
order by count desc;
hostname | count
Hi Markus, Tom,
"Higher statistics for this column" - hm, I'd love to try changing it to see
how that changes things, but I'm afraid I don't know how to do that. How can I
change the statistics target value for this column?
Ah, I think I found the place:
= select * from pg_attribute where
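There is no need to poke pg_attribute directly; the target can be set with ALTER TABLE and takes effect at the next ANALYZE. A sketch, reusing the url/href names from earlier in the thread:

```sql
-- Raise the per-column statistics target (the 8.0-era default was 10)
-- and re-gather stats:
ALTER TABLE url ALTER COLUMN href SET STATISTICS 100;
ANALYZE url;

-- The current target is visible in pg_attribute:
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'url'::regclass AND attname = 'href';
```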
Hello Andrew, Markus, and Tom - thanks for all the help! You've just helped a
large Simpy community! :) I'll try to post some performance charts to
http://blog.simpy.com/ shortly. In short, this immediately dropped the load
from 2-3-4-5-6-7+ to circa 0.25.
Thanks!
Otis
Hi,
I'm using PG 8.0.3 and recently spotted a query that was not using a
multi-column index I had created.
The index looks like:
CREATE INDEX . ON FooTable(fkColumnOne, fkColumnTwo);
The query that was not using the index was using:
SELECT a bunch of columns and joins WHERE
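A common reason a multi-column index gets skipped is that the WHERE clause does not constrain the index's leading column. A hypothetical illustration (the index matches the one above; the queries are placeholders):

```sql
CREATE INDEX idx_foo ON FooTable (fkColumnOne, fkColumnTwo);

-- Can use idx_foo (the leading column is constrained):
SELECT * FROM FooTable WHERE fkColumnOne = 1 AND fkColumnTwo = 2;
SELECT * FROM FooTable WHERE fkColumnOne = 1;

-- Typically cannot use idx_foo efficiently (leading column unconstrained):
SELECT * FROM FooTable WHERE fkColumnTwo = 2;
```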
Hello,
I have a little 2-table JOIN, GROUP BY, ORDER BY query that does a sequential
scan on a multi-million row table. I _thought_ I had all the appropriate
indices, but apparently I do not. I was wondering if anyone can spot a way I
can speed up this query.
The query currently takes...
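For a query like this the usual first step is comparing the planner's estimates with reality, then indexing the join key being seq-scanned. A sketch with hypothetical table and column names (the real query appears later in the thread):

```sql
-- Compare estimated vs. actual row counts and timings:
EXPLAIN ANALYZE
SELECT t.tag, COUNT(*)
FROM big_table t
JOIN other_table o ON o.id = t.other_id
WHERE o.user_id = 1
GROUP BY t.tag
ORDER BY COUNT(*) DESC;

-- If big_table is seq-scanned, an index on the join key often helps:
CREATE INDEX idx_big_other ON big_table (other_id);
ANALYZE big_table;
```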
Hi,
Thanks Bruno. That was indeed a redundant DISTINCT. It did reduce the amount
of work, but as you said it doesn't get rid of the sequential scan, which is
the real problem with this query.
Otis
--- Bruno Wolff III [EMAIL PROTECTED] wrote:
Aha! set enable_hashjoin=off did the trick.
The PG version is: 8.0.3
NB: I removed that redundant DISTINCT after the SELECT.
EXPLAIN ANALYZE select userurltag0_.tag as x0_0_, COUNT(*) as x1_0_ from
user_url_tag userurltag0_, user_url userurl1_ where (((userurl1_.user_id=1
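Planner toggles like this are session-local and best used for testing, not as a permanent fix; a sketch of the pattern:

```sql
SET enable_hashjoin = off;
-- run the EXPLAIN ANALYZE query above here and compare the plan/timings
RESET enable_hashjoin;
```

If the plan is genuinely better with hash joins disabled, that usually points at bad row estimates, which raising the statistics target (as discussed earlier in the thread) can fix more durably.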
Hello,
I've got 2 tables, url (U), and bookmark (B), with bookmark pointing to
url via FK.
Somehow I ended up with some rows in B referencing non-existent rows in U.
This sounds super strange and dangerous to me, and it's not clear to me how/why
PG let this happen.
I'm using 8.0.3.
Here are
Hi,
I mistakenly swapped the tables in my email. Here they are, corrected:
Table url:
id SERIAL
CONSTRAINT pk_url_id PRIMARY KEY
Table bookmark:
url_id INTEGER
CONSTRAINT fk_url_id REFERENCES
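Given those definitions, the orphaned rows can be found and removed with an anti-join before the FK is re-validated. A sketch using the table and column names above:

```sql
-- Find bookmark rows whose url_id points at no url row:
SELECT b.*
FROM bookmark b
LEFT JOIN url u ON u.id = b.url_id
WHERE u.id IS NULL;

-- Remove the orphans so the FK constraint can hold again:
DELETE FROM bookmark b
WHERE NOT EXISTS (SELECT 1 FROM url u WHERE u.id = b.url_id);
```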
Hello,
I need to run some SELECT queries that take a while (10+ minutes) to complete,
and I'm wondering about the isolation of the results I get. More precisely,
while my SELECT is running, the DB is being updated by another application, and
I am wondering which, if any, data changes my
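Under the default READ COMMITTED level, each statement sees a snapshot taken when that statement began, so a single long SELECT will not see updates committed while it runs. If several long SELECTs must all see the same snapshot, wrap them in one stricter transaction; a sketch with placeholder queries:

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- both SELECTs now see the same snapshot, regardless of
-- concurrent updates by the other application:
SELECT count(*) FROM url;       -- placeholder query
SELECT count(*) FROM bookmark;  -- placeholder query
COMMIT;
```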
Hello,
I need a bit of help with some SQL.
I have two tables, call them Page and Bookmark.
Each row in Page can have many Bookmarks pointing to it, and
they are joined via a FK (Page.id = Bookmark.page_id).
Page has a 'url' column: Page.url, which has a unique index on it.
My Page.url column
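The excerpt is cut off, but for the schema as described, a per-page bookmark count through the FK typically looks like this (b.id is an assumed primary-key column on Bookmark):

```sql
SELECT p.url, COUNT(b.id) AS bookmarks
FROM Page p
LEFT JOIN Bookmark b ON b.page_id = p.id
GROUP BY p.url;
```

The LEFT JOIN keeps pages with zero bookmarks in the result; an inner join would drop them.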
Hello,
I have 3 tables (2 tables + 1 lookup table that ties them) and running
a straightforward aggregate count(*) query with a couple of joins
takes about 10 seconds (and I need it to be sub-second or so).
Also, I am wondering if this approach is scalable with my row-counts
and my hardware
I am restructuring my DB schema and need help migrating data from 1
column of an existing table to two new tables. I have some Java code
that can do this for me, but it's very slow, and I am now hoping I can
migrate this data with some clever SQL instead.
Here are my 3 tables:
user_data
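The schema details are cut off here, but this kind of migration is usually much faster as two set-based INSERT ... SELECT statements than as row-at-a-time Java code. A sketch with hypothetical table and column names (tag, user_tag, and user_data.tag are assumptions for illustration):

```sql
-- 1) Split the distinct values out into the new lookup table:
INSERT INTO tag (name)
SELECT DISTINCT tag FROM user_data;

-- 2) Build the join table by matching back on the value:
INSERT INTO user_tag (user_id, tag_id)
SELECT d.user_id, t.id
FROM user_data d
JOIN tag t ON t.name = d.tag;
```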
Hello,
I need some basic SQL help. I added a new column to an existing table,
and now I need to copy values from one of the old columns to this new
column.
I need something like this:
FOR pvId IN SELECT id FROM preference_value LOOP
update preference_value SET display_value = (select
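No loop is needed for this; a single set-based UPDATE covers both the same-table and cross-table cases. A sketch (the source column value and the other_table names are assumptions):

```sql
-- If the new value comes from the same table, one statement suffices:
UPDATE preference_value SET display_value = value;

-- If it comes from another table, use a correlated subquery:
UPDATE preference_value pv
SET display_value = (SELECT o.label
                     FROM other_table o
                     WHERE o.pv_id = pv.id);
```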
Hello,
Thank you for all your help, Stijn.
date_part is a standard PG function.
While not the most elegant, the DESC, DESC, DESC solution suggested the
other day works okay for me, so I think I'll use that for now.
Thanks again!
Otis
--- Stijn Vanroye [EMAIL PROTECTED] wrote:
Indeed, it seems
Hello,
I'd love to be able to do that, but I cannot just ORDER BY uu.add_date,
because I do not have uu.add_date in the SELECT part of the statement.
The reason I don't have it there is because I need distinct YYYY, MM, DD
values back.
Is there a trick that I could use to make this more elegant?
Hello,
Hm, doesn't work for me:
[EMAIL PROTECTED] mydb= select distinct date_part('year', uu.add_date),
date_part('month', uu.add_date), date_part('day',
uu.add_date) from uus inner join ui on uus.user_id=ui.id inner join
uu on ui.id=uu.user_id where uus.subscriber_user_id=1 order by
Hello,
I am trying to select distinct dates and order them in the reverse
chronological order. Although the column type is TIMESTAMP, in this
case I want only YYYY, MM, and DD back.
I am using the following query, but it's not returning dates back in
the reverse chronological order:
SELECT
Thank you and Denis ([EMAIL PROTECTED]) - that was it. I needed an
explicit DESC for each ORDER BY criterion.
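The fix, sketched on a single table (the joins and WHERE clause from the earlier message are omitted for brevity): ORDER BY direction applies per expression, not to the whole list, so each key needs its own DESC.

```sql
SELECT DISTINCT date_part('year',  uu.add_date) AS yyyy,
                date_part('month', uu.add_date) AS mm,
                date_part('day',   uu.add_date) AS dd
FROM uu
ORDER BY yyyy DESC, mm DESC, dd DESC;
```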
Otis
--- Stijn Vanroye [EMAIL PROTECTED] wrote:
Hello,
I am trying to select distinct dates and order them in the reverse
chronological order. Although the column type is