[GENERAL] limits of constraint exclusion

2010-11-19 Thread Scott Ribe
Consider the following test setup:

create table t1 (
id int8 primary key,
name varchar not null unique
);

create table t2 (
id int8 primary key,
t1_id int8 not null references t1
);

create table t2a (
primary key(id),
check(t1_id = 1)
) inherits (t2);

create table t2b (
primary key(id),
check(t1_id = 2)
) inherits (t2);

insert into t1 values(1, 'foo');
insert into t1 values(2, 'bar');

Now a simple query shows constraint exclusion; the following shows only t2 and 
t2a being checked:

explain select * from t1, t2 where t1.id = t2.t1_id and t1.id = 1;

But the following shows t2, t2a, and t2b being checked:

explain select * from t1, t2 where t1.id = t2.t1_id and t1.name = 'foo';

And I tried to make the "it only involves a single t1 and matches a single
partition" condition more explicit, but this didn't do it either:

explain with tbl as (select id from t1 where name = 'foo')
select * from t1, t2 where t1.id = t2.t1_id and t1.id = (select id from tbl);

Granted these are near-empty tables, but I'm seeing the same behavior with real 
data and a real (complicated, 6-way join) query, where the vast majority of 
time is spent scanning the indexes of tables that cannot possibly contain any 
matching values.

In that case, there's currently 55,000,000 rows spread over 87 partitions (the 
row count will grow steadily, the partition count will remain mostly the same). 
It's like this one, in that the constraint column is an integer and the check 
constraint is simple equality, not a range or list. And there is no index on 
the constraint column, since for every partition there is only a single value 
in that column--which means the planner winds up using a different index to 
scan the partitions (and it is a highly-selective index, so if it's going to 
scan non-matching partitions, it's not a bad index to use).

I do have a workaround, in that there's only 1 special case where the 
performance matters, and in that case it's easy to directly join with the 
single appropriate partition.
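
A sketch of that workaround against the toy schema above; since the check constraint on t2a is t1_id = 1, and the application (unlike the planner) knows that 'foo' corresponds to t1.id = 1, the join can name the partition directly:

```sql
-- Join the single appropriate partition instead of the parent t2,
-- so no other partition is ever scanned. Assumes the application
-- already knows 'foo' maps to t1.id = 1, i.e. to partition t2a.
SELECT *
FROM t1
JOIN t2a ON t1.id = t2a.t1_id
WHERE t1.name = 'foo';
```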

But I do wonder if I'm missing some way to encourage the planner to exclude 
partitions, or if this is forming into some sort of feature request, where 
potential exclusive constraints are passed through, so that before performing 
the index scan the executor can decide to skip the scan and return no matches?

One additional wrinkle is that though I'm mostly concerned about a single query 
that hits a single partition, I also have a view, and queries against that 
could hit any partition (usually only one, but sometimes multiples).

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] limits of constraint exclusion

2010-11-19 Thread Vick Khera
On Fri, Nov 19, 2010 at 1:41 PM, Scott Ribe scott_r...@elevated-dev.com wrote:
 And I tried to make the "it only involves a single t1 and matches a single
 partition" condition more explicit, but this didn't do it either:

 explain with tbl as (select id from t1 where name = 'foo')
 select * from t1, t2 where t1.id = t2.t1_id and t1.id = (select id from tbl);


The exclusion you have is t1_id = 1, so that's what the planner can look
for.  It is smart enough to deduce that t1.id = t2.t1_id and t1.id = 1
implies t1_id = 1.  However, it has no way to know that t1.id = t2.t1_id
and t1.name = 'foo' implies t1.id is constant, nor what that constant is,
so it can never deduce whether t1_id = 1 is going to be true or false for
the query.

That is, it does not evaluate your constraint expression; it proves
that the constraint is true or false based on the query, then proceeds
appropriately.

Your workaround of joining with the specific table is your only real
option.  Either that, or add an index that lets the executor exclude
your table quickly (rather than running a full sequential scan to find
something that is not there).
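
The index option could look like this on the toy schema from earlier in the thread (index names are illustrative). The point is that a probe into a non-matching partition's index finds nothing and returns immediately, instead of walking a large, unrelated index:

```sql
-- One tiny index per partition on the constraint column. In a partition
-- whose CHECK constraint rules out the value, the index probe comes back
-- empty almost instantly.
CREATE INDEX t2a_t1_id_idx ON t2a (t1_id);
CREATE INDEX t2b_t1_id_idx ON t2b (t1_id);
```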



Re: [GENERAL] limits?

2008-06-26 Thread Robert Treat
On Monday 23 June 2008 15:45:22 Kynn Jones wrote:
 On Mon, Jun 23, 2008 at 2:21 PM, Steve Atkins [EMAIL PROTECTED] wrote:
  In real use you're unlikely to hit any limits, theoretical or practical,
  but if you start to use a silly number of tables and so on you're likely
  to hit performance issues eventually. I'm not sure where that threshold
  would be, but it's higher than thousands.

 Actually, the DB I have in mind would certainly be approaching silly
 territory.  I'm looking at a schema with around 10 thousand tables (or
 views).  Unfortunately, as far as I can tell,
 http://www.postgresql.org/about/ says nothing about maximum number of
 tables.  I suppose I could always find what this limit is the hard way,
 by writing a script that just keeps creating empty tables and see where
 that goes, but I'd prefer not to do something like this...


http://people.planetpostgresql.org/greg/index.php?/archives/37-The-million-table-challenge.html

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



[GENERAL] limits?

2008-06-23 Thread Kynn Jones
How can I find the limits (if any) on things such as the maximum number of
tables, views, indices, columns-per-table, size of database, etc.?

(At the moment I'm particularly interested in any limits that may exist on the
numbers of tables and views that may exist in any one database.)

TIA!

Kynn


Re: [GENERAL] limits?

2008-06-23 Thread Steve Atkins


On Jun 23, 2008, at 10:56 AM, Kynn Jones wrote:

 How can I find the limits (if any) on things such as the maximum
 number of tables, views, indices, columns-per-table, size of
 database, etc.?

 (At the moment I'm particularly interested in any limits that may exist
 on the numbers of tables and views that may exist in any one
 database.)


http://www.postgresql.org/about/ has some of the theoretical limits.

In real use you're unlikely to hit any limits, theoretical or  
practical, but if you start to use a silly number of tables and so on  
you're likely to hit performance issues eventually. I'm not sure where  
that threshold would be, but it's higher than thousands.


Cheers,
  Steve




Re: [GENERAL] limits?

2008-06-23 Thread Joshua D. Drake
http://www.postgresql.org/about/

On Mon, 2008-06-23 at 13:56 -0400, Kynn Jones wrote:
 How can I find the limits (if any) on things such as the maximum
 number of tables, views, indices, columns-per-table, size of database,
 etc.?

 (At the moment I'm particularly interested in any limits that may exist on
 the numbers of tables and views that may exist in any one database.)

 TIA!

 Kynn




Re: [GENERAL] limits?

2008-06-23 Thread Mark Roberts

On Mon, 2008-06-23 at 11:21 -0700, Steve Atkins wrote:
 
 
 http://www.postgresql.org/about/ has some of the theoretical limits.
 
 In real use you're unlikely to hit any limits, theoretical or  
 practical, but if you start to use a silly number of tables and so
 on  
 you're likely to hit performance issues eventually. I'm not sure
 where  
 that threshold would be, but it's higher than thousands.
 
 Cheers,
Steve

I'm just a developer (my DBA would have more details on the settings
that needed adjusting), but I will attest to it being reasonably easy to
hit database limits.

For us, it was related to having a reasonably small number of (large)
aggregate tables that are partitioned (inherited with check constraints)
by week.

The real problem wasn't the absolute limit of tables (IIRC) as much as
accessing the data in a parallel manner from the parent tables in bulk
update transactions (the parallel was probably what pushed it over the
top).

The limit is absurdly high, and by the time you hit it, you'll probably
have a really good idea of how to overcome it.  Really, there's lots of
challenges you'll overcome before that time (IMO).

Of course all this is anecdotal, and you should take it with a grain of
salt. :)

-Mark




Re: [GENERAL] limits?

2008-06-23 Thread Kynn Jones
On Mon, Jun 23, 2008 at 2:21 PM, Steve Atkins [EMAIL PROTECTED] wrote:


 In real use you're unlikely to hit any limits, theoretical or practical,
 but if you start to use a silly number of tables and so on you're likely to
 hit performance issues eventually. I'm not sure where that threshold would
 be, but it's higher than thousands.


Actually, the DB I have in mind would certainly be approaching silly
territory.  I'm looking at a schema with around 10 thousand tables (or
views).  Unfortunately, as far as I can tell,
http://www.postgresql.org/about/ says nothing about maximum number of
tables.  I suppose I could always find what this limit is the hard way, by
writing a script that just keeps creating empty tables and see where that
goes, but I'd prefer not to do something like this...

Anyway, thanks!  (And to Joshua too!)

Kynn


Re: [GENERAL] limits?

2008-06-23 Thread Tom Lane
Kynn Jones [EMAIL PROTECTED] writes:
 Actually, the DB I have in mind would certainly be approaching silly
 territory.  I'm looking at a schema with around 10 thousand tables (or
 views).  Unfortunately, as far as I can tell,
 http://www.postgresql.org/about/ says nothing about maximum number of
 tables.

There is no hard limit (at least not till you hit the 16TB size limit on
pg_class or pg_attribute...).  In practice this number tends to be
constrained by the operating system, not Postgres.  How well does your
OS do with tens of thousands of entries in a single directory?
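
A quick catalog query shows how many tables and views a database currently holds, which is the number at issue here:

```sql
-- relkind 'r' = ordinary table, 'v' = view.
SELECT relkind, count(*)
FROM pg_class
WHERE relkind IN ('r', 'v')
GROUP BY relkind;
```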

Generally, though, schema designs like this amount to manually replacing
leading columns of an index key with separate tables, and they're
usually bad style and a PITA to work with.  If you have a desperate need
to partition the data then you might have to go that way, but you should
think twice.

regards, tom lane



Re: [GENERAL] limits?

2008-06-23 Thread Rodrigo E. De León Plicet
On Mon, Jun 23, 2008 at 2:45 PM, Kynn Jones [EMAIL PROTECTED] wrote:
 Actually, the DB I have in mind would certainly be approaching silly
 territory.  I'm looking at a schema with around 10 thousand tables (or
 views).

What kind of app would require such a schema? Just curious...



Re: [GENERAL] limits?

2008-06-23 Thread Thomas Kellerer

Steve Atkins wrote on 23.06.2008 20:21:
  In real use you're unlikely to hit any limits, theoretical or practical,

I imagine that the 1GB column-value limit is something that could be reached 
though. Especially for BLOB (aka bytea) or CLOB (aka text) columns.


Thomas




Re: [GENERAL] limits?

2008-06-23 Thread Tino Wildenhain

Thomas Kellerer wrote:

Steve Atkins wrote on 23.06.2008 20:21:
  In real use you're unlikely to hit any limits, theoretical or practical,

I imagine that the 1GB column-value limit is something that could be 
reached though. Especially for BLOB (aka bytea) or CLOB (aka text) columns.


No, since they are stored out of band (TOAST tables). For *LOB there is
a large-object type which stores the data completely separately, not to be
confused with bytea and text (varchar).

Tino.




Re: [GENERAL] limits

2007-12-15 Thread Bruce Momjian
[EMAIL PROTECTED] wrote:
 Hi All.
 My question is simple and plain: Are there some limit in the number of
 database operations between a BEGIN statement and a COMMIT statement?

Yes, there is a command counter that is incremented for every command
between BEGIN and COMMIT.  It can't exceed 4 billion commands.

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] limits

2007-12-15 Thread Erik Jones


On Dec 15, 2007, at 8:29 PM, Bruce Momjian wrote:


[EMAIL PROTECTED] wrote:

Hi All.
My question is simple and plain: Are there some limit in the  
number of

database operations between a BEGIN statement and a COMMIT statement?


Yes, there is a command counter that is incremented for every command
between BEGIN and COMMIT.  It can't exceed 4 billion commands.


I think the error message upon hitting that deserves some kind of  
easter egg status.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] limits

2007-12-04 Thread Gregory Stark
[EMAIL PROTECTED] writes:

 Hi All.
 My question is simple and plain: Are there some limit in the number of
 database operations between a BEGIN statement and a COMMIT statement?

The most relevant limitation is that the whole body has to fit in 1GB.

You can also only execute 4 billion statements even if it's in a loop. 
8.3 helps this by making most read-only operations not count.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!



[GENERAL] limits

2007-12-03 Thread luca . ciciriello
Hi All.
My question is simple and plain: are there any limits on the number of
database operations between a BEGIN statement and a COMMIT statement?

Thanks in advance.

Luca


 


[GENERAL] Limits of arrays

2006-09-19 Thread Harald Armin Massa
I found a rather strange but working method to compare the contents of two tables in different databases:

select 'otformularfeld' as which, md5(array_to_string(array(
select md5(id_pkff||id_formular||id_formfeld||id_bf) from otformularfeld where quarant=0 order by id_pkff
),''))

So:
- basically I take the relevant columns from a table
- cast them to text and concatenate the strings
- find the md5 hash of this row
- then take the md5 of all rows, concatenate them into one string
- and finally find the md5 of this string

That works surprisingly fast and gives a quick check whether the data in those
2 tables is the same or not.
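
On later PostgreSQL versions (9.0 and up) the same fingerprint can be computed without the array detour, using string_agg with an ORDER BY inside the aggregate; the table and column names here are illustrative:

```sql
-- md5 of each row's concatenated columns, aggregated in a fixed order,
-- then hashed once more into a single per-table fingerprint.
SELECT md5(string_agg(md5(id::text || name), '' ORDER BY id)) AS fingerprint
FROM some_table;
```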

Now, the maximum rowcount in one table so far is 18; and still
there is no crash. Are there limits on the maximum number of rows in one
array? Limits on the maximum length of one string?

The closest information I could find in the documentation was 1 GB
per field, which will probably be the size limit for both the array and the
string. Are there more limits?

Or is that md5 / concatenation process done in an iterative manner,
that is: are all the intermediate results consumed and not cached in
memory?

Harald


-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
- Let's set so double the killer delete select all.


[GENERAL] Limits

2005-11-30 Thread Bob Pawley



Are there any practical limits to the number of functions and triggers that 
can be applied to any particular table??

Bob


Re: [GENERAL] Limits

2005-11-30 Thread Chris Browne
[EMAIL PROTECTED] (Bob Pawley) writes:
 Are there any practical limits to the number of functions and
 triggers that can be applied to any particular table??

I'd expect it to be rather like Perlis' assertion about procedures
with lots of parameters...
-- 
let name=cbbrowne and tld=ntlug.org in String.concat @ [name;tld];;
http://cbbrowne.com/info/finances.html
If you have a procedure with ten parameters, you probably missed some.
-- Alan J. Perlis



Re: [GENERAL] Limits of SQL

2005-06-05 Thread Joachim Zobel
Am Samstag, den 04.06.2005, 15:22 -0500 schrieb Bruno Wolff III:
 On Sat, Jun 04, 2005 at 21:53:24 +0200,
   Joachim Zobel [EMAIL PROTECTED] wrote:
  So WITH will allow recursion so I can walk the graph, right? Does this
  mean I can recursively join until a terminating condition is reached?
 
 It can be used to compute transitive closures, which I think is what
 you are really looking for.

There aren't any plans to implement grouping by a user defined
equivalence relation? This is the other thing I am missing. But OK, WITH
will keep me happy for some time :)

Sincerely,
Joachim





Re: [GENERAL] Limits of SQL

2005-06-05 Thread Andreas Seltenreich
Joachim Zobel schrob:

 Am Samstag, den 04.06.2005, 15:22 -0500 schrieb Bruno Wolff III:
 On Sat, Jun 04, 2005 at 21:53:24 +0200,
   Joachim Zobel [EMAIL PROTECTED] wrote:
  So WITH will allow recursion so I can walk the graph, right? Does this
  mean I can recursively join until a terminating condition is reached?
 
 It can be used to compute transitive closures, which I think is what
 you are really looking for.

 There aren't any plans to implement grouping by a user defined
 equivalence relation? This is the other thing I am missing. But OK, WITH

Isn't this already possible by representing the relation through its
canonical mapping (i.e. f(a) = f(b) <=> a relates to b)? You could then
use GROUP BY f(x) to group data into its equivalence classes.
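
As a concrete instance, with lower() as the canonical mapping f and a hypothetical names table, case-insensitive grouping falls out directly:

```sql
-- Two values land in the same group exactly when
-- lower(a) = lower(b), i.e. when they are equivalent.
SELECT lower(name) AS representative, count(*)
FROM names
GROUP BY lower(name);
```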

regards,
Andreas



Re: [GENERAL] Limits of SQL

2005-06-04 Thread Joachim Zobel
Am Donnerstag, den 02.06.2005, 12:46 -0700 schrieb Ben:
 You mean, you want to be able to say something like:
 
 select isConnected(a,b)
 
 and get back a true/false, or maybe the path?
 
 That seems quite doable in SQL, assuming you either store those results 
 and simply use sql to retrieve them, or use a stored proc to compute the 
 result each time.

These are both things I want to avoid. I am not trying to solve a real
world problem; I want to understand the limits of SQL. And it seems that
a plain SELECT that tells me if a path exists is not possible. However, I
just read about nested sets (thx for the link, Sean). Maybe a tricky
representation does it.

Sincerely,
Joachim





Re: [GENERAL] Limits of SQL

2005-06-04 Thread Bruno Wolff III
On Sat, Jun 04, 2005 at 11:31:02 +0200,
  Joachim Zobel [EMAIL PROTECTED] wrote:
 
 These are both things I want to avoid. I am not trying to solve a real
 world problem, I want to understand the limits of SQL. And it seems that
 a plain SELECT that tells me if a path exists is not possible. However I
 just read nested sets (thx for the link, Sean). Maybe a tricky
 representation does it. 

When 'WITH' gets implemented then you should be able to do this. I think
there was some recent talk about that, but I don't know if it is going to
make it in to 8.1. We'll know in about a month though.



Re: [GENERAL] Limits of SQL

2005-06-04 Thread Joachim Zobel
Am Samstag, den 04.06.2005, 07:38 -0500 schrieb Bruno Wolff III:
 On Sat, Jun 04, 2005 at 11:31:02 +0200,
   Joachim Zobel [EMAIL PROTECTED] wrote:
  
  ... And it seems that
  a plain SELECT that tells me if a path exists is not possible...
 
 When 'WITH' gets implemented then you should be able to do this. I think
 there was some recent talk about that, but I don't know if it is going to
 make it in to 8.1. We'll know in about a month though.

So WITH will allow recursion so I can walk the graph, right? Does this
mean I can recursively join until a terminating condition is reached?

Sincerely,
Joachim





Re: [GENERAL] Limits of SQL

2005-06-04 Thread Bruno Wolff III
On Sat, Jun 04, 2005 at 21:53:24 +0200,
  Joachim Zobel [EMAIL PROTECTED] wrote:
 Am Samstag, den 04.06.2005, 07:38 -0500 schrieb Bruno Wolff III:
  On Sat, Jun 04, 2005 at 11:31:02 +0200,
Joachim Zobel [EMAIL PROTECTED] wrote:
   
   ... And it seems that
   a plain SELECT that tells me if a path exists is not possible...
  
  When 'WITH' gets implemented then you should be able to do this. I think
  there was some recent talk about that, but I don't know if it is going to
  make it in to 8.1. We'll know in about a month though.
 
 So WITH will allow recursion so I can walk the graph, right? Does this
 mean I can recursively join until a terminating condition is reached?

It can be used to compute transitive closures, which I think is what
you are really looking for.
If you look at the TODO page (http://www.postgresql.org/docs/faqs.TODO.html)
you will see two entries for WITH under Exotic Features:
Add SQL99 WITH clause to SELECT
Add SQL99 WITH RECURSIVE to SELECT

There is a short example of this on pages 439-440 of SQL for Smarties
second edition.
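
(WITH RECURSIVE did eventually land, in PostgreSQL 8.4.) A reachability query over a hypothetical edges(src, dst) table might look like:

```sql
-- All nodes reachable from node 1. Plain UNION (not UNION ALL)
-- discards duplicates, so the recursion terminates even on cyclic graphs.
WITH RECURSIVE reachable(node) AS (
    SELECT 1
  UNION
    SELECT e.dst
    FROM edges e
    JOIN reachable r ON e.src = r.node
)
SELECT node FROM reachable;
```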



Re: [GENERAL] Limits of SQL

2005-06-03 Thread Scott Ribe
 Is anybody else thinking about the limits of SQL? As often I am probably
 not the first to ask these questions. Any pointers?

Joe Celko (sp?) has a couple of books on this subject, SQL for Smarties. I
don't recall if he talks about graphs, but does discuss queries on tree
relationships.


-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 665-7007 voice





Re: [GENERAL] Limits of SQL

2005-06-03 Thread Philip Hallstrom

Is anybody else thinking about the limits of SQL? As often I am probably
not the first to ask these questions. Any pointers?


Joe Celko (sp?) has a couple of books on this subject, SQL for Smarties. I
don't recall if he talks about graphs, but does discuss queries on tree
relationships.


I've got the 2nd edition and while I haven't made it that far, Ch 30 is on 
graphs.


-philip



[GENERAL] Limits of SQL

2005-06-02 Thread Joachim Zobel
Hi.

I am looking for a way to write a SELECT that finds connectivity
components of a graph or at least for one that given two nodes
determines if there is a path between them. It seems that this is not
possible, no matter what graph representation I choose. Which constructs
from set theory are missing in SQL? Set of all subsets is one I am
missing, or can it be done somehow?

Is anybody else thinking about the limits of SQL? As often I am probably
not the first to ask these questions. Any pointers?

Sincerely,
Joachim





Re: [GENERAL] Limits of SQL

2005-06-02 Thread Ben
You mean, you want to be able to say something like:

select isConnected(a,b)

and get back a true/false, or maybe the path?

That seems quite doable in SQL, assuming you either store those results 
and simply use sql to retrieve them, or use a stored proc to compute the 
result each time.

On Thu, 2 Jun 2005, Joachim Zobel wrote:

 Hi.
 
 I am looking for a way to write a SELECT that finds connectivity
 components of a graph or at least for one that given two nodes
 determines if there is a path between them. It seems that this is not
 possible, no matter what graph representation I choose. Which constructs
 from set theory are missing in SQL? Set of all subsets is one I am
 missing, or can it be done somehow?
 
 Is anybody else thinking about the limits of SQL? As often I am probably
 not the first to ask these questions. Any pointers?
 
 Sincerely,
 Joachim
 
 
 
 





Re: [GENERAL] Limits of SQL

2005-06-02 Thread Oleg Bartunov

I'm not sure if it's relevant to your question
http://www-2.cs.cmu.edu/~cache/pg_graph/

pg_graph provides a way of handling graph-based data structures within 
the relational database PostgreSQL. In particular, it provides a convenient 
means of inserting graphs as BLOB-like objects in the RDBMS. 
Primarily, however, it provides a mechanism for indexing the graphs to 
provide efficient means to perform nearest-neighbor queries over 
collections of graphs.


On Thu, 2 Jun 2005, Joachim Zobel wrote:


Hi.

I am looking for a way to write a SELECT that finds connectivity
components of a graph or at least for one that given two nodes
determines if there is a path between them. It seems that this is not
possible, no matter what graph representation I choose. Which constructs
from set theory are missing in SQL? Set of all subsets is one I am
missing, or can it be done somehow?

Is anybody else thinking about the limits of SQL? As often I am probably
not the first to ask these questions. Any pointers?

Sincerely,
Joachim






Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Re: [GENERAL] Limits of SQL

2005-06-02 Thread Sean Davis

A couple of links:

http://www.dbazine.com/ofinterest/oi-articles/celko24
http://www.dbmsmag.com/9603d06.html


On Jun 2, 2005, at 2:33 AM, Joachim Zobel wrote:


Hi.

I am looking for a way to write a SELECT that finds connectivity
components of a graph or at least for one that given two nodes
determines if there is a path between them. It seems that this is not
possible, no matter what graph representation I choose. Which 
constructs

from set theory are missing in SQL? Set of all subsets is one I am
missing, or can it be done somehow?

Is anybody else thinking about the limits of SQL? As often I am 
probably

not the first to ask these questions. Any pointers?

Sincerely,
Joachim









[GENERAL] Limits on PostgreSQL

2000-05-25 Thread Marcos Barreto de Castro

Hi,

   I am developing an application that will use the
C API for PostgreSQL and I need to know this:

1 - How many tuples can be returned as a result of a
query using a CURSOR? (Is it possible to do a SELECT
* on a table that has 2 million records, OPEN a
CURSOR for that SELECT, and show all records' contents
using FETCH FORWARD, for example?)

 2 - When one uses a CURSOR for a SELECT, is there
big memory consumption, or is there a memory buffer
limit beyond which the result is written to a file
(virtual memory) and read from there when needed?

Marcos Castro
email: [EMAIL PROTECTED]


__
Do You Yahoo!?
Kick off your party with Yahoo! Invites.
http://invites.yahoo.com/



Re: [GENERAL] Limits on PostgreSQL

2000-05-25 Thread Tom Lane

Marcos Barreto de Castro [EMAIL PROTECTED] writes:
 1 - How many tuples can be returned as a result of a
 query using a CURSOR? (Is it possible to do a SELECT
 * on a table that has 2 million records and OPEN a
 CURSOR for that SELECT and show all records'contents
 using FETCH FORWARD, for example?)

You probably wouldn't want to fetch all 2 million rows in one FETCH,
because you'd risk running out of memory on the client side.  But as
long as you grab a reasonable number of rows per FETCH command, it works
fine.  This is in fact the recommended method for dealing with extremely
large SELECT results.
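
The batched-FETCH pattern described here, in plain SQL; huge_table and the batch size of 1000 are placeholders:

```sql
BEGIN;
DECLARE big_scan CURSOR FOR SELECT * FROM huge_table;
-- Repeat this FETCH until it returns zero rows:
FETCH FORWARD 1000 FROM big_scan;
CLOSE big_scan;
COMMIT;
```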

  2 - When one uses a CURSOR for a SELECT is there a
 big memory consumption or there is a memory buffer
 limit and beyond that the result is written to a file
 (virtual memory) and read from there when needed?

Cursors work just the same as plain evaluation of the query would,
except that the query execution is suspended after having fetched the
requested number of tuples.  There isn't any special memory requirement
on the backend side.

Some types of queries need temporary files (an explicit sort of a large
volume of data is an example).  But that'll be just the same whether you
run them via a cursor or a plain SELECT.

regards, tom lane



Re: [GENERAL] LIMITS in SELECTs

1999-01-09 Thread dustin sallings

On Sat, 9 Jan 1999, Fabrice Scemama wrote:

// Is there a way to select only a range of rows from a table, using a  command
// like: 
// 
// SELECT * FROM table WHERE where_statement LIMIT 50,100;

That doesn't make sense in a relational database by itself since
it's not guaranteed to return any particular dataset in any particular
order.  What you want is a cursor.

// I've tried, and then read the doc, but found no information so far.
// 
// Thanks in advance.
// 
// 
// 
// Fabrice Scemama
// 
// 

--
Principle Member Technical Staff, beyond.comThe world is watching America,
pub  1024/3CAE01D5 1994/11/03 Dustin Sallings [EMAIL PROTECTED]
|Key fingerprint =  87 02 57 08 02 D0 DA D6  C8 0F 3E 65 51 98 D8 BE 
L__ and America is watching TV. __




RE: [GENERAL] LIMITS in SELECTs

1999-01-09 Thread Neil Burrows

Hi,

 Is there a way to select only a range of rows from a table, using
 a  command
 like:

 SELECT * FROM table WHERE where_statement LIMIT 50,100;

Assuming you use an "ORDER BY" clause to make sure you get the records in the
appropriate order, there is a patch file in the "patches" directory of
ftp.postgresql.org which adds the "LIMIT" command to PostgreSQL 6.4.x.

I've not tried it, so cannot say how well it works, or if it can do limits
between 2 values, but you can give it a shot.
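
For reference, LIMIT and OFFSET have long since been built in; the MySQL-style "LIMIT 50,100" (skip 50 rows, return up to 100) is written in PostgreSQL as:

```sql
-- ORDER BY makes the row window deterministic, as noted above.
SELECT *
FROM some_table
ORDER BY id
LIMIT 100 OFFSET 50;
```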

Regards,

Neil