Re: [GENERAL] Oracle and Postgresql

2008-09-13 Thread Blazej
2008/9/9 0123 zyxw <[EMAIL PROTECTED]>:
> Kevin Hunter wrote:
>>
>> 1. Oracle was "first", and has vendor lock-in momentum.
>> 2. Oracle ...speed/performance/concurrency...
>> 3. Oracle has application lock-in as well.  ...
>> 4. Oracle is company-backed, so there is ostensibly "someone to blame"..
>> 5. ... individuals ... may prefer it *because* it's expensive...
>> 6. Mucho better advertising to the right people
>> 7. ...print-version...
>
> 8. Oracle salespeople will help B2B software companies help sell
> and bid on larger oracle-based products and projects.
>

1. Maybe ...
2. Of course, like other DBMSs - but how many people know how to
configure it? Have you ever configured an ORACLE DBMS? 1000+ strange
parameters, and don't tell me they are well documented. The same with
Oracle Apps.
3. Are you kidding? Have you worked with Oracle AS, iFS, CMSDK etc.? I've
never seen such "good", "fast" and "scalable" software.
4. Software "AS IS" - no comment.
5. ? Let's look:
with Oracle:
App = Oracle licences + your work = X (and you pay for everything - help,
support etc.),
with PostgreSQL:
App = 0 for licences + work = Y;
if X = Y, who makes more profit?

6. Agreed :-((( Oracle marketing really is the best.
7. ?
8. As I said in 5. - you pay for everything - if you have money, everybody
helps you (even I will; and if you pay $1,000,000 I will say Oracle is the
best, like the other Oracle experts).

Regards,
Blazej



Re: [GENERAL] about partitioning

2008-09-13 Thread Blazej
8><
> For some more info, I've given at least one presentation on the topic, which
> seems to be missing from the omniti site, but I've uploaded it to
> slideshare...
> http://www.slideshare.net/xzilla/postgresql-partitioning-pgcon-2007-presentation
>
> HTH.
>
8><

Very nice presentation. I have 2 additional questions:

(1) Did you test INSERT/UPDATE performance when there is only one rule
script for all the partition tables?
(2) Are you using "SET constraint_exclusion = on;"?
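
(For reference, a minimal illustration of that setting, against a
hypothetical partitioned table:

SET constraint_exclusion = on;
EXPLAIN SELECT * FROM measurement WHERE logdate >= DATE '2008-09-01';
-- with CHECK constraints on each child table, partitions whose
-- constraints contradict the WHERE clause should drop out of the plan.)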

Regards,
Blazej



Re: [GENERAL] about partitioning

2008-09-13 Thread Robert Treat
On Thursday 11 September 2008 07:47:00 Joao Ferreira gmail wrote:
> Hello all,
>
> my application is coming to a point at which 'partitioning' seems to be
> the solution for many problems:
>
> - query speed up
> - data elimination speed up
>
> I'd like to get the feeling of it by talking to people who use
> partitioning, in general..
>
> - good, bad,

good :-)

> - hard to manage, easy to manage,

I think the upfront costs for managing a partitioning setup are higher with 
postgres than other systems, but there is nothing that you shouldn't be able 
to automate in a cron script (at which point management becomes easy), plus 
postgres gives you some interesting flexibility that is harder to find in 
other setups. 

> - processing over-head during INSERT/UPDATE,

you can set up inserts to have relatively little overhead, but it requires more 
management/maintenance work up front. Updates within a partition also have 
relatively little extra overhead, especially if you put in a little 
application logic to figure out how to work on a partition directly. Updates 
where you are changing the partition key value are always more problematic, 
though.  
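
As an illustration of that low-overhead setup (a minimal sketch of the
classic inheritance scheme, with made-up names, assuming plpgsql is
installed - not necessarily what any given production setup looks like):

CREATE TABLE logs (logdate date NOT NULL, msg text);

CREATE TABLE logs_2008_09 (
    CHECK (logdate >= DATE '2008-09-01' AND logdate < DATE '2008-10-01')
) INHERITS (logs);

CREATE FUNCTION logs_insert() RETURNS trigger AS $$
BEGIN
    -- a real version would pick the child table based on NEW.logdate
    INSERT INTO logs_2008_09 VALUES (NEW.*);
    RETURN NULL;  -- suppress the insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER logs_route BEFORE INSERT ON logs
    FOR EACH ROW EXECUTE PROCEDURE logs_insert();

A single routing trigger like this tends to be cheaper than a long chain of
rules, and it is exactly the kind of up-front work mentioned above.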

> - stability/compatibility of pg_dump and restore operations,

no real issues here as long as you're on recent enough versions to do wildcard 
table matching for individual tables. 
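
For instance, with a recent pg_dump you can grab a whole partition set by
pattern (a sketch, with a hypothetical naming scheme):

pg_dump -t 'sales_2008_*' mydb > sales_2008_partitions.sql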

> - how many partitions would be reasonable for read _and_ write  access
> optimal speed;
>

again, this depends on how exactly you're working with the data. For example, we 
have tables with over a thousand partitions on them; in those scenarios all 
data is written into a single partition (with a new partition created daily), 
and the query patterns are really straightforward... last month gets a lot of 
queries, the last three months not so much, last year barely any, and beyond 
that is pretty much just archive info. That said, we have other systems where 
that wouldn't work at all (for example, a static number of partitions, all of 
which are queried actively).  

For some more info, I've given at least one presentation on the topic, which 
seems to be missing from the omniti site, but I've uploaded it to 
slideshare... 
http://www.slideshare.net/xzilla/postgresql-partitioning-pgcon-2007-presentation

HTH. 

-- 
Robert Treat
http://www.omniti.com
Database: Scalability: Consulting:



Re: [GENERAL] Restore filesystem backup

2008-09-13 Thread Robert Treat
On Saturday 13 September 2008 09:07:23 Patrik Strömstedt wrote:
> Hi,
>
> I have a big problem.
>
> The backup (done nightly with pg_dump) at one of our customers' sites is
> broken (well, it's overwritten and is of no use anymore). What is left is a
> filesystem backup that includes the postgresql directories.
>
> I'm trying to restore one of the tables from this "filesystem backup"
> (employee_pass (salaries..)), that has been deleted on the live system.
>
> How (if possible), can I use this filesystem backup to restore the database
> into a working setup (off-site), from where I can retrieve data from the
> one table (employee_pass).
>
> This is on Windows 2000 Server; the Postgres version is 8.0.1 (I know, it's
> old...)
>

Honestly you have a mess on your hands on a number of different levels. I'd be 
tempted to just try copying the table's files directly between the two 
directories, if you can match up the files on disk correctly. Otherwise you 
might be forced to try and get some filesystem-level tools going, but I'm not 
sure how feasible that is on Windows, especially on such an old version.  
Good luck. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] pg_restore parameters

2008-09-13 Thread Robert Treat
On Friday 12 September 2008 14:23:52 Kevin Duffy wrote:
> Hello:
>
> I am move to a new production server and am testing my backup and
> restore procedures.
>
> Given a backup created with the follow command
>
> C:\>C:\progFiles\PostgreSQL\8.2\bin\pg_dump -Fc -b -C -o -f
> E:\backupPostgres\benchxx_c20080912.backup -U postgres benchxx_c
>
> What is the best way to do a restore of the above backup?  I tinkered
> with a couple of scenarios.
>
> The one that gave a clean restore was
>
> a)   drop the database
>
> b)   restore using the following:  pg_restore -C -d template1  -U
> postgres  "E:\data\postgres\ benchxx_c20080912.backup"
>
> Is this the correct way to do a restore?
>

Well, you need to add the -Fc flag on the restore at a minimum, but otherwise 
TIAS (try it and see).  
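
Something along these lines, for instance (a sketch built from the commands
you posted, not a verified recipe):

pg_restore -Fc -C -d template1 -U postgres "E:\backupPostgres\benchxx_c20080912.backup"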

Also, the use of -o is kind of a warning sign of possibly bad schema design; 
you shouldn't be using OIDs for anything, so are you sure you need that flag? If 
you do, you might want to think about factoring that out of your design; if 
not, then verify whether you really need the -b flag too. 

As a final thought: if you're already going through the pain of a dump/restore, 
I'd suggest looking at upgrading to 8.3 during the process. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] Is there bigintarray?

2008-09-13 Thread Oleg Bartunov

On Sat, 13 Sep 2008, Dmitry Koterov wrote:


> Hello.
>
> We have a good intarray contrib module which contains a lot of features:
> additional functions, operators with GIN support etc.
>
> Are there plans for bigintarray?


contrib/intarray provides a GiST index, not GIN; it is GIN that has basic
built-in support for bigint[].
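
For illustration, that built-in support looks roughly like this (table and
column names are made up):

CREATE TABLE docs (ids bigint[]);
CREATE INDEX docs_ids_gin ON docs USING gin (ids);

-- usable with the stock array operators, e.g. containment:
SELECT * FROM docs WHERE ids @> ARRAY[42::bigint];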

We have an item in our TODO list
http://www.sai.msu.su/~megera/wiki/TODO
GiSTarray - intarray for all built-in data types - 1 week


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] Fastest way to restore a database

2008-09-13 Thread Robert Treat
On Friday 12 September 2008 15:55:46 Tom Lane wrote:
> Scott Ribe <[EMAIL PROTECTED]> writes:
> >> The worry expressed upthread about the transaction being "too large" is
> >> unfounded, btw.  Unlike some other DBs, PG doesn't have a finite-size
> >> undo log.
> >
> > Sure, it won't fail. But would there be some point at which it would
> > become slower than multiple transactions? Or is it always faster (or at
> > least as fast)?
>
> I can't think of any reason it would be slower.
>
> There are certainly issues you could run into with very long
> transactions, like vacuum not being able to remove bloat elsewhere.
>

Which reminds me (not having seen it mentioned elsewhere): on full restores you 
will probably want to disable autovacuum entirely, as it will compete for 
resources and can lead to locking issues as well. Note that this can sometimes 
apply to narrower restore scenarios too, but there it isn't as cut and dried 
(i.e., with multiple databases in a cluster, you don't want to disable it for 
all databases, though it'd be nice to disable it for the one you're restoring).

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



[GENERAL] Is there bigintarray?

2008-09-13 Thread Dmitry Koterov
Hello.

We have a good intarray contrib module which contains a lot of features:
additional functions, operators with GIN support etc.

Are there plans for bigintarray?


Re: [GENERAL] TSearch2: find a QUERY that does match a single document

2008-09-13 Thread Dmitry Koterov
>
> explain analyze
>> select * from test.test_tsq
>> where to_tsvector('40x40') @@ q
>>
>
> why do you need tsvector @@ q ? Much better to use  tsquery = tsquery
>
> test=# explain analyze select * from test_tsq where q = '40x40'::tsquery;
>QUERY PLAN
>
> ---
>  Seq Scan on test_tsq  (cost=0.00..16667.01 rows=1 width=38) (actual
> time=129.208..341.111 rows=1 loops=1)
>   Filter: (q = '''40x40'''::tsquery)
>  Total runtime: 341.134 ms
> (3 rows)
>
M-mmm... It seems you understood me incorrectly.

I need to find NOT the queries which are exactly equal to another query, BUT
the queries which MATCH the GIVEN document. '40x40' was a sample only;
in real cases it will be a 1-2K document.

Here is a more realistic sample:

explain analyze
select * from test.test_tsq
where to_tsvector('
  Here is a real document text. It may be long, 1-2K.
  In this sample it contains a lexeme "40x40", so there is a tsquery
  in test_tsq.q which matches this document. I need to find all such queries
fast.
  Of course, in real cases the document text is unpredictable.
') @@ q



> QUERY PLAN
>> Seq Scan on test_tsq  (cost=0.00..17477.01 rows=800 width=36) (actual
>> time=68.698..181.458 rows=1 loops=1)
>>  Filter: ('''40x40'':1'::tsvector @@ q)
>> Total runtime: 181.484 ms
>>
>
> '800' is the number of estimated rows, which is not good, since you got
> only 1 row.
>
Why 800? The table contains 80 rows, and a seqscan is used. Does it scan
the whole table or not? If yes, is there possibly a bug in the explain output?
(No matter whether I create a GIST index on test_tsq.q or not, the estimated
number of rows is still 800, so it seems to me that the GIST index is not used
at all.)


Re: [GENERAL] "Healing" a table after massive updates

2008-09-13 Thread Gregory Stark
Tom Lane <[EMAIL PROTECTED]> writes:

> Gregory Stark <[EMAIL PROTECTED]> writes:
>> Incidentally, the visibility bugs are indeed entirely fixed in 8.3. In 8.2 and
>> before, cluster and alter table rewrites can both cause tuples to not appear
>> for transactions which were started before the cluster or alter table such as
>> a long-running pg_dump.
>
> AFAIK that's true only for CLUSTER, not ALTER TABLE.  There would be a
> bunch of logical inconsistencies in altering rows and then pretending
> you hadn't.

Uh, what's true? That the bugs are fixed or that we ever had them to begin
with? 

Oh, are you saying that ALTER TABLE doesn't go through the rewrite code to
carry along old versions of the tuples because it can't alter the old
versions? That is a bit annoying. That means there's no safe way to do a table
rewrite without clustering the table?

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!



Re: [GENERAL] "Healing" a table after massive updates

2008-09-13 Thread Tom Lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> Incidentally, the visibility bugs are indeed entirely fixed in 8.3. In 8.2 and
> before, cluster and alter table rewrites can both cause tuples to not appear
> for transactions which were started before the cluster or alter table such as
> a long-running pg_dump.

AFAIK that's true only for CLUSTER, not ALTER TABLE.  There would be a
bunch of logical inconsistencies in altering rows and then pretending
you hadn't.

regards, tom lane



Re: [GENERAL] Possible limit on transaction size?

2008-09-13 Thread Tom Lane
[EMAIL PROTECTED] writes:
> About 150 million records into the import process, I get the following error:

> ERROR: lock AccessShareLock on object 51533/51769/0 is already held

What PG version?  Can you put together a self-contained test case?
(It's probably independent of the data, so you could make a script that
just processes a lot of dummy data the same way you are doing.)

There was a similar report a couple of weeks ago:
http://archives.postgresql.org/pgsql-bugs/2008-08/msg00264.php
http://archives.postgresql.org/pgsql-bugs/2008-09/msg00021.php
but that person hasn't gotten back to us on determining exactly
what happened.

regards, tom lane



[GENERAL] detecting recurring appointment conflicts

2008-09-13 Thread Ben Chobot
I'd like to store recurring appointments in my database, and be pretty  
accepting in the scheduling of those appointments. For instance, I  
want to accept both "every other Tuesday starting 2008-11-04" and  
"every 3rd October 13th starting 2009." Storing those appointments  
isn't that hard, but what isn't at all obvious to me (and what I'm  
hoping is obvious to somebody else) is how to efficiently detect if  
and when overlaps will occur.


I can see a few ways to look for schedule overlaps in a defined  
window, but ideally, I'd like to know if there will be overlaps at any  
time, not just in, say, the next 10 years. (Though, admittedly, the  
next 10 years is probably sufficient.) Is this something somebody has  
tackled before?
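
Not from the thread, but for the simple every-N-days style of recurrence
there is a closed-form answer to the "at any time" question: the day series
s1 + k*p1 and s2 + m*p2 share a day if and only if (s2 - s1) is divisible by
gcd(p1, p2). A sketch, with a hand-rolled gcd since PostgreSQL has no
built-in one, assuming plpgsql is installed:

CREATE FUNCTION gcd(a int, b int) RETURNS int AS $$
BEGIN
  WHILE b <> 0 LOOP
    SELECT b, a % b INTO a, b;  -- Euclid's algorithm
  END LOOP;
  RETURN a;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- every 14 days from 2008-11-04 vs. every 21 days from 2009-01-15:
SELECT (('2009-01-15'::date - '2008-11-04'::date) % gcd(14, 21)) = 0
       AS will_ever_collide;

Calendar-based rules like "every 3rd October 13th" don't reduce to a fixed
period in days, so those would still need enumeration over a window.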




Re: [GENERAL] "Healing" a table after massive updates

2008-09-13 Thread Gregory Stark
"Scott Marlowe" <[EMAIL PROTECTED]> writes:

> On Thu, Sep 11, 2008 at 8:56 AM, Bill Moran
> <[EMAIL PROTECTED]> wrote:
>> In response to Alvaro Herrera <[EMAIL PROTECTED]>:
>>
>>> Bill Moran wrote:
>>> > In response to "Gauthier, Dave" <[EMAIL PROTECTED]>:
>>> >
>>> > > I might be able to answer my own question...
>>> > >
>>> > > vacuum FULL (analyze is optional)
>>> >
>>> > CLUSTER _may_ be a better choice, but carefully read the docs regarding
>>> > its drawbacks first.  You may want to do some benchmarks to see if it's
>>> > really needed before you commit to it as a scheduled operation.
>>>
>>> What drawbacks?
>>
>> There's the whole "there will be two copies of the table on-disk" thing
>> that could be an issue if it's a large table.
>
> I've also found cluster to be pretty slow, even on 8.3.  On a server
> that hits 30-40Megs a second write speed for random access during
> pgbench, it's writing out at 1 to 2 megabytes a second when it runs,
> and takes the better part of a day on our biggest table.  vacuumdb -fz
> + reindexdb ran in about 6 hours which means we could fit it into our
> maintenance window.  vacuum moves a lot more data per second than
> cluster.

Alternatively, you can do

ALTER TABLE tab ALTER col TYPE <same type> USING col;

which will rewrite the table without using an index. This is much faster but
has the same other disadvantages.
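
Concretely, something like this (a made-up table; re-specifying the column's
existing type is what forces the rewrite):

ALTER TABLE orders ALTER COLUMN id TYPE integer USING id;

assuming orders.id is already of type integer - the no-op type change
rewrites every row into fresh pages without the index-ordered read that
CLUSTER does.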


Incidentally, the visibility bugs are indeed entirely fixed in 8.3. In 8.2 and
before, cluster and alter table rewrites can both cause tuples to not appear
for transactions which were started before the cluster or alter table such as
a long-running pg_dump.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!



Re: [GENERAL] Getting rows in a very specific order

2008-09-13 Thread Martin Gainty

Gordon-

without disclosing company-proprietary specifics, can you provide a pared-down
schema of the affected tables and their column names?
e.g.
CREATE TABLE table1 (
  column1 int PRIMARY KEY
);

CREATE TABLE table2 (
  column2 int,
  column1 int REFERENCES table1(column1)
);

thanks
Martin 


> From: [EMAIL PROTECTED]
> Subject: [GENERAL] Getting rows in a very specific order
> Date: Mon, 8 Sep 2008 09:29:44 -0700
> To: pgsql-general@postgresql.org
> 
> I'm considering using an array of ints column in a table which lists a
> row's ancestry.  For example, if item 97 is contained within item 68
> and that item is contained within item 31 and that item is contained
> within item 1 then the value of the hierarchy column would be
> {1,31,68,97}, the numbers refer to the primary keys of the rows.
> 
> If I use the value of the hierarchy column in a query I can get all
> the rows that a given row is a descendant of.  (SELECT * FROM items
> WHERE itm_id IN (1,31,68,97)), for example.  However, I need the rows
> to be in the correct order, ie the root node first, child second,
> grandchild third etc.  I'm guessing the order can be set with an ORDER
> BY, but I've only ever used ORDER BY to order by ascending or
> descending order of a single column.  Is there a way to retrieve the
> rows explicitly in the order they are listed in the hierarchy array?
> 


[GENERAL] Restore filesystem backup

2008-09-13 Thread Patrik Strömstedt
Hi,

I have a big problem. 

The backup (done nightly with pg_dump) at one of our customers' sites is broken 
(well, it's overwritten and is of no use anymore). What is left is a filesystem 
backup that includes the postgresql directories.

I'm trying to restore one of the tables from this "filesystem backup" 
(employee_pass (salaries..)), that has been deleted on the live system.

How (if possible), can I use this filesystem backup to restore the database 
into a working setup (off-site), from where I can retrieve data from the one 
table (employee_pass).

This is on Windows 2000 Server; the Postgres version is 8.0.1 (I know, it's 
old...)

Patrik Strömstedt

Re: [GENERAL] Possible limit on transaction size?

2008-09-13 Thread Martin Gainty

Good Morning Brian-

sounds like a very nasty bug, first discovered in 2007:
http://archives.postgresql.org/pgsql-bugs/2007-04/msg00075.php

the bug was supposed to be eradicated post-8.3.3

which version are you using that exhibits this behaviour?
thanks,
Martin 


> Date: Mon, 8 Sep 2008 09:17:49 -0400
> From: [EMAIL PROTECTED]
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Possible limit on transaction size?
> 
> Hello,
> 
> I was curious if there was a known size limit for Postgres transactions. In
> order to import data into my database, my Java application begins a 
> transaction,
> imports the data (into several different tables), and then commits the
> transaction on success. It works great on small data sets, but on the large
> ones, it doesn't work so well.
> 
> About 150 million records into the import process, I get the following error:
> 
> ERROR: lock AccessShareLock on object 51533/51769/0 is already held
> CONTEXT: SQL statement "INSERT INTO table_name (col1, col2, col3, col4) VALUES
> (val1, val2, val3, val4)"
> 
>  PL/pgSQL function "create_import" line 19 at SQL statement
> STATEMENT: select * from create_import($1,$2,$3,$4,$5,$6) as result 
> 
> I know my server can handle this much data (24GB RAM, 2 TB SAS disks, etc.), 
> but
> it doesn't seem like Postgres likes the large transactions.
> 
> Any thoughts?
> 
> Thank you for your time,
> 
> Brian Beyer
> Purdue University
> [EMAIL PROTECTED]
> 


Re: [GENERAL] Getting rows in a very specific order

2008-09-13 Thread Merlin Moncure
On Mon, Sep 8, 2008 at 12:29 PM, Gordon <[EMAIL PROTECTED]> wrote:
> I'm considering using an array of ints column in a table which lists a
> row's ancestry.  For example, if item 97 is contained within item 68
> and that item is contained within item 31 and that item is contained
> within item 1 then the value of the hierarchy column would be
> {1,31,68,97}, the numbers refer to the primary keys of the rows.
>
> If I use the value of the hierarchy column in a query I can get all
> the rows that a given row is a descendant of.  (SELECT * FROM items
> WHERE itm_id IN (1,31,68,97)), for example.  However, I need the rows
> to be in the correct order, ie the root node first, child second,

convert that to

select * from items where itm_id = 1
  union all
select * from items where itm_id = 31
...

If this is a recursive table you should probably think about
writing recursive functions that access the structure, or possibly use
a materialized path approach.

merlin



[GENERAL] Possible limit on transaction size?

2008-09-13 Thread bbeyer
Hello,

I was curious if there was a known size limit for Postgres transactions. In
order to import data into my database, my Java application begins a transaction,
imports the data (into several different tables), and then commits the
transaction on success. It works great on small data sets, but on the large
ones, it doesn't work so well.

About 150 million records into the import process, I get the following error:

ERROR: lock AccessShareLock on object 51533/51769/0 is already held
CONTEXT: SQL statement "INSERT INTO table_name (col1, col2, col3, col4) VALUES
(val1, val2, val3, val4)"

 PL/pgSQL function "create_import" line 19 at SQL statement
STATEMENT: select * from create_import($1,$2,$3,$4,$5,$6) as result 

I know my server can handle this much data (24GB RAM, 2 TB SAS disks, etc.), but
it doesn't seem like Postgres likes the large transactions.

Any thoughts?

Thank you for your time,

Brian Beyer
Purdue University
[EMAIL PROTECTED]



Re: [GENERAL] Oracle and Postgresql

2008-09-13 Thread 0123 zyxw

Kevin Hunter wrote:

> 1. Oracle was "first", and has vendor lock-in momentum.
> 2. Oracle ...speed/performance/concurrency...
> 3. Oracle has application lock-in as well.  ...
> 4. Oracle is company-backed, so there is ostensibly "someone to blame"..
> 5. ... individuals ... may prefer it *because* it's expensive...
> 6. Mucho better advertising to the right people
> 7. ...print-version...


8. Oracle salespeople will help B2B software companies help sell
and bid on larger oracle-based products and projects.



Re: [GENERAL] weekday from timestamp?

2008-09-13 Thread Artacus

..::rDk::.. wrote:

> I'm struggling with my dataset..
>
> got a small pgsql db with a timestamp column in format YYYY:MM:DD
> HH:MM:SS for each record


Use to_char

to_char(tscol, 'dy')  -> mon
to_char(tscol, 'Day') -> Monday
to_char(tscol, 'D')   -> 2



Re: [GENERAL] Getting rows in a very specific order

2008-09-13 Thread Artacus



> If I use the value of the hierarchy column in a query I can get all
> the rows that a given row is a descendant of.  (SELECT * FROM items
> WHERE itm_id IN (1,31,68,97)), for example.  However, I need the rows
> to be in the correct order, ie the root node first, child second,
> grandchild third etc.  I'm guessing the order can be set with an ORDER
> BY, but I've only ever used ORDER BY to order by ascending or
> descending order of a single column.  Is there a way to retrieve the
> rows explicitly in the order they are listed in the hierarchy array?


If you've got the intarray contrib module installed, you can use

idx(array[1,31,68,97], i)
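
For instance, against the items table from the question (a sketch; idx()
returns the position of its second argument in the array, so the root comes
out first):

SELECT *
  FROM items
 WHERE itm_id IN (1, 31, 68, 97)
 ORDER BY idx(array[1, 31, 68, 97], itm_id);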



Re: [GENERAL] Autocommit, isolation level, and vacuum behavior

2008-09-13 Thread Tomasz Ostrowski

On 2008-09-12 15:52, Jack Orenstein wrote:

> Sorry, I misspoke. I have an index, but preferred doing a scan without
> the index in this case.


Why?

The only reason I can think of is that you'd like to avoid disk seeking. 
But you get at most 1 row in 30 seconds, so disk latency (only several 
milliseconds) can be ignored.


Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh

