Stephan Szabo wrote:
On Wed, 20 Aug 2003, Rod Taylor wrote:
...
Is the temp table version any faster? I realize it has a higher limit
to the number of items you can have in the list.
Within the scope of the new hashed IN stuff I believe so in at least some
cases. I have a few million row
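The thread above is truncated, but the rewrite being discussed, replacing a long literal IN (...) list with a join against a temporary table, can be sketched as follows. This is a minimal illustration using SQLite via Python's stdlib sqlite3 (PostgreSQL's CREATE TEMP TABLE works the same way); the table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, "item%d" % i) for i in range(1000)])

wanted = [3, 42, 99, 500]

# Version 1: literal IN list -- fine for a handful of values, but it
# runs into statement-size limits as the list grows.
in_rows = conn.execute(
    "SELECT id FROM items WHERE id IN (%s)" % ",".join("?" * len(wanted)),
    wanted).fetchall()

# Version 2: load the values into a temp table and join -- no list-length
# limit, and the planner can hash or index the lookup.
conn.execute("CREATE TEMP TABLE wanted_ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted_ids VALUES (?)", [(w,) for w in wanted])
join_rows = conn.execute(
    "SELECT items.id FROM items JOIN wanted_ids USING (id)").fetchall()

print(sorted(in_rows) == sorted(join_rows))  # True: both return the same ids
```

Which version is faster depends on the planner and the data, which is exactly the question the thread is raising.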
Rudi Starcevic wrote:
Hi,
I'd like to learn about Data Warehousing - using PostgreSQL of course.
I've been looking around for some good starting info. on this subject
without
a lot of joy so I'd like to ask if anyone could point me to a good
starting off doco. or tutorial.
I have found some da
Erik Thiele wrote:
hi,
i have a table consisting of 4 integers.
seq is for making the table ordered. (ORDER BY SEQ ASC)
a,b,c maybe null
seq | a  | b  | c
----+----+----+---
  0 |  1 |  2 | 3
  1 |  1 |  2 |
  2 |  5 |  7 |
  3 | -2 | -4 |
i am needing a sql statement to do
c=a+b+"the
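The quoted request is cut off, so whatever follows the a+b term is unknown. One thing the sample data does make clear is that a and b may be NULL, and in SQL any arithmetic involving NULL yields NULL, so the a+b part alone already needs COALESCE. A minimal sketch of just that piece, using SQLite via Python's sqlite3 for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (seq INTEGER, a INTEGER, b INTEGER, c INTEGER)")
conn.executemany("INSERT INTO t (seq, a, b) VALUES (?, ?, ?)",
                 [(0, 1, 2), (1, 1, 2), (2, 5, 7), (3, -2, -4)])

# a + b is NULL whenever either operand is NULL, so supply a default
# with COALESCE before adding.
conn.execute("UPDATE t SET c = COALESCE(a, 0) + COALESCE(b, 0)")
print(conn.execute("SELECT seq, c FROM t ORDER BY seq ASC").fetchall())
# [(0, 3), (1, 3), (2, 12), (3, -6)]
```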
Bruno Wolff III wrote:
...
It shouldn't be too difficult to write some triggers that make something
closer to autoincrement. It probably won't work very well if there are
lots of concurrent updates though. You can either lock the table with
the column exclusively and then find the largest value a
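The lock-then-take-MAX approach described above can be sketched as follows. This uses SQLite's BEGIN EXCLUSIVE via Python's sqlite3 as a stand-in for PostgreSQL's LOCK TABLE ... IN EXCLUSIVE MODE inside a transaction; the table and function names are invented for the example:

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can issue
# BEGIN/COMMIT ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE widgets (id INTEGER, name TEXT)")

def insert_with_manual_increment(conn, name):
    # Take an exclusive lock so no concurrent writer can observe the
    # same MAX(id) and produce a duplicate.
    conn.execute("BEGIN EXCLUSIVE")
    try:
        next_id = conn.execute(
            "SELECT COALESCE(MAX(id), 0) + 1 FROM widgets").fetchone()[0]
        conn.execute("INSERT INTO widgets VALUES (?, ?)", (next_id, name))
        conn.execute("COMMIT")
        return next_id
    except Exception:
        conn.execute("ROLLBACK")
        raise

print(insert_with_manual_increment(conn, "first"))   # 1
print(insert_with_manual_increment(conn, "second"))  # 2
```

As the post notes, serializing every insert on a table lock is exactly why this scales poorly under concurrent updates, which is what sequences avoid.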
Randall Lucas wrote:
Wow, I had never actually faced this problem (yet) but I spied it as a
possible stumbling block for porting MySQL apps, for which the
standard practice is inserting a NULL. As I have made a fairly
thorough reading of the docs (but may have not cross-correlated every
piece
Hi all,
I am struggling hard with a badly written piece of code.
It has statements like this all over the place:
INSERT INTO TABLE A (NULL, Value1, Value2...).
It was written for MySQL, which can take NULL and then assign an
auto_increment.
However, in PostgreSQL I am getting problems, because it woul
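The portable fix is to name the columns explicitly and omit the auto-generated one, letting the database fill it from the sequence (PostgreSQL also accepts the DEFAULT keyword in the VALUES list). A minimal sketch using SQLite's INTEGER PRIMARY KEY via Python's sqlite3 as a stand-in for PostgreSQL's SERIAL; the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY here stands in for a PostgreSQL SERIAL column.
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, value1 TEXT, value2 TEXT)")

# Instead of INSERT INTO a VALUES (NULL, ...), name the columns and let
# the database assign the id. In PostgreSQL you could equivalently write
# INSERT INTO a VALUES (DEFAULT, 'x', 'y').
conn.execute("INSERT INTO a (value1, value2) VALUES ('x', 'y')")
conn.execute("INSERT INTO a (value1, value2) VALUES ('p', 'q')")
print(conn.execute("SELECT id, value1 FROM a ORDER BY id").fetchall())
# [(1, 'x'), (2, 'p')]
```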
Nicolas JOUANIN wrote:
Hi,
Thanks for your help. In fact that means 2 solutions for this:
1) select * from pdi where rtrim(pdi) = '100058'
or
2) Use VARCHAR instead of CHAR
I don't know which is the best, but both are working.
Nicolas.
Do you have a specific reason why to use CHAR?
I use
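The comparison problem in this thread comes from CHAR(n) blank-padding the stored value. A minimal sketch of the effect and of the rtrim() workaround, using Python's sqlite3 (SQLite doesn't pad CHAR, so the padded value is stored explicitly here to reproduce the situation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simulate a CHAR(10) column: the value is stored blank-padded to width 10.
conn.execute("CREATE TABLE pdi (pdi TEXT)")
conn.execute("INSERT INTO pdi VALUES ('100058    ')")

# A plain comparison against the unpadded literal finds nothing...
plain = conn.execute(
    "SELECT count(*) FROM pdi WHERE pdi = '100058'").fetchone()[0]
# ...while stripping the trailing blanks first matches the row.
trimmed = conn.execute(
    "SELECT count(*) FROM pdi WHERE rtrim(pdi) = '100058'").fetchone()[0]
print(plain, trimmed)  # 0 1
```

Switching the column to VARCHAR, as suggested above, avoids the padding in the first place, which is why it is usually the cleaner of the two solutions.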
Denis Arh wrote:
How to delete "real" duplicates?
 id | something
----+-----------
  1 | aaa
  1 | aaa
  2 | bbb
  2 | bbb
(an accident with backup recovery...)
In these cases, it's certainly best to rebuild your table
using a
CREATE TABLE new AS
SELECT col1, col2, ..
FROM old
GROUP BY col1, col2, ..
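The rebuild above can be sketched concretely as follows, using the sample data from the question. A minimal illustration via Python's sqlite3 (the same CREATE TABLE ... AS SELECT ... GROUP BY works in PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE old (id INTEGER, something TEXT)")
conn.executemany("INSERT INTO old VALUES (?, ?)",
                 [(1, "aaa"), (1, "aaa"), (2, "bbb"), (2, "bbb")])

# Group on every column, so each set of exact duplicates collapses to a
# single row (SELECT DISTINCT would work equally well here).
conn.execute("""CREATE TABLE new AS
                SELECT id, something FROM old
                GROUP BY id, something""")
print(conn.execute("SELECT * FROM new ORDER BY id").fetchall())
# [(1, 'aaa'), (2, 'bbb')]
```

Afterwards you would drop the old table and rename the new one into place.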