The link is not there, but the doc sure is :
http://www.postgresql.org/docs/8.0/static/
Is it the right version ?
I know they can be accessed at developer.postgresql.org, but I didn't
see a link to the docs for postgresql 8 on the new website, did I miss
it somewhere?
Chris
That's not what I meant...
I meant, what does 'c1c1C(=O)N' mean ?
If the search operation is too slow, you can narrow it using standard
postgres tools and then hand it down to your C functions. Let me explain,
I have no clue about this 'c1c1C(=O)N' syntax, but I'll suppose you
Example :
psql
create table test (id serial primary key, data10 varchar(10), data20 varchar(20), data text);
insert into test (data10, data20, data) values ('ten','twenty','all i want');
python
import psycopg
db = psycopg.connect("host=localhost dbname=.")
c = db.cursor()
c.execute( "SELE
s so the sql user is unaware of this.
My thought was that an index is like that and I might borrow some
of the internal uses of indexing for my purposes.
TJ O'Donnell
[EMAIL PROTECTED]
Pierre-Frédéric Caillaud wrote:
I gather your program uses two steps, let's call them :
- p
due to the complicated nature of the database, and inability of zope
Well, I've found that Zope is very good to do a few things, and very bad
at the rest.
to do what we need, and the problems with overhead we've been
experiencing with rails due to the size of the database. Hope this
I like p
completely proprietary front end written in python. Any help finding
useful
What does "a completely proprietary front-end in python" mean ?
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
(The example is really count(pkey) because count(*) is always going to
do a seq scan I reckon - and could probably never use an index).
postgres knows that count(*) is just "count the rows", you can use
count(1), it makes no difference...
Alex Turner
NetEconomist
On Fri, 07 Jan 2005 11:17:32 -
Never used ODBC with Python, but if you want to use Postgres, I'd
strongly recommend psycopg which I find the nicest and fastest postgres
adapter (certainly a lot better than Pygresql and pypgsql)
Regards
On Sat, 8 Jan 2005 18:01:01 -0600, <[EMAIL PROTECTED]> wrote:
We're looking in
I gather your program uses two steps, let's call them :
- parse( smiles ) -> data
- search( data ) -> result
You can create a functional index on your smiles column, but I don't know
if this will help you ; you can do things like CREATE INDEX ... ON
mytable( lower( myfi
To speed up load :
- make less checkpoints (tweak checkpoint interval and other parameters
in config)
- disable fsync (not sure if it really helps)
- have source data, database tables, and log on three physically
different disks
- have the temporary files on a different disk too, or on a ramdisk
Not quite, see here for more information:
http://www.thenoodleincident.com/tutorials/typography/index.html
Unfortunately, on my Opera 7 for Linux, this page is illegibly small
(something like font-size: 7px) ; so I did not read it.
I always use sizes in pixels. It works in IE6, Mozilla, and Op
gives me (and those on high) the warm-fuzzies. If I store files (PDFs
of varying sizes by the way, say from 500k to 50M) as large objects,
will I still be able to restore the _whole_ database from a single
pg_dump tar file?
Don't forget a thing :
If you put a webserver in front of this
emorizes whether the key was generated or not.
On Jan 3, 2005, at 12:25 PM, Pierre-Frédéric Caillaud wrote:
SELECT max, then treat the string as a sequence of characters and
increment the last character, rippling the carry if there is one :
carry = 1
l = len(s)-1
while carry and l
You could also say you can have 2^N databases and 2^(63-N) records in
each database, and use a BIGSERIAL with the N higher bits pointing to the
DB number, and the 63-N lower bits being the actual serial... faster than
strings for indexing, and you init the serial to start at the first value
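A quick sketch of this bit-partitioning scheme in Python (the value of N and the helper names are illustrative, not from the post):

```python
N = 8  # high bits reserved for the database number (illustrative choice)

def encode(db_number: int, serial: int) -> int:
    # Pack the DB number into the N high bits of a 63-bit key and the
    # per-database serial into the remaining 63-N low bits.
    assert 0 <= db_number < (1 << N)
    assert 0 <= serial < (1 << (63 - N))
    return (db_number << (63 - N)) | serial

def decode(key: int) -> tuple:
    # Recover (db_number, serial) from a packed key.
    return key >> (63 - N), key & ((1 << (63 - N)) - 1)
```

Each database would then initialize its BIGSERIAL to start at encode(db_number, 0), i.e. db_number << (63 - N).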
SELECT max, then treat the string as a sequence of characters and
increment the last character, rippling the carry if there is one :
carry = 1
chars = list(s)
l = len(chars) - 1
while carry and l >= 0:
    c = ord(chars[l]) + carry
    if c > ord(max_allowed_char):
        c = ord(min_allowed_char)
        carry = 1
    else:
        carry = 0
    chars[l] = chr(c)
    l -= 1
s = ''.join(chars)
What do you suggest?
In your reporting tool, I suggest using (branch_id,
local_primary_key) as the primary key. This way the keys are guaranteed unique.
Thanks.
I didn't realize that the order made a difference. A sign of how much
learning I need to do. :p For reference, I think 'file_parent_dir' and
'fs_parent_dir' are the most important because I do an 'ORDER BY
[fs|file]_parent_dir ASC' on most queries. I've made the changes, thank
you again
select field1,field2,field3 from mytable where id=XX;
For instance, on my machine :
SELECT * FROM bigtable with 2M rows WHERE id IN (list of 500 values)
takes 10 ms.
Maybe you could use arrays as some function parameters ?
Can you explain why you need so many parameters ?
On Sat, 1 Jan 2005 22:25:02 -0700, Michael Fuhr <[EMAIL PROTECTED]> wrote:
On Sun, Jan 02, 2005 at 01:31:22AM +, Oluwatope Akinniyi wrote:
I tried to create a function with abo
I use a bigger psql-table to store information and keep an id-value of
how big ?
each row in memory of my application for faster access.
related to the previous question : are you sure there won't be a day
where it won't fit ?
My applications is able to calculate a list of needed id's in v
Is your authentication set to "Trust" in the config files ?
On Fri, 31 Dec 2004 08:45:34 -0600, Eric Scott <[EMAIL PROTECTED]>
wrote:
Heya;
I have PostGreSQL 7.3.4 on Mandrake Linux 9.2. For some reason Webmin,
when I tell it to change the password of a pgsql user, acts like it's
workin
You get infinite recursion because your ON UPDATE rule does another
UPDATE which of course calls the rule ; so no, it's not a bug ; also your
UPDATE updates almost the whole table so it won't do what you had in mind
in the first place. You should rather change the NEW row in your update so
Don't use OIDs for primary keys, please, please.
Use a Sequence instead, and replace your SELECT oid by SELECT
currval('sequence_name')
On Tue, 28 Dec 2004 15:44:56 +0100, Nefnifi, Kasem
<[EMAIL PROTECTED]> wrote:
dear all,
it is okay,
the import by psql is faster and quiet, it is running n
You could convert your timestamp into an integer (number of seconds since
the epoch).
Also, under certain conditions, you can cheat and use a sequence instead
of a timestamp.
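A minimal Python sketch of the epoch-seconds conversion (the function names are mine):

```python
from datetime import datetime, timezone

def to_epoch_seconds(ts: datetime) -> int:
    # Integer seconds since 1970-01-01 UTC, comparable to
    # PostgreSQL's extract(epoch from timestamp).
    return int(ts.timestamp())

def from_epoch_seconds(secs: int) -> datetime:
    # Inverse conversion, back to an aware UTC timestamp.
    return datetime.fromtimestamp(secs, tz=timezone.utc)
```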
Hi,
I need an index on a postgis-point and a timestamp.
I'm using a GiST index on the geometry. But the creation of
I thought the planner had an automatic rewriter for these situations.
No. There was a prior discussion of this, saying that we really ought
to support the SQL-spec row comparison syntax:
What I meant was that I thought the planner could rewrite :
(A and C) or (A and B) as A and (B or C)
SELECT * FROM test WHERE (name='b' and name2>'a') OR (name>'b') ORDER BY
name,name2 ASC LIMIT 1;
Write that WHERE clause instead as:
WHERE name>='b' AND (name>'b' OR (name='b' AND name2>'a'))
This is logically equivalent, but it gives the planner a better handle on
how to use an index scan to sat
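The rewritten clause is just the index-friendly expansion of the SQL-spec row comparison (name, name2) > ('b', 'a'). Python tuples compare lexicographically with the same semantics, so the equivalence is easy to check (a sketch, not PostgreSQL code):

```python
def rewritten(name: str, name2: str) -> bool:
    # The expanded, index-friendly form of the WHERE clause
    return name >= 'b' and (name > 'b' or (name == 'b' and name2 > 'a'))

def row_comparison(name: str, name2: str) -> bool:
    # Lexicographic row comparison, as in the SQL spec
    return (name, name2) > ('b', 'a')
```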
sorry for being unclear.
but you guessed right. ID is UNIQUE and I want to select a row by
its ID and also get the previous and next ones in the name, name2 order.
For the selected row I need all datafields and for the next and previous
I need only the ID (to have it referenced on the dat
select id from mytable where id=45 order by name,name2;
Why do you want to select id if you already know it ?
Do you not want to specify a starting value for name and name2 ?
I'll presume you want to select a row by its 'id' and then get the
previous and next ones in the name, name2
The planner is smarter with GROUP BY than with DISTINCT, so you can
rewrite your query as the following, which will probably use a
HashAggregate, and be a lot faster :
SELECT service_id FROM five_min_stats_200408 GROUP BY service_id;
This won't avoid the Seq Scan however.
If y
Now, since I'm actually interested in unique domain names rather than
unique users, I need to get all the unique domain names corresponding to
users who have acted on a message. That's what the part of the query
after the EXCEPT is.
I don't understand this part at all. What does it mean ?
> SELECT * is almost always bad style. It shouldnt be so hard to
This is another religious issue you'll find people pretty adamant on both
sides.
Seems so.
I tend to prefer to use "SELECT *" because it reduces repetition and
improves
modularity. There are fewer places in the code that need t
Hasn't anybody read the other threads I posted links to?
(That's a rhetorical question, because the answer clearly is "no" :-()
You mean this one :
http://archives.postgresql.org/pgsql-hackers/2004-11/msg00985.php
In which you write :
rel->pages = RelationGetNumberOfBlocks(relation);
if
SELECT * is almost always bad style. It shouldnt be so hard to
Why ?
Many languages, including PHP, have associative arrays, so you should
just use array[column_name] instead of array[column_number]. This is what
I do, all the time.
For instance, in Python :
* The wrong w
4. Isn't ANALYZE on a totally empty table really a special case? The
presumption should be that the table will not remain empty. To optimize
the performance assuming that there will be zero (or close to zero) rows
seems somewhat pointless. However, there are valid reasons why a table
might be e
It seems this issue has been mentioned several times lately...
I'd propose something to avoid it :
* When TRUNCAT'ing a table :
- the row-count is reset to 0 (of course !)
- however, the column stats are kept, on the basis that the data which
will be inserted later in the
Suppose I have a vacuum_values() function, which removes all
rows no longer referenced by the parent column. The kind of function
to be run from time to time to clean crud out of the table.
It looks like this:
I suppose you have a good reason to not use a foreign key with "ON DELETE
CASCADE" ?
F
can do this fine with small files. But if I get above 1000 rows it
takes so long it times out.
PHP is slow, but not *that* slow, you have a problem somewhere !
I can upload a 10,000 row equivalent file using COPY from psql in 2
seconds,
so the time is down to the PHP processing (really a
explain select notificationID from NOTIFICATION n, ITEM i where
n.itemID = i.itemID;
QUERY PLAN
--
Hash Join (cost=47162.85..76291.32 rows=223672 width=44)
Hash Cond: ("outer".itemid
You can't express it directly with a CHECK constraint but you can do this
:
- add CHECK( test_array( yourcolumn )) in your table definition
- create function test_array which takes an array and looks if all its
elements are in your table T2, I do something like comparing the length of
th
Use python's (or another language) CSV reader module which will parse the
quotes for you and write the values in a tab-delimited file. Don't forget
to escape the tabs in the strings... it should be less than 10 lines of
code.
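A sketch of that converter using Python's csv module (the escaping follows COPY's text format; the function name is mine):

```python
import csv
import io

def csv_to_tab(csv_text: str) -> str:
    # Parse quoted CSV and emit tab-delimited lines suitable for COPY,
    # escaping backslashes, tabs and newlines inside string values.
    out = io.StringIO()
    for row in csv.reader(io.StringIO(csv_text)):
        escaped = [v.replace('\\', '\\\\').replace('\t', '\\t').replace('\n', '\\n')
                   for v in row]
        out.write('\t'.join(escaped) + '\n')
    return out.getvalue()
```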
On Mon, 25 Oct 2004 14:45:57 -0700 (PDT), CSN
<[EMAIL PROTECTED
problem is that '1 month'::interval does not have the same length
depending on which date you add it to :
=> SELECT '2004-02-01'::timestamp+'1 month'::interval,
'2004-03-01'::timestamp+'1 month'::interval;
?column? | ?column?
-+-
2004-0
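The asymmetry shows up if you count the days a month-add actually spans. A Python sketch with a naive month-increment helper (my own, clamping the day-of-month the way PostgreSQL does):

```python
import calendar
from datetime import date

def add_one_month(d: date) -> date:
    # Naive month increment, clamping the day to the target month's length
    # (e.g. Jan 31 + 1 month -> Feb 29 in a leap year).
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```

From 2004-02-01 the interval spans 29 days (2004 is a leap year); from 2004-03-01 it spans 31.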
Create a different trigger function for each table, then each trigger can
be customized to know the column names.
You can generate the triggers from a little script which queries the
system tables to get at the column names. It would spit code like 'IF
NEW.fieldname != OLD.fieldname THEN (
Use a cursor...
I need to select all tuples from a table, but need them to be fetched
with a constant delay (say 1 sec) between consecutive tuples.
The first idea that came up to my mind is to create a DelayedSeqScan
operator, and put delay before returning the scanned tuple.
Can
I used to do it this way :
Suppose you have a web form to edit data from a table... you add a field
in your table which contains a version identifier for that data, then you
UPDATE ... WHERE ... AND version_id = the old version id. The version_id
is passed around in a session variable or
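A runnable sketch of that version-id pattern, using sqlite3 here purely for illustration (table and column names are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT, version_id INTEGER)")
db.execute("INSERT INTO doc VALUES (1, 'draft', 1)")

def save(db, doc_id, new_body, old_version):
    # Only apply the edit if nobody bumped version_id since the form was loaded.
    cur = db.execute(
        "UPDATE doc SET body = ?, version_id = version_id + 1 "
        "WHERE id = ? AND version_id = ?",
        (new_body, doc_id, old_version))
    return cur.rowcount == 1  # False means the form was stale

first = save(db, 1, 'edit A', 1)   # succeeds, version becomes 2
second = save(db, 1, 'edit B', 1)  # stale version, rejected
```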
interface to postgresql. A mailing list would be ideal. I've
[EMAIL PROTECTED]
Right, I *can* do this. But then I have to build knowledge into that
script so it can find each of these date fields (there's like 20 of them
across 10 different files) and then update that knowledge each time it
changes.
In your case that's a reasonable argument against filtering the
data with a
test=> insert into bench (id,data) select id, 'text_item_'||id::text from
dummy where id<=10 order by id;
INSERT 0 11
test=> CREATE INDEX bench_data_index ON bench (data);
CREATE INDEX
test=> explain select * from bench where data = 'test_item_1';
Index Scan using bench_data_index on ben
I'd advise psycopg as the fastest one (by a factor of 10x on large
selects).
On Wed, 6 Oct 2004 21:13:02 -0700, Scott Frankel <[EMAIL PROTECTED]>
wrote:
I'd like to know if anyone has recommendations for which Python DB-API
2.0
interface to use with PostGreSQL-7.4.5.
The database and tool
Trying to build a gist index on a column in a table.
The table contains 100k rows.
The column is an integer[]. Each row contains about 20-30 distinct values
chosen between 1 and 437.
Aim : search the arrays with the gist integer array operators @ etc.
Creating the index with gist__int_ops t
=> show client_encoding ;
client_encoding
-
UNICODE
(1 ligne)
=> select char_length('a'), bit_length('a');
char_length | bit_length
-+
1 | 8
(1 ligne)
# that's an accented "e"
=> select char_length('é'), bit_length('é'); ;
char_length
Hello,
* On the size of arrays :
I did not find any info in the Docs on this.
How many bytes does an array take on disk ?
My bet would be 4 bytes for number of dimensions, 4 per dimension for the
length, and then the storage requirements of whatever's in the array.
If the a
Use psycopg, it's a hell of a lot faster anyway.
On Tue, 31 Aug 2004 23:25:35 -0400, Jerry LeVan <[EMAIL PROTECTED]>
wrote:
Is it possible to build the python postgresql support library
on Mac OSX 10.3.5 with the default python install?
Adding "--with-python" gets an error message about libp
Your query looks suspiciously complicated...
Why not process all 12 months in one shot with something like this :
- only one subquery
- no join
- date between can make an index scan on date
select category, sum(amount) as sum_amount, extract (month from date) as
You forgot the ";"
CREATE SCHEMA joe;
CREATE SEQUENCE joe.joe_seq start 1;
CREATE TABLE joe.joe_table (id int, name varchar);
By "weak", I meant "lack of expressive power", not "weakly supported".
That PHP has a huge community is obvious. It's everywhere. The list of
broken things in PHP is too long to mention, just think about namespaces
for instance. I have no bad feelings towards people who use PHP, rather I
w
Suppose your table is like :
key1  key2
1 1
1 2
2 1
To get the next value to insert for key1=1 you can do this :
SELECT key2 FROM ... WHERE key1=1 ORDER BY key2 DESC LIMIT 1
Of course a UNIQUE INDEX on key1, key2 helps.
You could use apache mod_auth_tkt :
http://www.openfusion.com.au/labs/mod_auth_tkt/
Its main advantage is that it'll authenticate a user, hence your script
gets the user ID, which you can use as a key in your session table for
instance.
Cut & paste for the lazy :
mod_auth_tkt is a lig
catalog_type varchar(100), pushed int, delivered int, clicks int, opened
int, month varchar(100), type1 int, type2 int, type3 int, type4 int,
type5
int);
You could use the TEXT type (see postgresql doc).
IMHO, no on both questions. There's always danger on relying on the
value of session variables in general in that an application must either
And what if you use a connection sharing/pooling software ? What happens
with the session vars ?
create a functional index on lower case value of your column.
ORDER BY lower case value of your column.
in oracle 10g, you can issue:
ALTER SESSION SET NLS_COMP = ansi;
ALTER SESSION SET NLS_SORT = binary_ci;
do you think this is an elegant solution for case insensitive sorting &
I use stored procedures :
create function insertorupdate()
UPDATE mytable SET ... WHERE ...
IF NOT FOUND THEN
INSERT INTO mytable ...
END IF;
You lose flexibility in your query though.
I wish Postgresql had an INSERT OR UPDATE like MySQL does. So f
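The same UPDATE-then-INSERT pattern, sketched with sqlite3 for a runnable illustration (names are made up; like the stored procedure above, it has a race between the UPDATE and the INSERT under concurrency):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, val TEXT)")

def insert_or_update(db, key, val):
    # Try the UPDATE first; fall back to INSERT when no row matched.
    cur = db.execute("UPDATE mytable SET val = ? WHERE id = ?", (val, key))
    if cur.rowcount == 0:
        db.execute("INSERT INTO mytable (id, val) VALUES (?, ?)", (key, val))

insert_or_update(db, 1, 'first')   # inserts
insert_or_update(db, 1, 'second')  # updates the same row
```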
Idea :
Create a function with the same name as your function, but which takes a
timestamp as an argument, converts it to a string according to your
specifications, then calls your function which needs a string.
Postgresql will decide which function to call according to the types of
the
update tablea set a=10-$inputstring where key=1;
Add parenthesis:
update tablea set a=10-($inputstring) where key=1;
Thus you get :
update tablea set a=10-(-1) where key=1;
instead of :
update tablea set a=10--1 where key=1;
You'd have a problem because -- is the comment marker in SQL.
To: "R.Welz" <[EMAIL PROTECTED]>
Subject: Re: [GENERAL] Discussion wanted: 'Trigger on Delete' cascade.
Date: Wed, 28 Jul 2004 13:24:26 +0200
From: Pierre-Frédéric Caillaud <[EMAIL PROTECTED]>
Organization: La Boutique Numérique
From what you sa
I created a table to hold some forum messages :
table messages (
id serial primary key,
parent_folder integer not null
references folders(id), --in another table
-- for replies, this stores the message which we reply
Python has an interface like this :
params = { 'mystrfield': 'hello', 'myintfield': 5 }
cursor.execute( "SELECT myfield FROM mytable WHERE
mystrfield=%(mystrfield)s AND myintfield=%(myintfield)s;" , params )
It has the following advantages :
- separation of sql from data
- named parameters
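The same advantages carry over to other DB-API adapters; sqlite3 uses the :name paramstyle where psycopg uses %(name)s. A runnable sketch (table and values are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mytable (myfield TEXT, mystrfield TEXT, myintfield INTEGER)")
db.execute("INSERT INTO mytable VALUES ('found', 'hello', 5)")

# Named parameters keep the SQL separate from the data, exactly as above.
params = {'mystrfield': 'hello', 'myintfield': 5}
row = db.execute(
    "SELECT myfield FROM mytable WHERE mystrfield = :mystrfield "
    "AND myintfield = :myintfield", params).fetchone()
```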