[GENERAL] REVOKE CONNECT doesn't work in 8.3.5

2008-12-19 Thread Zoltan Boszormenyi
Hi,

It seems REVOKE CONNECT doesn't work as advertised.
I have trust entries in pg_hba.conf because my machine is closed.
I added some PG users, and one of them was used in:

REVOKE CONNECT ON DATABASE zozo FROM hs;

However, user hs can happily connect to database zozo
despite the REVOKE. Documentation says at
http://www.postgresql.org/docs/8.3/interactive/sql-grant.html :

CONNECT
Allows the user to connect to the specified database.
This privilege is checked at connection startup (in addition to checking
any restrictions imposed by pg_hba.conf).

To me, this means that REVOKE CONNECT is a veto over trust.
Is it not?

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proofs:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superstitious description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] REVOKE CONNECT doesn't work in 8.3.5

2008-12-19 Thread Zoltan Boszormenyi
Tom Lane wrote:
 Zoltan Boszormenyi z...@cybertec.at writes:
   
 I have trust entries in pg_hba.conf because my machine is closed.
 I added some PG users, and one of them was used in:
 

   
 REVOKE CONNECT ON DATABASE zozo FROM hs;
 

   
 However, user hs can happily connect to database zozo
 despite the REVOKE.
 

 Unless you had previously done a specific GRANT CONNECT TO hs,
 the above command doesn't do a darn thing.  The privilege that
 actually exists by default is a grant of connect to PUBLIC.
 What you need to do is REVOKE FROM PUBLIC, and then GRANT to
 whichever users/groups you want to allow to connect.

   regards, tom lane
   

Thanks very much for the clarification. The documentation
doesn't spell it out as clearly. Another possibility is that
I can't read and interpret correctly. :-)
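
For the archives, a minimal sketch of what Tom describes, using the database
from my example (app_role is only a placeholder, and this assumes hs is
neither a superuser nor the database owner):

REVOKE CONNECT ON DATABASE zozo FROM PUBLIC;
GRANT CONNECT ON DATABASE zozo TO app_role;   -- whichever roles may connect

After that, hs is refused at connection startup even with a trust entry in
pg_hba.conf.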

-- 
Bible has answers for everything. Proofs:
But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil. (Matthew 5:37) - basics of digital technology.
May your kingdom come - superstitious description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] Favorite Tom Lane quotes

2008-12-02 Thread Zoltan Boszormenyi
Klint Gore wrote:
 Scott Marlowe wrote:
 On Mon, Dec 1, 2008 at 10:42 PM, Joshua D. Drake
 [EMAIL PROTECTED] wrote:
  On Tue, 2008-12-02 at 00:37 -0500, Greg Smith wrote:
  Tom on things that might ruin his tape backups:
 
  Then of course there are the *other* risks, such as the place
 burning to
  the ground, or getting drowned by a break in the city reservoir
 that's a
  couple hundred yards up the hill...Or maybe being burgled by Oracle
  employees who are specifically after my backup tapes.
 
  What is a tape?

 Apparently something to do with backups.  But I don't know what those
 are either... :-P
   

 Never underestimate the bandwidth of a station wagon full of tapes
 hurtling down the highway. —Tanenbaum, Andrew S. (1996). Computer
 Networks. New Jersey: Prentice-Hall, 83. ISBN 0-13-349945-6.

But the latency of a truck is awful :-)

 A modern chrysler town and country(1) has a cargo capacity of 140.1
 cubic feet(2) letting it carry 17163 LTO4 (3) tapes at 800GB each. 
 That's 13730TB. Say it has to get from San Francisco to LA (about 6 hrs
 according to google maps directions(4)), that gives 2288TB/hour,  or
 5.1 terabit/second.

 klint.

 1. its more of a minivan than a station wagon these days but close
 enough.  halve the number of tapes if you're thinking of the classic
 woodie http://www.allpar.com/old/townandcountry.html
 2.
 http://www.chrysler.com/hostc/vsmc/vehicleSpecModels.do?modelYearCode=CUC200908

 3. dimensions from http://en.wikipedia.org/wiki/Linear_Tape-Open
 4.
 http://maps.google.com/maps?f=d&saddr=san+francisco&daddr=los+angeles&hl=en&geocode=&mra=ls&sll=37.0625,-95.677068&sspn=58.598104,116.542969&ie=UTF8&ll=35.939855,-120.330885&spn=7.601811,14.567871&z=7




-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] SERIAL datatype

2008-08-26 Thread Zoltan Boszormenyi
Mark Roberts wrote:
 On Mon, 2008-08-25 at 11:02 -0600, Scott Marlowe wrote:
   
 Well, of course a 64 bit int is gonna be bigger than a 32 bit, but
 with alignment issues and on 64 bit hardware, I'm guessing the
 difference isn't exactly twice as slow / twice as much storage.  And
 it's way faster than a GUID which was what I think started this
 thread.
 
 ...
 The integer version is 599752704 bytes, and the bigint version is
 673120256 bytes (a ~12% size increase).  When joining the table to
 itself (keys = 1 date, 5 (big)ints, no indexes), the bigint version
 performs a join to itself with an average of 44.1 sec, and the integer
 version in 29.6 sec (a 48% performance hit).

 While granted that it's not twice as big and twice as slow, I think it's
 a fairly valid reason to want to stay within (small)int ranges.
 Sometimes the initial performance hit on insert would really be worth
 the continuing space/performance savings down the road.
   

The development version of PostgreSQL (to-be 8.4) was modified so that on
64-bit hardware, 64-bit types (bigint, date, timestamp, etc.) can be
configured at compile time to be passed by value instead of by reference.
This removes most of the performance hit, because there is no malloc()
overhead in passing bigints back and forth. The on-disk size difference,
of course, stays the same.
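
For anyone who wants to reproduce the on-disk part locally, a rough sketch
along the lines of Mark's test (the table names and row count are made up):

CREATE TABLE t_int    (d date, a int,    b int,    c int,    e int,    f int);
CREATE TABLE t_bigint (d date, a bigint, b bigint, c bigint, e bigint, f bigint);
INSERT INTO t_int    SELECT current_date, i, i, i, i, i FROM generate_series(1, 1000000) i;
INSERT INTO t_bigint SELECT current_date, i, i, i, i, i FROM generate_series(1, 1000000) i;
SELECT pg_size_pretty(pg_relation_size('t_int'))    AS int_size,
       pg_size_pretty(pg_relation_size('t_bigint')) AS bigint_size;

Because of header overhead and alignment, the difference is typically well
below 2x, much as Mark measured.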

 Of course, this wasn't very scientific and the benchmarks aren't very
 thorough (for instance I assumed that bigserial is implemented as a
 bigint), but it should remain a valid point.

 Of course, it probably has no bearing on the OP's problem.  So my advice
 to the OP: have you considered not keying such a volatile table on a
 serial value?

 -Mark
   


-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] PostgreSQL vs FreeBSD 7.0 as regular user

2008-07-26 Thread Zoltan Boszormenyi
Zoltan Boszormenyi wrote:
 Joshua D. Drake wrote:
   
 On Fri, 2008-07-25 at 22:39 +0200, Zoltan Boszormenyi wrote:
   
 
 Hi,

 is there anyone using PostgreSQL on FreeBSD 7.0 starting from scratch?
 I compiled 8.3.3 and wanted to run initdb in my home directory but
 it fails with the error below.
 
   
   
 
 I configured the shared memory settings in advance according to
 the PostgreSQL 8.3 online docs:

 $ cat /etc/sysctl.conf
 ...
 kern.ipc.shmall=32768
 kern.ipc.shmmax=134217728
 kern.ipc.semmap=256

 These three settings were also set with sysctl -w ... to take effect
 immediately.
 
   
 Are you by chance in a jail?

 Joshua D. Drake
  
 

 I don't know. How do I determine that? Running this as my own user:
 $ sysctl -a | grep ^kern.ipc
 shows the same settings as above.

 Thanks.
   

Rebooting FreeBSD solved it. Huh? Is it really like W#@$#?
Anyway, thanks for the help.

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




[GENERAL] PostgreSQL vs FreeBSD 7.0 as regular user

2008-07-25 Thread Zoltan Boszormenyi
Hi,

is there anyone using PostgreSQL on FreeBSD 7.0 starting from scratch?
I compiled 8.3.3 and wanted to run initdb in my home directory but
it fails with the error below.


[EMAIL PROTECTED] ~]$ PGDATA=/home/zozo/pgd833
PATH=/home/zozo/pgc833/bin:$PATH initdb
The files belonging to this database system will be owned by user zozo.
This user must also own the server process.

The database cluster will be initialized with locale C.
The default database encoding has accordingly been set to SQL_ASCII.
The default text search configuration will be set to english.

creating directory /home/zozo/pgd833 ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers/max_fsm_pages ... 400kB/2
creating configuration files ... ok
creating template1 database in /home/zozo/pgd833/base/1 ... FATAL: 
could not create shared memory segment: Cannot allocate memory
DETAIL:  Failed system call was shmget(key=16, size=1785856, 03600).
HINT:  This error usually means that PostgreSQL's request for a shared
memory segment exceeded available memory or swap space. To reduce the
request size (currently 1785856 bytes), reduce PostgreSQL's
shared_buffers parameter (currently 50) and/or its max_connections
parameter (currently 13).
The PostgreSQL documentation contains more information about shared
memory configuration.
child process exited with exit code 1
initdb: removing data directory /home/zozo/pgd833


I configured the shared memory settings in advance according to
the PostgreSQL 8.3 online docs:

$ cat /etc/sysctl.conf
...
kern.ipc.shmall=32768
kern.ipc.shmmax=134217728
kern.ipc.semmap=256

These three settings were also set with sysctl -w ... to take effect
immediately.
So, there is enough shared memory, especially for the downsized configuration
determined at initdb time. Why can't I get some shared memory as a regular
user?
It's a fresh install of FreeBSD 7.0; only bison and gmake were added from the
ports repository, and the above sysctls were set in the system. No matter
whether I log in from the console or via ssh, I get the same error as above.
Is there some magic needed to enable a user to allocate shared memory?

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] PostgreSQL vs FreeBSD 7.0 as regular user

2008-07-25 Thread Zoltan Boszormenyi
Joshua D. Drake wrote:
 On Fri, 2008-07-25 at 22:39 +0200, Zoltan Boszormenyi wrote:
   
 Hi,

 is there anyone using PostgreSQL on FreeBSD 7.0 starting from scratch?
 I compiled 8.3.3 and wanted to run initdb in my home directory but
 it fails with the error below.
 

   
 I configured the shared memory settings in advance according to
 the PostgreSQL 8.3 online docs:

 $ cat /etc/sysctl.conf
 ...
 kern.ipc.shmall=32768
 kern.ipc.shmmax=134217728
 kern.ipc.semmap=256

 These three settings were also set with sysctl -w ... to take effect
 immediately.
 

 Are you by chance in a jail?

 Joshua D. Drake
   

I don't know. How do I determine that? Running this as my own user:
$ sysctl -a | grep ^kern.ipc
shows the same settings as above.

Thanks.

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] PostgreSQL vs FreeBSD 7.0 as regular user

2008-07-25 Thread Zoltan Boszormenyi
Francisco Reyes wrote:
 On 4:53 pm 07/25/08 Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
   
  I don't know. How do I determine that? Running this as my own user:
 

 Is this your own machine or at an ISP?
   

It's my own machine, FreeBSD is installed as a VMWare guest.

 If it is your own machine, then most likely you are not in a jail. You
 would know if you were since you would have had to do it.

 If at an ISP, one way to know if you are in a jail, I think, is to try to
 ping and traceroute.

 I think by default you can't do one of those within a jail.
 Also try ifconfig. A jail will show you a single IP. A real machine will
 show you usually at least two. 127.0.0.1 and some other address.
   


-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] Heavily fragmented table and index data in 8.0.3

2008-06-12 Thread Zoltan Boszormenyi
Gregory Stark wrote:
 Zoltan Boszormenyi [EMAIL PROTECTED] writes:

   
 Also, VACUUM FULL also takes too much time, on an otherwise idle database, I
 worked on a copy of their live database. During VACUUM, _bt_getbuf() was
 also called repeatedly with the block number jumping up and down.
 

 VACUUM or VACUUM FULL?

Read the first line, it was VACUUM FULL. When I mentioned it the second time
I didn't spell it out again, sorry.

  VACUUM should only read the table sequentially but
 VACUUM FULL behaves exactly as you describe which is one of the reasons it
 sucks so much.

 That said I think 8.0's VACUUM does jump around when cleaning indexes. That's
 a big reason to upgrade to a more modern version. More recent VACUUM's (but
 not VACUUM FULL) do only sequential scans of both the table and indexes.

 VACUUM FULL also tends to bloat indexes. If you've been running VACUUM FULL
 regularly on this table you may want to REINDEX this table.

   
 I know, 8.0.3 is quite old. But nothing jumped out from the changelog
 up to 8.0.15 that would explain this excessive slowness. SELECTs are
 pretty fast on any of the tables I tried, but INSERT hangs on this table.
 How does this fragmentation happen and how can we prevent this situation?
 

 I'm not sure fragmentation has a direct analogy since tuples lie entirely on
 one page. Unless perhaps you have a problem with TOAST data being laid out
 poorly. Are any individual rows in tables over 2k?
   

Let me reiterate my original question: the problem was with INSERT hanging,
while SELECT COUNT(*) on the table returned immediately, reporting the
number of rows as 0.
What I gathered from the truss -d -u 'a.out::*' -v pollsys -p <pid of backend>
output was:


The realtime trace I captured from the hung INSERT shows that it
enters two functions repeatedly: _bt_relandgetbuf() and _bt_compare().
The pattern in which these functions entered match either _bt_moveright() or
_bt_insertonpg().



So, it wasn't actually hung, it was just very slow. The page numbers jumped
wildly in the trace. It's true that a tuple lies inside one page, but tuple
chains can exceed one page, right? One dead tuple points to the next in the
chain, which can be on another page. Consecutive or monotonically increasing
page numbers aren't guaranteed: 8.0.x doesn't have FILLFACTOR, dead space can
be anywhere, and new tuples in the chain can be written on pages much earlier
in the file, causing seek-dominated load. So, can I call it fragmentation?

This is all the more embarrassing because the table and its only index take
up only 3.3MB; at the time of my testing no one else used the database, and
shared_buffers is large enough to hold both the index and the table data
entirely:

shared_buffers = 4000 # that's about 31MB on 8.0.x

So, how come INSERT gets so busy on an empty table (no live records) that is
so small that it fits into the cache, while SELECT COUNT(*) returns
immediately?

 The problems people do run into are either 

 a) lots of dead space because either vacuum (plain old vacuum, not full)
 wasn't run regularly or because large batch updates or deletes were run which
 later activity could never reuse
   

Lots of dead space. Indeed. But in a table that's only 2.7MB, with an index
of only 600K?

 b) indexes with bloat either due to the above or due to deleting many but not
 all tuples from a range and then never inserting into that range again.
 indexes can only reuse tuples if you insert in the same page again or if you
 delete all the tuples on the page.

 One trick you could use if you can stand the downtime is to periodically
 CLUSTER the table. Older versions of Postgres had a concurrency bugs in
 CLUSTER to watch out for, but as long as you don't run it at the same time as
 a very long-running transaction such as pg_dump it shouldn't be a problem.
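
For the record, the maintenance commands discussed above look roughly like
this (the relation and index names are placeholders, not the customer's real
ones):

REINDEX TABLE problem_table;            -- rebuild the bloated index(es)
-- or, if some downtime is acceptable, rewrite the heap in index order
-- (8.0-era syntax; newer releases also accept CLUSTER table USING index):
CLUSTER problem_index ON problem_table;
ANALYZE problem_table;                  -- refresh statistics afterwards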
   

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] Heavily fragmented table and index data in 8.0.3

2008-06-12 Thread Zoltan Boszormenyi
Tom Lane wrote:
 Zoltan Boszormenyi [EMAIL PROTECTED] writes:
   
 The realtime trace I captured from the hung INSERT shows that it
 enters two functions repeatedly: _bt_relandgetbuf() and _bt_compare().
 The pattern in which these functions entered match either _bt_moveright() or
 _bt_insertonpg().
 

 What that sounds like to me is a corrupt index (infinite loop of
 right-links, perhaps).  Have you tried REINDEXing the table?

   regards, tom lane
   

No, TRUNCATE was faster because there were no live records in the table.

How can such an infinite loop sneak into an index?
The hardware is from Sun, not a grocery store PC, so I don't suspect it of
being faulty.
Is there anything in the 8.0.x series that fixes this (or a similar) bug?
If I could point to something in the release notes, I might get them to
upgrade, perhaps even to a newer major version.

Thanks.

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




[GENERAL] Heavily fragmented table and index data in 8.0.3

2008-06-06 Thread Zoltan Boszormenyi
Hi,

we have a customer running PostgreSQL 8.0.3 with a quite interesting problem.
They have around 24 identical databases and all but one is working nicely.
The one that doesn't shows this problem: INSERT hangs on an apparently empty
table where SELECT count(*) returns 0 quite quickly.

The relfilenodes of the table and its only (non-unique) index are below:

 ls -l ./17230/20387 ./17230/20382
-rw---   1 postgres postgres 2727936 Jun  6 03:31 ./17230/20382
-rw---   1 postgres postgres  630784 May 24 13:18 ./17230/20387

The machine is:

 uname -a
SunOS ihds00 5.10 Generic_125100-10 sun4u sparc SUNW,Netra-T12

The realtime trace I captured from the hung INSERT shows that it enters two
functions repeatedly: _bt_relandgetbuf() and _bt_compare(). The pattern in
which these functions are entered matches either _bt_moveright() or
_bt_insertonpg(). VACUUM FULL also takes too much time on an otherwise idle
database; I worked on a copy of their live database. During VACUUM,
_bt_getbuf() was also called repeatedly, with the block number jumping up
and down. Obviously the table data is very fragmented. The total database
size is around 366MB, the only client at the time was the VACUUM, and both
the table and the index fit easily into shared_buffers at the same time.

I know, 8.0.3 is quite old. But nothing jumped out from the changelog
up to 8.0.15 that would explain this excessive slowness. SELECTs are
pretty fast on any of the tables I tried, but INSERT hangs on this table.
How does this fragmentation happen and how can we prevent this situation?

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] How to tell if 64 bit vs 32 bit engine?

2008-04-18 Thread Zoltan Boszormenyi

Keaton Adams wrote:
I did search the archives for this, but didn't see a posting that
directly answered the question.


How do I tell if the 32 bit version of PostgreSQL is running on a 64 
bit machine, or if the 64 bit version was installed? Is there a pg_ 
table that I can query or a config file I can look in to find out? I 
found nothing in postgresql.conf that would indicate this.


Thanks,

Keaton



select version();

It will tell you the compiler version and architecture as well; you can
deduce it from there.
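
A typical output looks something like this (only illustrative; your build
string will differ):

SELECT version();
 PostgreSQL 8.3.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2

An x86_64 (or amd64, ppc64, ...) target in the build triplet means a 64-bit
server binary; i686/i386 means 32-bit.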


--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/





Re: [GENERAL] count(*) and bad design was: Experiences with extensibility

2008-01-09 Thread Zoltan Boszormenyi

Ivan Sergio Borgonovo wrote:

On Wed, 9 Jan 2008 13:04:39 +0100
Harald Armin Massa [EMAIL PROTECTED] wrote:

  

Ivan,



  

Please forgive my naiveness in this field, but what does an exact count
mean, what do other DBs mean by an exact count, and how do other DBs
deal with it?
  


  

PostgreSQL will give you an exact count of the contents of the
database as it is at the moment you begin your count (i.e. when the
transaction starts).



  

BUT as the table is not locked, in parallel somebody can bulkload
MANY items into the database, so at the moment (start of your
transaction) + 1msec your count may be invalid already.



That's reasonable. What do other DBs do, and what is the SQL standard
requirement for count(*)?

  

I'd expect it to perform as well as or better than other DBs, since now
the bottleneck should be how efficiently it can filter records...
but a count(*) with a WHERE clause will still run into the same
problem of what exact means.
  

I know of 3 concepts to answer count() faster than PostgreSQL:

1) just lie, present an estimate
2) do not have MVCC
3) store record deleted info in index, so you can answer count()
with only scanning the index



Sorry if I insist, but I think this is a well-worn selling point against
postgresql.
One of the most recurring arguments about why someone else's db is
better than pg is that count is slow.

Who lies?
If it is possible to make count run faster, what are the trade-offs that
make it unreasonable to implement it in pg?
  


The decision to use MVCC in PostgreSQL makes the point moot.
What your transaction cannot see (e.g. a new record another transaction
inserted but hasn't COMMITted yet) cannot be counted as visible.
It's not only theoretical but also practical. Compare that to the
MaxDB way. At the very lowest level MaxDB keeps the records
in a balanced b-tree based on the primary key. Look at this URL:
http://blog.ulf-wendel.de/?p=76
MaxDB keeps track of how many records are in the tree at all times, which is
handy if your database is close to idle. Consequences:

- very fast COUNT(*) when idle
- COUNT(*) can include records not yet COMMITted by other transactions
 if your transaction is in read committed isolation level, although the
 actual records themselves are not visible
- COUNT(*) waits for all other transactions that modified the table
 in question to finish if your transaction is in repeatable read or
 serializable isolation level. Consider that transactions can take
 a long time to finish if they process many things. This way your
 SELECT COUNT(*) doesn't slow down your server, but it doesn't respond
 instantly either. The end user perception is the same:
 COUNT(*) is slow!

In PostgreSQL, COUNT(*) responds at close to the same speed regardless
of other transactions. Which way do you prefer?
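
A quick way to see the PostgreSQL behaviour for yourself, in two psql
sessions (t is just a hypothetical table):

-- session A                      -- session B
BEGIN;
INSERT INTO t VALUES (1);
                                  SELECT count(*) FROM t;  -- neither blocks nor
                                                           -- sees A's new row
COMMIT;
                                  SELECT count(*) FROM t;  -- now it does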


This is not a very useful question, but 3) implies that a select scans the
index, returns the rows, and only later checks whether they are still there.
Is that right?
And since indexes aren't updated on the fly, you may get back a larger
number than the real value.

Let me consider an everyday use where count() looks like the most
obvious solution: paging.

I search through a table and I need to know which is the last page.
Concurrently someone is deleting a ton of records.
No matter whether count() is fast or not, when I output the pager it will
be wrong. But I'll still need an estimate of the number of pages, and
it is not fun if that estimate is wrong *and* slow.
And once you add the WHERE clauses there is no way to cache the count.
Is there a way to count based on indexes, without taking deleted rows
into account, so as to count faster?

I can make the search faster using indexes as Dann Corbit suggested,
but as you imply, that won't make count as fast as the competition[1]
that lies, doesn't use MVCC, or stores deleted-row info in indexes.

SELECT reltuples FROM pg_class WHERE relname = table_name;

doesn't apply since you can't add conditions.

Please be patient. I hear this over and over and over: postgresql is
faulty because it can't count fast.
And the only replies I've seen are: it is bad design to use count,
you're a dumb ass. I admit I may be a dumb ass, but it is hard to
sell Postgres if I continue to be a dumb ass ;)

- What does count(*) really do?
Now I understand that count(*) returns the # of rows as it sees them
at the moment it was invoked. That should be true for other DBs as
well. That means that unless other DBs lock the table, they can't take
into account records that are going to be deleted once the count has
been issued.

- When can count be avoided?
Well, since even other DBs may report the wrong number, this makes
count() look less like a Holy Grail. But still, if you need an estimate,
wouldn't it be better to have it quickly?
How does postgresql compare to other DBs when you run:
select count(*) from table where conditions
once you use indexes?
If such a query will have to scan the results anyway to see if
they are still there, since info about deleted records isn't stored
in indexes, 

Re: [GENERAL] count(*) and bad design was: Experiences with extensibility

2008-01-09 Thread Zoltan Boszormenyi

Ivan Sergio Borgonovo wrote:

On Wed, 09 Jan 2008 20:29:39 +0100
Zoltan Boszormenyi [EMAIL PROTECTED] wrote:

  

The decision to use MVCC in PostgreSQL makes the point moot.



...

thanks.

  

In PostgreSQL, COUNT(*) responds closely at the same speed
regardless of other transactions. Which way do you prefer?



Considering the relative value of count my interest was for something
that is even less precise than the usual count but performs better.
I'm not proposing to turn Postgres into MySQL.
  


The below might be a good compromise, although every INSERT/DELETE will be
a bit slower because of the additional UPDATE on the administrative table.

create table rec_count (tablename text unique, rec_cnt bigint)
with (fillfactor = 50);


Add any tables you want to watch to it, with their current record counts,
and for each such table create:

create or replace function inc_tablename_rec_cnt()
returns trigger as $$
begin
   update rec_count set rec_cnt = rec_cnt + 1 where tablename = 'tablename';
   return new;
end; $$ language plpgsql;

create or replace function dec_tablename_rec_cnt()
returns trigger as $$
begin
   update rec_count set rec_cnt = rec_cnt - 1 where tablename = 'tablename';
   return old;
end; $$ language plpgsql;

create trigger ins_tablename_rec_cnt after insert on tablename
for each row execute procedure inc_tablename_rec_cnt();
create trigger del_tablename_rec_cnt after delete on tablename
for each row execute procedure dec_tablename_rec_cnt();
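
To seed the counter before (or right after) creating the triggers, something
like this should do (again, 'tablename' is only a placeholder):

insert into rec_count (tablename, rec_cnt)
select 'tablename', count(*) from tablename;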


The administrative table will be a heavily updated one if you watch a
high-traffic table, hence the FILLFACTOR setting.
You may need to adjust max_fsm_pages. Later, you can do a

SELECT rec_cnt FROM rec_count WHERE tablename = 'tablename';

which will be fast and, provided the counter was initialized from COUNT(*),
will stay very close to the exact figure. You can extend the example
with more columns if you know your SELECT COUNT(*) ... WHERE
conditions in advance, but that way you have to keep several administrative
tables for different monitored tables. Again, this trades some disk space
and some INSERT/DELETE speed on the monitored tables for a quicker count.

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/





Re: [GENERAL] select using an index

2007-11-27 Thread Zoltan Boszormenyi

Pau Marc Munoz Torres wrote:

Hi

 Recently I created an index on a table using a function (not a
column), as follows:


create index H2IAb on precalc (idr(p1,p4,p6,p7,p9,'H-2*IAb'));

where idr is a function that returns a real number,

as a result i got the following table

mhc2db=> \d precalc
                            Table "public.precalc"
 Column |     Type     |                      Modifiers
--------+--------------+------------------------------------------------------
 id     | integer      | not null default nextval('precalc_id_seq'::regclass)
 p1     | character(1) |
 p4     | character(1) |
 p6     | character(1) |
 p7     | character(1) |
 p9     | character(1) |
Indexes:
    "h2iab" btree (idr(p1, p4, p6, p7, p9, 'H-2*IAb'::character varying))

now, I would like to perform a query using this index, something like

Select * from precalc where h2iab > 2

but obviously h2iab is not a column...

do some of you know what I should do?

thanks


Formulate your WHERE clause so it matches
your index's expression. E.g.

SELECT * FROM precalc WHERE
idr(p1, p4, p6, p7, p9, 'H-2*IAb'::character varying) > 2;
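
If you want to confirm that the planner really picks the index up (it will
only do so when the condition is selective enough), EXPLAIN on that same
query shows it; the plan line below is only indicative:

EXPLAIN SELECT * FROM precalc WHERE
idr(p1, p4, p6, p7, p9, 'H-2*IAb'::character varying) > 2;
--  Index Scan using h2iab on precalc  (cost=... rows=... width=...)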

Best regards,
Zoltán Böszörményi


--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/





Re: [GENERAL] What's the difference between SET STORAGE MAIN and EXTENDED?

2007-09-07 Thread Zoltan Boszormenyi

Tom Lane wrote:

Zoltan Boszormenyi [EMAIL PROTECTED] writes:
  

At the end of the day, the behaviour is the same, isn't it?



No, there's a difference in terms of the priority for pushing this
column out to toast storage, versus pushing other columns of the row
out to toast.  Normally we push the widest (remaining) column out,
and repeat, until the tuple is small enough.  But MAIN columns stay
in, until there are no EXTENDED columns left.

regards, tom lane
  


Thanks very much for clarifying.

I was thinking of binary data that wouldn't fit
into the maximum inline tuple size. In that case
both MAIN and EXTENDED end up compressed
and out-of-line. I didn't consider having multiple
bytea or text columns filled with small amounts of data.
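
For reference, the per-column setting itself is just this (the table and
column names are made up):

ALTER TABLE docs ALTER COLUMN body  SET STORAGE EXTENDED; -- compress, then move out of line
ALTER TABLE docs ALTER COLUMN title SET STORAGE MAIN;     -- prefer keeping it inline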

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/





[GENERAL] What's the difference between SET STORAGE MAIN and EXTENDED?

2007-09-07 Thread Zoltan Boszormenyi

Hi,

according to 
http://www.postgresql.org/docs/8.2/interactive/storage-toast.html:


EXTENDED allows both compression and out-of-line storage.
This is the default for most TOAST-able data types.
Compression will be attempted first, then out-of-line storage if the row 
is still too big.


MAIN allows compression but not out-of-line storage.
(Actually, out-of-line storage will still be performed for such columns,
but only as a last resort when there is no other way to make the row 
small enough.)


At the end of the day, the behaviour is the same, isn't it?

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/





Re: [GENERAL] PostgreSQL and Crystal Report

2007-09-05 Thread Zoltan Boszormenyi

[EMAIL PROTECTED] wrote:


Hi All.
I need to use Crystal Reports in my C++ project to report on some
PostgreSQL tables (as PDF).
Any idea how to implement this functionality in my C++ project, or
where I can find some useful CR documentation?
All the Crystal Reports documentation I found is about the VisualBasic or
C# API.


Thanks in advance.

Luca.



I successfully used Crystal Reports 9 with PostgreSQL via psqlODBC about 
2 years ago.


--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/





Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Zoltan Boszormenyi

Phoenix Kiula wrote:

I am getting this message when I start the DB:


psql: FATAL:  could not access status of transaction 0
DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.


What is this about and how do I solve this? A df -h on my system shows this:


FilesystemTypeSize  Used Avail Use% Mounted on
...
/dev/sda2 ext39.9G  9.5G 0 100% /var
  


This is the problem. Free up some space under /var, or move either
the whole partition or PostgreSQL's data directory to a new disk.
The data directory lives under /var/lib/postgresql (mainstream) or
/var/lib/pgsql (a Red Hat speciality).

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/





Re: [GENERAL] Moved postgres, now won't start

2007-07-16 Thread Zoltan Boszormenyi

Hi,

Madison Kelly wrote:

Hi all,

  I've created a small 2-node (Debian Etch, PgSQL8.1) cluster using a 
(shared) DRBD8 partition formatted as ext3 running in 
Primary/Secondary mode.


  I shut down postgresql-8.1, moved '/etc/postgresql' and
'/etc/postgresql-common' to '/ha/etc' (where '/ha' is the DRBD
partition's mount point). Then I created symlinks to the directories
under '/ha' and then restarted PostgreSQL. Everything *seemed* okay, 
until I tried to connect to a database (ie: 'template1' as 
'postgres'). Then I get the error:


$ psql template1
psql: FATAL:  could not open file global/pg_database: No such file 
or directory


  When I tried connecting to another DB as a user with a (md5) 
password it recognizes if the password is right or not. Also, the file:


# cat /var/lib/postgresql/8.1/main/global/pg_database
postgres 10793 1663 499 499
template1 1 1663 499 499
template0 10792 1663 499 499

  Exists, and is readable as you can see.

  Any idea what's wrong? Does it not like that '/var/lib/postgres' ->
'/ha/var/lib/postgres'?


  Thanks!

Madison


Do you use SELinux?
Look for 'avc: denied' messages in the logs to see whether that's the case.

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/





[GENERAL] What's the correct way to use FunctionCallInvoke()?

2007-06-07 Thread Zoltan Boszormenyi

Hi,

I have a problem calling a C function
from another C function - DirectFunctionCall*
cannot be used since some parameters may be NULL.
This dies:

 FunctionCallInfoData  fcinfo1;

 InitFunctionCallInfoData(fcinfo1, NULL, 7, NULL, NULL);
/*  arg[] and argnull[] are filled with correct values */
 result = myfunc(&fcinfo1);

Also this also:

 FunctionCallInfoData  fcinfo1;
 FmgrInfo  flinfo1;

 MemSet(&flinfo1, 0, sizeof(flinfo1));
 flinfo1.fn_addr = myfunc;
 flinfo1.fn_nargs = 7;
 InitFunctionCallInfoData(fcinfo1, &flinfo1, 7, NULL, NULL);
/*  arg[] and argnull[] are filled with correct values */
 result = FunctionCallInvoke(&fcinfo1);

How do I do this correctly? I am currently on PostgreSQL 8.1.9.

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/




Re: NULLS and User Input WAS Re: [GENERAL] multimaster

2007-06-04 Thread Zoltan Boszormenyi

Alexander Staubo wrote:

On 6/4/07, Andrew Sullivan [EMAIL PROTECTED] wrote:

On Mon, Jun 04, 2007 at 12:37:42AM +0200, PFC wrote:
 NULL usually means unknown or not applicable

Aaaargh!  No, it doesn't.  It means NULL.  Nothing else.

If it meant unknown or not applicable or anything else, then

SELECT * FROM nulltbl a, othernulltbl b
WHERE a.nullcol = b.nullcol

would return rows where a.nullcol contained NULL and b.nullcol
contained NULL.  But it doesn't, because !(NULL = NULL).


I don't disagree with the principle, but that's a specious argument.
Who says (unknown = unknown) should equal true?


NULL means the value doesn't exist, and for your amusement,
here's an analogy for why !(NULL = NULL).
Prove the following statement: every fairy has black hair.
To prove it, suppose there exists a fairy whose hair
isn't black. But fairies don't exist. QED.
Now replace the above statement with another one,
possibly one that contradicts the statement above.
Along the same lines, any statement can be proven about
non-existing things, even contradicting ones.
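
The same thing at the SQL prompt, for anyone who wants to see it:

SELECT NULL = NULL AS equals,      -- NULL, not true
       NULL IS NULL AS is_null,    -- true
       1 = NULL AS one_eq_null;    -- NULL again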

Best regards

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] why postgresql over other RDBMS

2007-05-26 Thread Zoltan Boszormenyi

If you ask me, yes. When I had to choose between MySQL 3.x and
PostgreSQL 6.5 a long ago and I was able to exclude the DB superuser
with REVOKE CONNECT from MySQL, I said no, thanks.
I did it on purpose to prove that you can the external configuration
is better in this case.

And apart from fixing pg_hba.conf after you move the machine,
PostgreSQL is quite location agnostic network-wise.
You can modify the IP address[es] and FQDN of  the machine,
which is not easily doable if you use e.g. Informix where the hostname
is stored deep inside the DB and some subsystems break if it changes.

Harpreet Dhaliwal wrote:
is the host-based configuration methodology in postgres superior to
other RDBMS?

is this something novel that postgres has come up with?

~Harpreet

On 5/26/07, Tom Lane [EMAIL PROTECTED] wrote:


Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 A more interesting question is what sort of hardware you need
for that
 actually to be a win, though.  Loading a few tables in parallel
sounds
 like an ideal recipe for oversaturating your disk bandwidth...

 you don't actually need that much of disk bandwidth both COPY
and CREATE
 INDEX are CPU bottlenecked on modern boxes and reasonable disk
 subsystems - spreading their work over multiple cores/processes
can give
 big benefits.

Hmm ... I wonder if that's true for COPY BINARY ...

regards, tom lane

---(end of
broadcast)---
TIP 2: Don't 'kill -9' the postmaster





--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] why postgresql over other RDBMS

2007-05-26 Thread Zoltan Boszormenyi

Zoltan Boszormenyi wrote:

If you ask me, yes. When I had to choose between MySQL 3.x and
PostgreSQL 6.5 a long ago and I was able to exclude the DB superuser
with REVOKE CONNECT from MySQL, I said no, thanks.
I did it on purpose to prove that you can the external configuration
is better in this case.


I wanted to write that you can re-enable the superuser to fix problems later,
so the external configuration is better.

And sorry for the top-posting.


And apart from fixing pg_hba.conf after you move the machine,
PostgreSQL is quite location agnostic network-wise.
You can modify the IP address[es] and FQDN of  the machine,
which is not easily doable if you use e.g. Informix where the hostname
is stored deep inside the DB and some subsystems break if it changes.

Harpreet Dhaliwal wrote:
is the host-based configuration methodology in postgres superior to
other RDBMS?

is this something novel that postgres has come up with?

~Harpreet

On 5/26/07, Tom Lane [EMAIL PROTECTED] wrote:


Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 A more interesting question is what sort of hardware you need
for that
 actually to be a win, though.  Loading a few tables in parallel
sounds
 like an ideal recipe for oversaturating your disk bandwidth...

 you don't actually need that much of disk bandwidth both COPY
and CREATE
 INDEX are CPU bottlenecked on modern boxes and reasonable disk
 subsystems - spreading their work over multiple cores/processes
can give
 big benefits.

Hmm ... I wonder if that's true for COPY BINARY ...

regards, tom lane

---(end of
broadcast)---
TIP 2: Don't 'kill -9' the postmaster








--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] /var/run/postgresql ??

2007-05-08 Thread Zoltan Boszormenyi

Hi,

Manuel Preliteiro wrote:

Hello, i have installed postgresql version 8.1.4-1.1 in Fedora 6


I have the same system.


Since I'm very inexperienced with Linux I used yum for this purpose.

I have all the commands installed, like createdb (under
/usr/bin/createdb, which is not the location specified in the online
manual), I also have the psql shell/interface running and I can
manipulate the databases I create. The postgresql daemon is up and
running also.


The problem is that the location /var/run/postgresql does not exist; I
searched some forums and it's mentioned a lot, so I don't know why I
don't have it...


I'm using a postgres OCaml library which has the parameter
unix_domain_socket_dir = /var/run/postgresql  and when I try to
compile it I get the following error:


Set this in your OCaml config:

unix_domain_socket_dir = '/tmp'

since PostgreSQL on Fedora uses the default
/tmp/.s.PGSQL.5432 for its socket.
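
You can double-check where the server actually puts its socket from any
working connection; on 8.1 the setting is, if I remember correctly, called
unix_socket_directory:

SHOW unix_socket_directory;  -- empty means the compiled-in default, /tmp here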


File , line 0, characters 0-1:
Uncaught exception: Unix.Unix_error (20, connect, )
Uncaught exception: Unix.Unix_error(20, connect, )

I don't know if the problem comes from the above description (I think
so, but as I said I'm inexperienced in Linux), but does anyone know
how to fix it? Did I miss something during install? I installed the
client and the server; the libs were already installed...


Thank you
Manuel


--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/




Checking for string data that makes sense Re: [GENERAL] postgresql vs mysql

2007-02-22 Thread Zoltan Boszormenyi

Chris wrote:

CaT wrote:

On Thu, Feb 22, 2007 at 01:08:04PM +1100, Chris wrote:

In postgres, to stop an empty blank string:

create table a(a text not null check (char_length(a) > 0));


What's wrong with using

a <> ''

as the check? Or is this just a flavour thing?


Nothing, I just thought of the other way first :)

Probably better doing it as a <> '', otherwise postgres might have to
run the char_length function every time you do an insert (i.e. it might be
a very slight performance issue).




I would do a CHECK (trim(a) <> '')




Re: Checking for string data that makes sense Re: [GENERAL] postgresql vs mysql

2007-02-22 Thread Zoltan Boszormenyi

CaT wrote:

On Thu, Feb 22, 2007 at 09:13:13AM +0100, Zoltan Boszormenyi wrote:
  

Chris wrote:


CaT wrote:
  

On Thu, Feb 22, 2007 at 01:08:04PM +1100, Chris wrote:


create table a(a text not null check (char_length(a) > 0));
  

What's wrong with using

a <> ''


Nothing, I just thought of the other way first :)
  

I would do a CHECK (trim(a) <> '')



Whitespaces are values too, you know.
  


Yes, I know. But for, e.g., a real person's name, would you store
accidentally entered spaces before or after the actual name, too?
That would also ruin sorting by name. But of course, it doesn't
make sense in every case.
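
Putting the variants together, a small sketch (the table name is made up):

CREATE TABLE person (
    name text NOT NULL CHECK (trim(name) <> '')  -- rejects NULL, '' and whitespace-only values
);

char_length(name) > 0 or name <> '' alone would still accept '   '.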




[GENERAL] Internal function call from C-language function

2006-12-07 Thread Zoltan Boszormenyi

Hi,

I need to call date_part() from a C function.
How to do that?

Thanks in advance,
Zoltán Böszörményi




Re: [GENERAL] Internal function call from C-language function

2006-12-07 Thread Zoltan Boszormenyi

Hi,

Martijn van Oosterhout wrote:

On Thu, Dec 07, 2006 at 09:48:25AM +0100, Zoltan Boszormenyi wrote:
  

Hi,

I need to call date_part() from a C function.
How to do that?



Look in fmgr.h for the functions {Oid,Direct,}FunctionCall* which
provide various ways to call other functions.

There's also FunctionCallInvoke() which is more efficient if you're
going to call it lots of times.

Have a nice day,
  


thanks, I found the DirectFunctionCall family;
that wasn't the problem.

The real trick was that inside a C function
I have to use timestamp_part(), as date_part()
doesn't even exist. The header catalog/pg_proc.h
proves it: date_part() is an SQL wrapper around
the real C functions timestamp[tz]_part(), time[tz]_part()
and interval_part().

However, I have another problem. I have this in the code:

  HeapTupleHeader t;
  Datum           timest;
  bool            isnull;

   t = PG_GETARG_HEAPTUPLEHEADER(0);
   timest = DatumGetTimestamp(GetAttributeByName(t, "ts_today", &isnull));
   elog(NOTICE, "DatumGetTimestamp() OK, value is %s", isnull ? "NULL" : "NOT NULL");


   if (isnull)
       PG_RETURN_BOOL(false);

   yeardatum = CStringGetDatum("year");
   elog(NOTICE, "CStringGetDatum() 1 OK");
   returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest);
   elog(NOTICE, "date_part() 1 OK");
   year = DatumGetFloat8(returndatum);
   elog(NOTICE, "conversion 1 OK");


...

But I get this:

NOTICE:  PG_GETARG OK
NOTICE:  DatumGetTimestamp() OK, value is NOT NULL
NOTICE:  CStringGetDatum() 1 OK
ERROR:  invalid memory alloc request size 1951613700

So DirectFunctionCall2() fails. How can I fix it?

Best regards,
Zoltán




Re: [GENERAL] Internal function call from C-language function

2006-12-07 Thread Zoltan Boszormenyi

Martijn van Oosterhout wrote:

On Thu, Dec 07, 2006 at 12:55:47PM +0100, Zoltan Boszormenyi wrote:
  

However, I have another problem. I have this in the code:



snip

  
   yeardatum = CStringGetDatum(year);  
   elog(NOTICE, CStringGetDatum() 1 OK);
   returndatum = DirectFunctionCall2(timestamp_part, yeardatum, 
timest);



You're passing a cstring as first argument, whereas I'm fairly sure you
should be passing text. When calling from C there's no argument
checking. I think what you're looking for is:

  yeardatum = text_in(year);

Or something like that.

Hope this helps,
  


text_in() doesn't exist, it's textin(), but I have to call it through
DirectFunctionCall1(), like this:

yeardatum = DirectFunctionCall1(textin, CStringGetDatum("year"));

However, the session crashes on the subsequent

returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest);

Best regards,

Zoltán




Re: [GENERAL] Internal function call from C-language function

2006-12-07 Thread Zoltan Boszormenyi

Hi,

Martijn van Oosterhout wrote:

On Thu, Dec 07, 2006 at 01:40:22PM +0100, Zoltan Boszormenyi wrote:
  

text_in() doesn't exists, it's textin() but I have to call it through
DirectFunctionCall1(), like this:

yeardatum = DirectFunctionCall1(textin, CStringGetDatum(year));

However, the session crashes on the subsequent

returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest);



It would be a good idea to actually find out where it crashes, that
would help you work out what the actual problem is. Just looking at the
code you posted, I only see this other bit that looks a bit suspect:

  Datum   timest;
  boolisnull;

   t = PG_GETARG_HEAPTUPLEHEADER(0);
   timest = DatumGetTimestamp(GetAttributeByName(t, ts_today, isnull));

You're calling DatumGetTimestamp() which would return a timestamp
(probably some structure) but you're storing it in a Datum. Just take
the result of GetAttributeByName directly.
  


Thanks, that worked for me.

I have just one more question:
How can I get an Oid out of a Datum, i.e.
how do I know what type I get in a given Datum?
DatumGetObjectId() seems to give me an Oid that
was specifically stored as a Datum.

The function I am working on is made for an
INSERT RULE, something like this:

CREATE OR REPLACE FUNCTION
myfunc( row1 table1 ) RETURNS BOOL VOLATILE
LANGUAGE C AS 'myfunc.so', 'myfunc';

CREATE RULE rule_table1_insert
AS ON INSERT TO table1
DO INSTEAD (SELECT myfunc( new ) );

So I get the to-be-inserted row in my function.
In the function, depending on the passed-in values,
I need to insert into some other table. To do that,
I need to use SPI_prepare(), which needs the list of Oids.


Get at least a backtrace next time it crashes...
  


And how exactly can I do that? psql only reports that
the backend crashed and that it was unable to reset the connection.
At that time the backend session is already gone, isn't it?


Have a nice day,
  


Thanks, to you, too. You helped a lot.

Best regards,
Zoltán




Re: [GENERAL] Internal function call from C-language function

2006-12-07 Thread Zoltan Boszormenyi

Hi,

Zoltan Boszormenyi wrote:

Hi,

Martijn van Oosterhout wrote:

On Thu, Dec 07, 2006 at 01:40:22PM +0100, Zoltan Boszormenyi wrote:
 

text_in() doesn't exists, it's textin() but I have to call it through
DirectFunctionCall1(), like this:

yeardatum = DirectFunctionCall1(textin, CStringGetDatum(year));

However, the session crashes on the subsequent

returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest);



It would be a good idea to actually find out where it crashes, that
would help you work out what the actual problem is. Just looking at the
code you posted, I only see this other bit that looks a bit suspect:

  Datum   timest;
  boolisnull;

   t = PG_GETARG_HEAPTUPLEHEADER(0);
   timest = DatumGetTimestamp(GetAttributeByName(t, ts_today, 
isnull));


You're calling DatumGetTimestamp() which would return a timestamp
(probably some structure) but you're storing it in a Datum. Just take
the result of GetAttributeByName directly.
  


Thanks, that worked for me.

I have just one more question:
How can I get an Oid out of a Datum, i.e.
how do I know what type I get in a given Datum?
DatumGetObjectId() seems to give me an Oid that
was specifically stored as a Datum.


I have found an alternative solution.
If t is a HeapTupleHeader, then:

   Oid tupType;
   int32   tupTypmod;
   TupleDesc   tupDesc;

   tupType = HeapTupleHeaderGetTypeId(t);
   tupTypmod = HeapTupleHeaderGetTypMod(t);
   tupDesc = lookup_rowtype_tupdesc(tupType, tupTypmod);

will give me the needed TupleDesc and I can use SPI_gettypeid().

Thanks and best regards,
Zoltán




Re: [GENERAL] Vote for your favorite database

2006-12-07 Thread Zoltan Boszormenyi

Scott Marlowe wrote:

ouch!  I stand corrected.  Of course, I didn't register to vote either.

Are there more firebird users in hungary (.hr right?) than postgresql
maybe?  I wonder.
  


Then stand a bit longer. :-)
Hungary is .hu, .hr is for Hrvatska which is Croatia for English speakers.

Best regards,
Zoltán Böszörményi

