Re: [GENERAL] PG wire protocol question

2016-05-18 Thread Boszormenyi Zoltan

On 2016-05-17 15:29, Albe Laurenz wrote:

Boszormenyi Zoltan wrote:

it has been a long time since I last read this list or wrote to it.

Now, I have a question. This blog post was written about 3 years ago:
https://aphyr.com/posts/282-jepsen-postgres

Basically, it treats the client AND the server as one system:
if the network is cut between sending COMMIT and
receiving the answer to it, the client has no way to know
whether the transaction was actually committed.

The client connection may simply time out, and a reconnect would
give it a new connection, but it cannot pick up its old connection
where it left off. So it cannot really know whether the old transaction
was committed or not, short of running expensive queries first.

Has anything changed on that front?

That blog post seems ill-informed - that has nothing to do with
two-phase commit.


In the blog post, 2PC was mentioned in relation to the communication,
not as transaction control inside the database. I wouldn't call
it ill-informed. After all, terminology can mean different things
in different contexts.


The problem - that the server may commit a transaction, but the client
never receives the server's response - is independent of whether
two-phase commit is used or not.

This is not a problem of PostgreSQL, it is a generic problem of communication.


Indeed.


What would be the alternative?
That the server has to wait for the client to receive the commit response?


Not quite. That would mean constantly sending an ack that the other
side received the last ack, which would be silly.

If the network connection is cut, the client should be able to
reconnect to its old backend, query the last state and continue
where it left off, perhaps confirming via some key or UUID that it is
indeed the client that connected previously.
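
Purely as an illustration, here is a sketch in SQL of what such a client-side
recovery path could look like. It assumes a server-side transaction status
function; PostgreSQL later gained txid_status() in version 10, i.e. after this
thread. The table name and the literal transaction id are made up:

-- Before committing, remember the transaction id on the client side.
BEGIN;
SELECT txid_current();       -- e.g. returns 12345, store it on the client
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;                      -- the connection may die right here

-- Later, on a brand new connection, ask what happened to that transaction:
SELECT txid_status(12345);   -- 'committed', 'aborted' or 'in progress'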


But what if the client received the message and the server or the network
goes down before the server learns of that fact?
You can see that this would lead to an infinite regress.

Yours,
Laurenz Albe





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PG wire protocol question

2016-05-14 Thread Boszormenyi Zoltan

Hi,

it has been a long time since I last read this list or wrote to it.

Now, I have a question. This blog post was written about 3 years ago:
https://aphyr.com/posts/282-jepsen-postgres

Basically, it treats the client AND the server as one system:
if the network is cut between sending COMMIT and
receiving the answer to it, the client has no way to know
whether the transaction was actually committed.

The client connection may simply time out, and a reconnect would
give it a new connection, but it cannot pick up its old connection
where it left off. So it cannot really know whether the old transaction
was committed or not, short of running expensive queries first.

Has anything changed on that front?

There is a debate about version 10.0 on -hackers. If the problem posed by
the above article is not fixed yet and needs a new wire protocol
to get fixed, 10.0 would be justified.

Thanks in advance,
Zoltán Böszörményi



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Heartbleed Impact

2014-04-16 Thread Boszormenyi Zoltan

On 2014-04-16 12:40, Tony Theodore wrote:

On 16 April 2014 18:48, Dev Kumkar  wrote:


We embed certain binaries and libssl.so.1.0.0 gets shipped along with
pre-build in-house database with product.

1.0.0 isn't affected.


The package version and the soversion are only loosely related.
E.g. the upstream OpenSSL 1.0.0 and 1.0.1 series both ship soversion 1.0.0.

Best regards,
Zoltán Böszörményi



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] using rpmbuild with PostgreSQL 9.2.6 source code

2014-01-20 Thread Boszormenyi Zoltan

On 2014-01-20 10:50, Sameer Kumar wrote:

Hi,

I still get issues with uuid-devel.

[root@pflex75 SOURCES]# rpmbuild -ba postgresql-9.2.spec
error: Failed build dependencies:
uuid-devel is needed by postgresql92-9.2.6-2PGDG.el6.ppc64


On googling a bit I found that uuid-devel is libuuid-devel on RHEL (is that the case?).
I have the packages below:


Both exist. libuuid[-devel] is part of util-linux, uuid[-devel] is a standalone library 
with a different API.

This is from Fedora 20:

$ rpm -q libuuid-devel uuid-devel
libuuid-devel-2.24-2.fc20.x86_64
uuid-devel-1.6.2-21.fc20.x86_64

You need to install uuid-devel; that is the one needed for building PostgreSQL (for the OSSP-UUID support).

http://mirrors.kernel.org/centos/6.5/os/x86_64/Packages/ carries both
uuid-1.6.1-10.el6.x86_64.rpm
and
uuid-devel-1.6.1-10.el6.x86_64.rpm

You just need to find a mirror that has ppc64 packages.


[root@pflex75 SOURCES]# yum install *uuid*
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use 
subscription-manager to register.

Setting up Install Process
Package uuidd-2.17.2-12.9.el6.ppc64 already installed and latest version
Package libuuid-2.17.2-12.9.el6.ppc64 already installed and latest version
Package libuuid-devel-2.17.2-12.9.el6.ppc64 already installed and latest version
Package uuid-1.6.1-10.el6.ppc64 already installed and latest version
Nothing to do

So I changed the requires parameter from uuid-devel to libuuid-devel (which I have
now reverted) and tried rpmbuild. I got the error below:


checking for uuid_export in -lossp-uuid... no
checking for uuid_export in -luuid... no
configure: error: library 'ossp-uuid' or 'uuid' is required for OSSP-UUID
error: Bad exit status from /var/tmp/rpm-tmp.O9zMnW (%build)


RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.O9zMnW (%build)


Apologies for earlier cross posting to Hackers thread.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is this a bug in ECPG?

2013-09-07 Thread Boszormenyi Zoltan

On 2013-09-06 09:57, Boszormenyi Zoltan wrote:

On 2013-09-06 02:57, Wang, Jing wrote:


Hi,

ECPG doesn't support the following scenario:

char *cur_name = "cur1";

EXEC SQL DECLARE :cur_name CURSOR FOR ...

EXEC SQL OPEN :cur_name;

EXEC SQL FETCH cur1 INTO...

...

EXEC SQL CLOSE :cur_name;

A compile error will occur for the above code because ECPG can't find "cur1".



Of course it cannot find it; ECPG doesn't interpret the content of the variable,
so it cannot know that it is "cur1".


But it does support the following scenario:

char *cur_name = "cur1";

EXEC SQL DECLARE :cur_name CURSOR FOR ...

EXEC SQL OPEN :cur_name;

EXEC SQL FETCH :cur_name INTO...

...

EXEC SQL CLOSE cur1;

Is the second scenario a bug in ECPG?



Actually, it looks like it is a bug.


BTW, my "ECPG cursor readahead" patch unintentionally fixed this.
https://commitfest.postgresql.org/action/patch_view?id=1195




Will anyone use the second scenario to close a CURSOR that is named by a
variable?



Not very likely.


Kind regards,

Jing




--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web:http://www.postgresql-support.de
  http://www.postgresql.at/



--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [GENERAL] Is this a bug in ECPG?

2013-09-06 Thread Boszormenyi Zoltan

On 2013-09-06 02:57, Wang, Jing wrote:


Hi,

ECPG doesn't support the following scenario:

char *cur_name = "cur1";

EXEC SQL DECLARE :cur_name CURSOR FOR ...

EXEC SQL OPEN :cur_name;

EXEC SQL FETCH cur1 INTO...

...

EXEC SQL CLOSE :cur_name;

A compile error will occur for the above code because ECPG can't find "cur1".



Of course it cannot find it; ECPG doesn't interpret the content of the variable,
so it cannot know that it is "cur1".


But it does support the following scenario:

char *cur_name = "cur1";

EXEC SQL DECLARE :cur_name CURSOR FOR ...

EXEC SQL OPEN :cur_name;

EXEC SQL FETCH :cur_name INTO...

...

EXEC SQL CLOSE cur1;

Is the second scenario a bug in ECPG?



Actually, it looks like it is a bug.


Will anyone use the second scenario to close a CURSOR that is named by a
variable?



Not very likely.


Kind regards,

Jing




--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [GENERAL] Add a NOT NULL column with default only during add

2013-08-02 Thread Boszormenyi Zoltan

On 2013-08-02 16:58, Tom Lane wrote:

Adrian Klaver  writes:

No, I am saying that in the ALTER data_type case the column is not being
created and USING is working on data (assuming data had actually been
entered already) that already exists. What you propose is a two-step process:
create a column and then fill it with a default value that goes away
after the ALTER TABLE ADD COLUMN statement. In fact, that is what you are
already doing.

I do see a use-case that's not covered by ADD COLUMN ... DEFAULT
but could be covered with USING: when you want to initialize the new
column with data taken from some other existing column(s).

Whether this comes up often enough to justify a new feature isn't
clear.  You could get the same effect, for pretty much the same cost,
with
1. ADD COLUMN new_col, not specifying any default;
2. UPDATE ... SET new_col = some expression of other columns;
3. ALTER COLUMN new_col SET DEFAULT, if needed.

If you need to make the column NOT NULL, that could be done after step 3,
but then you'd incur another table scan to verify this constraint.
So a USING clause could save you that extra scan.
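
For illustration, a minimal sketch of that three-step approach, with made-up
table and column names:

-- Assumed schema: t(a int, b int); the new column c is derived from a and b.
ALTER TABLE t ADD COLUMN c int;               -- 1. no default specified
UPDATE t SET c = a + b;                       -- 2. fill it from the other columns
ALTER TABLE t ALTER COLUMN c SET DEFAULT 0;   -- 3. set a default, if needed
-- Making it NOT NULL afterwards costs another table scan:
ALTER TABLE t ALTER COLUMN c SET NOT NULL;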

But if you add another quantum of complication, namely that the new
column's data has to come from some other table, USING would fail at that;
you're back to having to do it with UPDATE.  So it seems like there's
only a pretty narrow window of applicability for this proposed feature.
I'm having a hard time getting excited about it.


If this feature also allows constants and non-volatile functions,
the window isn't so narrow anymore.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Counterintuitive locking behavior

2013-01-05 Thread Boszormenyi Zoltan

On 2013-01-06 03:18, Chris Travers wrote:

Hi everyone;

I recently discovered that subselects in update statements don't assume that the select 
is for update of the updating table.


For example, if I do this:

CREATE TABLE foo (
   test int primary key
);

INSERT INTO foo VALUES (1);

then in one session:

BEGIN;
UPDATE foo SET test = 2 WHERE test in (select test from foo where test = 1);

and then in the other session

BEGIN;
UPDATE foo SET test = 3 WHERE test in (select test from foo where test = 1);

When I commit both transactions, the second one chronologically always takes precedence.
In other words, the locks take effect after the subselect but before the rows are
updated.  This strikes me as quite error prone, and quite a bit more error prone than a
rule which says that, unless stated otherwise, subselects of the updated table are to be
selected for update.


This may strike some as a "do what I mean" kind of feature, but the way I am looking at 
it is that a SQL statement is usually written as a declarative block, and an assumption 
that the SQL statement is to be evaluated atomically is a good one for predicability of 
software (in other words, locks apply to the whole statement).


Is there a reason why we don't do locking this way?  (i.e. where on UPDATE foo, all rows 
selected from foo during the update are locked unless the subselect specifically states 
otherwise.)


What you are seeing is the MVCC behaviour of PostgreSQL
in READ COMMITTED mode. You can use REPEATABLE READ mode
in 9.2.x or SERIALIZABLE mode in earlier generations.
Please read the "Concurrency control" section of the documentation.
http://www.postgresql.org/docs/9.2/interactive/mvcc.html

With this mode:

session 1:

zozo=> create table foo (test int primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey" for table "foo"
CREATE TABLE
zozo=> insert into foo values (1);
INSERT 0 1
zozo=> begin isolation level repeatable read;
BEGIN
zozo=> update foo set test = 2 where test = (select test from foo where test = 1);
UPDATE 1

session 2:
zozo=> begin isolation level repeatable read;
BEGIN
zozo=> update foo set test = 2 where test = (select test from foo where test = 1);
(session 2 is waiting for the lock on the row at this point)

session 1:

zozo=> commit;
COMMIT

session 2 threw an error after session 1 committed:

ERROR:  could not serialize access due to concurrent update

But updating a row blindly is quite silly even in this small example;
you can use SELECT ... FOR UPDATE for explicit locking.

session 1:

zozo=> begin;
BEGIN
zozo=> select * from foo where test = 1 for update;
 test
------
    1
(1 row)

session 2:

zozo=> begin;
BEGIN
zozo=> select * from foo where test = 1 for update;
session 2 waits...

session 1:

zozo=> update foo set test = 2 where test = 1;
UPDATE 1
zozo=> commit;
COMMIT

session 2 now gives the result:

zozo=> select * from foo where test = 1 for update;
 test
------
(0 rows)

and your application can warn the user that the row is not there anymore,
so there is no point in the subsequent UPDATE. You should run a new query to find
the row you wanted.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Where is 'createdb'?

2012-11-01 Thread Boszormenyi Zoltan

On 2012-11-01 16:32, Kevin Burton wrote:


This is probably a question for the authors of a book I have been reading but it may be 
faster to get an answer here.


I was trying to follow along in a book, 'Seven Databases in Seven Weeks', and chapter 2
deals with PostgreSQL. One of the first things it does is issue the command 'createdb
book'. The text before this command says, "Once you have Postgres installed, create a
schema called book using the following command: $ createdb book". But when I tried to
issue this command (at a Linux command prompt) I got 'createdb: command not found'. Are
the authors out of date? What is the current command?


Kevin



The instructions start with "Once you have Postgres installed...".
Since the command is not available, you don't have Postgres installed.

$ which createdb
/usr/bin/createdb
$ rpm -q --whatprovides /usr/bin/createdb
postgresql-9.1.6-1.fc17.x86_64

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [GENERAL] Curosity question regarding "LOCK" NOWAIT

2012-09-22 Thread Boszormenyi Zoltan

On 2012-09-22 06:08, John R Pierce wrote:

On 09/21/12 7:43 PM, David Johnston wrote:
Has there been any discussion regarding adding a time-limited version of NOWAIT, say:
“WAITONLY 50” (milliseconds), when dealing with the explicit LOCK TABLE or the
SELECT…FOR(SHARE|UPDATE) commands?


is this a feature in any other major databases?


Yes, at least Informix has it in the form of SET LOCK MODE TO { NOT WAIT | WAIT 
[N] }.


is this in the sql spec?


No.




what do you expect to happen if these timeouts expire? return an error, and abort the 
transaction?






--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Curosity question regarding "LOCK" NOWAIT

2012-09-22 Thread Boszormenyi Zoltan

On 2012-09-22 07:55, Josh Kupershmidt wrote:

On Fri, Sep 21, 2012 at 7:43 PM, David Johnston  wrote:

Has there been any discussion regarding adding a time-limited version of
NOWAIT, say: “WAITONLY 50” (milliseconds), when dealing with the explicit LOCK
TABLE or the SELECT…FOR(SHARE|UPDATE) commands?

I think you could do this by issuing

SET statement_timeout = 50;

and then attempting the LOCK TABLE or SELECT ... FOR UPDATE command,
without NOWAIT. You'll either get an "ERROR:  canceling statement due
to statement timeout" or the command should succeed.


This is not the same as a timeout while waiting for a lock.
statement_timeout may also trigger while returning large
result sets that involve no locking at all, i.e. in the plain SELECT case.
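
For illustration, a small sketch of that difference: with statement_timeout,
even a long-running query that waits for no lock at all gets cancelled:

SET statement_timeout = 50;     -- 50 ms

SELECT pg_sleep(1);             -- takes no lock, still cancelled
-- ERROR:  canceling statement due to statement timeout

RESET statement_timeout;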

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Curosity question regarding "LOCK" NOWAIT

2012-09-22 Thread Boszormenyi Zoltan

Hi,

On 2012-09-22 04:43, David Johnston wrote:


Has there been any discussion regarding adding a time-limited version of NOWAIT, say:
"WAITONLY 50" (milliseconds), when dealing with the explicit LOCK TABLE or the
SELECT...FOR(SHARE|UPDATE) commands?


David J.



We have a proposed patch for 9.3 at
http://archives.postgresql.org/message-id/50339b3d.8040...@cybertec.at
that adds "SET lock_timeout = N;" for a time-limited wait on locks.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [GENERAL] Lock out PostgreSQL users for maintenance

2012-05-12 Thread Boszormenyi Zoltan

On 2012-05-12 06:29, Alexander Farber wrote:

Or should I edit pg_hba.conf and restart the process?


host    all    postgres    127.0.0.1/32    md5
host    all    all         0.0.0.0/0       reject

Only the postgres user is allowed to connect. You don't need
to restart the server; one of these will do:

killall -HUP postmaster
or
SELECT pg_reload_conf();

To kick out the current users except yourself, do:

SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity WHERE procpid <> pg_backend_pid();



Thank you
Alex




--
--
Zoltán Böszörményi
Cybertec Schönig&  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 8.x on Linux : how to use HugePages

2012-05-02 Thread Boszormenyi Zoltan

Hi,

On 2012-05-02 11:28, STERBECQ Didier wrote:


We are thinking of using Linux HugePages to isolate the shared memory used by Postgres,
hoping to get more performance.


I see some source code samples for doing that, but I do not see any information
about compatibility (Postgres 7.x, 8.x, 9.x)
or about recommendations.

Thanks in advance.
Didier Sterbecq.
Mail : didier.sterb...@ratp.fr



If you are using a kernel older than 2.6.38, you can use this with LD_PRELOAD:
http://oss.linbit.com/hugetlb/

With 2.6.38 and up, huge pages are used automatically for large allocations
(transparent hugepages).

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig&  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [GENERAL] IP range in pg_hba.conf?

2011-11-08 Thread Boszormenyi Zoltan
On 2011-11-08 12:33, Chrishelring wrote:
> Hi,
>
> probably a simple question (with a simple answer). Nevertheless I've been
> struggling with it for some time now. Hope you guys can point me in the
> right direction!
>
> I want to restrict access to our postgresql db using a configuration in the
> pg_hba.conf file. I have a range of IP addresses that should have access, but
> how do I do that?
>
> The range is 10.17.64.1 - 10.17.79.254 (e.g. 255.255.240.0 as the subnet mask).

host    all    all    10.17.64.0/20    md5

>
> Best regards,
>
> Christian
>
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/IP-range-in-pg-hba-conf-tp4973998p4973998.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>


-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] auto-increment column

2011-10-04 Thread Boszormenyi Zoltan
Hi,

On 2011-10-04 14:05, Robert Buckley wrote:
> Thanks for the replies,
>
> I have one question regarding this comment...
>
> "You also need to add a DEFAULT expression and optionally
> make the sequence owned by the column:"
>
> What difference does it make if a table "owns" a sequence or not? ...does this
> constrain the use of the sequence to ONLY that one table?
>
> The sequence will only be used to auto-increment the id column in order to 
> have an
> automatic primary key. Could I then somehow use the sequence for all tables 
> which need this?
>
> yours,
>
> Rob

Please don't top-post. It messes up the order of the messages.

A sequence that is OWNED BY a column can of course be used
by other columns in other tables. You can specify nextval() of the same
sequence as the default value for any number of columns.
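
A short sketch with made-up table names:

CREATE SEQUENCE common_id_seq;

CREATE TABLE orders   (id bigint NOT NULL DEFAULT nextval('common_id_seq'), item text);
CREATE TABLE invoices (id bigint NOT NULL DEFAULT nextval('common_id_seq'), total numeric);

-- Optionally tie the sequence's lifetime to one of the columns:
ALTER SEQUENCE common_id_seq OWNED BY orders.id;

Both tables keep drawing values from the same sequence.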

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [GENERAL] auto-increment column

2011-10-04 Thread Boszormenyi Zoltan
On 2011-10-04 13:30, Robert Buckley wrote:
> Hi,
>
> I have a column in a table called hist_id with the datatype "integer". When I
> created the table I assigned this column the primary key constraint but didn't
> make it an auto-increment column.
>
> How can I do this to the already existing column?
>
> I have created the sequence with the following command but don't know how to
> change the existing column to auto-increment.
>
>
> $ create sequence hist_id_seq;

You also need to add a DEFAULT expression and optionally
make the sequence owned by the column:

ALTER TABLE tablename ALTER COLUMN hist_id SET DEFAULT nextval('hist_id_seq');
ALTER SEQUENCE hist_id_seq OWNED BY tablename.hist_id;

The "ALTER SEQUENCE ... OWNED BY ..." will make the sequence
automatically dropped if this column or table is dropped.

>
> thanks for any help,
>
> Rob


-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [GENERAL] Inconsistency: varchar is equivalent to varchar(255) and also not equivalent?

2011-10-03 Thread Boszormenyi Zoltan
On 2011-10-03 19:31, Boszormenyi Zoltan wrote:
> On 2011-10-03 18:12, Boszormenyi Zoltan wrote:
>> Hi,
>>
>> here is the testcase:
>>
>> create type mytype as (id integer, t varchar(255));
>> create table mytest (id serial, t1 varchar(255), t2 varchar(255));
>> create or replace function myfunc () returns setof mytype as $$
>> begin
>>   return query select id, (t1 || t2)::varchar from mytest;
>> end;$$ language plpgsql;
>>
>> Now the problem is:
>>
>> select * from myfunc();
>> ERROR:  structure of query does not match function result type
>> DETAIL:  Returned type text does not match expected type character 
>> varying(255) in column 2.
>> CONTEXT:  PL/pgSQL function "myfunc" line 2 at RETURN QUERY
>>
>> But the types are said to be the same:
>>
>> create cast (varchar as varchar(255)) without function;
>> ERROR:  source data type and target data type are the same
>>
>> create cast (varchar as varchar(255)) with inout;
>> ERROR:  source data type and target data type are the same
>>
>> This cast already exists:
>> create cast (varchar as varchar(255)) with function 
>> pg_catalog.varchar(varchar, integer,
>> boolean);
>> ERROR:  cast from type character varying to type character varying already 
>> exists
>>
>> I know, explicit cast to ::varchar(255) in the function solves this problem.
>> But I would like to know why isn't the type conversion from unlimited varchar
>> to varchar(255) invoked in the pl/pgsql function?
> Two additions:
>
> create function myfunc1() returns setof varchar(255) as $$
> begin
>return query select (t1 || t2)::varchar from mytest;
> end;$$ language plpgsql;
>
> select * from myfunc1();
>  myfunc1
> ---------
> (0 rows)
>
> create or replace function myfunc2(out id integer, out t varchar(255)) 
> returns setof
> record as $$
> begin
>   return query select mytest.id, (t1 || t2)::varchar from mytest;
> end;$$ language plpgsql;
>
> select * from myfunc2();
>  id | t
> ----+---
> (0 rows)
>
> Only the conversion from anonymous record to composite type
> causes a problem, individual output parameters or single-value return
> values get the implicit cast.

They actually don't. Let's add a row ensuring t1||t2 is longer than 255:

=# insert into mytest (t1, t2) values (repeat('a', 250), repeat('b', 250));
INSERT 0 1
=# select length(t1), length(t2) from mytest;
 length | length
--------+--------
    250 |    250
(1 row)

=# select length(myfunc1) from myfunc1();
 length
--------
    500
(1 row)

=# select length(t) from myfunc2();
 length
--------
    500
(1 row)

So, although the functions look like they accept and would
perform the implicit type conversion, they actually do not. But:

=# select 'aaaaaa'::varchar(3);
 varchar
---------
 aaa
(1 row)

I would expect either the accepted type conversion implicitly
truncates or gives me a runtime error just like this below:

zozo=# insert into mytest (t1, t2) values (now()::text, '');
INSERT 0 1
zozo=# select t1::timestamp from mytest where id = 2;
             t1
----------------------------
 2011-10-03 21:23:52.423667
(1 row)

zozo=# select t1::timestamp from mytest;
ERROR:  invalid input syntax for type timestamp:
"aa"

I forgot to report the version:

=# select version();
                                            version
------------------------------------------------------------------------------------------
 PostgreSQL 9.0.4 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.6.0 20110530 (Red Hat 4.6.0-9), 64-bit
(1 row)

This is on Fedora 15. I just checked, it's the same on 9.1.1 compiled fresh.

Of course, the explicit type conversion truncates correctly.

=# select id, length((t1 || t2)::varchar(255)) from mytest;
 id | length
----+--------
  1 |    255
  2 |     29
(2 rows)

Now I start to think that pl/pgsql simply lacks some type checks and
should be stricter.

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inconsistency: varchar is equivalent to varchar(255) and also not equivalent?

2011-10-03 Thread Boszormenyi Zoltan
On 2011-10-03 19:17, Scott Ribe wrote:
> On Oct 3, 2011, at 10:12 AM, Boszormenyi Zoltan wrote:
>
>> But I would like to know why isn't the type conversion from unlimited varchar
>> to varchar(255) invoked in the pl/pgsql function?
> What if t1 || t2 is longer than 255? You need to explicitly specify.

Yes, but then explicit casting would be needed everywhere,
and that's not the case. My question is more like: why doesn't the
record -> record type conversion try to match the
individual elements of the two record types?

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inconsistency: varchar is equivalent to varchar(255) and also not equivalent?

2011-10-03 Thread Boszormenyi Zoltan
On 2011-10-03 18:12, Boszormenyi Zoltan wrote:
> Hi,
>
> here is the testcase:
>
> create type mytype as (id integer, t varchar(255));
> create table mytest (id serial, t1 varchar(255), t2 varchar(255));
> create or replace function myfunc () returns setof mytype as $$
> begin
>   return query select id, (t1 || t2)::varchar from mytest;
> end;$$ language plpgsql;
>
> Now the problem is:
>
> select * from myfunc();
> ERROR:  structure of query does not match function result type
> DETAIL:  Returned type text does not match expected type character 
> varying(255) in column 2.
> CONTEXT:  PL/pgSQL function "myfunc" line 2 at RETURN QUERY
>
> But the types are said to be the same:
>
> create cast (varchar as varchar(255)) without function;
> ERROR:  source data type and target data type are the same
>
> create cast (varchar as varchar(255)) with inout;
> ERROR:  source data type and target data type are the same
>
> This cast already exists:
> create cast (varchar as varchar(255)) with function 
> pg_catalog.varchar(varchar, integer,
> boolean);
> ERROR:  cast from type character varying to type character varying already 
> exists
>
> I know, explicit cast to ::varchar(255) in the function solves this problem.
> But I would like to know why isn't the type conversion from unlimited varchar
> to varchar(255) invoked in the pl/pgsql function?

Two additions:

create function myfunc1() returns setof varchar(255) as $$
begin
   return query select (t1 || t2)::varchar from mytest;
end;$$ language plpgsql;

select * from myfunc1();
 myfunc1
---------
(0 rows)

create or replace function myfunc2(out id integer, out t varchar(255)) returns 
setof
record as $$
begin
  return query select mytest.id, (t1 || t2)::varchar from mytest;
end;$$ language plpgsql;

select * from myfunc2();
 id | t
----+---
(0 rows)

Only the conversion from anonymous record to composite type
causes a problem, individual output parameters or single-value return
values get the implicit cast.


>
> Thanks in advance,
> Zoltán Böszörményi
>


-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Inconsistency: varchar is equivalent to varchar(255) and also not equivalent?

2011-10-03 Thread Boszormenyi Zoltan
Hi,

here is the testcase:

create type mytype as (id integer, t varchar(255));
create table mytest (id serial, t1 varchar(255), t2 varchar(255));
create or replace function myfunc () returns setof mytype as $$
begin
  return query select id, (t1 || t2)::varchar from mytest;
end;$$ language plpgsql;

Now the problem is:

select * from myfunc();
ERROR:  structure of query does not match function result type
DETAIL:  Returned type text does not match expected type character varying(255) 
in column 2.
CONTEXT:  PL/pgSQL function "myfunc" line 2 at RETURN QUERY

But the types are said to be the same:

create cast (varchar as varchar(255)) without function;
ERROR:  source data type and target data type are the same

create cast (varchar as varchar(255)) with inout;
ERROR:  source data type and target data type are the same

This cast already exists:
create cast (varchar as varchar(255)) with function pg_catalog.varchar(varchar, 
integer,
boolean);
ERROR:  cast from type character varying to type character varying already 
exists

I know, explicit cast to ::varchar(255) in the function solves this problem.
But I would like to know why isn't the type conversion from unlimited varchar
to varchar(255) invoked in the pl/pgsql function?

Thanks in advance,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] heavy swapping, not sure why

2011-08-30 Thread Boszormenyi Zoltan
Hi,

On 2011-08-29 22:36, Lonni J Friedman wrote:
> ...  I read that
> (max_connections * work_mem) should never exceed physical RAM, and if
> that's accurate, then I suspect that's the root of my problem on
> systemA (below).

work_mem is process-local memory so

(max_connections * work_mem)  < (physical RAM - shared_buffers)

Some queries may allocate multiples of work_mem, too.

Also, the kernel uses some memory for internal accounting and caching,
and you need to account for the process binaries in memory.

>   However, I'd like confirmation before I start
> tweaking things, as one of these servers is in production, and I can't
> easily tweak settings to experiment (plus this problem takes a few
> weeks before swapping gets bad enough to impact performance).
>
> A few examples:
>
> 0) system A: 56GB RAM, running postgresql-8.4.8 with the following parameters:
> maintenance_work_mem = 96MB
> effective_cache_size = 40GB
> work_mem = 256MB
> wal_buffers = 16MB
> shared_buffers = 13GB
> max_connections = 300

RAM (56GB) - shared_buffers (13GB) = 43GB

which is less than

work_mem * max_connections = 300 * 0.25GB = 75GB

The system would start swapping before 43GB/0.25GB = 172 clients.

> 1) system B: 120GB RAM, running postgresql-9.0.4 with the following 
> parameters:
> maintenance_work_mem = 1GB
> effective_cache_size = 88GB
> work_mem = 576MB
> wal_buffers = 4MB
> shared_buffers = 28GB
> max_connections = 200

Similarly:

120GB - 28GB = 92GB

is less than

work_mem * max_connections = 200 * 576MB = 112.5GB

Also, if you run anything else on the machine then the system would start
swapping much sooner than hitting max_connections number of clients.

I would never set work_mem that high by default. 8-16MB is usually
enough for the common case, and you can set work_mem for special
queries from the client and then reset it.
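
For example, a sketch (the reporting query and table are made up):

-- Modest setting for the common case, e.g. in postgresql.conf:
--   work_mem = '16MB'

-- For one expensive query only, from the client session:
SET work_mem = '512MB';
SELECT customer_id, sum(amount)
  FROM orders
 GROUP BY customer_id
 ORDER BY sum(amount) DESC;
RESET work_mem;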

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Boszormenyi Zoltan
Hi,

COPY (SELECT ...) appeared in 8.2.x, so you need to upgrade.
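
On 8.2 or later, either of these should work for the view in your example
(the output path is only an illustration):

-- Server-side COPY; the file is written by the server process:
COPY (SELECT * FROM view1) TO '/tmp/sample.csv' WITH CSV HEADER;

-- Client-side \copy in psql; the file is written on the client:
\copy (SELECT * FROM view1) to '/tmp/sample.csv' with csv header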

Best regards,
Zoltán Böszörményi

On 2011-08-11 13:21, Siva Palanisamy wrote:
> Hi Andreas,
>
> FYI, I am using PostgreSQL 8.1.4. 
>
> Thanks and Regards,
> Siva.
>
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Siva Palanisamy
> Sent: Thursday, August 11, 2011 4:48 PM
> To: Andreas Kretschmer; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Copy command to handle view for my export requirement
>
> Hi Andreas,
>
> I tried the command as below. It failed. Please correct me.
>
> \copy (select * from view1) to '/sample.csv' delimiters ',' csv header;
> ERROR: \copy: parse error at "select"
>
> Thanks and Regards,
> Siva.
>
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Andreas Kretschmer
> Sent: Thursday, August 11, 2011 2:23 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Copy command to handle view for my export requirement
>
> Siva Palanisamy  wrote:
>
>> Hi All,
>>
>>
>>
>> I understand that copy and \copy commands in PostgreSQL work only for 
>> tables. I
>> want it to export the data from varies tables. Instead, I can create a view 
>> for
>> the list of tables. Can the copy or \copy commands be utilized to operate on
>> views directly? Please let me know on this.
> Sure, you can do that (with recent versions) with:
>
> copy (select * from your_view) to ...
>
>
> Andreas
> --
> Really, I'm not out to destroy Microsoft. That will just be a completely
> unintentional side effect.  (Linus Torvalds)
> "If I was god, I would recompile penguin with --enable-fly."   (unknown)
> Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] possible ecpg compatibility problem

2011-08-02 Thread Boszormenyi Zoltan
Hi,

On 2011-08-01 16:11, Paul Tilles wrote:
> I have an executable which uses ecpg which was created on a system using 
> postgres
> version 8.2.6.  I sent it to a site with version 8.4.7 installed.  The 
> executable fails
> with a memory fault.
>
> Is there a problem doing this?

Try recompiling your ecpg code with 8.4.x's ecpg and use that.
At least the public ECPGdo() interface has changed between
8.2 and 8.4, so your old binary compiled with 8.2 obviously fails.
I wonder why the runtime linker didn't complain; the major
SO version has also changed between these two versions.

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FILLFACTOR and increasing index

2011-06-12 Thread Boszormenyi Zoltan
Hi,

On 2011-05-12 00:28, Tomas Vondra wrote:
> Hi,
>
> I've studied the implementation of the btree indexes and how exactly the
> fillfactor is used, and in general
>
> - when a page split happens, the process needs to obtain more locks
>   than with simple insert, which may result in contention with other
>   processes that modify the index (the same pages)
>
> - the fillfactor is used only for the leaf pages, the rest of the index
>   does not use it (directly)
>
> So lowering the number of page splits might remove some contention when
> there's a lot of processes accessing the same pages.
>
> But that's the theory - I really was not able to come up with a test
> that benefits from lower fillfactor. Actually the lower the fillfactor,
> the larger the index, which may be a significant issue with huge indexes.

We recently had a testcase for exercising FILLFACTOR on indexes.
Several (15+) GB of raw data arrive daily and must be imported into
the database for analytic purposes; the table is heavily partitioned
and each partition has 5 or 6 indexes. The importer is heavily threaded
and uses COPY to insert the data. This is strictly an INSERT-only scenario;
the analysis comes later. This is where FILLFACTOR=70 helped to
reduce the index contention.

With the default 90% (and 100% on the table itself), when a lot of data
arrived in bursts belonging to the same time interval, so that 4-8 threads tried
to push data into the same partition, individual data chunks (about 10-15MB
each, about 63500 pieces) were imported in about 1.5-2.5 minutes, with
a few spikes over 4 minutes. This was the result of a network outage lasting
a few days; the data collectors were sending their backlogs.

When all indexes were re-created with FILLFACTOR=70, the import times
went down considerably. Out of about 63000 raw data chunks, there were
only 6 or 8 where the import time fell into the 10-25 second range; the rest
were imported in under 10 seconds, and the majority of them (65%)
in under 3 seconds. The system has 24 cores, so we can dedicate 8 of them
to importing. This almost 1TB of data was imported
in about 3.5-4 hours after the FILLFACTOR change. Without it, it didn't
finish in 2 days.
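
For reference, a sketch with made-up names of how an index can be created or
rebuilt with a lower fillfactor:

CREATE INDEX measurements_2011_ts_idx
    ON measurements_2011 (collected_at)
    WITH (fillfactor = 70);

-- Or change an existing index; the new setting takes effect at the next REINDEX:
ALTER INDEX measurements_2011_ts_idx SET (fillfactor = 70);
REINDEX INDEX measurements_2011_ts_idx;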

Best regards,
Zoltán Böszörményi

> So in your case, I'd probably go with the default fillfactor (90), and
> maybe I'd consider even using fillfactor=100, to make the index as dense
> as possible.
>
> Anyway I guess the impact of this will be negligible, compared to other
> parameters (shared buffers, work_mem, etc.).
>
> regards
> Tomas
>


-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to refer to computed columns from other computed columns?

2010-08-17 Thread Boszormenyi Zoltan
Alvaro Herrera wrote:
> Excerpts from Boszormenyi Zoltan's message of lun ago 16 14:45:07 -0400 2010:
>   
>> Matthew Wilson wrote:
>> 
>
>   
>>> I don't care if the code is rearranged so that c is replaced with an
>>> inline definition during compilation.  I'm not concerned about
>>> efficiency here.  I just don't want to have to redefine it manually over
>>> and over again, because I know that as I update how c is defined, I'll
>>> forget to update it everywhere.
>>>
>>> Maybe sql needs a preprocessing macro language like C.
>>>   
>>>   
>> Or maybe we can dust off my GENERATED column patch
>> I posted here in 2006. :-)
>> 
>
> Hmm, that seems entirely unrelated ...
>   

What makes you think so? A generated column would put
the work into INSERT and UPDATE statements, SELECTs
would be faster, and re-typing the same expression
would be avoided. The generated column's definition lives
in one central place, as part of the column definition in
CREATE or ALTER TABLE, so the OP's problem
would also be solved.

There was only one drawback, as Tom Lane pointed out a while back,
but this was explicitly covered by the SQL standard at the time:
it said that BEFORE triggers cannot look at the content of the generated
columns.

And with HOT and no indexes on the generated column, most
of the bloat that comes from the extra internal UPDATE
such a column would introduce would also be avoided.
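
To illustrate the idea with the columns from this thread, here is a sketch of
the syntax such a feature aims at. It is not available in the servers discussed
here; PostgreSQL only gained stored generated columns much later, in version 12:

CREATE TABLE foo (
    a int,
    b int,
    c int GENERATED ALWAYS AS (a - b) STORED
);

INSERT INTO foo (a, b) VALUES (10, 3);
SELECT a, b, c FROM foo;    -- c is computed and stored as 7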


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to refer to computed columns from other computed columns?

2010-08-16 Thread Boszormenyi Zoltan
Matthew Wilson wrote:
> On Mon Aug 16 10:26:36 2010, Tom Lane wrote:
>   
>> Matthew Wilson  writes:
>> 
>>> All I can come up with so far is to use a view and then another view on
>>> top of that one:
>>>   
>> Note that you don't actually need a view, as you can just write the
>> subselect in-line:
>>
>>  select a, b, c,
>>  case when c < 0 then 'no'
>>  else 'yes'
>>  end as d
>>  from (select a, b, a - b as c from foo) as v1;
>>
>> This is the standard method for avoiding repeat calculations in SQL.
>>
>> One thing to keep in mind is that the planner will usually try to
>> "flatten" a nested sub-select (and whether it was written out manually
>> or pulled from a view does not matter here).  This will result in the
>> sub-select's expressions getting inlined into the parent, so that the
>> calculations will actually get done more than once.  If you're trying
>> to reduce execution time not just manual labor, you may want to put an
>> "offset 0" into the sub-select to create an optimization fence.  But
>> test whether that really saves anything --- if there are bigger joins
>> or additional WHERE conditions involved, you can easily lose more than
>> you gain by preventing flattening.
>>
>>  regards, tom lane
>>
>> 
>
> Thanks so much for the help!
>
> I don't care if the code is rearranged so that c is replaced with an
> inline definition during compilation.  I'm not concerned about
> efficiency here.  I just don't want to have to redefine it manually over
> and over again, because I know that as I update how c is defined, I'll
> forget to update it everywhere.
>
> Maybe sql needs a preprocessing macro language like C.
>   

Or maybe we can dust off my GENERATED column patch
I posted here in 2006. :-)

Best regards,
Zoltán Böszörményi


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Dealing with prepared transactions in XactCallback

2010-06-29 Thread Boszormenyi Zoltan
Hi,

I need to rework a custom notification scheme that sends TCP
messages about new/modified/deleted records to an external process.
Obviously I am using RegisterXactCallback() to do that.
The problem is with prepared transactions. I get XACT_EVENT_PREPARE
in the XactCallback function, where I can save the data to be sent later.
But how can I tell in a later transaction that the application
is doing COMMIT PREPARED for a particular Xid? Does PREPARE
TRANSACTION save the TopTransactionContext and load it back fully,
so I don't need to worry about saving and reloading?

Thanks in advance,
Zoltán Böszörményi


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inconsistent SQL errors

2010-04-15 Thread Boszormenyi Zoltan
gvim wrote:
> I'm running PostgreSQL 8.4.3 on OS X Snow Leopard via MacPorts and I'm
> getting strange inconsistent errors such as:
>
> dbuser-# select * from log_form;

The error is right above. You had a "-#" prompt, which means
that you already started another statement on a previous line
but haven't finished it with a ";".

> ERROR:  syntax error at or near "select"
> LINE 2: select * from log_form;
> ^

Also, the "LINE 2" message tells that you had something in
the first line which was not shown by you.

>
> Then later the same query will run fine, as it should.

I bet that you had a "=#" prompt when it ran fine, not "-#".

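A sketch of what probably happened:

dbuser=# select * from log_form     -- no semicolon, so the statement is not finished
dbuser-# select * from log_form;    -- the "-#" prompt marks a continuation line
ERROR:  syntax error at or near "select"
LINE 2: select * from log_form;
        ^
dbuser=# select * from log_form;    -- a fresh statement at the "=#" prompt is fine
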
Best regards,
Zoltán Böszörményi


-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Advice on webbased database reporting

2010-04-03 Thread Boszormenyi Zoltan
Hi,

Davor J. wrote:
> I need to make certain views from the database visible online (on our 
> webpage) and I wonder if there is any reasonably quick solution for this 
> that works with Postgres?
>
> At best, a query should be specified and the user should be able to select 
> the layout on certain columns (like stepped, or outlined).
>
> I don't mind running a whole CMS on our Apache server as long as it allows 
> me to make reports and is free to use.
>
> Has anyone any suggestions?
>
> Kind regards,
> Davor 
>   

Have you seen RLIB? It's a nice XML-based reporting tool
that can use PostgreSQL. The output can be PDF, HTML, etc.

http://rlib.sicompos.com/

You should get the CVS version from SourceForge;
it has quite a few fixes since the last public release.

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Dblink vs calling a function that returns void

2010-03-29 Thread Boszormenyi Zoltan
Hi,

I need to call a function via dblink that returns void, i.e.
technically nothing.

=# select public.dblink_exec('import', 'SELECT
import.add_one_word(''word'', true)');
ERROR:  statement returning results not allowed

=# select * from public.dblink('import', 'SELECT
import.add_one_word(''word'', true)') as x();
ERROR:  syntax error at or near ")"
LINE 1: ...ort', 'SELECT import.add_one_word(''iphon'', true)') as x();

=# select public.dblink('import', 'SELECT import.add_one_word(''word'',
true)') as x(x void);
ERROR:  syntax error at or near "("
LINE 1: ...'SELECT import.add_one_word(''iphon'', true)') as x(x void);

And, although RETURNS VOID is indistinguishable from returning a NULL:

=# select * from public.dblink('import', 'SELECT
import.add_one_word(''word'', true)') as x(x int);
ERROR:  invalid input syntax for integer: ""

So, how can I do it? Besides modifying the interface of the function,
say "RETURNS int4" and using PG_RETURN_NULL()?

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ECPG: No multidimensional array support for simple data types

2010-02-22 Thread Boszormenyi Zoltan
mike stanton wrote:
> Hello all, I get the following error message when ecpg precompiles an
> EXEC SQL INCLUDE on this variable:
>  
> short   cst_vent[MAX_SUC][12]; (MAX_SUC is defined as 24)
>  
> Message:
>
>  "No multidimensional array support for simple data types"
>
> Is there a fix or am I stuck?
> Version: PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc
> (GCC) 4.2.4

You're stuck. Only 1D arrays are supported for "simple" (int, double,
etc.) types.
"2D" character arrays are supported (actually, 1D array of strings) if
you use
char   *str[N];

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: could not load library "...": Exec format error

2010-02-09 Thread Boszormenyi Zoltan
Boszormenyi Zoltan wrote:
> Korry Douglas wrote:
>   
>>> I have the $SUBJECT problem loading my own
>>> module in PostgreSQL. The server is HP-UX/ia64,
>>> PostgreSQL 8.4.2 was compiled with HP CC.
>>> pl/PgSQL can be loaded fine.
>>>
>>> ...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
>>>libodbc.so.1 => /usr/local/pgsql/runtime/lib/libodbc.so.1
>>>libpthread.so.1 =>  /usr/lib/hpux64/libpthread.so.1
>>>libc.so.1 =>/usr/lib/hpux64/libc.so.1
>>>libdl.so.1 =>   /usr/lib/hpux64/libdl.so.1
>>>
>>> "/usr/local/pgsql/runtime" is a link to "/usr/local/pgsql/pgsql-cc-8.4"
>>>
>>> ...:/usr/local/pgsql/pgsql-cc-8.4/lib# file plpgsql.so odbclink.so
>>> plpgsql.so: ELF-64 shared object file - IA64
>>> odbclink.so:ELF-64 shared object file - IA64
>>>
>>> The module compilation was done using "USE_PGXS=1 gmake".
>>>
>>> How can I solve this issue?
>>>   
>> IIRC, HP/UX doesn't like to dynamically load shared libraries that use
>> thread-local storage. Your shared library (odbclink.so) is linked
>> against libpthread.so.1 so you may be running into that problem.  I
>> would recommend running the HP/UX equivalent of strace to capture more
>> information about the call to dlopen()  (or perhaps shl_load(),
>> depending on which version of HP/UX you are using).
>> 
>
> Yes, it seems you are right, I have read dlopen(3) in full on HP-UX,
> this small detail is documented there. It seems I need to preload
> the libraries: odbclink.so, libodbc.so.1 and all the needed libs from
> the target ODBC driver... Thanks for the info.
>   

Actually, I experimented with a "threaded" PostgreSQL server:
I added LDFLAGS="-lpthreads" to the PG configure options,
and suddenly my module, which is linked against libraries that are themselves
linked with libpthread.so, started working. The dlopen(3) man page
doesn't mention this as a working combination.

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: could not load library "...": Exec format error

2010-02-09 Thread Boszormenyi Zoltan
Korry Douglas wrote:
>> I have the $SUBJECT problem loading my own
>> module in PostgreSQL. The server is HP-UX/ia64,
>> PostgreSQL 8.4.2 was compiled with HP CC.
>> pl/PgSQL can be loaded fine.
>>
>> ...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
>>libodbc.so.1 => /usr/local/pgsql/runtime/lib/libodbc.so.1
>>libpthread.so.1 =>  /usr/lib/hpux64/libpthread.so.1
>>libc.so.1 =>/usr/lib/hpux64/libc.so.1
>>libdl.so.1 =>   /usr/lib/hpux64/libdl.so.1
>>
>> "/usr/local/pgsql/runtime" is a link to "/usr/local/pgsql/pgsql-cc-8.4"
>>
>> ...:/usr/local/pgsql/pgsql-cc-8.4/lib# file plpgsql.so odbclink.so
>> plpgsql.so: ELF-64 shared object file - IA64
>> odbclink.so:ELF-64 shared object file - IA64
>>
>> The module compilation was done using "USE_PGXS=1 gmake".
>>
>> How can I solve this issue?
>
>
> IIRC, HP/UX doesn't like to dynamically load shared libraries that use
> thread-local storage. Your shared library (odbclink.so) is linked
> against libpthread.so.1 so you may be running into that problem.  I
> would recommend running the HP/UX equivalent of strace to capture more
> information about the call to dlopen()  (or perhaps shl_load(),
> depending on which version of HP/UX you are using).

Yes, it seems you are right. I have read dlopen(3) in full on HP-UX;
this small detail is documented there. It seems I need to preload
the libraries: odbclink.so, libodbc.so.1 and all the needed libs from
the target ODBC driver... Thanks for the info.

Best regards,
Zoltán Böszörményi

>
>
> -- Korry
>
>
> ---
> Korry Douglas
> Senior Database Dude
> EnterpriseDB Corporation
> The Enterprise Postgres Company
>
> Phone: (804)241-4301
> Mobile: (620) EDB-NERD
>
>


-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ERROR: could not load library "...": Exec format error

2010-02-09 Thread Boszormenyi Zoltan
Hi,

I have the $SUBJECT problem loading my own
module in PostgreSQL. The server is HP-UX/ia64,
PostgreSQL 8.4.2 was compiled with HP CC.
pl/PgSQL can be loaded fine.

...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
libodbc.so.1 => /usr/local/pgsql/runtime/lib/libodbc.so.1
libpthread.so.1 =>  /usr/lib/hpux64/libpthread.so.1
libc.so.1 =>/usr/lib/hpux64/libc.so.1
libdl.so.1 =>   /usr/lib/hpux64/libdl.so.1

"/usr/local/pgsql/runtime" is a link to "/usr/local/pgsql/pgsql-cc-8.4"

...:/usr/local/pgsql/pgsql-cc-8.4/lib# file plpgsql.so odbclink.so
plpgsql.so: ELF-64 shared object file - IA64
odbclink.so:ELF-64 shared object file - IA64

The module compilation was done using "USE_PGXS=1 gmake".

How can I solve this issue?

Thanks in advance,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the correct way to extract values from an int8 array in SPI?

2009-11-16 Thread Boszormenyi Zoltan
Tom Lane wrote:
> Boszormenyi Zoltan  writes:
>   
>> //ids =
>> PointerGetDatum(PG_DETOAST_DATUM(SPI_getbinval(prod_inv->vals[0],
>> prod_inv->tupdesc, 1, &isnull)));
>> 
>
> well, for one thing, you probably want DatumGetPointer ...

You chose the commented-out line to comment on. :-)
The line that is actually in effect came right after it:

ids = SPI_getbinval(prod_inv->vals[0], prod_inv->tupdesc, 1, &isnull);


I only experimented with whether I need to detoast the array value.

>  for another,
> you shouldn't really be converting to Pointer here at all, since the
> next line expects ids to still be a Datum.  On some platforms you can
> get away with being fuzzy about the distinction between Datum and
> Pointer, but it doesn't surprise me in the least that such code would
> fail elsewhere.  Try paying more attention to compiler warnings ;-)
>   

I did. :-) PointerGetDatum(PG_DETOAST_DATUM(SPI_getbinval(...)))
doesn't emit any warnings.

> The lack of any checks for null-ness scares me, too.
>   Aside from the
> multiple places where you're just plain ignoring an isnull return flag,
> there's the risk that the array might contain one or more nulls,


I omitted this particular check because:
- ids bigint[] NOT NULL, and
- the code that builds the content of the array ensures
  that no array member can be NULL. They are bigint IDs
  from another table. :-)

>  in
> which case you can't address the last element that way (even if that
> element itself isn't null).
>   

Yeah, this is what bothers me.

$ psql -p 5433 index_test
psql (8.4.1)
Type "help" for help.

index_test=# select array_length(ids,1) from product.t_product_inv_titleonly where word='cpu';
 array_length
--------------
           96
(1 row)

index_test=# select ids from product.t_product_inv_titleonly where word='cpu';
                                      ids
------------------------------------------------------------------------------
 {29767643,29783831,33973788,33994384,33966944,33974483,33945574,33988076,
  33957605,33985034,29050215,33925825,33961012,29066655,33955860,33981152,
  33990118,33937422,33972534,33923080,33921945,33979786,33926521,33983828,
  33980602,33932253,33926012,33925643,40361238,42814197,45923261,33933417,
  33952470,33988350,33930668,33925627,33979981,33937362,31250473,35083034,
  33958934,33946597,33948953,33993455,33987994,33923724,33934644,33961183,
  34905945,33931220,33973198,33979613,33993878,31973668,47835781,47835782,
  47836159,47866522,47866523,47867628,33943565,33966303,45072269,33955440,
  33959714,33948651,33977798,30113741,33975105,33943434,33932791,33954807,
  33922152,33971756,27401475,27407609,27401410,27405102,33620032,33621234,
  33624659,30116651,33966940,30116815,30121547,30113990,30115882,33958841,
  30123948,33953821,33929316,47373326,47374380,47374458,30123436,33930912}
(1 row)


You can see that the above array doesn't have NULLs.
But this debug output has revealed the real problem:

ids = SPI_getbinval(prod_inv->vals[0], prod_inv->tupdesc, 1, &isnull);
n_ids = DatumGetInt32(DirectFunctionCall2(array_length, ids,
Int32GetDatum(1)));
ids_arr = DatumGetArrayTypeP(ids);
ids_data = (Datum *) ARR_DATA_PTR(ids_arr);

/* Set up t

[GENERAL] What is the correct way to extract values from an int8 array in SPI?

2009-11-16 Thread Boszormenyi Zoltan
Hi,

I am using this code on 8.4/8.5, which works on 64-bit,
but segfaults on 32-bit Linux:

oids[0] = TEXTOID;
values[0] = lex;
nulls[0] = false;
ret = SPI_execute_with_args(
        (just_title ?
         "SELECT ids FROM product.t_product_inv_titleonly WHERE word = $1" :
         "SELECT ids FROM product.t_product_inv WHERE word = $1"),
        1, oids, values, nulls, true, 1);
if (ret != SPI_OK_SELECT)
    elog(ERROR, "SPI_execute_with_args(SELECT FROM product.t_product_inv[_titleonly]) error (%d)", ret);
prod_inv = SPI_tuptable;

/* If no such record, quit */
if (SPI_processed == 0)
break;

// ids = PointerGetDatum(PG_DETOAST_DATUM(SPI_getbinval(prod_inv->vals[0], prod_inv->tupdesc, 1, &isnull)));
ids = SPI_getbinval(prod_inv->vals[0], prod_inv->tupdesc, 1, &isnull);
n_ids = DatumGetInt32(DirectFunctionCall2(array_length, ids,
Int32GetDatum(1)));
ids_arr = DatumGetArrayTypeP(ids);
ids_data = (Datum *) ARR_DATA_PTR(ids_arr);

/* Set up the initial indexes for binary search */
idx_min = 0;
idx_max = n_ids - 1;
idx_mid = (idx_max + idx_min) / 2;

elog(NOTICE, "n_ids %d idx_min %d idx_max %d idx_mid %d", n_ids,
idx_min, idx_max, idx_mid);

elog(NOTICE, "getting display_price of last element %" PRId64,
DatumGetInt64(ids_data[idx_max]));

Field "ids" in the above query is an int8[] array
with SET STORAGE EXTERNAL.
Executing DatumGetInt64(ids_data[idx_max]) in
the last elog() call segfaults on 32-bit. It doesn't matter
whether the result of the SPI_getbinval() call above is detoasted or not.
How can I extract individual elements from the int8[]
Datum correctly?
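
One way around the pointer-size trap is to let deconstruct_array() expand the
value into a proper Datum vector instead of walking ARR_DATA_PTR() by hand;
DatumGetInt64() then works on both word sizes. A minimal sketch (the helper
name and its error handling are mine, not from the original module):

#include "postgres.h"
#include "catalog/pg_type.h"
#include "utils/array.h"
#include "utils/lsyscache.h"

/*
 * Hypothetical helper: expand an int8[] Datum (e.g. the result of
 * SPI_getbinval) into a palloc'd C array of int64.  deconstruct_array()
 * knows that int8 is pass-by-reference on 32-bit platforms, which is
 * exactly what the "(Datum *) ARR_DATA_PTR(arr)" shortcut gets wrong there.
 */
static int64 *
int8_array_to_int64(Datum arrdatum, int *nelems)
{
    ArrayType  *arr = DatumGetArrayTypeP(arrdatum);     /* detoasts if needed */
    int16       typlen;
    bool        typbyval;
    char        typalign;
    Datum      *elems;
    bool       *elemnulls;
    int         n;
    int64      *result;
    int         i;

    get_typlenbyvalalign(INT8OID, &typlen, &typbyval, &typalign);
    deconstruct_array(arr, INT8OID, typlen, typbyval, typalign,
                      &elems, &elemnulls, &n);

    result = (int64 *) palloc(n * sizeof(int64));
    for (i = 0; i < n; i++)
    {
        if (elemnulls[i])
            elog(ERROR, "unexpected NULL element in ids array");
        result[i] = DatumGetInt64(elems[i]);
    }

    *nelems = n;
    return result;
}

On a 64-bit build the shortcut happens to line up, because Datum is 8 bytes
wide and int8 is pass-by-value there; on 32-bit it reads 4-byte slots and
DatumGetInt64() then dereferences them as pointers, which would explain the
difference described above.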

Thanks in advance,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How can I pass an array to SPI_execute_with_args()?

2009-11-04 Thread Boszormenyi Zoltan
Hi,

I would like to execute the code below.

I SELECTed a bigint[] from the database into "Datum ids", and
I need to insert a new bigint ID in the middle.

Datum   ids;
int n_ids;
int idx_min, idx_max, idx_mid;
ArrayType  *ids_arr;
Datum  *ids_data;
ArrayType   *array_prefix, *array_postfix;

...
ids = SPI_getbinval(prod_inv->vals[0],
prod_inv->tupdesc, 1, &isnull);
n_ids = DatumGetInt32(DirectFunctionCall2(array_length,
ids, Int32GetDatum(1)));
ids_arr = DatumGetArrayTypeP(ids);
ids_data = (Datum *) ARR_DATA_PTR(ids_arr);
...

At this point it's already ensured that 0 < idx_min < n_ids - 1;
idx_min is the index in the array where I need to split:

get_typlenbyvalalign(INT8OID, &typlen, &typbyval,
&typalign);

/* Split the array and UPDATE */
/* ids[0 ... idx_min - 1] || new_id || ids[idx_min ...
n_ids - 1] */
array_prefix = construct_array(ids_data, idx_min,
INT8OID, typlen, typbyval,
typalign);
array_postfix = construct_array(&ids_data[idx_min],
n_ids - idx_min,
INT8OID, typlen, typbyval,
typalign);

oids[0] = ANYARRAYOID;
values[0] = PointerGetDatum(array_prefix);
nulls[0] = false;

    oids[1] = INT8OID; /* ANYELEMENTOID; */
values[1] = id; /* really an int8 Datum */
nulls[1] = false;

oids[2] = ANYARRAYOID;
values[2] = PointerGetDatum(array_postfix);
nulls[2] = false;

oids[3] = TEXTOID;
values[3] = lex;
nulls[3] = false;

ret = SPI_execute_with_args(
        "UPDATE product.t_product_inv SET ids = array_append($1, $2) || $3 WHERE word = $4",
        4, oids, values, nulls, false, 1);

If the above marked line sets oids[1] = INT8OID, I get this error:

ERROR:  function array_append(anyarray, bigint) does not exist
LINE 1: UPDATE product.t_product_inv SET ids = array_append($1, $2) ...
   ^
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.
QUERY:  UPDATE product.t_product_inv SET ids = array_append($1, $2) ||
$3 WHERE word = $4

If I use ANYELEMENTOID there, I get this error:

ERROR:  argument declared "anyarray" is not an array but type anyarray
CONTEXT:  SQL statement "UPDATE product.t_product_inv SET ids =
array_append($1, $2) || $3 WHERE word = $4"

I am stuck here. Can someone help me?
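
One direction that should get past both errors: declare the parameters with
the concrete types (bigint[] and bigint) instead of the polymorphic
pseudo-types, so the parser can resolve array_append(bigint[], bigint) and the
|| operator as usual. A minimal sketch layered on the code above (the helper
name, the get_array_type() lookup and the spelled-out SPI null flags are my
additions):

#include "postgres.h"
#include "catalog/pg_type.h"
#include "executor/spi.h"
#include "utils/lsyscache.h"

/*
 * Hypothetical helper: splice new_id into the ids bigint[] column of one
 * row of product.t_product_inv.  array_prefix/array_postfix are the two
 * halves built with construct_array() as above; lex is the text Datum for
 * the word column.  Assumes SPI_connect() has already been called.
 */
static void
update_ids(ArrayType *array_prefix, ArrayType *array_postfix,
           Datum new_id, Datum lex)
{
    Oid         argtypes[4];
    Datum       values[4];
    char        nulls[4] = {' ', ' ', ' ', ' '};    /* ' ' = not NULL for SPI */
    Oid         int8array_oid = get_array_type(INT8OID);   /* OID of bigint[] */
    int         ret;

    argtypes[0] = int8array_oid;
    values[0] = PointerGetDatum(array_prefix);

    argtypes[1] = INT8OID;          /* the element itself stays bigint */
    values[1] = new_id;

    argtypes[2] = int8array_oid;
    values[2] = PointerGetDatum(array_postfix);

    argtypes[3] = TEXTOID;
    values[3] = lex;

    ret = SPI_execute_with_args(
            "UPDATE product.t_product_inv SET ids = array_append($1, $2) || $3"
            " WHERE word = $4",
            4, argtypes, values, nulls, false, 1);
    if (ret != SPI_OK_UPDATE)
        elog(ERROR, "SPI_execute_with_args(UPDATE) error (%d)", ret);
}

The nulls argument to SPI_execute_with_args() is a char array in which ' '
means not-NULL and 'n' means NULL; passing NULL for the whole array is also
fine when no parameter is NULL.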

Thanks in advance,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Any way to bring up a PG instance with corrupted data in it?

2009-06-08 Thread Boszormenyi Zoltan
Hi,

Keaton Adams wrote:
> This is a QA system and unfortunately there is no recent backup. So
> as a last resort I am looking for any way to bring up Postgres when it
> has corrupt data in it:
>
> FATAL: could not remove old lock file "postmaster.pid": Read-only file
> system
> HINT: The file seems accidentally left over, but it could not be
> removed. Please remove the file by hand and try again.

The message above should give you a clue.
Repair the file system first and remount read-write.
Then try again to bring up the postmaster.

> Jun 8 06:43:16 mxlqa401 postgres[21401]: [1-1] FATAL: could not remove
> old lock file "postmaster.pid": Read-only file system
> Jun 8 06:43:16 mxlqa401 postgres[21401]: [1-2] HINT: The file seems
> accidentally left over, but it could not be removed. Please remove the
> file by hand and try again
> .
> FATAL: could not remove old lock file "postmaster.pid": Read-only file
> system
> HINT: The file seems accidentally left over, but it could not be
> removed. Please remove the file by hand and try again.
> Jun 8 06:43:29 mxlqa401 postgres[21476]: [1-1] FATAL: could not remove
> old lock file "postmaster.pid": Read-only file system
> Jun 8 06:43:29 mxlqa401 postgres[21476]: [1-2] HINT: The file seems
> accidentally left over, but it could not be removed. Please remove the
> file by hand and try again
> .
> Jun 8 06:44:23 mxlqa401 postgres[21520]: [1-1] LOG: database system
> was interrupted at 2009-06-05 21:52:54 MDT
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [2-1] LOG: checkpoint record
> is at 134/682530F0
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [3-1] LOG: redo record is at
> 134/682530F0; undo record is at 0/0; shutdown FALSE
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [4-1] LOG: next transaction
> ID: 3005778382; next OID: 103111004
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [5-1] LOG: next MultiXactId:
> 93647; next MultiXactOffset: 190825
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [6-1] LOG: database system
> was not properly shut down; automatic recovery in progress
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [7-1] LOG: redo starts at
> 134/68253134
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [8-1] PANIC: could not access
> status of transaction 3005778383
> Jun 8 06:44:24 mxlqa401 postgres[21520]: [8-2] DETAIL: could not read
> from file "pg_clog/0B32" at offset 139264: Success
> Jun 8 06:44:29 mxlqa401 postgres[21518]: [1-1] LOG: startup process
> (PID 21520) was terminated by signal 6
> Jun 8 06:44:29 mxlqa401 postgres[21518]: [2-1] LOG: aborting startup
> due to startup process failure
> Jun 8 06:44:36 mxlqa401 postgres[21574]: [1-1] LOG: database system
> was interrupted while in recovery at 2009-06-08 06:44:24 MDT
> Jun 8 06:44:36 mxlqa401 postgres[21574]: [1-2] HINT: This probably
> means that some data is corrupted and you will have to use the last
> backup for recovery.
> Jun 8 06:44:36 mxlqa401 postgres[21574]: [2-1] LOG: checkpoint record
> is at 134/682530F0
> Jun 8 06:44:36 mxlqa401 postgres[21574]: [3-1] LOG: redo record is at
> 134/682530F0; undo record is at 0/0; shutdown FALSE
> Jun 8 06:44:36 mxlqa401 postgres[21574]: [4-1] LOG: next transaction
> ID: 3005778382; next OID: 103111004
> Jun 8 06:44:36 mxlqa401 postgres[21574]: [5-1] LOG: next MultiXactId:
> 93647; next MultiXactOffset: 190825
> Jun 8 06:44:36 mxlqa401 postgres[21574]: [6-1] LOG: database system
> was not properly shut down; automatic recovery in progress
>
> I tried to bring up a postgres backend process to get into the
> database in single-user mode and that won’t work either:
>
> bash-3.2$ postgres -D /mxl/var/pgsql/data
> PANIC: could not access status of transaction 3005778382
> DETAIL: could not read from file "pg_clog/0B32" at offset 139264: Success
> Aborted
>
> bash-3.2$ postgres -D /mxl/var/pgsql/data -d 5 postgres
> PANIC: could not access status of transaction 3005778382
> DETAIL: could not read from file "pg_clog/0B32" at offset 139264: Success
> Aborted
>
> Any suggestions other than the obvious (restore from backup) would be
> appreciated.
>
> Thanks,
>
> Keaton
>
>
>
>


-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Get block of N numbers from sequence

2009-05-19 Thread Boszormenyi Zoltan
Thomas Guettler wrote:
> Hi,
>
> how can you get N numbers (without holes) from a sequence?
>
>  Thomas
>
>   

# create sequence tmp_seq cache 1000;
CREATE SEQUENCE

From the same client:
# select nextval('tmp_seq');
 nextval
-
   1
(1 row)

# select nextval('tmp_seq');
 nextval
-
   2
(1 row)

# select nextval('tmp_seq');
 nextval
-
   3
(1 row)

# select nextval('tmp_seq');
 nextval
-
   4
(1 row)

... (ad nauseam)


From another psql client:

# select nextval('tmp_seq');
 nextval
-
1001
(1 row)

# select nextval('tmp_seq');
 nextval
-
1002
(1 row)

# select nextval('tmp_seq');
 nextval
-
1003
(1 row)

# select nextval('tmp_seq');
 nextval
-
1004
(1 row)

# select nextval('tmp_seq');
 nextval
-
1005
(1 row)

...


You can get up to 1000 (or the number specified as CACHE N)
numbers in a series (without holes) in the same client at once;
you don't even need to be in the same transaction.
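
For completeness, the same thing seen from client code: a minimal libpq
sketch (the sequence name, the block size of 10 and the empty connection
string are assumptions) that pulls a block of values through one session and
checks that it is hole-free:

/*
 * seqblock.c - hypothetical libpq client; connection parameters come from
 * the environment (PGHOST, PGDATABASE, ...).
 * Build roughly like:  cc -o seqblock seqblock.c -lpq
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("");
    long long   prev = -1;
    int         i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (i = 0; i < 10; i++)
    {
        PGresult   *res = PQexec(conn, "SELECT nextval('tmp_seq')");
        long long   val;

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "nextval failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }
        val = atoll(PQgetvalue(res, 0, 0));
        PQclear(res);

        /* cached values are handed out consecutively within this session */
        if (prev >= 0 && val != prev + 1)
            fprintf(stderr, "hole between %lld and %lld\n", prev, val);
        printf("%lld\n", val);
        prev = val;
    }

    PQfinish(conn);
    return 0;
}

As long as the run stays within one cached chunk (here: well below CACHE
1000), the session gets consecutive values even if other sessions call
nextval() on the same sequence concurrently.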

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: how critical is WAL

2001-02-23 Thread Boszormenyi Zoltan

> > > Indexes could get corrupt, requiring rebuilding. 
> > > That problem is gone
> > > with WAL.  Transaction commits where always safe.
> > > 
> > 
> > Thanks. I was specifically asking about result '3'
> > above, which mentions 'totally corrupted table page content
> > because of partially written data pages'. Is this
> > also repairable?
> 
> I never heard of that happening to anyone.

Sorry to disappoint you but it seems to be the case with my crash.

One of our customers accidentally switched off his machine.
This is a shop with 5500+ items. The table containing these items
was lost. All the other tables (bills, partners, etc.) were in
good shape; SELECT retrieved all data from them.

I looked into the PostgreSQL manuals and, as they suggest, I stopped
the postmaster and tried to reindex this table in a standalone
postgres. "reindex table" (without force) did not reindex it.
"reindex index" dumped core. I dropped the primary index and
tried to recreate it, but that also dumped core.

I guess it falls under the 3rd category of your listed types of data loss.
:-(

The question still is: is this kind of failure repairable?

The system is: RH 6.2, kernel-2.2.16-3, postgresql-7.0.3 rpms from
ftp.postgresql.org.

Regards,
Zoltan Boszormenyi