Re: [GENERAL] warning: libssl.so.4, needed by

2008-09-01 Thread Marco Bizzarri
On Fri, Aug 29, 2008 at 2:39 PM, J Welcomecert [EMAIL PROTECTED] wrote:
make compilation errors removed

It looks like you don't have OpenSSL installed on your system, or you
didn't provide the right switches when compiling.
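
If you are building from source, a sketch of the usual fix (package
names vary by distribution; these are only examples):

yum install openssl openssl-devel
./configure --with-openssl
make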

Regards
Marco

-- 
Marco Bizzarri
http://notenotturne.blogspot.com/
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql RPM upgrade (8.2.3 - 8.2.9)

2008-08-27 Thread Marco Bizzarri
On Thu, Aug 28, 2008 at 3:00 AM, Phoenix Kiula [EMAIL PROTECTED] wrote:

 Now, is it correct that I don't have to stop my server and I can just
 execute these commands:

 rpm -Uvh --test postgresql-libs-8.2.9-1PGDG.rhel4.i386.rpm
 rpm -Uvh --test postgresql-8.2.9-1PGDG.rhel4.i386.rpm
 rpm -Uvh --test postgresql-contrib-8.2.9-1PGDG.rhel4.i386.rpm
 rpm -Uvh --test postgresql-devel-8.2.9-1PGDG.rhel4.i386.rpm
 rpm -Uvh --test postgresql-server-8.2.9-1PGDG.rhel4.i386.rpm

 And all will be okay? (I would run these without the --test) Right
 now, these tests give errors.



Most probably your errors are caused by dependencies between the
packages; as someone else already suggested, try to write the command
as:

rpm -Uvh --test postgresql-8.2.9... postgresql-server...

etc. In that way all the dependencies should be addressed.
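
That is, with the full file names from your list (keep --test for a dry
run, drop it for the real upgrade):

rpm -Uvh --test postgresql-libs-8.2.9-1PGDG.rhel4.i386.rpm \
    postgresql-8.2.9-1PGDG.rhel4.i386.rpm \
    postgresql-contrib-8.2.9-1PGDG.rhel4.i386.rpm \
    postgresql-devel-8.2.9-1PGDG.rhel4.i386.rpm \
    postgresql-server-8.2.9-1PGDG.rhel4.i386.rpm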

If that does not solve the problem, post the error message.


Regards
Marco
-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Advice on implementing counters in postgreSQL

2008-08-03 Thread Marco Bizzarri
On Sat, Aug 2, 2008 at 5:11 PM, David Fetter [EMAIL PROTECTED] wrote:
 On Sat, Aug 02, 2008 at 09:23:31AM +0200, Marco Bizzarri wrote:
 Hi all.

 I need to keep a number of counters in my application; my counters
 are currently stored in a table:

 name | next_value | year

 The counters must be progressive numbers with no holes in between
 them, and they must restart from 1 every year.

 Here's a backward-compatible way to do this:

 http://www.varlena.com/GeneralBits/130.php

 Cheers,
 David.
 --
 David Fetter [EMAIL PROTECTED] http://fetter.org/
 Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
 Skype: davidfetter  XMPP: [EMAIL PROTECTED]

 Remember to vote!
 Consider donating to Postgres: http://www.postgresql.org/about/donate



David, thanks for pointing me to such a complete solution.

This works at both the READ COMMITTED and SERIALIZABLE isolation levels, am I right?

Regards
Marco


-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Advice on implementing counters in postgreSQL

2008-08-03 Thread Marco Bizzarri
On Sun, Aug 3, 2008 at 1:40 AM, Berend Tober [EMAIL PROTECTED] wrote:


 The way I understand the documentation at

 http://www.postgresql.org/docs/8.3/static/transaction-iso.html

 and

 http://www.postgresql.org/docs/current/static/explicit-locking.html,

 you should not have to use the serializable isolation level.

 I would define the counter table so as to hold the last-used value, rather
 than the next value, and then do the UPDATE first.

 As a consequence, assuming all this happens within a transaction of course,
 the SELECT FOR UPDATE syntax is not required either, because the UPDATE will
 grab a lock on the row and block other updates until the transaction is
 finished. That is, concurrency is protected and you don't have to restart
 any transactions, because subsequent transactions will just wait until the
 first one finishes, due to the nature of the lock automatically acquired by
 the initial UPDATE statement.


Yes, I'm considering moving away from SERIALIZABLE; the problem is
that I have to explore all the implications this has for my code. Up to
now, I have written my code assuming the SERIALIZABLE level, so I think
I need quite a thorough review to be sure about it.
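
For the archives, a minimal sketch of the UPDATE-first approach Berend
describes (the column name last_value is my own; adapt to your schema):

BEGIN;
UPDATE counters SET last_value = last_value + 1 WHERE name = 'name';
-- the row is now locked; reading it back returns the value just assigned
SELECT last_value FROM counters WHERE name = 'name';
COMMIT;

Concurrent transactions block on the UPDATE and proceed one at a time,
so this should work at READ COMMITTED without serialization failures.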

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Advice on implementing counters in postgreSQL

2008-08-02 Thread Marco Bizzarri
Hi all.

I need to keep a number of counters in my application; my counters are
currently stored in a table:

name | next_value | year


The counters must be progressive numbers with no holes in between
them, and they must restart from 1 every year. What I've done so far
is to access them while in SERIALIZABLE ISOLATION LEVEL, with the
following:

SELECT next_value FROM counters WHERE name = 'name' for update;
UPDATE counters SET next_value = next_value + 1 WHERE name = 'name';

of course, if I do not find the counter, I create it (which
automatically happens at the beginning of a new year).

This seems to work to me, but I've two questions:

1) is there any scenario which I'm missing here and which could lead
me into trouble? Deadlocks excluded.


2) while this works, it has the unfortunate behaviour of causing
conflicts between concurrent transactions, so one of them has to be
restarted and redone from scratch. Is there a way to avoid this
behaviour? Maybe by locking the tables?


Thank you all for your attention

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Advice on implementing counters in postgreSQL

2008-08-02 Thread Marco Bizzarri
Thanks for the advice, Craig.

I'm on a number of different PostgreSQL versions, ranging from 7.4 to
8.3, so I have to retain, where possible, compatibility with older
versions.

Is this better from a transaction/serialization point of view?

Regards
Marco

On Sat, Aug 2, 2008 at 10:19 AM, Craig Ringer
[EMAIL PROTECTED] wrote:
 Marco Bizzarri wrote:
 Hi all.

 I need to keep a number of counters in my application; my counters are
 currently stored in a table:

 name | next_value | year


 The counters must be progressive numbers with no holes in between
 them, and they must restart from 1 every year. What I've done so far
 is to access them while in SERIALIZABLE ISOLATION LEVEL, with the
 following:

 SELECT next_value FROM counters WHERE name = 'name' for update;
 UPDATE counters SET next_value = next_value + 1 WHERE name = 'name';

 If you're using a sufficiently recent version of Pg you can use:

 UPDATE counters
 SET next_value = next_value + 1
 WHERE name = 'name'
 RETURNING next_value;

 instead, which is slightly nicer. It'll return the *new* value of
 `next_value', so you'd have to make a few tweaks.

 --
 Craig Ringer
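
PS: if I read this right, one such tweak (a sketch, untested) would be
to return the value being assigned directly:

UPDATE counters
SET next_value = next_value + 1
WHERE name = 'name'
RETURNING next_value - 1 AS assigned_value;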




-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Advice on implementing counters in postgreSQL

2008-08-02 Thread Marco Bizzarri
On Sat, Aug 2, 2008 at 11:04 AM, Craig Ringer
[EMAIL PROTECTED] wrote:
 Marco Bizzarri wrote:
 Thanks for the advice, Craig.

 I'm on a number of different PostgreSQL versions, ranging from 7.4 to
 8.3, so I have to retain, where possible, compatibility with older
 versions.

 Is this better from a transaction/serialization point of view?

 As far as I know it's not significantly different, though I expect it'd
 be somewhat more efficient. However, support for UPDATE ... RETURNING
 was only added in 8.2 (or somewhere around there), so if you need
 to work with old versions like 7.4 it's no good to you anyway.

 I take it there's no way you can present the gapless identifiers at the
 application level, leaving the actual tables with nice SEQUENCE
 numbering? Or, alternately, insert them by timestamp/sequence (leaving
 the user-visible ID null) then have another transaction come back and
 assign them their gapless numeric identifiers in a single simple pass later?


 You're really going to suffer on concurrency if you have to acquire
 values from a gapless sequence as part of a transaction that does much
 other work.

Well, the sequence must be gapless because it is an implementation of
a law regarding how documents must be recorded when they are received
or sent in a public administration.

I can accept degraded performance here, considering that I usually have
between 200 and 1000 documents recorded (i.e. numbered) per day, which
is not such a great number.


However, I would like to avoid serialization errors as much as
possible, since they force me to repeat the transaction.

I'm experimenting with LOCK counters IN EXCLUSIVE MODE; it seems to rid
me of those serialization errors. Do you see any problems with this?
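
Concretely, what I'm trying looks like this (a sketch; the counter
lookup follows my earlier posts, and the LOCK is issued as the first
statement so the snapshot is taken only after the lock is acquired):

BEGIN;
LOCK TABLE counters IN EXCLUSIVE MODE;
SELECT next_value FROM counters WHERE name = 'name';
UPDATE counters SET next_value = next_value + 1 WHERE name = 'name';
-- ... record the document using the fetched number ...
COMMIT;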

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Marco Bizzarri
On Thu, Jun 5, 2008 at 5:36 PM, Tim Tassonis [EMAIL PROTECTED] wrote:

 Is there a way to create temporary tables in another way, so they are
 visible between sessions, or do I need to create real tables for my purpose?
 And is the performance penalty big for real tables, as they have to be written
 to disk/read from disk?

You could create a real table on disk, holding just the primary keys of
the matching rows; then you can join against the main table to get the
full results, as in the sketch below.
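
A sketch of what I mean (all table and column names here are made up):

CREATE TABLE search_results (session_id text, main_id integer);

INSERT INTO search_results
  SELECT 'session-42', id FROM main_table WHERE price > 100;

SELECT m.*
FROM main_table m
  JOIN search_results r ON r.main_id = m.id
WHERE r.session_id = 'session-42';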

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [ADMIN] backup of postgres scheduled with cron

2007-11-22 Thread Marco Bizzarri
On Nov 22, 2007 2:19 PM, Sorin N. Ciolofan [EMAIL PROTECTED] wrote:
 Hello all!

 I've a small bash script backup.sh for creating dumps on my Postgre db:

 #!/bin/bash
 time=`date '+%d'-'%m'-'%y'`
 cd /home/swkm/services/test
   pg_dump mydb > mydb_dump_$time.out

 I've edited crontab and added a line:

 00 4 * * * swkm /home/swkm/services/test/backup.sh

 to execute the backup.sh as user swkm daily at 4 am.

 The user swkm is the user I use to create backups manually. The script
 itself runs fine manually, but when run from the cron scheduler I get an
 empty mydb_dump_$time.out file (0 kB)

 Do you have any idea about what's wrong?

 Thanks
 Sorin


Hi Sorin,

why don't you add a MAILTO=youraddress at the start of your
crontab file, so that you can receive a report of the problem?
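
For example, at the top of the crontab (the address is just a
placeholder):

MAILTO=swkm@example.com
00 4 * * * swkm /home/swkm/services/test/backup.sh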

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] [ADMIN] backup of postgres scheduled with cron

2007-11-22 Thread Marco Bizzarri
On Nov 22, 2007 2:46 PM, Sorin N. Ciolofan [EMAIL PROTECTED] wrote:
 Hi Marco!

 Thank you for the advice.

 I got:

 /home/swkm/services/test/backup.sh: line 4: pg_dump: command not found
 updating: mydb_dump_22-11-07.out (stored 0%)

 which seems strange



Try putting the full path of the pg_dump command in the script.
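
For example (the path below is a guess; check where pg_dump lives on
your system with `which pg_dump`):

#!/bin/bash
time=`date '+%d'-'%m'-'%y'`
cd /home/swkm/services/test
/usr/local/pgsql/bin/pg_dump mydb > mydb_dump_$time.out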

Regards
Marco



-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] [ADMIN] backup of postgres scheduled with cron

2007-11-22 Thread Marco Bizzarri
On Nov 22, 2007 2:53 PM, Andrew Sullivan [EMAIL PROTECTED] wrote:
 On Thu, Nov 22, 2007 at 02:28:08PM +0100, Marco Bizzarri wrote:
 
  why don't you add a MAILTO=youraddress at the start of your
  crontab file, so that you can receive a report of the problem?

 Note: check that your cron accepts such an addition.  Many systems now use
 Vixie's cron, which does accept that, but some don't.  It's a nice feature,
 and good for this purpose.

Andrew, can you confirm the previous statement? I'm checking on a Debian Linux,
and it seems to be a Vixie cron; that feature is described in the man page...


 A

 --
 Andrew Sullivan
 Old sigs will return after re-constitution of blue smoke

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] computing and updating the size of a table with large objects

2007-09-01 Thread Marco Bizzarri
On 8/31/07, Daniel Verite [EMAIL PROTECTED] wrote:

 You can get the sizes from pg_largeobject, this way:

 SELECT id_doc, sum(length(data)) as filesize
 FROM documenti, pg_largeobject
 WHERE documenti.file = pg_largeobject.loid
 GROUP BY id_doc;

 --
  Daniel
  PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org


Thanks a lot for the suggestion: I didn't think to use
pg_largeobject; much cleaner now :-)
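
The corresponding UPDATE would then look roughly like this (an untested
sketch):

UPDATE documenti SET size = fs.filesize
FROM (SELECT loid, sum(length(data)) AS filesize
      FROM pg_largeobject GROUP BY loid) AS fs
WHERE documenti.file = fs.loid;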

Regards
Marco
-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] computing and updating the size of a table with large objects

2007-08-31 Thread Marco Bizzarri
Hi all.

I've a table with large objects, here is the definition:


PAFlow-emmebi=# \d documenti
       Tabella "public.documenti"
 Colonna |         Tipo          | Modificatori
---------+-----------------------+--------------
 id_doc  | character varying(50) | not null
 file    | oid                   |
 kind    | integer               | not null
 size    | bigint                |
Indici:
    "documenti_pkey" chiave primaria, btree (id_doc, kind)

(sorry for the mixed language output)

I need to populate the size attribute with the size of the large object referenced by file.

My first attempt was:

SELECT  lo_lseek(lo_open(file, 262144), 0, 2) AS size FROM documenti ;

This opens the large object and passes the resulting descriptor to
lo_lseek, which seeks to the end of the large object and thus computes
the size.

Now I could prepare an update which does the job. My worry is that the
update could use up all the available descriptors, since it opens one
large object per row (and the documenti table is quite large).

I thought I could use something like this:

SELECT  lo_lseek(lo_open(file, 262144), 0, 2) AS size, lo_close(0)
FROM documenti ;

which is quite nasty, and relies on side effects happening in the
proper order, but uses just one file descriptor for the whole query.

Does anyone have any other suggestions?

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] permission in the db or in the application?

2006-12-18 Thread Marco Bizzarri

Hi Sandro.

I think there is no silver bullet here (as in many other fields, too).

You could end up with a mix of different approaches (simple checks done
in the database, while complex ones are left in the application logic).
Also, most probably you will end up moving that logic around over the
lifetime of the application.

If you have time (and money), take a look at Chapter 20 of Agile
Database Techniques by Scott Ambler: it examines a number of
possibilities which could be worthwhile for you.

Regards
Marco
--
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] basic SQL request

2006-11-05 Thread Marco Bizzarri

You wrote articletype instead of articletypes in the first WHERE
clause: is this the problem?

Regards
Marco


--
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Storing images in PostgreSQL databases (again)

2006-10-05 Thread Marco Bizzarri

Hi.

I can provide some insight on the difference between the two interfaces.

AFAIK, the difference is in the size of the file you can store, and in
the interface you use to access the data.

The size limit is not important (I think), since you are far below it.

As for the interface, bytea gives you a query-based interface, while
large objects provide a file-based interface.

With large objects, you can avoid reading the whole object in one
read, and you can even seek inside the object, which can be useful if
you store large files.
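
For example, from SQL (a sketch; 262144 is INV_READ, the OID is just an
example, and the descriptor returned by lo_open is assumed to be 0):

BEGIN;
SELECT lo_open(198705, 262144);
SELECT loread(0, 8192);        -- read only the first 8 kB
SELECT lo_lseek(0, 100000, 0); -- or seek to byte 100000 first
SELECT lo_close(0);
COMMIT;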

I think there are also differences in how the space is reclaimed, but
my PostgreSQL-fu stops here.

Regards
Marco

--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.

2006-09-11 Thread Marco Bizzarri

On 9/11/06, Purusothaman A [EMAIL PROTECTED] wrote:

Hi Marco Bizzarri and Martijn van Oosterhout,


Thanks for your valuable reply.

I am trying to execute all query from VC++ through
CDatabase::ExecuteSQL(sQueryString) function call.

ie, via programming, not by manual entering query statements.
so, in my situation I can construct a string and pass on to this function to
execute the query string.

Whatever string I pass to the ExecuteSQL() function gets executed, but I
cannot get any result returned from the passed query string [like the
OID 198705 from lo_creat()].


I'm not an expert in VC++. I think you should obtain some sort of
ResultSet object. You could then check that.




Now, can you suggest me how to achieve it?

And one more clarification: what is the value 131072? How can I get this
value?


This is actually 0x20000 (the INV_WRITE flag) written in decimal. Check
the large object interface in the PostgreSQL documentation (C API).


According to the PostgreSQL documentation, the C syntax is given for the
client-side functions.

Should I use those C API calls?


If you're working from inside C, you can check:

http://www.postgresql.org/docs/8.1/static/lo-interfaces.html

Regards
Marco



:)
Purusothaman A


On 9/11/06, Marco Bizzarri [EMAIL PROTECTED]  wrote:
 I will try to explain it with a sample session: this is for creating
 and writing a blob.

 From the psql prompt (lines prefixed with > are the commands; the
 others are the results).

 > begin ;
 BEGIN;

 > SELECT lo_creat(131072) ;
  lo_creat
 ----------
    198705
 (1 row)

 (this is the OID of the newly created large object)

 > select lo_open(198705, 131072) ;
  lo_open
 ---------
        0
 (1 row)

 (this is the file handle which you will use in the operations)

 > SELECT lowrite(0, '');
  lowrite
 ---------
        4
 (1 row)

 (you wrote 4 characters into the large object)

 > select lo_close(0);
  lo_close
 ----------
         0
 (1 row)

 (you closed the file)

 > commit ;
 COMMIT

 In this way, you created a new large object, and stored a string of 4
 bytes inside of it.

 Regards
 Marco

 On 9/11/06, Purusothaman A  [EMAIL PROTECTED] wrote:
  Thanks Martijn van Oosterhout and Marco Bizzarri.
 
  But, according to the syntax of the client-side lo_import and lo_export,
  we should have 2 variables: PGconn (for the established connection) and
  lobjId (the imported file ID in PostgreSQL).
 
  I don't know how to do this in SQL statements.
 
  pls give me sample client side sql statements.
 
  :)
  Purusothaman A
 
 
  On 9/11/06, Martijn van Oosterhout  kleptog@svana.org wrote:
  
  On Mon, Sep 11, 2006 at 03:27:09PM +0530, Purusothaman A wrote:
   Thanks Martijn van Oosterhout,
  
   So, I have to write my own wrapper function upon the functions below.
 1. Oid lo_import(PGconn *conn, const char *filename);
 2. int lo_export(PGconn *conn, Oid lobjId, const char *filename);
 
  Not sure why you need a wrapper (you didn't say which language you were
  using) but those functions work exactly like the version you put in the
  SQL statements, except the filenames are for the client computer with
  client permissions.
 
  Hope this helps,
  --
  Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
   From each according to his ability. To each according to his ability
to
  litigate.
 
 
 
 
 
 


 --
 Marco Bizzarri
 http://notenotturne.blogspot.com/






--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Database migration and redesign

2006-09-11 Thread Marco Bizzarri

Since you're in the process of modifying a database, you could find this useful:

Agile Databases Techniques, by Scott Ambler.


Regards
Marco


On 9/11/06, Brandon Aiken [EMAIL PROTECTED] wrote:





I've been tasked with the unenviable job or migrating a MySQL 4.0 database
to something more usable (namely, PostgreSQL 8).  MySQL 4.0 doesn't even
support basic things like subqueries, and in order to emulate the effects
that RULEs, TRIGGERs and VIEWs bring, they had been using PHP scripts to
replicate and build tables across the database (not from one server to
another – within the DB itself).  The database was built across 5 separate
schemata simply to organize the 50 odd tables, and all the tables are using
the MyISAM engine which means no transactions, no row-level locking, and no
foreign key constraints.



Yeah.  It's ugly.  You should see the front-end.



My question relates to primary keys.  The vast majority of tables have a
primary key on a single char or varchar field.  Is it considered better
practice to create a serial type id key to use as the primary key for the
table, and then create a unique index on the char and varchar fields?
Should foreign keys reference the new primary id or the old unique key?
What about compound [primary] keys?



Also, any suggestions for good DB design books would be appreciated.  I no
longer have any good DB design books, and I don't know what's good anymore.



--


Brandon Aiken


CS/IT Systems Engineer










--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.

2006-09-11 Thread Marco Bizzarri

Actually, you can use lo_creat, lo_open, loread and lowrite directly in
your SQL code, instead of having to write a wrapper function. It is not
simple, but it can be done.

I don't have a general example: we have done this in a couple of our
projects, one being publicly available (www.paflow.it): look in the
Documents.py source, and look for the lo_xxx queries.

This is independent of whether the client and the server run on the
same machine.

Regards
Marco


On 9/11/06, Purusothaman A [EMAIL PROTECTED] wrote:

Thanks Martijn van Oosterhout,

So, I have to write my own wrapper function upon the functions below.
   1. Oid lo_import(PGconn *conn, const char *filename);
   2. int lo_export(PGconn *conn, Oid lobjId, const char *filename);

Am I right?

:)
Purusothaman A


On 9/11/06, Martijn van Oosterhout kleptog@svana.org wrote:

Short answer, you can't. The database server can obviously only access
things on the database server. Since SQL is also executed on the
server, no SQL statements can access stuff on the client computer.

If you want to load a file on the client side to the server, you need
to open the file and copy it over using the lo_ functions. There's no way
(IIRC) to access the contents of large objects from just SQL.

Hope this helps,

On Mon, Sep 11, 2006 at 02:45:10PM +0530, Purusothaman A wrote:
 Hi Martijn van Oosterhout,

 Thanks for your valuable reply.

 Yes I misunderstood the documentation.

 Then, I searched PostgreSQL documentation for equivalent client-side
 functions.
 But I found only C programming APIs instead of SQL functions.

 I want functions which are usable in SQL statements.

 Can you give example or equivalent client side function syntax?

 Thanks in advance.

 :)
 Purusothaman A




 On 9/9/06, Martijn van Oosterhout kleptog@svana.org wrote:
 
 On Sat, Sep 09, 2006 at 05:19:26PM +0530, Purusothaman A wrote:
  I usually log in to postgresql server with admin username.
 
  But whatever its, according to documentation, if client machine's user
 has
  write access
  permission, this query should successfully be executed.
 
 Read carefully, you are using the server-side functions, therefore:
 
 These two functions read and write files in the server's file system,
 using the permissions of the database's owning user.
 
 What your client user is is irrelevant. Perhaps you actually want to
 use the client-side functions?
 
 Have a nice day,
 --
 Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
  From each according to his ability. To each according to his ability to
 litigate.
 
 
 
 
 

--
Martijn van Oosterhout   kleptog@svana.orghttp://svana.org/kleptog/
 From each according to his ability. To each according to his ability to
litigate.









--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.

2006-09-11 Thread Marco Bizzarri

I will try to explain it with a sample session: this is for creating
and writing a blob.


From the psql prompt (lines prefixed with > are the commands; the
others are the results).

> begin ;
BEGIN;

> SELECT lo_creat(131072) ;
 lo_creat
----------
   198705
(1 row)

(this is the OID of the newly created large object)

> select lo_open(198705, 131072) ;
 lo_open
---------
       0
(1 row)

(this is the file handle which you will use in the operations)

> SELECT lowrite(0, '');
 lowrite
---------
       4
(1 row)

(you wrote 4 characters into the large object)

> select lo_close(0);
 lo_close
----------
        0
(1 row)

(you closed the file)

> commit ;
COMMIT

In this way, you created a new large object, and stored a string of 4
bytes inside of it.

Regards
Marco

On 9/11/06, Purusothaman A [EMAIL PROTECTED] wrote:

Thanks Martijn van Oosterhout and Marco Bizzarri.

But, according to the syntax of the client-side lo_import and lo_export, we
should have 2 variables: PGconn (for the established connection) and lobjId
(the imported file ID in PostgreSQL).

I don't know how to do this in SQL statements.

pls give me sample client side sql statements.

:)
Purusothaman A


On 9/11/06, Martijn van Oosterhout kleptog@svana.org wrote:

On Mon, Sep 11, 2006 at 03:27:09PM +0530, Purusothaman A wrote:
 Thanks Martijn van Oosterhout,

 So, I have to write my own wrapper function upon the functions below.
   1. Oid lo_import(PGconn *conn, const char *filename);
   2. int lo_export(PGconn *conn, Oid lobjId, const char *filename);

Not sure why you need a wrapper (you didn't say which language you were
using) but those functions work exactly like the version you put in the
SQL statements, except the filenames are for the client computer with
client permissions.

Hope this helps,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to
litigate.









--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Consulta

2006-08-06 Thread Marco Bizzarri

Sorry, I will try to answer in English and in Italian, if you're able
to understand either:

from psql prompt:

\dt

Regards
Marco

Sorry, I'll try to answer in English or in Italian, if you can read
either of the two:

from the psql prompt:

\dt

Regards
Marco
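
If you need the list from SQL rather than from psql, a sketch (using
the standard pg_tables view):

SELECT tablename FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');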

On 8/6/06, Feri@ [EMAIL PROTECTED] wrote:




Hi everyone, I'm new at this and I'm moving from MySQL to Postgres. Would
someone be so kind as to explain how I can get the names of the tables of
a given database in Postgres into an array? Thanks in advance for your
help.

Feri@



--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] logic/db question

2006-08-03 Thread Marco Bizzarri

If table reorganization is an option for you, you could consider using
integer ranges for describing parent/children relationships.

I have seen them in one of Joe Celko's books:

http://www.amazon.com/gp/product/1558609202/sr=8-2/qid=1154595988/ref=pd_bbs_2/104-2243146-1376759?ie=UTF8

In that way, you can find all the children with just one query, as in the sketch below.
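
A sketch of such a query, assuming the usual nested-set columns lft and
rgt on fooTBL (these columns are my own addition, not in your table):

SELECT c.name
FROM fooTBL p
  JOIN fooTBL c ON c.lft > p.lft AND c.lft < p.rgt
WHERE p.name = 'foo';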

Regards
Marco

On 8/2/06, bruce [EMAIL PROTECTED] wrote:

hi...

i have a tbl
 fooTBL
   name
   parentID
   ID

so a name might have a parentID, as well as an ID. 'name's are
associated with other 'name's via the parentID. in other words, if
name2's parentID == name1's ID, name1 is the parent of name2.

  name   parentID   ID
  foo    -          1
  cat    1          2
  dog    2          3

my question, how can i come up with a sql query that will list all the
children (and children's children...) of a top level item?

i had done this awhile ago.. but can't recall how i did it..

thanks



---(end of broadcast)---
TIP 6: explain analyze is your friend




--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Clustering and backup with large objects

2006-07-13 Thread Marco Bizzarri

Hi all.

I'm working on a document management application (PAFlow). The
application is Zope based and uses PostgreSQL as its (main) storage
system.

PostgreSQL must contain both profile data for documents and the
documents themselves. Documents are stored as large objects in
PostgreSQL.

Up to now, we've done backups using pg_dump, and that was fine.
However, a number of installations have databases whose backups are
increasingly large. Therefore, making a complete backup (and a restore)
is more and more time consuming.

PostgreSQL, at the moment, is 7.4.x. We will move to a newer version,
but I think we will not be able to migrate all customers to 8.1.x soon.

I've read the chapter on backups, including the part on large
databases. Is there any strategy for doing large backups, aside from
those mentioned in the documentation?

I would also like to ask possible solutions for clustering under
PostgreSQL. My use case scenario would be the following:

1) application makes comparably few writes wrt reads (1 to 10);
2) application is multithreaded, and any thread can do read and write;
3) database contains large objects (as mentioned before);
4) clustering is done for improving performance, rather than availability.

Thanks for your attention.

Regards
Marco
--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Fwd: [GENERAL] Long term database archival

2006-07-12 Thread Marco Bizzarri

-- Forwarded message --
From: Marco Bizzarri [EMAIL PROTECTED]
Date: Jul 12, 2006 9:03 PM
Subject: Re: [GENERAL] Long term database archival
To: Karl O. Pinc [EMAIL PROTECTED]


Long-term archival of electronic data is a BIG problem in the
archivist community. I remember, a few years ago, a paper describing
the problem of historical (20+ year old) data which was running the
risk of being lost simply because of the lack of proper hardware.

What I would suggest is to explore the problem by first searching out
the experience and research already available on the topic. The topic
itself is big, and it is not simply a matter of how you dumped the
data.

A little exploration in the archivist community could produce some
useful result for your problem.

Regards
Marco

On 7/6/06, Karl O. Pinc [EMAIL PROTECTED] wrote:

Hi,

What is the best pg_dump format for long-term database
archival?  That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Mostly, we're interested in dumps done with
--data-only, and have preferred the
default (-F c) format.  But this form is somewhat more
opaque than a plain text SQL dump, which is bound
to be supported forever out of the box.
Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a custom format dump if it does not
just load all by itself.

Is the answer different if we're dumping the
schema as well as the data?

Thanks.

Karl [EMAIL PROTECTED]
Free Software:  You don't pay back, you pay forward.
  -- Robert A. Heinlein


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match




--
Marco Bizzarri
http://notenotturne.blogspot.com/


--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Dumping in LATIN1 and restoring in UTF-8

2006-07-06 Thread Marco Bizzarri

On 7/6/06, Tino Wildenhain [EMAIL PROTECTED] wrote:

Marco Bizzarri schrieb:
 Hi all.

 Here is my use case: I've an application which uses PostgreSQL as
 backend. Up to now, the database was encoded in SQL_ASCII or LATIN1.
 Now, we need to migrate to UTF-8.

 What we tried was to:

 1) dump the database using pg_dump, in tar format (we had blobs);
 2) modify the result, using some conversion tool (like recode);
 3) destroy the old database;
 4) recreate the database with the UNICODE setting;
 5) restore the database using pg_restore.

 The result was not what I expected: pg_restore was using the LATIN1
 encoding to encode the strings, resulting in LATIN1 re-encoded as
 UTF-8...

 The problem lay in the toc.dat file, which stated that the client
 encoding was LATIN1, instead of UTF-8.

 The solution in the end was to manually modify the toc.dat file,
 substituting the string LATIN1 with UTF-8 (plus a space, since
 toc.dat is a binary file).

 Even though it worked for us, I wonder if there is any other way to
 accomplish the same result, at least to specify the encoding for the
 restore.

Yes, it's actually quite easy: you dump as you feel appropriate,
then create the database with the encoding you want, restore
without creating the database, and you are done.
Restore sets the client encoding to what it actually was
in the dump data (in your case LATIN1) and the database
would be UTF-8; postgres automatically recodes. No need
for iconv and friends.

Regards
Tino



First of all, thank you for your answer. However, I suspect I did not
understand your answer, since the commands I used were:

1) pg_dump -Ft -b -f dump.sql.tar database
2) dropdb database
3) createdb -E UNICODE database
4) pg_restore -d database dump.sql.tar

In my experience, this produces a double encoding. As you can see, I
hand-created the database with the proper encoding. However, when I
reimported the database, the result was LATIN1 re-encoded as UTF-8,
rather than pure UTF-8.

How was my procedure different from yours?

I will run some tests with a sample database, enabling logging, so
that I can see which commands are issued.

Regards
Marco


--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Dumping in LATIN1 and restoring in UTF-8

2006-07-06 Thread Marco Bizzarri

On 7/6/06, Tino Wildenhain [EMAIL PROTECTED] wrote:

...
 Yes, it's actually quite easy: you dump as you feel appropriate,
 then create the database with the encoding you want,
 restore without creating the database, and you are done.
 Restore sets the client encoding to what it actually was
 in the dump data (in your case LATIN1) and the database
 would be UTF-8; postgres automatically recodes. No need
 for iconv and friends.

 Regards
 Tino


 First of all, thank you for your answer. However, I suspect I did not
 understand your answer, since the commands I used were:

 1) pg_dump -Ft -b -f dump.sql.tar database
 2) dropdb database
 3) createdb -E UNICODE database
 4) pg_restore -d database dump.sql.tar

 In my experience, this produces a double encoding. As you
 can see, I hand-created the database with the proper encoding.
 However, when I reimported the database, the result was LATIN1
 re-encoded as UTF-8, rather than pure UTF-8.

 How was my procedure different from yours?

That was the correct way. I wonder if you have recoding support
enabled? Did you build postgres yourself?


Support for recoding? I don't know... I compiled postgres myself; it
is, BTW, 7.4.8. How can I check whether automatic recoding is enabled?


Latin-1 double-encoded into UTF-8 seems hardly possible...
UTF-8 barfs on most Latin-1 byte sequences, and current 8.1 is very
picky about it. So maybe you can work with a small
test table to find out what's going wrong here.


Yes, I will... I understand PostgreSQL became much more picky about
encodings in later releases.


(Changing the client_encoding setting in the backup is only needed
when you had data in the wrong encoding,
like SQL_ASCII filled with Latin-1 or something.)


Ok, thanks!

Regards
Marco

--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Dumping in LATIN1 and restoring in UTF-8

2006-07-05 Thread Marco Bizzarri

Hi all.

Here is my use case: I have an application which uses PostgreSQL as a
backend. Up to now, the database was encoded in SQL_ASCII or LATIN1.
Now we need to migrate to UTF-8.

What we tried was to:

1) dump the database using pg_dump, in tar format (we had blobs);
2) modify the result, using some conversion tool (like recode);
3) destroy the old database;
4) recreate the database with the UNICODE setting;
5) restore the database using pg_restore.

The result was not what I expected: pg_restore was using the LATIN1
encoding to encode the strings, resulting in LATIN1 re-encoded as
UTF-8...

The problem lay in the toc.dat file, which stated that the client
encoding was LATIN1, instead of UTF-8.

The solution in the end was to manually modify the toc.dat file,
substituting the string LATIN1 with UTF-8 (plus a space, since
toc.dat is a binary file).

Even though it worked for us, I wonder if there is any other way to
accomplish the same result, at least to specify the encoding for the
restore.

Regards
Marco
--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Documentation of the Front End/Back End Protocol for Large Objects

2006-06-23 Thread Marco Bizzarri

Hi all.

I would like to study the protocol for large object operations
(lo_read, lo_write,...) between the front-end and back-end.

I've found
http://www.postgresql.org/docs/8.1/static/protocol.html, but it looks
like it has no messages specific to large objects...

How is it implemented in the client (libpq, I suppose)?

Regards
Marco

--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Documentation of the Front End/Back End Protocol for Large Objects

2006-06-23 Thread Marco Bizzarri

Hi Tom.

Thanks for your suggestion; that was going to be my approach, after I
was unable to find any reference.



On 6/23/06, Tom Lane [EMAIL PROTECTED] wrote:

Marco Bizzarri [EMAIL PROTECTED] writes:
 I would like to study the protocol for large object operations
 (lo_read, lo_write,...) between the front-end and back-end.

 I've found this
 http://www.postgresql.org/docs/8.1/static/protocol.html, but it looks
 like it has no message specific for large object...

libpq does it through Function Call messages that invoke lo_open and so
on.  Look into src/interfaces/libpq/fe-lobj.c.

regards, tom lane




--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Changing encoding of a database

2006-06-21 Thread Marco Bizzarri

Hi Tomi.

Thanks for your answer, I was not aware of such a tool.

The next question at this point is (of course): what is the problem if
I have blobs? Should I recode them as well?

Regards
Marco

On 6/20/06, Tomi NA [EMAIL PROTECTED] wrote:

On 6/19/06, Marco Bizzarri [EMAIL PROTECTED] wrote:
 Hi all.

 We have PostgreSQL databases with SQL_ASCII or LATIN1 encoding. We would
 like to migrate them to UNICODE. Is there some contributed/available
 script, or is this something we should do by hand?

 Regards
 Marco

If you don't have blobs in your database, dump it to insert
statements, use the recode tool to recode your data, create a new
database based on UTF8 and load the data.
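
In commands, that would be something like the following sketch (on
older pg_dump versions the flag for INSERT statements is -d; database
names are examples):

pg_dump -d olddb > dump.sql
recode LATIN1..UTF-8 dump.sql
createdb -E UNICODE newdb
psql newdb -f dump.sql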

t.n.a.




--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Changing encoding of a database

2006-06-19 Thread Marco Bizzarri

Hi all.

We have PostgreSQL databases with SQL_ASCII or LATIN1 encoding. We would
like to migrate them to UNICODE. Is there some contributed/available
script, or is this something we should do by hand?

Regards
Marco
--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] MySQL insert() and instr() equiv

2006-06-18 Thread Marco Bizzarri

Not sure this is the right answer: in older versions you could enable
it via the postgresql.conf file, by modifying the variable log_statement
and setting it to true. Also, you should check the syslog level
variable in the same file.
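
For example, in postgresql.conf (7.4-era syntax; later releases changed
log_statement to take values like none/ddl/mod/all):

log_statement = true
syslog = 2    # send the log to syslog only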

Regards
Marco

On 6/17/06, Mark Constable [EMAIL PROTECTED] wrote:

On Sunday 18 June 2006 02:33, Tom Lane wrote:
  uid is an email address stored in the passwd table as [EMAIL PROTECTED]
  and this construct allows an incoming username such as user.domain.com
  to be compared to the stored [EMAIL PROTECTED].

 But, if you're not wedded to that particular way, why not use replace()?

I only decided to ditch MySQL tonight so I'm within the first
1/2 dozen hours of using pgsql for almost the first time. I've
lost a fair amount of hair with the basics of db and user setup
and close to burnout.

 SELECT wpath FROM passwd WHERE uid=\L OR replace(uid, '@', '.')=\L

Excellent. Just the double quotes needed to be changed to single
quotes to avoid this error and replace() indeed works for my needs.

 ERROR:  column user.domain.com does not exist

Another anti-burnout question, how would I turn on the ability
to view any SQL requests in the logfile ?

No doubt there is an answer in the list archives somewhere but
my first few searches brought up nothing useful.

--markc

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq




--
Marco Bizzarri
http://notenotturne.blogspot.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Questions about the use of largeobject functions from inside SQL

2005-03-20 Thread Marco Bizzarri
Hi all.
We have a setup with Zope and a remote PostgreSQL server. We're storing
blobs as large objects.

What we need is to be able to transfer blobs between Zope and
Postgres. I thought it was possible to use the lo_* functions, by
creating a large object and then sending the data using lo_write, but
I'm unable to figure out how to do this in SQL.

Any suggestions?
Note:
1) NFS in order to share disk space between the two servers is not an 
option (in that case, I would be able to use lo_import/lo_export);

2) We're using ZPsycopgDA on the client side
3) I'm not on the list, so please CC to me directly
Regards
Marco
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Certifications in military environment

2004-11-20 Thread Marco Bizzarri
Jeff Eckermann wrote:
--- Marco Bizzarri [EMAIL PROTECTED] wrote:

Hi all.
I would like to know if postgresql has any
certification for the 
military environment.

There are no official certifications, nor are there
likely to be.  But certifications may be offered by
individual companies, like Red Hat does with Linux.
Here is one which might be of interest:
http://groups.google.com/groups?hl=enlr=selm=20041014.104826.128620349.t-ishii%40sra.co.jp


If possible, please answer me directly (I'm not on
this mailing list). 
Otherwise, I will read from the web interface.

Regards
Marco
Thanks for your answer; I will check with that company. My problem,
however, was not with certifying a person as a Postgres expert, but
with certifying the platform as suitable for a military environment
(like the C2 or ITSEC certifications...)

Regards
Marco
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] Certifications in military environment

2004-11-16 Thread Marco Bizzarri
Hi all.
I would like to know if postgresql has any certification for the 
military environment.

If possible, please answer me directly (I'm not on this mailing list). 
Otherwise, I will read from the web interface.

Regards
Marco
---(end of broadcast)---
TIP 8: explain analyze is your friend