[GENERAL] (Fwd) Majordomo Delivery Error

2001-06-01 Thread fabrizio . ermini

Is it just me who's receiving all these majordomo errors, or is it 
the whole list?


--- Forwarded message follows ---
Date sent:  Thu, 31 May 2001 12:34:23 -0400 (EDT)
To: [EMAIL PROTECTED]
From:   [EMAIL PROTECTED]
Subject: Majordomo Delivery Error

This message was created automatically by mail delivery software.
A Majordomo message could not be delivered to the following addresses:

  [EMAIL PROTECTED]:
450 4.7.1 [EMAIL PROTECTED]... Can not check MX records for recipient host 
ejurka.com

  [EMAIL PROTECTED]:
450 4.7.1 [EMAIL PROTECTED]... Can not check MX records for 
recipient host tedgarrett.penguinpowered.com

  [EMAIL PROTECTED]:
450 4.7.1 [EMAIL PROTECTED]... Can not check MX records for recipient host 
nsdigital.com

  [EMAIL PROTECTED]:
450 4.7.1 [EMAIL PROTECTED]... Can not check MX records for recipient host 
syntext.com

  [EMAIL PROTECTED]:
450 4.7.1 [EMAIL PROTECTED]... Can not check MX records for recipient host 
milams.net

-- Original message omitted --

--- End of forwarded message ---
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]




Re: [GENERAL] What is the $PGDATA/data/pg_log file used for?

2001-05-11 Thread fabrizio . ermini

On 8 May 2001, at 7:13, Raymond Chui wrote:

> What is the $PGDATA/data/pg_log file used for?
> Is it a logical log file? I see that file grow very big.
> Can I do `cat /dev/null > $PGDATA/data/pg_log`
> to reduce it to zero size once in a while? Or is there other

DON'T!
That file is the transaction log... it is VITAL for the consistency 
of the DB. 
Consider it part of the DB itself, together with the other files 
under /data, and let Postgres handle it by its own means...

 
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]




Re: [GENERAL] Non-english articles on the techdocs.postgresql.org website

2001-04-17 Thread fabrizio . ermini

On 12 Apr 2001, at 0:00, Justin Clift wrote:
 
> Are there any people who can understand both English and non-English
> languages, who wouldn't mind translating an article or two on the
> techdocs.postgresql.org website to a different language?
 
I would gladly do some translation into Italian. Who should I contact 
to volunteer? 

bye!


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]




[GENERAL] need hint for a trigger...

2001-03-16 Thread fabrizio . ermini

Hi all. I have to write a trigger and I have never done it before, so if 
there is someone who could give me a start, I would be very grateful...

I have two tables that I want to keep "partially" synced, i.e.:

table1 (field1,field2,field3) 
table2 (field1,field2,field3, ... some other fields).

I've created them using the same data for the common fields, and 
then populated the other fields of table2. field1 is the unique key 
for both tables.

I would like changes made to a record in table1 to be reflected in the 
corresponding record in table2. Any changes made to the first 3 fields 
of table2 should be overwritten, leaving the other fields untouched.

I was presuming this could be done with an UPDATE trigger on 
table1, but I don't know how to write it... I know the first reply I 
can expect is RTFM, but if a gentle soul has the time to write an 
example (something along the lines of the sketch below)...
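
Just to show the kind of thing I imagine (a minimal sketch, assuming 
PL/pgSQL is installed in the database and the table/column names 
above; the function and trigger names are made up, and I have no idea 
if it's correct):

-- keep table2's first 3 fields in sync with table1 (sketch only;
-- assumes field1 itself never changes)
CREATE FUNCTION sync_table2() RETURNS opaque AS '
BEGIN
    UPDATE table2
       SET field2 = NEW.field2,
           field3 = NEW.field3
     WHERE field1 = NEW.field1;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER table1_sync
    AFTER UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE sync_table2();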

TIA,
Ciao


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]




Re: [GENERAL] Printing PostgreSQL reports

2001-02-22 Thread fabrizio . ermini

On 22 Feb 2001, at 12:49, Jeff MacDonald wrote:

> I think CURSORS would be the correct way to do it..
> 
> On Tue, 20 Feb 2001, Richard Ehrlich wrote:
> 
> > I can post info to PostgreSQL from a webform via JSP, and I can post 
> > reports from PostgreSQL to a webpage. Can anyone tell me how I might 
> > format a PostgreSQL report to a web page so that it will print 
> > discrete, sequenced pages?
  

Here the problem is that "discrete" keyword, which doesn't sit 
well with HTML at all. You can't put anything resembling a 
page break in HTML; the browser handles printing as it prefers.
The best you could do is estimate the length of the printed 
page and put a lot of whitespace between one page and the next, but 
given the variety of browser, system, and printer combinations, 
you would need a lot of luck...

If you want to generate "M$ Access-like" reports, divided into 
pages, you'll have to resort to a different formatting language. For 
example, you can generate a downloadable .rtf file, or even a .pdf 
one. It's a lot of work, but the result is guaranteed.
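
(For completeness, the cursor approach Jeff mentions boils down to 
fetching the report a fixed number of rows at a time on the SQL side; 
the table, ordering and page size here are purely illustrative:)

BEGIN;
DECLARE report CURSOR FOR
    SELECT * FROM orders ORDER BY order_date;
FETCH 50 FROM report;   -- rows for the first printed page
FETCH 50 FROM report;   -- rows for the next page, and so on
CLOSE report;
COMMIT;

That gives you fixed-size chunks of rows per page, but the HTML 
printing caveat above still applies.
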
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]



[GENERAL] Error from index pg_type_typname_index????

2001-02-12 Thread fabrizio . ermini

Hi all...

I've got a PostgreSQL 7.0.2 used as a backend for a website. Randomly, 
and not very frequently, an error pops up reporting the following 
problem:

ERROR: Cannot insert a duplicate key into unique index 
pg_type_typname_index

The query causing it is an innocent one that duplicates a table 
into a temporary one, i.e.

"select * into forum_clone from forums"

That of course doesn't cause any problem 99% of the time. Its 
occurrence doesn't seem to be related to load, nor does vacuuming 
the DB seem to change anything.

Now my question is: does anybody have a hint on what may be 
happening in that darn 1%???

TIA,
Ciao!


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]



Re: [GENERAL] Error from index pg_type_typname_index????

2001-02-12 Thread fabrizio . ermini

On 12 Feb 2001, at 10:10, Tom Lane wrote:

> > ERROR: Cannot insert a duplicate key into unique index 
> > pg_type_typname_index
> > The query causing it it's an innocent query that duplicates a table 
> > in a temporary one, i.e.
> > "select * into forum_clone from forums"
> 
> I think you're probably trying to do two of these at the same time.
 
And you think right. (And this should not come as a surprise, I 
would add :-)).
I've ascertained it by doing a little stress testing, and by simply 
realizing that I was doing a dumb thing... 

> I'll take a look to see if the order of operations can't be reversed so 
> that you get a more understandable complaint about a unique index on
> pg_class in this case.  However, the real answer for you is to be using
> a TEMP table if you are going to have multiple clients creating
> temporary tables at about the same time.  That avoids the name conflict.
 

Nope. This is the first thing I tried after I realized what was 
happening, but it does not work in a web environment, at least in a 
PHP-based one like mine; I think it comes down to the way PHP 
optimizes its connection pool (which, in effect, has given me some 
worry over time): if I use a TEMP table and try to stress-test the 
page (i.e. "hit F5 furiously while cycling through several browser 
windows with the mouse" :-)) I get many errors complaining about 
things such as "table doesn't exist" or similar. Evidently the TEMP 
tables of the various pages were getting mixed up, since their 
lifetime is based on the concept of a "session" that is not 1:1 with 
the lifetime of a web page.

I resorted to handling the creation of the various tables at the 
application level, creating temporary table names with PHP's uniqid() 
function. A little overhead, but it works well.
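
(In SQL terms, each page ends up doing something like this instead of 
sharing a single "forum_clone"; the suffix shown is just an example of 
what uniqid() produces:)

SELECT * INTO forum_clone_3a9f1c2b FROM forums;
-- ... build the report from forum_clone_3a9f1c2b ...
DROP TABLE forum_clone_3a9f1c2b;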

Summarizing all these thoughts, the moral is that it hasn't been PG's 
fault (except for a less-than-clear error message, but that's a venial 
sin :-)), that I should think more before crying wolf, and that I 
really should study better the way PHP handles PG connections... 
there's some "hidden magic" in there that doesn't convince me. 

Thanks for your attention, as ever, and
Ciao

Fabrizio



/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]



Re: [GENERAL] Re: PostgreSQL SQL for MySQL SQL

2001-02-07 Thread fabrizio . ermini

On 6 Feb 2001, at 14:24, Chuck Esterbrook wrote:

> At 01:10 PM 2/6/2001 -0600, John Burski wrote:
> > interactively, via psql, via the Perl Pg module, or via PHP.  If you 
> > attempt to drop a database that doesn't exist, PostgreSQL will issue an 
> > error message.  If you're running interactively, you'll see the message; 
> > if you're accessing via a Perl module or PHP, you can check the query 
> > results to see if an error occurred.  I'm fairly certain that this same 
> > mechanism exists if you're using C or C++ to access your databases.
> 
> I'd prefer to skip the error message, because otherwise my regression test 
> suite will barf, saying something like "Test X failed due to SQL error". I 
> suppose I can work in some code to catch this and swallow it.
 
You can always search the system catalog to know if a DB exists 
or not.

This is how I do it using libpq, in C:

 sprintf(Query,"SELECT * from pg_database where 
datname='%s';",DBname);
res = PQexec(conn, Query);
if (PQresultStatus(res) != PGRES_TUPLES_OK)
Throw_Error(121);
NumMax=PQntuples(res);

if(NumMax==1)
{
sprintf(Query,"DROP DATABASE %s;",DBname);
res = PQexec(conn, Query);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
Throw_Error(122);
}

> > I'm not familiar with the "use Foo" functionality of MySQL, so I can't 
> > discuss it.
> 
> I think you may have answered it with your "\connect dbname" comment. 
> Provided that I can put that after the "create database" in the SQL script 
> and feed the whole mess to psql.
 

Sure you can. If you use psql as your command interpreter, "\connect 
dbname" has almost 1:1 functionality with respect to MySQL's "use 
foo" (see the rough sketch below). If you use a libpq-based API (i.e. 
you're using C, Perl, PHP or something else) you have to use the 
connection function to select the DB.
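
(Roughly, such a script fed to psql would look like this; the database 
and table names are just illustrative:)

CREATE DATABASE foo;
\connect foo
CREATE TABLE bar (id int4, name text);
-- everything from here on runs against "foo"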

HTH, bye!


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]



[GENERAL] Could somebody EXPLAIN? :-)

2000-12-21 Thread fabrizio . ermini

Hi all. 
I wanted to compare the performance of 2 ways of writing a query, 
one using a cartesian join, one using a subselect, to see which 
one was faster.
I used the EXPLAIN command to understand how Postgres 
planned to execute them, but the results are a little obscure.
Can somebody shed some light? 

Here are the results of the explains:

With the join:

EXPLAIN
SELECT distinct s.* FROM items_products AS r, support AS s 
WHERE r.family_name='XXX' 
 AND r.item_id=s.id 
ORDER BY s.date DESC

NOTICE:  QUERY PLAN:

Sort  (cost=38.89 rows=2 width=116)
  ->  Nested Loop  (cost=38.89 rows=2 width=116)
        ->  Seq Scan on items_products r  (cost=36.84 rows=1 width=4)
        ->  Index Scan using support_id_key on support s  (cost=2.05 rows=382 width=112)

With the subselect:

EXPLAIN
SELECT * FROM support
 WHERE id IN (SELECT DISTINCT(item_id) FROM items_products
              WHERE family_name='XXX')
 ORDER BY date DESC;

NOTICE:  QUERY PLAN:

Sort  (cost=23.61 rows=382 width=112)
  ->  Seq Scan on support  (cost=23.61 rows=382 width=112)
        SubPlan
          ->  Unique  (cost=36.84 rows=1 width=4)
                ->  Sort  (cost=36.84 rows=1 width=4)
                      ->  Seq Scan on items_products  (cost=36.84 rows=1 width=4)



(I could also post the table structure, if it's of any help.)

All these figures confuse me. Which one should I use for 
comparison?

TIA, merry Xmas to all!


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]



Re: [GENERAL] Unanswered questions about Postgre

2000-12-12 Thread fabrizio . ermini

> Yes, this was my point.  We now have TOAST, but by not going the extra
> mile to enable storage of binary files, we really aren't taking full
> advantage of our new TOAST feature.
> 
> I can see people saying, "Wow, you can store rows of unlimited length
> now.  Let me store this jpeg.  Oh, I can't because it is binary!"
 
Well, it seems to me that when TOAST becomes available (i.e. when 
the long-awaited, most desired, most bloated, world-
conquering 7.1 version comes out...), 90% of the work needed to 
also support column-style BLOBs is already done... at least for 
web applications, which happen to be my focus. 
Any web programmer worth their salt could put up a simple layer that 
does base64 encode/decode and use "CLOBs" (I think TOAST 
columns could be called that, right?)... and they would have to write 
some interface for file uploading/downloading anyway, since their 
clients are using a browser as their frontend. Using PHP, it's no 
more than a few lines of code.

Granted, base64 encoding can waste a LOT of space, but it looks 
like a Columbus' egg in this scenario. 

Maybe base64 could also be a quick way to write a binary "patch" 
for TOAST so it would be "natively" binary-compatible?

Or am I saying a lot of bullsììt? :-)

Just wanted to share some thoughts.
Merry Christmas to everybody...


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]



Re: [GENERAL] Re: [HACKERS] My new job

2000-10-11 Thread fabrizio . ermini

 
> Bottom line is we're not sure what to do now.  Opinions from the 
> floor, anyone?
 

From the lowly end of the floor... as far as I'm concerned, I'm not 
worried about the involvement of the core team. Instead, I'm happy 
that companies like GB and Postgres Inc have been founded.

I'm not an active member of the open source community (except for 
advocating it), simply for lack of skill. I know RMS's and ESR's 
works, and I think I got the ideas, but I believe that "commercial 
support" is useful for the quality of the projects, not detrimental.

I really think there is no possibility that a commercial company 
based on an open source project could steer away from the good of 
the project. The equation is simple: the better the "product" is, 
the more the company will penetrate the market.

We all know that marketing and FUD approaches are incompatible 
with open source projects; only the quality of the product can give 
people a reason to adopt it. 

What should we fear? That GB will purposely put limitations 
or bugs into the code, so they could earn more by supporting it
(y'know, some say a certain guy has earned billions 
following this strategy ;-))?
But this is simply not feasible. They don't sell the product, so they 
could not gain from "new releases" and "service packs". And who 
could hide bugs in an open source project and call them "features"?

At most, as Tom said, they will be more focused on hunting bugs 
and adding features based on requests made by paying customers. 
Well, those are nonetheless bugs that will be corrected and new 
features that will be added, and we will all benefit from them. 
There's a good chance that they are the same bugs and features that 
some of OUR customers (meaning "we" as in "independent 
consultants and developers that use open source projects as 
tools") will ask for. 
And this way the people working on it will also be well 
paid (er, I don't know the payrolls, I'm just hoping they are 
good...), and I can't see anything bad in that!

No, as I said, commercial companies investing in open source 
development can only do good. 

just my 0.02 Euro ;-)

and good luck to all core members for their new jobs!


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]



[GENERAL] Instability in copying large quantities of data

2000-09-05 Thread fabrizio . ermini

Hi all...

I've got a big thorn in my side at the moment.

I'm developing a web app based essentially on a set of reports. These 
reports are generated from queries on my client's legacy system. 
For obvious security reasons, my app doesn't interact directly with 
the main server, but is built around a Postgres DB on a separate 
machine (which is also the web server), and I set up a "poor man's 
replication" that batch-transfers data from the legacy server to the 
pgsql server.

In practice, the legacy server generates ASCII dumps of the data 
necessary for the reports, then zips them and ftp's them to the web 
server. Then a little process scheduled in cron picks them up and 
COPYs them into the pgsql system. I built this process using C and 
libpq (if necessary, I can post the code, but it's a very simple thing 
and I assume you can figure out how it works; the gist is in the 
sketch below).
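
(For the record, each import boils down to a statement like this, 
issued through PQexec; the path and table name here are just 
illustrative:)

COPY orders_report FROM '/home/webrep/incoming/orders.txt';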

I've used this scheme many times for various web apps, and I never 
encountered problems (I've got an app built eons ago, based on 
Slack 3.5 and PG 6.3.2, that's hosted at a far-away provider and 
that has never stopped for a single second in all this time. Wow!).

Now I was trying it on a brand new RH 6.2 with PG 7.0.2, RPM 
version. The problem is that the COPY of the data apparently 
sometimes leaves a table in an inconsistent state.
The command doesn't throw any error, but when I try to SELECT or 
VACUUM that table the backend dumps core.  Apparently the only 
thing I can do is drop the table and recreate it. This is 
EXTREMELY unfortunate, since it all must be automated, and if I 
can't catch any error condition during the update, then the web 
app starts crashing too... 

Sadly this happens in a very inconsistent way. However, it seems 
that the size of the data file is related to the frequency of the 
problem: and since some of the table dumps are more than 20 
MB, this is not good news. 

I don't have any logs, because the RPM version doesn't create 
them; however, I'll try to fix this as soon as possible.

In the meantime, can anybody share some hints on how to resolve 
this nightmare?


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Fabrizio Ermini   Alternate E-mail:
C.so Umberto, 7   [EMAIL PROTECTED]
loc. Meleto Valdarno  Mail on GSM: (keep it short!)
52020 Cavriglia (AR)  [EMAIL PROTECTED]


