Re: [HACKERS] [GENERAL] Performance while loading data and indexing

2002-09-26 Thread Denis Perchine

On Thursday 26 September 2002 21:52, Shridhar Daithankar wrote:

> I might have found the bottleneck, although by accident. Mysql was running
> out of space while creating an index. So my friend shut down mysql and tried
> to move things by hand to create links. He noticed that even things like cp
> were terribly slow, and it hit us: maybe the culprit is the file system.
> Ext3 in this case.
>
> My friend argues for ext2 to eliminate journalling overhead, but I favour
> reiserfs personally, having used it in pgbench with 10M rows on a paltry 20GB
> IDE disk for 25 tps..
>
> We will be attempting reiserfs and/or XFS if required. I know how much
> speed difference exists between reiserfs and ext2. Would not be surprised
> if everything just starts screaming in one go..

As someone found before, any non-journaling FS is faster than a
journaling one. This is due to the double work done by the FS and the database.

Try it on ext2 and compare.

--
Denis


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Firebird 1.0 released

2002-04-16 Thread Denis Perchine

Hi,

I was interested in this:
Firebird's indexes are very dense because they compress both the prefix and 
the suffix of each key. Suffix compression is simply the elimination of 
trailing blanks or zeros, depending on the data type. Suffix compression is 
performed on each segment of a segmented key. Prefix compression removes the 
leading bytes that match the previous key. Thus a duplicate value has no key 
stored at all. Dense storage in indexes minimizes the depth of the btrees, 
eliminating the advantage of other index types for most data.

Do we do this? How feasible is this?
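A minimal sketch of the compression scheme described above (the helper names are hypothetical, not Firebird's actual code): suffix compression strips trailing blanks from a text key, prefix compression counts the leading bytes shared with the previous key, and a duplicate key therefore stores zero bytes.

```c
#include <string.h>
#include <stddef.h>

/* Suffix compression: length of the key once trailing blanks are stripped. */
static size_t suffix_compress(const char *key, size_t len)
{
    while (len > 0 && key[len - 1] == ' ')
        len--;
    return len;
}

/* Prefix compression: number of leading bytes shared with the previous key. */
static size_t prefix_len(const char *prev, size_t prev_len,
                         const char *cur, size_t cur_len)
{
    size_t n = 0;
    size_t max = prev_len < cur_len ? prev_len : cur_len;
    while (n < max && prev[n] == cur[n])
        n++;
    return n;
}

/* Bytes that must actually be stored for `cur`, given `prev` precedes it
 * in the index; a duplicate key stores nothing at all. */
static size_t stored_bytes(const char *prev, const char *cur)
{
    size_t pl = suffix_compress(prev, strlen(prev));
    size_t cl = suffix_compress(cur, strlen(cur));
    return cl - prefix_len(prev, pl, cur, cl);
}
```

With keys sorted, each entry pays only for the bytes where it diverges from its predecessor, which is what keeps the btree shallow.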

On Tuesday 16 April 2002 00:35, Christopher Kings-Lynne wrote:
> The Firebird guys have gotten around to releasing 1.0.  If you read this
> front page spiel, you'll notice that they use MVCC, but with an overwriting
> storage manager.

--
Denis





Re: [HACKERS] Importing Large Amounts of Data

2002-04-15 Thread Denis Perchine

On Monday 15 April 2002 05:15, Christopher Kings-Lynne wrote:
> > On Monday 15 April 2002 03:53, Christopher Kings-Lynne wrote:
> > > BTW, instead of:
> > >
> > > CREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);
> > >
> > > do:
> > >
> > > ALTER TABLE bigone ADD PRIMARY KEY(rec_no);
> >
> > I am sorry, could you please elaborate more on the difference?
>
> They have the same _effect_, it's just that the first syntax does not mark
> the index as the _primary_ index on the relation.

Yes, I know. I mean, how does this affect performance? How can this change
the planner's decisions? Does it have any effect besides a cosmetic one?

--
Denis





Re: [HACKERS] Importing Large Amounts of Data

2002-04-15 Thread Denis Perchine

On Monday 15 April 2002 03:53, Christopher Kings-Lynne wrote:
> BTW, instead of:
>
> CREATE UNIQUE INDEX bigone_pkey ON bigone (rec_no);
>
> do:
>
> ALTER TABLE bigone ADD PRIMARY KEY(rec_no);

I am sorry, could you please elaborate on the difference?

--
Denis





Re: [HACKERS] Problems starting up postgres

2001-09-06 Thread Denis Perchine

On Thursday 06 September 2001 20:49, Tom Lane wrote: 
> Denis Perchine <[EMAIL PROTECTED]> writes: 
> Okay.  As a temporary recovery measure, I'd suggest reducing that 
> particular elog from STOP to DEBUG level.  That will let you start up 
> and run the database.  You'll need to look through your tables and try 
> to figure out which one(s) have lost data.  It might be interesting to 
> try to figure out just which page has the bad LSN value --- that might 
> give us a clue why the WAL did not provide protection against this 
> failure.  Unfortunately XLogFlush doesn't have any idea who its caller 
> is, so the only way I can think of to check that directly is to set a 
> breakpoint at this error report and look at the call stack. 
 
OK, I will do this tomorrow. I have no free space, and I am forced to tgz and
untgz the database.
 
--  
Sincerely Yours, 
Denis Perchine 
 
-- 
E-Mail: [EMAIL PROTECTED] 
HomePage: http://www.perchine.com/dyp/ 
FidoNet: 2:5000/120.5 
-- 
 





[HACKERS] Problems starting up postgres

2001-09-05 Thread Denis Perchine

Hello, 
 
I have a quite strange problem: Postgres refuses to start.
This is 7.1.2; actually, it is an Aug 15 snapshot of the REL7_1_STABLE branch,
which should be 7.1.2. Any ideas how to repair the data?
This is quite urgent. The system is live, and now stopped.
 
Sep  5 08:42:30 mx postgres[5341]: [1] DEBUG:  database system shutdown was interrupted at 2001-09-05 08:26:25 EDT
Sep  5 08:42:30 mx postgres[5341]: [2] DEBUG:  CheckPoint record at (23, 2431142676)
Sep  5 08:42:30 mx postgres[5341]: [3] DEBUG:  Redo record at (23, 2431142676); Undo record at (0, 0); Shutdown TRUE
Sep  5 08:42:30 mx postgres[5341]: [4] DEBUG:  NextTransactionId: 13932307; NextOid: 9127687
Sep  5 08:42:30 mx postgres[5341]: [5] DEBUG:  database system was not properly shut down; automatic recovery in progress...
Sep  5 08:42:30 mx postgres[5341]: [6] DEBUG:  redo starts at (23, 2431142740)
Sep  5 08:42:30 mx postgres[5341]: [7] DEBUG:  ReadRecord: record with zero len at (23, 2432317444)
Sep  5 08:42:30 mx postgres[5341]: [8] DEBUG:  redo done at (23, 2432317408)
Sep  5 08:42:30 mx postgres[5341]: [9] FATAL 2:  XLogFlush: request is not satisfied
Sep  5 08:44:42 mx postgres[5441]: [1] DEBUG:  database system shutdown was interrupted at 2001-09-05 08:42:30 EDT
Sep  5 08:44:42 mx postgres[5441]: [2] DEBUG:  CheckPoint record at (23, 2431142676)
 
--  
Sincerely Yours, 
Denis Perchine 
 
-- 
E-Mail: [EMAIL PROTECTED] 
HomePage: http://www.perchine.com/dyp/ 
FidoNet: 2:5000/120.5 
-- 
 




Re: [HACKERS] Rename config.h to pg_config.h?

2001-08-13 Thread Denis Perchine

On Monday 13 August 2001 23:26, Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I vote for ignore.  Don't tons of projects have a config.h file?
>
> That's exactly why there's a problem.  We are presently part of the
> problem, not part of the solution.

The usual solution is to have a directory. Like

#include 

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] large objects dump

2001-06-05 Thread Denis Perchine

Hi,

> I had PostgreSQL 7.0.3 (7.1 now) and one nice day I noticed that a large
> number of my BLOBs were broken! Although they seem to have good content
> in the file system (xinv[0-9]+ files), I was not able to get them via
> lo_export... After spending some time trying to fix it, I decided to write
> my own xinv2plainfile converter. I hope that if someone has the same troubles,
> this converter will help him.
> Just compile it, put it in the dir with your xinv[0-9]+ files, and run it.
> It will create new files with names equal to the BLOB id from the appropriate xinv.

Either use 7.1.x or apply my patch to 7.0.3, and you will have no such
problems at all. :-))

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] Re: AW: Plans for solving the VACUUM problem

2001-05-22 Thread Denis Perchine

> >As  a  rule  of  thumb,  online  applications  that hold open
> >transactions during user interaction  are  considered  to  be
> >Broken  By  Design  (tm).   So I'd slap the programmer/design
> >team with - let's use the server box since it doesn't contain
> >anything useful.
>
> Many web applications use persistent database connections for performance
> reasons.

A persistent connection is not the same as an OPEN transaction, BTW.

> I suppose it's unlikely for webapps to update a row and then sit and wait a
> long time for a hit, so it shouldn't affect most of them.
>
> However if long running transactions are to be aborted automatically, it
> could possibly cause problems with some apps out there.
>
> Worse if long running transactions are _disconnected_ (not just aborted).

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] applications hang when calling 'vacuum'

2001-04-26 Thread Denis Perchine

On Thursday 26 April 2001 23:36, Barnes, Sandy (Sandra) wrote:
> > We are currently calling 'vacuum' from within our code, using the psql++
> > PgDatabase ExecCommandOk() to send the SQL statement 'vacuum'.  I seems
> > to work when we only have one process running, but when two processes
> > (both invoking the vacuum) are running it immediately hangs.  Some
> > postgres processes exist... one 'idle' and one 'VACUUM'.  The
> > applications hang. I tried using the PgTransaction class to invoke the
> > SQL but received an error stating that vacuum cannot be run in a
> > transaction.
> > Any ideas?

I can confirm this; it is also the case on my systems. I already reported
this some time ago. There was no reply...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] Database corruption in 7.0.3

2001-03-15 Thread Denis Perchine

I can confirm this. I got this just yesterday...

Messages:

NOTICE:  Rel acm: TID 1697/217: OID IS INVALID. TUPGONE 1.

And lots of such lines...
And

pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.


In the end :-((( I lost a library of our institute... :-((( But I have a 
backup!!! :- This table does not even have ANY indices!!!

Program received signal SIGSEGV, Segmentation fault.
0x813837f in PageRepairFragmentation (page=0x82840b0 "") at bufpage.c:311
311 alignedSize = MAXALIGN((*lp).lp_len);
(gdb) bt
#0  0x813837f in PageRepairFragmentation (page=0x82840b0 "") at bufpage.c:311
#1  0x80a9b07 in vc_scanheap (vacrelstats=0x82675b0, onerel=0x8273428, vacuum_pages=0xbfffe928, fraged_pages=0xbfffe918) at vacuum.c:1022
#2  0x80a8e8b in vc_vacone (relid=27296, analyze=0 '\000', va_cols=0x0) at vacuum.c:599
#3  0x80a8217 in vc_vacuum (VacRelP=0xbfffe9b4, analyze=0 '\000', va_cols=0x0) at vacuum.c:299
#4  0x80a818b in vacuum (vacrel=0x8267400 "", verbose=1 '\001', analyze=0 '\000', va_spec=0x0) at vacuum.c:223
#5  0x813fba5 in ProcessUtility (parsetree=0x8267418, dest=Remote) at utility.c:694
#6  0x813c16e in pg_exec_query_dest (query_string=0x820aaa0 "vacuum verbose acm;", dest=Remote, aclOverride=0 '\000') at postgres.c:617
#7  0x813c08e in pg_exec_query (query_string=0x820aaa0 "vacuum verbose acm;") at postgres.c:562
#8  0x813d4c3 in PostgresMain (argc=9, argv=0xb068, real_argc=9, real_argv=0xba3c) at postgres.c:1588
#9  0x811ace5 in DoBackend (port=0x8223068) at postmaster.c:2009
#10 0x811a639 in BackendStartup (port=0x8223068) at postmaster.c:1776
#11 0x811932f in ServerLoop () at postmaster.c:1037
#12 0x8118b0e in PostmasterMain (argc=9, argv=0xba3c) at postmaster.c:725
#13 0x80d5e5e in main (argc=9, argv=0xba3c) at main.c:93
#14 0x40111fee in __libc_start_main () from /lib/libc.so.6

This is plain 7.0.3.

On Thursday 15 March 2001 14:52, Tim Allen wrote:
> We have an application that we were running quite happily using pg6.5.3
> in various customer sites. Now we are about to roll out a new version of
> our application, and we are going to use pg7.0.3. However, in testing
> we've come across a couple of isolated incidents of database
> corruption. They are sufficiently rare that I can't reproduce the problem,
> nor can I put my finger on just what application behaviour causes the
> problems.
>
> The symptoms most often involve some sort of index corruption, which is
> reported by vacuum and it seems that vacuum can fix it. On occasion vacuum
> reports "invalid OID" or similar (sorry, don't have exact wording of
> message). On one occasion the database has been corrupted to the point of
> unusability (ie vacuum admitted that it couldn't fix the problem), and a
> dump/restore was required (thankfully that at least worked). The index
> corruption also occasionally manifests itself in the form of spurious
> uniqueness constraint violation errors.
>
> The previous version of our app using 6.5.3 has never shown the slightest
> symptom of database misbehaviour, to the best of my knowledge, despite
> fairly extensive use. So our expectations are fairly high :-).
>
> One thing that is different about the new version of our app is that we
> now use multiple connections to the database (previously we only had
> one). We can in practice have transactions in progress on several
> connections at once, and it is possible for some transactions to be rolled
> back under application control (ie explicit ROLLBACK; statement).
>
> I realise I haven't really provided an awful lot of information that would
> help identify the problem, so I shall attempt to be understanding if
> no-one can offer any useful suggestions. But I hope someone can :-). Has
> anyone seen this sort of problem before? Are there any known
> database-corrupting bugs in 7.0.3? I don't recall anyone mentioning any in
> the mailing lists. Is using multiple connections likely to stimulate any
> known areas of risk?
>
> BTW we are using plain vanilla SQL, no triggers, no new types defined, no
> functions, no referential integrity checks, nothing more ambitious than a
> multi-column primary key.
>
> The platform is x86 Red Hat Linux 6.2. Curiously enough, on one of our
> testing boxes and on my development box we have never seen this, but we
> have seen it several times on our other test box and at least one customer
> site, so there is some possibility it's related to dodgy hardware. The
> customer box with the problem is a multi-processor box, all the other
> boxes we've tested on are sin

Re: [HACKERS] problem with fe/be protocol and large objects

2001-03-13 Thread Denis Perchine

On Monday 12 March 2001 03:24, Eric Marsden wrote:
> I am trying to debug my socket-level interface to the backend, which
> implements the 6.4 protocol. It works for general queries, but I have
> a problem with large objects.
>
> lo_create and lo_unlink seem to work OK; I get an oid which looks ok
> and there is a corresponding xinv??? file in the base/ directory.
> lo_open returns 0 as a file descriptor. However, following up with one
> of the other lo functions which take descriptor arguments (such as
> lo_write or lo_tell) fails with
>
>ERROR:  lo_tell: invalid large object descriptor (0)

You should do ANY operations with LOs inside a transaction.

> Looking at be-fsstubs.c it seems that this arises when cookies[fd] is
> NULL. I don't know what this might come from: the lo_tell is sent
> right after the lo_open, on the same connection.
>
> Running the sample lo program in C works, so I suppose the problem
> must come from the bytes I'm sending. Any ideas what could cause this?
>
>
> PostgreSQL 7.0.3 on sparc-sun-solaris2.5.1, compiled by gcc 2.95.2

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] Performance monitor

2001-03-13 Thread Denis Perchine

> > Small question... Will it work in console? Or it will be X only?
>
> It will be tck/tk, so I guess X only.

That's bad, because it will be useless for people whose databases are far away...
Like me... :-((( Another point is that it is a little bit strange to have 
X-Window on a machine with a database server... at least if it is a production 
one, not one for play...

There should also be a possibility of remote monitoring of the database. But 
that's just a dream... :-)))

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] Performance monitor

2001-03-12 Thread Denis Perchine

On Wednesday 07 March 2001 21:56, Bruce Momjian wrote:
> I have started coding a PostgreSQL performance monitor.  It will be like
> top, but allow you to click on a backend to see additional information.
>
> It will be written in Tcl/Tk.  I may ask to add something to 7.1 so when
> a backend receives a special signal, it dumps a file in /tmp with some
> backend status.  It would be done similar to how we handle Cancel
> signals.

Small question... Will it work in a console? Or will it be X only?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: AW: AW: AW: [HACKERS] WAL does not recover gracefully from ou t-of -dis k-sp ace

2001-03-09 Thread Denis Perchine

On Saturday 10 March 2001 08:41, Tom Lane wrote:
> More numbers, these from a Powerbook G3 laptop running Linux 2.2:

Eeegghhh. Sorry... But where did you get O_DSYNC on Linux?
Maybe here?

bits/fcntl.h: # define O_DSYNC  O_SYNC

There is no O_DSYNC in the kernel at all... even in the latest 2.4.x.
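To illustrate the point: with that glibc fallback in place, any benchmark that "uses O_DSYNC" on such a kernel is really measuring O_SYNC. A small sketch of opening a file that way (the function name and path below are illustrative, not from Tom's tfsync.c):

```c
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

/* On kernels without a real O_DSYNC, the header simply aliases it to
 * O_SYNC, so the "O_DSYNC" timings are actually O_SYNC timings. */
#ifndef O_DSYNC
#define O_DSYNC O_SYNC
#endif

/* Write `nblocks` blocks of `blksz` bytes through a descriptor opened
 * with O_DSYNC; returns the total bytes written, or -1 on error.
 * Each write() returns only after the data has reached the disk. */
static long write_dsync(const char *path, int nblocks, int blksz)
{
    char buf[8192];
    long total = 0;
    int fd;

    if (blksz > (int) sizeof(buf))
        return -1;
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DSYNC, 0600);
    if (fd < 0)
        return -1;
    memset(buf, 0, sizeof(buf));
    for (int i = 0; i < nblocks; i++) {
        ssize_t n = write(fd, buf, blksz);
        if (n != blksz) { close(fd); return -1; }
        total += n;
    }
    close(fd);
    return total;
}
```

Whether the flag buys anything over write()+fsync() then depends entirely on what the kernel actually implements behind it.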

> [tgl@g3 tmp]$ uname -a
> Linux g3 2.2.18-4hpmac #1 Thu Dec 21 15:16:15 MST 2000 ppc unknown
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=1 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m32.418s
> user  0m0.020s
> sys   0m14.020s
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=4 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m10.894s
> user  0m0.000s
> sys   0m4.030s
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=8 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m7.211s
> user  0m0.000s
> sys   0m2.200s
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=32 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m4.441s
> user  0m0.020s
> sys   0m0.870s
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_DSYNC -DBLOCKS=64 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m4.488s
> user  0m0.000s
> sys   0m0.640s
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=1 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m3.725s
> user  0m0.000s
> sys   0m0.310s
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=4 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m3.785s
> user  0m0.000s
> sys   0m0.290s
>
> [tgl@g3 tmp]$ gcc -Wall -O -DINIT_WRITE -DUSE_ODSYNC -DBLOCKS=64 tfsync.c
> [tgl@g3 tmp]$ time ./a.out
>
> real  0m3.753s
> user  0m0.010s
> sys   0m0.300s
>
>
> Starting to look like we should just use ODSYNC where available, and
> forget about dumping more per write ...
>
>   regards, tom lane
>

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] Internationalized error messages

2001-03-08 Thread Denis Perchine

> I like this approach.  One of the nice things about Oracle is that
> they have an error manual.  All Oracle errors have an associated
> number.  You can look up that number in the error manual to find a
> paragraph giving details and workarounds.  Admittedly, sometimes the
> further details are not helpful, but sometimes they are.  The basic
> idea of being able to look up an error lets programmers balance the
> need for a terse error message with the need for a fuller explanation.

One example of when you need an exact error code is when you want to separate 
unique index violations from other errors. This is often needed when you want 
to just do the insert and leave all constraint checking to the database...
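As a sketch of what stable codes would allow (the helper below is hypothetical; the SQLSTATE values follow the SQL standard's class 23, and 23505 is what later PostgreSQL releases report as unique_violation): the application can retry or ignore on a duplicate key and fail hard on everything else, without parsing the message text.

```c
#include <string.h>

/* Outcome of an insert attempt, classified purely from the error code. */
enum insert_outcome { INSERT_OK, INSERT_DUPLICATE, INSERT_ERROR };

static enum insert_outcome classify_sqlstate(const char *sqlstate)
{
    if (sqlstate == NULL)
        return INSERT_OK;            /* no error was reported */
    if (strcmp(sqlstate, "23505") == 0)
        return INSERT_DUPLICATE;     /* unique index violation: expected */
    return INSERT_ERROR;             /* anything else is a real failure */
}
```

Matching on the human-readable message instead breaks as soon as the wording (or the language) of the message changes, which is exactly the internationalization problem discussed in this thread.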

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [HACKERS] Database Internals Presentation

2001-02-28 Thread Denis Perchine

On Wednesday 28 February 2001 04:04, Bruce Momjian wrote:
> I have completed a database internals presentation.  The PDF is at:
>
>   http://candle.pha.pa.us/main/writings/internals.pdf
>
> I am interested in any comments.  I need to add text to it.  FYI, you
> will find a system catalog chart in the presentation.

Wow! That's cool.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Bug: pgsql 7.0.2 cursor bug

2001-02-12 Thread Denis Perchine

Hello,

I just saw (or, better to say, was pointed to) the following bug, submitted 
yesterday in the bug tracking tool: pgsql 7.0.2 cursor bug.

I have exactly the same trouble... Until I free the cursor, the daemon grows...
I see this in plain 7.0.3. Any comments?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Open 7.1 items

2001-02-09 Thread Denis Perchine

> Store all large objects in a single table (Denis Perchine, Tom)

Hmmm... If you point me to the document where the changes should be made, I 
will do them.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] (one more time) Patches with vacuum fixes available .

2001-01-25 Thread Denis Perchine

> > Did we decide against LAZY? Seems we have a number of people
> > concerned about vacuum downtime, and I can see this as a win
> > for them. If they don't specify LAZY, the code is not run.
>
> First sorry that I wasn't able to deal with vlazy earlier.
>
> Now I have one more open item for 7.1 - restoring index structure
> at runtime (add tuple to parent page for aborted split op,
> create root page if no one exists). I'll try to deal with both
> items (in any case vlazy will be ported to 7.1, as required
> by contract).
>
> As for reported problem: I just looked at Denis' tgz and
> found only table, index and sequence(?) data files

I sent users* :-)))

> - I would
> need in schema definitions, pg_log and pg_variable files
> as well. Denis?

I have sent them to you by private mail.
Schema:

webmailstation=# \d users
 Table "users"
 Attribute |   Type|Modifier
---+---+
 id| integer   | not null default nextval('users_id_seq'::text)
 account_name  | text  |
 account_pass  | text  |
 blocked   | boolean   | default 'f'
 commercial| boolean   | default 'f'
 expire_date   | timestamp | default now()
 num_of_pop3   | integer   | default 1
 num_of_local  | integer   | default 1
 first_name| text  |
 last_name | text  |
 bd_year   | integer   |
 gender| integer   |
 occupation| integer   |
 income| integer   |
 alternate_email   | text  |
 state | integer   |
 country   | integer   |
 phone | text  |
 password_question | integer   |
 password_answer   | text  |
 crypt | char(13)  |
 last_login_ip | char(31)  |
 last_seen | timestamp |
 registered| timestamp |
 in_limit  | integer   | not null default 30
 out_limit | integer   | not null default 10
 max_msg_size  | integer   | not null default 64
 max_reply_size| integer   | not null default 16
 max_attach_size   | integer   | not null default 0
 max_replies   | integer   | not null default 10
 max_attachments   | integer   | default 0
Indices: ix_users_account_name,
 ix_users_blocked,
 users_id_key

Do you need the 2 other indices?

> Also, when these copies were made - before/after unsuccessful vacuum+lazy?

Surely before. :-))) Otherwise they would be almost useless for you.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Patches with vacuum fixes available for 7.0.x

2001-01-24 Thread Denis Perchine

On Wednesday 24 January 2001 20:37, Bruce Momjian wrote:
> Here is another open item.  What are we doing with LAZY vacuum?

Sorry for inserting in the middle. I would like to say that when I tried LAZY 
vacuum on 7.0.3, I had a lockup on one of the tables, which disappeared after I 
did a usual vacuum. I sent the original version of the table from the backup 
to Vadim, but did not get any response. Just for your info.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Denis Perchine

> > > The reason you have to visit the main table is that tuple validity
> > > status is only stored in the main table, not in each index.
> > > See prior discussions in the archives.
> >
> > But how Oracle handles this?
>
> Oracle doesn't have non-overwriting storage manager but uses
> rollback segments to maintain MVCC. Rollback segments are used
> to restore valid version of entire index/table page.

Are there any plans to have something like this? I mean an overwriting storage 
manager.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Denis Perchine

> Denis Perchine <[EMAIL PROTECTED]> writes:
> > I think this question already was raised here, but... Why PostgreSQL
> > does not do this? What are the pros, and contros?
>
> The reason you have to visit the main table is that tuple validity
> status is only stored in the main table, not in each index.  See prior
> discussions in the archives.

But how does Oracle handle this?
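A toy model of the point made above (the names are illustrative, and the visibility rule is greatly simplified compared to the real tuple-visibility logic): the index entry carries no transaction status of its own, so even a key-only lookup has to follow the pointer into the heap to learn whether the row is alive.

```c
#include <stdbool.h>

typedef unsigned int TxId;

/* Only the heap tuple carries the visibility fields. */
typedef struct {
    int  key_data;     /* the indexed value */
    TxId xmin, xmax;   /* inserting / deleting transaction ids (0 = none) */
} HeapTuple;

/* An index entry stores just the key and a pointer to the heap tuple;
 * there is no xmin/xmax here to consult. */
typedef struct {
    int        key;
    HeapTuple *heap;
} IndexEntry;

/* Grossly simplified snapshot test: visible if inserted before `snap`
 * and not deleted before it. */
static bool tuple_visible(const HeapTuple *t, TxId snap)
{
    return t->xmin < snap && (t->xmax == 0 || t->xmax >= snap);
}

/* Answering "does this key exist?" still means touching the heap. */
static bool index_entry_visible(const IndexEntry *e, TxId snap)
{
    return tuple_visible(e->heap, snap);
}
```

An overwriting storage manager with rollback segments, as described for Oracle above, can keep index pages themselves consistent for a snapshot, which is why it can answer from the index alone.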

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Does Oracle store values in indices?

2001-01-23 Thread Denis Perchine

Hello,

just a small question.
I just realized that Oracle seems to store indexed values in the index 
itself. This means that it is not necessary to access the table when you only 
need the indexed values.

The iso table has an index on the vin field. Here is the output for different queries.

SQL> explain plan for select * from iso where vin='dfgdfgdhf';
 
Explained.
 
SQL> @?/rdbms/admin/utlxpls
 
Plan Table
--------------------------------------------------------------------------------
| Operation                   | Name      |  Rows | Bytes |  Cost | Pstart | Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT            |           |     6 |   402 |     8 |        |       |
|  TABLE ACCESS BY INDEX ROW  | ISO       |     6 |   402 |     8 |        |       |
|   INDEX RANGE SCAN          | IX_ISO_VI |     6 |       |     3 |        |       |
--------------------------------------------------------------------------------

6 rows selected.
 
SQL> explain plan for select vin from iso where vin='dfgdfgdhf';
 
Explained.
 
SQL> @?/rdbms/admin/utlxpls
 
Plan Table
--------------------------------------------------------------------------------
| Operation                   | Name      |  Rows | Bytes |  Cost | Pstart | Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT            |           |     6 |    42 |     3 |        |       |
|  INDEX RANGE SCAN           | IX_ISO_VI |     6 |    42 |     3 |        |       |
--------------------------------------------------------------------------------


I think this question was already raised here, but... why does PostgreSQL not 
do this? What are the pros and cons?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Re: [PATCHES] Patch to support transactions with BLOBs for current CVS

2001-01-20 Thread Denis Perchine

> > First of all it will not break lo_creat, lo_unlink for sure.
>
> lo_creat depends on inv_create followed by inv_close; your patch
> proposed to disable both of those outside transaction blocks.
> lo_unlink depends on inv_drop, which ditto.  Your patch therefore
> restricts lo_creat and lo_unlink to be done inside transaction blocks,
> which is a new and completely unnecessary restriction that will
> doubtless break many existing applications.

OK. As I already said, we can remove the checks from inv_create/inv_drop. They 
are not needed there.

> > But I do not see any reasons why we not put lo_import, and lo_export in
> > TX. At least this will prevent other backends from reading partially
> > imported BLOBs...
>
> lo_import and lo_export always execute in a transaction, just like any
> other backend operation.  There is no need to force them to be done in
> a transaction block.  If you're not clear about this, perhaps you need
> to review the difference between transactions and transaction blocks.

Hmmm... Where can I read about it? At least, in which source/header?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Re: [PATCHES] Patch to support transactions with BLOBs for current CVS

2001-01-20 Thread Denis Perchine

> > On Saturday 20 January 2001 10:05, you wrote:
> > > I just wanted to confirm that this patch was applied.
> >
> > Yes, it is. But the following patch is not applied. But I sure that it is
> > neccessary, otherwise we will get really strange errors (see discussion
> > in the thread).
> >
> > http://www.postgresql.org/mhonarc/pgsql-patches/2000-11/msg00013.html
>
> Can people comment on the following patch that Dennis says is needed?
> It prevents BLOB operations outside transactions.  Dennis, can you
> explain why BLOB operations have to be done inside transactions?

If you forget to put a BLOB operation in a TX, you will get errors like 
'lo_read: invalid large obj descriptor (0)'. The problem is that in 
be-fsstubs.c, in lo_commit, all descriptors are removed. And if you did not 
open a TX, it will be committed after each function call, and for the next 
call there will be no such fd in the tables.

Tom later wrote:
> I object strongly.  As given, this would break lo_creat, lo_unlink,
> lo_import, and lo_export --- none of which need to be in a transaction
> block --- not to mention possibly causing gratuitous failures during
> lo_commit.

First of all, it will not break lo_creat or lo_unlink for sure. But we can 
remove the checks from inv_create and inv_drop. They are not important; at least 
there will be no strange errors issued.

I do not know why you think there will be any problems with lo_commit. I 
can not find such reasons.

I can not say anything about lo_import/lo_export, as I do not know why they 
are not inside a TX themselves.

I am not sure; maybe Tom is right, and we should fix be-fsstubs.c instead. 
But I do not see any reason why we should not put lo_import and lo_export in a 
TX. At least this will prevent other backends from reading partially imported 
BLOBs...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Quite strange crash

2001-01-09 Thread Denis Perchine

> > Didn't you get my mail with a piece of Linux kernel code? I think all is
> > clear there.
>
> That was implementing CPU-time-exceeded kill, which is a different
> issue.

Oops... You are talking about the OOM killer.

/* This process has hardware access, be more careful. */
if (cap_t(p->cap_effective) & CAP_TO_MASK(CAP_SYS_RAWIO)) {
  force_sig(SIGTERM, p);
} else {
  force_sig(SIGKILL, p);
}

You will get SIGKILL in most cases.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Quite strange crash

2001-01-09 Thread Denis Perchine

> > The relevance to the issue at hand is that processes dying during
> > heavy memory load is a documented feature of our supported platforms.
>
> Ugh.  Do you know anything about *how* they get killed --- ie, with
> what signal?

Didn't you get my mail with a piece of Linux kernel code? I think all is 
clear there.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Quite strange crash

2001-01-08 Thread Denis Perchine

On Monday 08 January 2001 23:21, Tom Lane wrote:
> Denis Perchine <[EMAIL PROTECTED]> writes:
> >>>>>>> FATAL: s_lock(401f7435) at bufmgr.c:2350, stuck spinlock. Aborting.
> >>>>>
> >>>>> Were there any errors before that?
> >
> > Actually you can have a look on the logs yourself.
>
> Well, I found a smoking gun:
>
> Jan  7 04:27:51 mx postgres[2501]: FATAL 1:  The system is shutting down
>
> PID 2501 had been running:
>
> Jan  7 04:25:44 mx postgres[2501]: query: vacuum verbose lazy;

Hmmm... actually this is a real problem with lazy vacuum. Sometimes it just 
does something for an enormous amount of time (I have mailed a sample 
database to Vadim, but have not gotten any response yet). It is possible 
that it was me who killed the backend.

> What seems to have happened is that 2501 curled up and died, leaving
> one or more buffer spinlocks locked.  Roughly one spinlock timeout
> later, at 04:29:07, we have 1008 complaining of a stuck spinlock.
> So that fits.
>
> The real question is what happened to 2501?  None of the other backends
> reported a SIGTERM signal, so the signal did not come from the
> postmaster.
>
> Another interesting datapoint: there is a second place in this logfile
> where one single backend reports SIGTERM while its brethren keep running:
>
> Jan  7 04:30:47 mx postgres[4269]: query: vacuum verbose;
> ...
> Jan  7 04:38:16 mx postgres[4269]: FATAL 1:  The system is shutting down

Hmmm... Maybe this also was me... But I am not sure here.

> There is something pretty fishy about this.  You aren't by any chance
> running the postmaster under a ulimit setting that might cut off
> individual backends after a certain amount of CPU time, are you?

[postgres@mx postgres]$ ulimit -a
core file size (blocks)  100
data seg size (kbytes)   unlimited
file size (blocks)   unlimited
max memory size (kbytes) unlimited
stack size (kbytes)  8192
cpu time (seconds)   unlimited
max user processes   2048
pipe size (512 bytes)8
open files   1024
virtual memory (kbytes)  2105343

No, there are no ulimits.

> What signal does a ulimit violation deliver on your machine, anyway?

if (psecs / HZ > p->rlim[RLIMIT_CPU].rlim_cur) {
/* Send SIGXCPU every second.. */
if (!(psecs % HZ))
send_sig(SIGXCPU, p, 1);
/* and SIGKILL when we go over max.. */
if (psecs / HZ > p->rlim[RLIMIT_CPU].rlim_max)
send_sig(SIGKILL, p, 1);
}

This part of the kernel shows the logic: the process gets SIGXCPU every 
second while it is above the soft limit, and SIGKILL once it goes above the 
hard limit.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Quite strange crash

2001-01-08 Thread Denis Perchine

> > Well, I found a smoking gun: ...
> > What seems to have happened is that 2501 curled up and died, leaving
> > one or more buffer spinlocks locked.  ...
> > There is something pretty fishy about this.  You aren't by any chance
> > running the postmaster under a ulimit setting that might cut off
> > individual backends after a certain amount of CPU time, are you?
> > What signal does a ulimit violation deliver on your machine, anyway?
>
> It's worth noting here that modern Unixes run around killing user-level
> processes more or less at random when free swap space (and sometimes
> just RAM) runs low.  AIX was the first such, but would send SIGDANGER
> to processes first to try to reclaim some RAM; critical daemons were
> expected to explicitly ignore SIGDANGER. Other Unixes picked up the
> idea without picking up the SIGDANGER behavior.

That is certainly not the case here. The machine has 512 MB of RAM, and when 
I had this problem it was completely unloaded (>300 MB in caches).

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Quite strange crash

2001-01-08 Thread Denis Perchine

> >>>> FATAL: s_lock(401f7435) at bufmgr.c:2350, stuck spinlock. Aborting.
> >>
> >> Were there any errors before that?
> >
> > No... Just a clean log (I redirect the log from stderr/stdout to a file,
> > and everything else to syslog).
>
> The error messages would be in the syslog then, not in stderr.

Hmmm... The only strange errors I see are:

Jan  7 04:22:14 mx postgres[679]: query: insert into statistic (date, 
visit_count, variant_id) values (now(), 1, 2)
Jan  7 04:22:14 mx postgres[631]: query: insert into statistic (date, 
visit_count, variant_id) values (now(), 1, 2)
Jan  7 04:22:14 mx postgres[700]: query: insert into statistic (date, 
visit_count, variant_id) values (now(), 1, 2)
Jan  7 04:22:14 mx postgres[665]: query: insert into statistic (date, 
visit_count, variant_id) values (now(), 1, 2)
Jan  7 04:22:14 mx postgres[633]: query: insert into statistic (date, 
visit_count, variant_id) values (now(), 1, 2)
Jan  7 04:22:14 mx postgres[629]: query: insert into statistic (date, 
visit_count, variant_id) values (now(), 1, 2)
Jan  7 04:22:14 mx postgres[736]: query: commit
Jan  7 04:22:14 mx postgres[736]: ProcessUtility: commit
Jan  7 04:22:14 mx postgres[700]: ERROR:  Cannot insert a duplicate key into 
unique index statistic_date_vid_key
Jan  7 04:22:14 mx postgres[700]: query: update users set 
rcpt_ip='213.75.35.129',rcptdate=now() where id=1428067
Jan  7 04:22:14 mx postgres[700]: NOTICE:  current transaction is aborted, 
queries ignored until end of transaction block
Jan  7 04:22:14 mx postgres[679]: query: commit
Jan  7 04:22:14 mx postgres[679]: ProcessUtility: commit
Jan  7 04:22:14 mx postgres[679]: query: update users set 
rcpt_ip='213.75.55.185',rcptdate=now() where id=1430836
Jan  7 04:22:14 mx postgres[665]: ERROR:  Cannot insert a duplicate key into 
unique index statistic_date_vid_key
Jan  7 04:22:14 mx postgres[665]: query: update users set 
rcpt_ip='202.156.121.139',rcptdate=now() where id=1271397
Jan  7 04:22:14 mx postgres[665]: NOTICE:  current transaction is aborted, 
queries ignored until end of transaction block
Jan  7 04:22:14 mx postgres[631]: ERROR:  Cannot insert a duplicate key into 
unique index statistic_date_vid_key
Jan  7 04:22:14 mx postgres[631]: query: update users set 
rcpt_ip='24.20.53.63',rcptdate=now() where id=1451254
Jan  7 04:22:14 mx postgres[631]: NOTICE:  current transaction is aborted, 
queries ignored until end of transaction block
Jan  7 04:22:14 mx postgres[633]: ERROR:  Cannot insert a duplicate key into 
unique index statistic_date_vid_key
Jan  7 04:22:14 mx postgres[633]: query: update users set 
rcpt_ip='213.116.168.173',rcptdate=now() where id=1378049
Jan  7 04:22:14 mx postgres[633]: NOTICE:  current transaction is aborted, 
queries ignored until end of transaction block
Jan  7 04:22:14 mx postgres[630]: query: select id,msg,next from alert
Jan  7 04:22:14 mx postgres[630]: query: select email,type from email where 
variant_id=2
Jan  7 04:22:14 mx postgres[630]: query:
select * from users where senderdate > now()-'10days'::interval AND
variant_id=2 AND crypt='21AN6KRffJdFRFc511'
 
Jan  7 04:22:14 mx postgres[629]: ERROR:  Cannot insert a duplicate key into 
unique index statistic_date_vid_key
Jan  7 04:22:14 mx postgres[629]: query: update users set 
rcpt_ip='213.42.45.81',rcptdate=now() where id=1441046
Jan  7 04:22:14 mx postgres[629]: NOTICE:  current transaction is aborted, 
queries ignored until end of transaction block
Jan  7 04:22:15 mx postgres[711]: query: select message_id from pop3 where 
server_id = 17746
Jan  7 04:22:15 mx postgres[711]: ERROR:  Relation 'pop3' does not exist

They popped up 4 minutes earlier. And the most interesting thing is that 
relation pop3 does exist!

> > And the last query was:
> > Jan  7 04:27:53 mx postgres[1008]: query: select message_id from pop3
> > where server_id = 22615
>
> How about the prior queries of other processes?

I do not want to flood the mailing list (it would be too much info). I can 
send you the complete log file from Jan 7. It is 128 MB uncompressed; with 
gzip it is 8 MB. Maybe it would be smaller with bzip2.

>  Keep in mind that the
> spinlock could have been left locked by any backend, not only the one
> that complained about it.

Actually, you can have a look at the logs yourself. Remember, I gave you the 
password for the postgres user; this is the same postgres. The logs are in 
/var/log/postgres. You will need postgres.log.1.gz.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Quite strange crash

2001-01-07 Thread Denis Perchine

On Monday 08 January 2001 00:08, Tom Lane wrote:
> Denis Perchine <[EMAIL PROTECTED]> writes:
> > Does anyone seen this on PostgreSQL 7.0.3?
> > FATAL: s_lock(401f7435) at bufmgr.c:2350, stuck spinlock. Aborting.
>
> Were there any errors before that?

No... Just a clean log (I redirect the log from stderr/stdout to a file, and 
everything else to syslog).

Here it is just from the begin:


DEBUG:  Data Base System is starting up at Sun Jan  7 04:22:00 2001
DEBUG:  Data Base System was interrupted being in production at Thu Jan  4 
23:30:22 2001
DEBUG:  Data Base System is in production state at Sun Jan  7 04:22:00 2001
 
FATAL: s_lock(401f7435) at bufmgr.c:2350, stuck spinlock. Aborting.
 
FATAL: s_lock(401f7435) at bufmgr.c:2350, stuck spinlock. Aborting.
Server process (pid 1008) exited with status 6 at Sun Jan  7 04:29:07 2001
Terminating any active server processes...
Server processes were terminated at Sun Jan  7 04:29:07 2001
Reinitializing shared memory and semaphores
-

As you can see, it happens almost immediately after startup.

I can give you the full list of queries made by process 1008, but basically 
there were only queries like this:
select message_id from pop3 where server_id = 6214

insert into pop3 (server_id, mailfrom, mailto, subject, message_id, 
sent_date, sent_date_text, recieved_date, state) values (25641, 
'virtualo.com', '[EMAIL PROTECTED]', 'Joao roque Dias I have
tried them allthis one is for real!', 
'[EMAIL PROTECTED]', 
'2001-01-07 04:06:23 -00', 'Sat, 06 Jan 2001 23:06:23 -0500', 'now', 1)

And the last query was:
Jan  7 04:27:53 mx postgres[1008]: query: select message_id from pop3 where 
server_id = 22615

> I've been suspicious for awhile that the system might neglect to release
> buffer cntx_lock spinlocks if an elog() occurs while one is held.  This
> looks like it might be such a case, but you're only showing us the end
> symptom not what led up to it ...

Just tell me what I can do. Unfortunately I cannot reproduce the situation...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Quite strange crash

2001-01-07 Thread Denis Perchine

Hi,

Has anyone seen this on PostgreSQL 7.0.3?

FATAL: s_lock(401f7435) at bufmgr.c:2350, stuck spinlock. Aborting.
 
FATAL: s_lock(401f7435) at bufmgr.c:2350, stuck spinlock. Aborting.
Server process (pid 1008) exited with status 6 at Sun Jan  7 04:29:07 2001
Terminating any active server processes...
Server processes were terminated at Sun Jan  7 04:29:07 2001
Reinitializing shared memory and semaphores

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Another vacuum problem

2000-12-20 Thread Denis Perchine

Hello,

Does anyone have any clue why this can happen?

slygreetings=# vacuum verbose analyze statistic;
NOTICE:  --Relation statistic --
NOTICE:  Pages 498: Changed 1, reaped 490, Empty 0, New 0; Tup 1167: Vac 0, 
Keep/VTL 1110/1110, Crash 0, UnUsed 76888, MinLen 48, MaxLen 48; Re-using: 
Free/Avail. Space 3700424/3700424; EndEmpty/Avail. Pages 0/490. CPU 
0.05s/0.02u sec.
NOTICE:  Index statistic_date_vid_key: Pages 458; Tuples 1167: Deleted 0. CPU 
0.05s/0.00u sec.
NOTICE:  Too old parent tuple found - can't continue vc_repair_frag
NOTICE:  Rel statistic: Pages: 498 --> 498; Tuple(s) moved: 0. CPU 
0.00s/0.01u sec.
VACUUM 

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] vacuum verbose analyze lazy problem.

2000-12-14 Thread Denis Perchine

On Thursday 14 December 2000 23:04, Vadim Mikheev wrote:
> > I wouldn't consider it's a bug, but from my point of view it is quite
> > strange. Any comments?
> >
> > BTW, I did a backup, and can supply anyone interested with original
> > table.
>
> Please send it me.

Another small comment: I did not do an initdb, and did not do a 
dump/restore. Maybe that is the problem. I just installed the patch.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] vacuum verbose analyze lazy problem.

2000-12-14 Thread Denis Perchine

Hello,

I have just installed Vadim's patch for speeding up vacuum, and have run 
into a quite strange problem. I have a quite small table:
[root@mx src]# ls -l /home/postgres/data/base/db/users*
-rw---   1 postgres postgres  4120576 Dec 14 08:48 
/home/postgres/data/base/db/users
-rw---   1 postgres postgres   483328 Dec 14 08:46 
/home/postgres/data/base/db/users_id_key
-rw---   1 postgres postgres 8192 Dec 14 08:20 
/home/postgres/data/base/db/users_id_seq

I ran vacuum verbose analyze lazy; and it locks up (or rather, does 
something for a very long time).

After it started vacuuming users:
19379 pts/0R  2:24 /home/postgres/bin/postgres localhost postgres 
db VACUUM

Just before I killed the backend:
19379 pts/0R  4:41 /home/postgres/bin/postgres localhost postgres 
db VACUUM

It spent at least 2 minutes trying to vacuum users. Usually this table is 
vacuumed and analyzed in a few seconds.

Here is the output of vacuum verbose analyze, which I ran after this problem 
arose.

db=# vacuum verbose analyze users;
NOTICE:  --Relation users --
NOTICE:  Pages 978: Changed 1, reaped 832, Empty 0, New 0; Tup 14280: Vac 
13541, Keep/VTL 0/0, Crash 0, UnUsed 265, MinLen 248, MaxLen 340; Re-using: 
Free/Avail. Space 3891320/3854076; EndEmpty/Avail. Pages 0/666. CPU 
0.09s/1.25u sec.
NOTICE:  Index users_id_key: Pages 35; Tuples 14280: Deleted 82. CPU 
0.00s/0.05u sec.
NOTICE:  Index ix_users_account_name: Pages 56; Tuples 14280: Deleted 82. CPU 
0.01s/0.05u sec.
NOTICE:  Index ix_users_blocked: Pages 31; Tuples 14280: Deleted 82. CPU 
0.00s/0.05u sec.
NOTICE:  Rel users: Pages: 978 --> 503; Tuple(s) moved: 640. CPU 0.22s/0.22u 
sec.
NOTICE:  Index users_id_key: Pages 59; Tuples 14280: Deleted 640. CPU 
0.00s/0.04u sec.
NOTICE:  Index ix_users_account_name: Pages 93; Tuples 14280: Deleted 640. 
CPU 0.00s/0.04u sec.
NOTICE:  Index ix_users_blocked: Pages 32; Tuples 14280: Deleted 640. CPU 
0.00s/0.04u sec.
VACUUM

I wouldn't consider it a bug, but from my point of view it is quite strange.
Any comments?

BTW, I did a backup, and can supply anyone interested with original table.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Why vacuum?

2000-12-14 Thread Denis Perchine

Hello,

Another question about vacuum: will the vacuum/drop/create deadlocks be 
fixed in the 7.0.x branch? That is really annoying; I cannot run vacuum 
automatically because of it. Even just a patch would be really great. Is it 
so hard to fix?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Strange behavior of PostgreSQL on Linux

2000-12-09 Thread Denis Perchine

Hello,

I have a quite strange problem. Here is the output of lsof -i -n -P:
postmaste 20018  postgres6u  IPv4 12241380   TCP 
127.0.0.1:5432->127.0.0.1:6651 (CLOSE)

And there is no peer entry for it.

Backtrace of the backend shows:
(gdb) bt
#0  0x40198ba2 in recv () from /lib/libc.so.6
#1  0x80ab2c5 in pq_recvbuf ()
#2  0x80ab395 in pq_getbytes ()
#3  0x80eaa7c in SocketBackend ()
#4  0x80eab13 in ReadCommand ()
#5  0x80ebb9c in PostgresMain ()
#6  0x80d69a2 in DoBackend ()
#7  0x80d6581 in BackendStartup ()
#8  0x80d593a in ServerLoop ()
#9  0x80d53c4 in PostmasterMain ()
#10 0x80abbb6 in main ()
#11 0x400fe9cb in __libc_start_main () at ../sysdeps/generic/libc-start.c:122

[root@mx /root]# ps axwfl|grep 20018
040   507 20018 21602   0   0 152920 tcp_re SW   pts/0  0:00  \_ 
[postmaster]

[root@mx /root]# uname -a
Linux mx.xxx.com 2.2.16-3 #1 Mon Jun 19 19:11:44 EDT 2000 i686 unknown

It looks like it is trying to read from a socket which has already been 
closed on the other side, and recv does not return in this case. Is this OK, 
or is it a kernel bug?

On the other side I see entries like this:
httpd  4260  root4u  IPv4 12173018   TCP 
127.0.0.1:3994->127.0.0.1:5432 (CLOSE_WAIT)

And again, there is no corresponding postmaster process. Has anyone 
experienced this before? What could be the reason for such strange things?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Strange messages in log.

2000-12-05 Thread Denis Perchine

Hello,

What can this be?

FATAL: s_lock(40015071) at spin.c:127, stuck spinlock. Aborting.

From other sources I found out that there was real memory starvation: all 
swap was eaten up (that is not a PostgreSQL problem).

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Current CVS broken?

2000-11-20 Thread Denis Perchine

On 20 November 2000 18:52, Philip Warner wrote:
> I get the following when doing a fresh build:

Did you do a make distclean?

> make[4]: Entering directory
> `/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg/preproc'
> bison -y -d  preproc.y
> ("preproc.y", line 2256) error: $5 of `CreatedbStmt' has no declared type
> ("preproc.y", line 2256) error: invalid $ value
> ("preproc.y", line 2256) error: $6 of `CreatedbStmt' has no declared type
> ("preproc.y", line 2265) error: $$ of `createdb_opt_list' has no declared
> type ("preproc.y", line 2265) error: $1 of `createdb_opt_list' has no
> declared type ("preproc.y", line 2267) error: $$ of `createdb_opt_list' has
> no declared type ("preproc.y", line 2267) error: $1 of `createdb_opt_list'
> has no declared type ("preproc.y", line 2267) error: $2 of
> `createdb_opt_list' has no declared type ("preproc.y", line 2270) error: $$
> of `createdb_opt_item' has no declared type ("preproc.y", line 2271) error:
> $$ of `createdb_opt_item' has no declared type ("preproc.y", line 2272)
> error: $$ of `createdb_opt_item' has no declared type ("preproc.y", line
> 2273) error: $$ of `createdb_opt_item' has no declared type ("preproc.y",
> line 2276) error: $$ of `createdb_opt_item' has no declared type
> ("preproc.y", line 2280) error: $$ of `createdb_opt_item' has no declared
> type ("preproc.y", line 5365) error: symbol createdb_opt_encoding is used,
> but is not defined as a token and has no rules
> ("preproc.y", line 5365) error: symbol createdb_opt_location is used, but
> is not defined as a token and has no rules
> make[4]: *** [preproc.c] Error 1
> make[4]: Leaving directory
> `/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg/preproc'
> make[3]: *** [all] Error 2
> make[3]: Leaving directory
> `/home/pjw/work/postgresql-cvs/pgsql/src/interfaces/ecpg'
> make[2]: *** [all] Error 2
> make[2]: Leaving directory
> `/home/pjw/work/postgresql-cvs/pgsql/src/interfaces'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/home/pjw/work/postgresql-cvs/pgsql/src'
> make: *** [all] Error 2
>
>
> 
> Philip Warner| __---_
> Albatross Consulting Pty. Ltd.   |/   -  \
> (A.B.N. 75 008 659 498)  |  /(@)   __---_
> Tel: (+61) 0500 83 82 81 | _  \
> Fax: (+61) 0500 83 82 82 | ___ |
> Http://www.rhyme.com.au  |/   \|
>
>  |----
>
> PGP key available upon request,  |  /
> and from pgp5.ai.mit.edu:11371   |/

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] CommandCounterIncrement

2000-11-02 Thread Denis Perchine

> Denis Perchine <[EMAIL PROTECTED]> writes:
> > Small technical question: what exactly CommandCounterIncrement do?
>
> It increments the command counter ;-)
>
> > And what exactly it should be used for?
>
> You need it if, within a chunk of backend code, you want subsequent
> queries to see the results of earlier queries.  Ordinarily a query
> cannot see its own output --- else a command like
>   UPDATE foo SET x = x + 1
> for example, would be an infinite loop, since as it scans the table
> it would find the tuples it inserted, update them, insert the updated
> copies, ...
>
> Postgres' solution is that tuples inserted by the current transaction
> AND current command ID are not visible.  So, to make them visible
> without starting a new transaction, increment the command counter.

Perfect. That is what I thought it was.

> > I ask this question because I found out that when I run postgres with
> > verbose=4 I see lot's of StartTransactionCommand &
> > CommitTransactionCommand pair in the place where BLOB is written. And I
> > have a feeling that something is wrong. Looks like explicitly commit all
> > changes. That's really bad...
>
> These do not commit anything, assuming you are inside a transaction
> block.  Offhand I don't think they will amount to much more than a
> CommandCounterIncrement() call in that case, but read xact.c if you want
> to learn more.

Yes, I get this... But there is still a problem when people try to use BLOBs 
outside of a transaction. I would like to detect that...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] DROP hangup...

2000-11-02 Thread Denis Perchine

Hello,

My previous mail about the VACUUM deadlock was silently ignored...
Now I have a much more interesting problem...

Just a few minutes ago I found out that my usual routine which recreates 
indices is hung... I see a picture like this:

10907 ?SW 0:01 /home/postgres/bin/postgres 127.0.0.1 
webmailstation webmailstation DROP waiting

And lots of other backends are also waiting...

The system is quite heavily loaded: I have >200,000 queries per day.
There are lots of inserts and updates, mostly updates.

PostgreSQL 7.0.3pre.
Linux 2.2.16

Any comments? Ideas?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] How to check that I am in transaction inside backend

2000-11-01 Thread Denis Perchine

Hello,

Having some experience with catching BLOB errors, I have realized that it is 
really hard to notice that you forgot to enclose BLOB operations in a 
transaction...

I would like to add a check to each BLOB operation that verifies whether we 
are in a transaction and issues a notice if not.

The question is how to correctly check that I am in a transaction.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Problem with 2 vacuums in parallel

2000-10-31 Thread Denis Perchine

Hello,

there is a really weird problem: when I run 2 vacuums in parallel, they 
hang. Both of them.
I use PostgreSQL from the 7.0.x CVS branch (almost 7.0.3).
Any ideas? Tom?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] CommandCounterIncrement

2000-10-30 Thread Denis Perchine

Hello,

Small technical question: what exactly CommandCounterIncrement do?
And what exactly it should be used for?

I use it to see data changed in the current transaction. To be more exact, 
when I write a BLOB in a transaction, I do a CommandCounterIncrement each 
time I write an additional piece.

I ask this question because I found out that when I run postgres with 
verbose=4, I see lots of StartTransactionCommand & CommitTransactionCommand 
pairs in the place where the BLOB is written, and I have a feeling that 
something is wrong. It looks like all changes are committed explicitly. That 
would be really bad...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Re: [PATCHES] Patch to support transactions with BLOBs for current CVS

2000-10-21 Thread Denis Perchine

Hi,

> OK, Denis, can you run the regression tests with your patch and see what
> is going on?
>
> > Bruce Momjian writes:
> > > Applied.  Thanks.  I know it is a pain to generate a new patch against
> > > the release.
> >
> > Regression tests opr_sanity and sanity_check are now failing.

This was due to the change in template1.
regression.diff is attached.

test.patch, which fixes this, is also attached.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--


*** ./expected/opr_sanity.out	Mon Aug 21 11:48:57 2000
--- ./results/opr_sanity.out	Sun Oct 22 13:16:40 2000
***
*** 482,489 
(p2.pronargs = 1 AND p1.aggbasetype = 0)));
oid  | aggname | oid |   proname   
  ---+-+-+-
!  16984 | max | 768 | int4larger
!  16998 | min | 769 | int4smaller
  (2 rows)
  
  -- Cross-check finalfn (if present) against its entry in pg_proc.
--- 482,489 
(p2.pronargs = 1 AND p1.aggbasetype = 0)));
oid  | aggname | oid |   proname   
  ---+-+-+-
!  16996 | max | 768 | int4larger
!  17010 | min | 769 | int4smaller
  (2 rows)
  
  -- Cross-check finalfn (if present) against its entry in pg_proc.

==

*** ./expected/sanity_check.out	Thu Jul  6 06:02:37 2000
--- ./results/sanity_check.out	Sun Oct 22 13:16:55 2000
***
*** 40,45 
--- 40,46 
   pg_index| t
   pg_inherits | t
   pg_language | t
+  pg_largeobject  | t
   pg_listener | t
   pg_opclass  | t
   pg_operator | t
***
*** 54,58 
   shighway| t
   tenk1   | t
   tenk2   | t
! (44 rows)
  
--- 55,59 
   shighway| t
   tenk1   | t
   tenk2   | t
! (45 rows)
  

==



Index: opr_sanity.out
===
RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/opr_sanity.out,v
retrieving revision 1.20
diff -u -r1.20 opr_sanity.out
--- opr_sanity.out	2000/08/21 04:48:57	1.20
+++ opr_sanity.out	2000/10/22 06:19:58
@@ -482,8 +482,8 @@
   (p2.pronargs = 1 AND p1.aggbasetype = 0)));
   oid  | aggname | oid |   proname   
 ---+-+-+-
- 16984 | max | 768 | int4larger
- 16998 | min | 769 | int4smaller
+ 16996 | max | 768 | int4larger
+ 17010 | min | 769 | int4smaller
 (2 rows)
 
 -- Cross-check finalfn (if present) against its entry in pg_proc.
Index: sanity_check.out
===
RCS file: /home/projects/pgsql/cvsroot/pgsql/src/test/regress/expected/sanity_check.out,v
retrieving revision 1.12
diff -u -r1.12 sanity_check.out
--- sanity_check.out	2000/07/05 23:02:37	1.12
+++ sanity_check.out	2000/10/22 06:19:58
@@ -40,6 +40,7 @@
  pg_index| t
  pg_inherits | t
  pg_language | t
+ pg_largeobject  | t
  pg_listener | t
  pg_opclass  | t
  pg_operator | t
@@ -54,5 +55,5 @@
  shighway| t
  tenk1   | t
  tenk2   | t
-(44 rows)
+(45 rows)
 



Re: [HACKERS] Problem do backup/restore for empty database

2000-10-21 Thread Denis Perchine

BTW, if possible it would also be a good idea to suppress the following 
warning when there are no BLOBs in the archive: Archiver: WARNING - skipping 
BLOB restoration

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Problem do backup/restore for empty database

2000-10-21 Thread Denis Perchine

> >I tried to do backup/restore of empty database.
> >And get the following error:
> >
> >Archiver(db): Could not execute query. Code = 7. Explanation from backend:
> >'ERROR:  OperatorDef: operator "=" already defined
> >'.
> >
> >Another funny thing is that dump of empty database occupies 657614 bytes.
> >It is quite much...
>
> Works fine for me. What command are you using? What does

pg_dump --blob -Fc test3 -f test3.tar -v
pg_restore test3.tar --db=test4 -v

> pg_restore archive-file-name

pg_restore test3.tar produces 688,634 bytes of output.

pg_restore -l test3.tar produces a long list. Here are the first few lines:

;
; Archive created at Sat Oct 21 18:05:33 2000
; dbname: test3
; TOC Entries: 2899
; Compression: -1
; Dump Version: 1.4-17
; Format: CUSTOM
;
;
; Selected TOC Entries:
;
2338; 15 OPERATOR = postgres

> produce?
>
> Have you done anything nasty to template1?

Just initdb.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Problem do backup/restore for empty database

2000-10-21 Thread Denis Perchine

Hello,

I tried to do a backup/restore of an empty database
and got the following error:

Connecting to database for restore
Connecting to test4 as postgres
Creating OPERATOR =
Archiver(db): Could not execute query. Code = 7. Explanation from backend: 
'ERROR:  OperatorDef: operator "=" already defined
'.

Another funny thing is that the dump of an empty database occupies 657,614 
bytes. That is quite a lot...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Problems with the latest CVS.

2000-10-19 Thread Denis Perchine

Hello,

I am trying to port my large objects patch to the current CVS. All is fine, 
except it does not work... :-)))

When I try to restore data from archive I get:

---
Creating table for BLOBS xrefs
 - Restoring BLOB oid 4440844
 - Restoring BLOB oid 4440846
 - Restoring BLOB oid 4440848
 - Restoring BLOB oid 4440850
 - Restoring BLOB oid 4440852
 - Restoring BLOB oid 4440854
 - Restoring BLOB oid 4440856
 - Restoring BLOB oid 4440858
 - Restoring BLOB oid 4440860
 - Restoring BLOB oid 4440862
 - Restoring BLOB oid 4440864
 - Restoring BLOB oid 4440866
 - Restoring BLOB oid 4440868
 - Restoring BLOB oid 4440870
 - Restoring BLOB oid 4440872
 - Restoring BLOB oid 4440874
 - Restoring BLOB oid 4440876
 - Restoring BLOB oid 4440878
Archiver(db): can not commit database transaction. No result from backend.
---

What is this? Is it a temporary problem? If it is just temporary, I will 
wait until it becomes stable. If it is my bug, how can I catch it?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [HACKERS] Inserting a select statement result into another table

2000-10-12 Thread Denis Perchine

Hello,

Just my $0.02...
If I do
insert into x
  select * from y limit 10;

I get all of the rows inserted into x, not just 10...
I already wrote about this, but did not get any useful reply.

> This is an interesting idea.  We don't allow ORDER BY in INSERT INTO ...
> SELECT because it doesn't make any sense, but it does make sense if
> LIMIT is used:
>
>   ctest=> create table x (Y oid);
>   CREATE
>   test=> insert into x
>   test-> select oid from pg_class order by oid limit 1;
>   ERROR:  LIMIT is not supported in subselects
>
> Added to TODO:
>
>   Allow ORDER BY...LIMIT in INSERT INTO ... SELECT

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[HACKERS] Problem specifying limit in select inside insert.

2000-10-10 Thread Denis Perchine

Hello,

I see quite strange behavior with the following SQL:

insert into address (cid,email) select distinct '49'::int,member.email from 
member limit 1 ;

It should insert just 1 record, but it inserts all the records selected by 
the subselect...
What is wrong with this SQL? Or is this a bug? If it is a bug,
how can I fix it (patch, workaround...)?

Thanks in advance.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--