Re: [rt-users] RT 4.4.1 and transaction isolation level on Postgres

2017-01-04 Thread Václav Ovsík
On Wed, Jan 04, 2017 at 04:50:11AM -0800, Alex Vandiver wrote:
>... 
> MySQL suffers from the exact same problem -- but, as it happens,
> both more silently and more catastrophically.  See
> https://github.com/bestpractical/rt/commit/e36364c5

Eh. I'm glad I made the transition from MySQL to Postgres years ago :).

> > I can change isolation level in postgresql.conf to 'repeatable read'
> > and things are different.
> 
> I advise against doing that.  Upon inspection, RT is not prepared to
> deal with the "could not serialize access due to concurrent update"
> errors that arise from updates to rows in multiple transactions in
> Postgres' repeatable-read isolation.

OK, thanks!
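For context, the standard way applications cope with "could not serialize access" errors is a bounded retry of the whole transaction, which RT currently does not do. A minimal sketch of that pattern (illustration only, not RT code; all names are made up):

```python
# Sketch of the retry pattern an application needs before it can safely
# run under 'repeatable read': on a serialization failure the whole
# transaction must be rolled back and retried from the beginning.

class SerializationError(Exception):
    """Stands in for Postgres error 40001 ('could not serialize access')."""

def run_with_retry(txn, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()          # BEGIN ... COMMIT would happen inside txn
        except SerializationError:
            if attempt == max_attempts:
                raise             # give up and surface the error
            # otherwise: ROLLBACK and retry from the start

# Demo: a "transaction" that collides once, then succeeds on retry.
attempts = []
def change_owner():
    attempts.append(1)
    if len(attempts) < 2:
        raise SerializationError("concurrent update")
    return "owner changed"

print(run_with_retry(change_owner))   # owner changed
```

Without such a loop, raising the isolation level just swaps deadlock errors for serialization errors, which matches Alex's advice above.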

> Repeatable-read is only possible in MySQL because it has a fascinating
> definition of "repeatable":
> ...
> > Should I change the default isolation level on Postgres for RT to
> > 'repeatable read'?
> 
> No.  You should try the 4.4/previewscrips-race branch, which I've just
> pushed:
> 
> https://github.com/bestpractical/rt/compare/4.4-trunk...4.4/previewscrips-race
> 
> The gory details are contained in the commits therein.

You have my respect! Wonderful job. I tried your changes on my test
instance first; the problem appears to be solved.
I installed the changes on the production instance too.
Thanks for your time and effort!
-- 
Zito


Re: [rt-users] RT 4.4.1 and transaction isolation level on Postgres

2017-01-03 Thread Václav Ovsík
On Mon, Jan 02, 2017 at 08:10:15PM -0800, Alex Vandiver wrote:
> On Mon, 2 Jan 2017 17:12:29 +0100
> Václav Ovsík <vaclav.ov...@i.cz> wrote:
> > Can anybody confirm on different system?
> 
> Thanks for the detailed replication instructions.  I can replicate, and
> have tracked down a minimal replication case.  I'll drop my findings
> and suggestion on your ticket.
> ...

Great, thank you very much for your time and effort!

How come MySQL doesn't have this problem - is this due to its
different default transaction isolation level?

I can change isolation level in postgresql.conf to 'repeatable read'
and things are different.

Should I change the default isolation level on Postgres for RT to
'repeatable read'?

-- 
Zito


[rt-users] RT 4.4.1 and transaction isolation level on Postgres

2017-01-02 Thread Václav Ovsík
Hi,
to reproduce the deadlock problem with RT 4.4.1 on Postgres I tried to install
RT 4.4.1 as a clean new installation:
1. On Debian 8.6 stable + Postgres 9.4
2. On Debian sid (unstable) + Postgres 9.6 (there was a need to
   change one SQL command: option NOCREATEUSER -> NOCREATEROLE)

To my surprise, after creating a ticket in the default queue General, the bug
appeared immediately when I tried to change the owner of the ticket and record
a comment at the same time.

Can anybody confirm this on a different system?

Steps:
1. Create Postgres cluster (server instance):

pg_createcluster 9.6 main --locale cs_CZ.UTF-8

2. Change access to the rt4 database in /etc/postgresql/9.6/main/pg_hba.conf,
   so I don't need to set a password for rt_user:

diff --git a/postgresql/9.6/main/pg_hba.conf b/postgresql/9.6/main/pg_hba.conf
index b708a885..85857f64 100644
--- a/postgresql/9.6/main/pg_hba.conf
+++ b/postgresql/9.6/main/pg_hba.conf
@@ -86,6 +86,8 @@ local   all postgres   peer
 
 # TYPE  DATABASEUSERADDRESS METHOD
 
+local  rt4 rt_user trust
+
 # "local" is for Unix domain socket connections only
 local   all all peer
 # IPv4 local connections:

3. Start Postgres & check

pg_ctlcluster 9.6 main start
pg_lsclusters 

4. Install RT 4.4.1 from RT sources...

cd rt-4.4.1-sources
./configure --prefix=/opt/rt  --with-db-type=Pg
sudo make install

   Change the DB host settings, so the connection will go through a Unix domain socket

cd /opt/rt
cat <<'EOT' >etc/RT_SiteConfig.pm
Set($WebPort, );
Set($DatabaseHost, undef);
Set($DatabaseRTHost, undef);
1;
EOT

   The postgres user needs to read the schema files etc.

chmod a+r etc/{acl,initialdata,schema}*

   Postgres 9.6 no longer supports NOCREATEUSER, only NOCREATEROLE
   (https://www.postgresql.org/docs/9.6/static/release-9-6.html)

sed -i -e 's/NOCREATEUSER/NOCREATEROLE/;' etc/acl.Pg

   Initialize the database (if something goes wrong, remove the unfinished
   database using 'sudo -u postgres dropdb rt4' and try again)

sudo -u postgres /usr/bin/perl -I/opt/rt/local/lib -I/opt/rt/lib sbin/rt-setup-database --action init 

5. Start RT's built-in HTTP server by hand:

sudo -u www-data /opt/rt/sbin/rt-server --port 


6. In a browser, open localhost:

- login as root / password
- click on "New ticket in" General, enter something into subject and message
  and click "Create"
  (Owner is nobody, requestor is root)
- Go to Actions / Comment, type something into the message and change the owner
  to root. Click "Update Ticket"
- If the owner was changed, repeat the previous step, switching the owner
  between nobody & root, until the error occurs.

My session:

bobek:/opt/rt# sudo -u www-data /opt/rt/sbin/rt-server --port 
[21132] [Mon Jan  2 15:58:50 2017] [info]: Successful login for root from 
127.0.0.1 (/opt/rt/sbin/../lib/RT/Interface/Web.pm:831)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 #1/31 - Scrip 7 On Create 
Autoreply To Requestors (/opt/rt/sbin/../lib/RT/Action/SendEmail.pm:285)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 sent  To: root@localhost 
(/opt/rt/sbin/../lib/RT/Action/SendEmail.pm:316)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 #1/31 - Scrip 8 On Create 
Notify Owner and AdminCcs (/opt/rt/sbin/../lib/RT/Action/SendEmail.pm:285)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 No recipients found. Not 
sending. (/opt/rt/sbin/../lib/RT/Interface/Email.pm:806)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 #1/31 - Scrip 9 On Create 
Notify Ccs (/opt/rt/sbin/../lib/RT/Action/SendEmail.pm:285)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 No recipients found. Not 
sending. (/opt/rt/sbin/../lib/RT/Interface/Email.pm:806)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 #1/31 - Scrip 10 On Create 
Notify Other Recipients (/opt/rt/sbin/../lib/RT/Action/SendEmail.pm:285)
[21133] [Mon Jan  2 16:00:21 2017] [info]: 
 No recipients found. Not 
sending. (/opt/rt/sbin/../lib/RT/Interface/Email.pm:806)
[21133] [Mon Jan  2 16:00:21 2017] [info]: Ticket 1 created in queue 'General' 
by root (/opt/rt/sbin/../lib/RT/Ticket.pm:567)
[21132] [Mon Jan  2 16:02:37 2017] [info]: 
 not sending to 
root@localhost, creator of the transaction, due to NotifyActor setting 
(/opt/rt/sbin/../lib/RT/Action/SendEmail.pm:901)
[21133] [Mon Jan  2 16:02:48 2017] [info]: 

Re: [rt-users] Postgresql default isolation level (was Re: mysql DB engine ndbdcluster)

2016-12-22 Thread Václav Ovsík
On Wed, Dec 21, 2016 at 06:38:08PM +0100, Václav Ovsík wrote:
>... 
>  
>  * default_transaction_isolation = 'serializable'
>- I tried the action many times, but Pg is silent - nothing appears
>  in its log file and everything seems normal.

Sorry, this is not true. I did more thorough testing today. I experimented
on one test ticket, and as the history of the ticket grows, the probability
of the bug increases. Now it is almost certain the problem will occur.
Isolation level 'serializable' behaves like 'repeatable read'. So the summary
is:

===
'read committed': -> deadlock; the application outputs an error:

Comments added
Could not change owner: Could not update column Owner: Owner could not be 
set to 102.

Postgres log:

 2016-12-22 13:18:18 CET [26070-1] rt_rt@rt ERROR:  deadlock detected
 2016-12-22 13:18:18 CET [26070-2] rt_rt@rt DETAIL:  Process 26070 waits for 
ShareLock on transaction 32889; blocked by process 26097.
 Process 26097 waits for ShareLock on transaction 32890; blocked by 
process 26070.
 Process 26070: UPDATE Tickets SET Owner=$1 WHERE id=$2 
 Process 26097: INSERT INTO GroupMembers (LastUpdatedBy, Creator, 
Created, GroupId, MemberId, LastUpdated) VALUES ($1, $2, $3, $4, $5, $6)
 2016-12-22 13:18:18 CET [26070-3] rt_rt@rt HINT:  See server log for query 
details.
 2016-12-22 13:18:18 CET [26070-4] rt_rt@rt CONTEXT:  while updating tuple 
(4509,284) in relation "tickets"
 2016-12-22 13:18:18 CET [26070-5] rt_rt@rt STATEMENT:  UPDATE Tickets SET 
Owner=$1 WHERE id=$2 
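For what it's worth, this log shows the classic lock-order deadlock: the two sessions take row locks on Tickets and GroupMembers in opposite order, and each waits on the other. A small self-contained sketch (plain Python threads standing in for transactions; not RT code) of the usual application-level fix, acquiring locks in one global order so the cycle cannot form:

```python
# Two workers want the same two locks. If each took them in its own
# order (as the two RT sessions do with their row locks), they could
# deadlock; acquiring in a single global order makes that impossible.

import threading

ticket_lock = threading.Lock()   # stands in for the Tickets row lock
group_lock = threading.Lock()    # stands in for the GroupMembers row lock

def locked_in_order(*locks):
    # Sort by id() so every thread acquires in the same global order.
    return sorted(locks, key=id)

results = []
def worker(name, first, second):
    ordered = locked_in_order(first, second)
    with ordered[0]:
        with ordered[1]:
            results.append(name)

t1 = threading.Thread(target=worker, args=("update-owner", ticket_lock, group_lock))
t2 = threading.Thread(target=worker, args=("insert-member", group_lock, ticket_lock))
t1.start(); t2.start()
t1.join(timeout=5); t2.join(timeout=5)
print(sorted(results))   # both workers finish; no deadlock
```

In a database the equivalent fix is to touch the contended rows in a consistent order within every transaction, which is essentially what the 4.4/previewscrips-race branch arranges.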



===
'repeatable read',
'serializable': -> the application outputs normal status:

Comments added
Owner changed from eva to zito

Postgres log:
 2016-12-22 13:26:36 CET [31696-1] rt_rt@rt ERROR:  could not serialize access 
due to concurrent update
 2016-12-22 13:26:36 CET [31696-2] rt_rt@rt STATEMENT:  SELECT * FROM Tickets 
WHERE id = $1 FOR UPDATE
 2016-12-22 13:26:36 CET [31696-3] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
 2016-12-22 13:26:36 CET [31696-4] rt_rt@rt STATEMENT:  INSERT INTO 
Transactions (Type, Creator, ObjectId, NewValue, Field, Data, ObjectType, 
NewReference, ReferenceType, Created, OldReference, OldValue) VALUES ($1, $2, 
$3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
 2016-12-22 13:26:36 CET [31696-5] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
 2016-12-22 13:26:36 CET [31696-6] rt_rt@rt STATEMENT:  SELECT  * FROM 
Transactions WHERE id = $1
 2016-12-22 13:26:36 CET [31696-7] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
 2016-12-22 13:26:36 CET [31696-8] rt_rt@rt STATEMENT:  SELECT  * FROM Tickets 
WHERE id = $1
 2016-12-22 13:26:36 CET [31696-9] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
 2016-12-22 13:26:36 CET [31696-10] rt_rt@rt STATEMENT:  SELECT  * FROM 
Transactions WHERE id = $1
 2016-12-22 13:26:36 CET [31696-11] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
 2016-12-22 13:26:36 CET [31696-12] rt_rt@rt STATEMENT:  SELECT main.* FROM 
Scrips main JOIN ObjectScrips ObjectScrips_1  ON ( ObjectScrips_1.Scrip = 
main.id ) JOIN ScripConditions ScripConditions_2  ON ( ScripConditions_2.id = 
main.ScripCondition )  WHERE (ObjectScrips_1.ObjectId = '0') AND 
(ObjectScrips_1.Stage = 'TransactionCreate') AND 
(ScripConditions_2.ApplicableTransTypes LIKE '%Comment%' OR 
ScripConditions_2.ApplicableTransTypes LIKE '%Any%') AND (main.Disabled = '0')  
 GROUP BY main.id   ORDER BY MIN(ObjectScrips_1.SortOrder) ASC 
 2016-12-22 13:26:36 CET [31696-13] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
 2016-12-22 13:26:36 CET [31696-14] rt_rt@rt STATEMENT:  SELECT COUNT(DISTINCT 
main.id) FROM Scrips main JOIN ObjectScrips ObjectScrips_1  ON ( 
ObjectScrips_1.Scrip = main.id ) JOIN ScripConditions ScripConditions_2  ON ( 
ScripConditions_2.id = main.ScripCondition )  WHERE (ObjectScrips_1.ObjectId = 
'0') AND (ObjectScrips_1.Stage = 'TransactionCreate') AND 
(ScripConditions_2.ApplicableTransTypes LIKE '%Comment%' OR 
ScripConditions_2.ApplicableTransTypes LIKE '%Any%') AND (main.Disabled = '0') 
 2016-12-22 13:26:36 CET [31696-15] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
 2016-12-22 13:26:36 CET [31696-16] rt_rt@rt STATEMENT:  SELECT main.* FROM 
Scrips main JOIN ObjectScrips ObjectScrips_1  ON ( ObjectScrips_1.Scrip = 
main.id ) JOIN ScripConditions ScripConditions_2  ON ( ScripConditions_2.id = 
main.ScripCondition )  WHERE (ObjectScrips_1.ObjectId = '0') AND 
(ObjectScri

[rt-users] Postgresql default isolation level (was Re: mysql DB engine ndbdcluster)

2016-12-21 Thread Václav Ovsík
Hi,

On Thu, Dec 15, 2016 at 12:07:54AM -0800, Alex Vandiver wrote:
>...
> However, upon writing this, it occurs to me that Postgres' default
> isolation level is _also_ "read committed."[4]  Thus any possible race
> conditions that might show up under NDB are also possible under
> Postgres.  I'd need to do some close analysis to determine if this
> means that Postgres is open to data corruption, or if both are safe
> because the queries RT runs do not care about repeatable-read
> semantics.
>...

I'm afraid you are right, Postgres' default isolation level is 'read
committed'. I filed a bug report two months ago:
https://issues.bestpractical.com/Ticket/Display.html?id=32381

I'm running PostgreSQL & RT on Debian Jessie, that is Pg 9.4 & RT 4.4.1.
I did only basic PostgreSQL performance tuning and didn't change the
default transaction isolation level. I can't find anything about the required
transaction isolation level in the RT documentation; this is the first time
I have read that RT assumes the "repeatable-read" isolation level.

Nevertheless, we are hit by the problem only during Comment/Correspond
combined with a ticket owner change (I hope).

I did some testing just now on a test RT instance:

 
 * default 'read committed' isolation level
   - I'm getting from time to time:

2016-12-21 11:33:38 CET [22545-1] rt_rt@rt ERROR:  deadlock detected
2016-12-21 11:33:38 CET [22545-2] rt_rt@rt DETAIL:  Process 22545 waits for 
ShareLock on transaction 8351856; blocked by process 22539.
Process 22539 waits for ShareLock on transaction 8351857; blocked 
by process 22545.
Process 22545: UPDATE Tickets SET Owner=$1 WHERE id=$2 
Process 22539: INSERT INTO GroupMembers (LastUpdatedBy, 
LastUpdated, GroupId, MemberId, Creator, Created) VALUES ($1, $2, $3, $4, $5, 
$6)
2016-12-21 11:33:38 CET [22545-3] rt_rt@rt HINT:  See server log for query 
details.
2016-12-21 11:33:38 CET [22545-4] rt_rt@rt CONTEXT:  while updating tuple 
(4336,144) in relation "tickets"
2016-12-21 11:33:38 CET [22545-5] rt_rt@rt STATEMENT:  UPDATE Tickets SET 
Owner=$1 WHERE id=$2 

 
 * default_transaction_isolation = 'repeatable read'
   - I'm getting the following errors, but at the application level
     things seem to be normal.

2016-12-21 17:20:41 CET [25923-1] rt_rt@rt ERROR:  could not serialize 
access due to concurrent update
2016-12-21 17:20:41 CET [25923-2] rt_rt@rt STATEMENT:  SELECT * FROM 
Tickets WHERE id = $1 FOR UPDATE
2016-12-21 17:20:41 CET [25923-3] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
2016-12-21 17:20:41 CET [25923-4] rt_rt@rt STATEMENT:  INSERT INTO 
Transactions (OldReference, ObjectType, Data, Field, ObjectId, Type, NewValue, 
ReferenceType, OldValue, Created, NewReference, Creator) VALUES ($1, $2, $3, 
$4, $5, $6, $7, $8, $9, $10, $11, $12)
2016-12-21 17:20:41 CET [25923-5] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
2016-12-21 17:20:41 CET [25923-6] rt_rt@rt STATEMENT:  SELECT  * FROM 
Transactions WHERE id = $1
2016-12-21 17:20:41 CET [25923-7] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
2016-12-21 17:20:41 CET [25923-8] rt_rt@rt STATEMENT:  SELECT  * FROM 
Tickets WHERE id = $1
2016-12-21 17:20:41 CET [25923-9] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
2016-12-21 17:20:41 CET [25923-10] rt_rt@rt STATEMENT:  SELECT  * FROM 
Transactions WHERE id = $1
2016-12-21 17:20:41 CET [25923-11] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
2016-12-21 17:20:41 CET [25923-12] rt_rt@rt STATEMENT:  SELECT main.* FROM 
Scrips main JOIN ObjectScrips ObjectScrips_1  ON ( ObjectScrips_1.Scrip = 
main.id ) JOIN ScripConditions ScripConditions_2  ON ( ScripConditions_2.id = 
main.ScripCondition )  WHERE (ObjectScrips_1.ObjectId = '0') AND 
(ObjectScrips_1.Stage = 'TransactionCreate') AND 
(ScripConditions_2.ApplicableTransTypes LIKE '%Comment%' OR 
ScripConditions_2.ApplicableTransTypes LIKE '%Any%') AND (main.Disabled = '0')  
 GROUP BY main.id   ORDER BY MIN(ObjectScrips_1.SortOrder) ASC 
2016-12-21 17:20:41 CET [25923-13] rt_rt@rt ERROR:  current transaction is 
aborted, commands ignored until end of transaction block
2016-12-21 17:20:41 CET [25923-14] rt_rt@rt STATEMENT:  SELECT 
COUNT(DISTINCT main.id) FROM Scrips main JOIN ObjectScrips ObjectScrips_1  ON ( 
ObjectScrips_1.Scrip = main.id ) JOIN ScripConditions ScripConditions_2  ON ( 
ScripConditions_2.id = main.ScripCondition )  WHERE (ObjectScrips_1.ObjectId = 
'0') AND (ObjectScrips_1.Stage = 'TransactionCreate') AND 
(ScripConditions_2.ApplicableTransTypes LIKE 

[rt-users] p*a*s*s*w*o*r*d quality enforcement?

2015-07-17 Thread Václav Ovsík
Hi,
is there any way to enforce password quality beyond a minimum
length ($MinimumPasswordLength)?
I mean something like
 http://sourceforge.net/projects/cracklib
 http://www.openwall.com/passwdqc/
or so.

Has anyone tried John the Ripper successfully against RT password hashes?
-- 
Zito


Re: [rt-users] RT saves data in quoted-printable, why???

2015-03-06 Thread Václav Ovsík
Hi,

On Thu, Mar 05, 2015 at 06:37:21PM -0500, Alex Vandiver wrote:
> On Fri, 6 Mar 2015 00:06:32 +0100 Václav Ovsík vaclav.ov...@i.cz
> wrote:
> > https://issues.bestpractical.com/Ticket/Display.html?id=29735
> 
> Aha -- thanks for digging that out!  I thought I vaguely recalled
> something in this area previously.
> https://issues.bestpractical.com/Ticket/Attachment/286095/157750/utf8-encoding.patch
> looks to be functionally fairly similar to the branch.

Thanks for attention to this...

> There are a few other, orthogonal fixes in there that may still be
> interesting to tease out into their own commits.  It looks like I see
> changes to:
> 
>  * Fix the computed max size of base64'd attachments; I'd need to
> squint at it harder, but seems eminently reasonable.
> 
>  * Attempt to gracefully deal with TruncateLongAttachments truncating
> mid-byte of UTF-8 data.  As above; the decode/encode is an interesting
> trick to attempt to ensure that the byte stream is consistent.  I'd
> like to test it a bit, but seems not unreasonable.

It is maybe not too efficient, but it is easy, and safety first :)

>  * Choose base64 vs QP based on which is shorter; I'm less convinced by
> this, since it means that for large data, it gets QP'd, base64'd, and
> then one of those _again_ -- which isn't terribly efficient.  I'm less
> convinced by the tradeoff of computation time to stored in-database
> size.

You are right. My intention was to keep as much readable text as
possible. A text may contain some invalid characters while the rest
of the text is readable, so QP is more appropriate, because it leaves
most of the text readable.
So measuring the lengths of the Base64/QP encoded data indicates
how many ASCII chars are there:
 len Base64 < len QP - mostly binary data - maybe some octet stream
 len QP < len Base64 - mostly ASCII chars - maybe text

But this is probably a corner case and not very interesting.
Most of the text should be valid UTF-8, and the rest is not
interesting these days.
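The length comparison above can be demonstrated with stdlib encoders (an illustration only; RT's Perl code uses MIME::Base64 / MIME::QuotedPrint, not these Python modules):

```python
# For mostly-ASCII text QP stays close to the original size, so QP wins;
# for binary-looking data nearly every byte becomes a 3-char =XX escape,
# so base64 is the shorter of the two.

import base64, quopri

def shorter_encoding(data: bytes) -> str:
    b64 = base64.encodebytes(data)
    qp = quopri.encodestring(data)
    return "base64" if len(b64) < len(qp) else "quoted-printable"

mostly_text = "A mostly ASCII line with a little accent: kůň.".encode("utf-8")
mostly_binary = bytes(range(256))

print(shorter_encoding(mostly_text))    # quoted-printable
print(shorter_encoding(mostly_binary))  # base64
```

As the thread notes, text dense with non-ASCII characters can tip the other way, which is exactly why the length measurement (rather than a content guess) was used.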

> If you're interested in reworking the patch into a 2-3 commit series,
> I'm happy to apply for 4.2-trunk.
>  - Alex

https://github.com/bestpractical/rt/compare/stable...zito:4.2-zito-encodelob-utf8-fix
This is a slightly newer version that I'm using in my production instance, rt-4.2.9.
I will be happy if some part of it is usable for the RT mainline.

Thanks for fine software!
Cheers
-- 
Zito


Re: [rt-users] RT saves data in quoted-printable, why???

2015-03-05 Thread Václav Ovsík
https://issues.bestpractical.com/Ticket/Display.html?id=29735
-- 
Zito


[rt-users] RT sends email to disabled users

2014-11-27 Thread Václav Ovsík
Hi,
I am a bit surprised that our RT instance sends emails to disabled
users. I thought disabling a user was sufficient to suppress
user access and all communication towards that user by RT.

I did some RT database cleanup procedures (adding/deleting watchers on
old tickets, so I can purge some invalid user accounts with the shredder),
and a bunch of "Undelivered Mail Returned to Sender" bounces arrived in my
INBOX (routed to me through the rtadmin alias).
These were transaction notifications for the owner of an old ticket, but the
owner left our company and his email address is no longer valid. His
account is disabled.

Is this a bug? Do I have a misconfiguration somewhere, or should I, besides
disabling a user, also delete the email address from the account?

Best Regards
-- 
Zito


[rt-users] FYI: database upgrade bug in DBD::Pg 3.4.1

2014-09-26 Thread Václav Ovsík
Hi,
I found a bug in DBD::Pg 3.4.1 yesterday:
https://rt.cpan.org/Public/Bug/Display.html?id=99144
The problem was fixed very quickly and the new version 3.4.2 was
released.

The upgrade failed for etc/upgrade/3.9.8/content; details are in the
bug report above.

...
Processing 3.9.8
Now populating database schema.
Now inserting data.  
[8237] [Thu Sep 25 15:20:56 2014] [warning]: DBD::Pg::db table_info failed: 
ERROR:  operator does not exist: text = integer
LINE 34: ) ti WHERE TABLE_TYPE IN (1)
 ^
HINT:  No operator matches the given name and argument type(s). You might need 
to add explicit type casts. at /opt/interni/sbin/../lib/RT/Handle.pm line 1739, 
 line 1. (/opt/interni/sbin/../lib/RT/Handle.pm:1739)
Couldn't finish 'upgrade' step.
 
ERROR: One of initial functions failed: DBD::Pg::db table_info failed: ERROR:  
operator does not exist: text = integer
LINE 34: ) ti WHERE TABLE_TYPE IN (1)
 ^
HINT:  No operator matches the given name and argument type(s). You might need 
to add explicit type casts. at /opt/interni/sbin/../lib/RT/Handle.pm line 1739, 
 line 1.  


Maybe this can help someone...
-- 
Zito
-- 
RT Training November 4 & 5 Los Angeles
http://bestpractical.com/training


Re: [rt-users] FullTextSearch the parts of word

2014-04-11 Thread Václav Ovsík
Hi Arkady,

On Thu, Apr 10, 2014 at 03:33:16PM +0400, Arkady Glazov wrote:
> Hi Václav,
> I will be waiting.
> 
> I looked at the databases. All content is saved as 'quoted-printable'.
> I can send an example if it helps you.
...

On Fri, Apr 11, 2014 at 09:38:26AM +0400, Arkady Glazov wrote:
> Hi Václav,
> 
> I send an example letter after this email.
> 
> In my RT database the body of the email shows as:
...


I can confirm this. I tried your message and my own message with latin2
chars, and both are QP encoded :(.

  -[ RECORD 1 
]---+
  id  | 313295
  transactionid   | 411992
  parent  | 313294
  messageid   | e1wyuc2-0005yb...@skat-rt.seagroup.inc
  subject | Quoted-printable test for RT
  filename| 
  contenttype | text/plain
  contentencoding | quoted-printable
  content | =D0=9F=D1=80=D0=B8=D0=BC=D0=B5=D1=80 
=D1=82=D0=B5=D0=BA=D1=81=D1=82=D0=B0 =
  | 
=D1=81=D0=BE=D0=B4=D0=B5=D1=80=D0=B6=D0=B0=D1=89=D0=B5=D0=B3=D0=BE =D0=BA=
  | =D0=B8=D1=80=D0=B8=D0=BB=D0=B8=D1=86=D1=83 =D0=B8 
=D0=BB=D0=B0=D1=82=D0=B8=
  | =D0=BD=D0=B8=D1=86=D1=83.
  | This is example of cyrillic and latin text in th body.
  | Encode as quoted-printable.
  | 
  | --
  | Arkady Glazov
  | 
  ...

  nis=# \x
  Expanded display is on.
  nis=# select * from attachments where transactionid =411999;
  -[ RECORD 1 
]---+
  id  | 313298
  transactionid   | 411999
  parent  | 0
  messageid   | 20140411083151.GF8681@bobek.localdomain
  subject | test latin2
  filename| 
  contenttype | text/plain
  contentencoding | quoted-printable
  content | This is latin2 test:
  | Diakritika v =C4=8Desk=C3=BDch znac=C3=ADch...
  | =C5=BDlu=C5=A5ou=C4=8Dk=C3=BD k=C5=AF=C5=88 =C3=BAp=C4=9Bl 
=C4=8F=C3=A1bels=
  | k=C3=A9 =C3=B3dy.
  | --=20
  | V=C3=A1clav Ovs=C3=ADk
  | IIT-UNIX
  | ICZ a.s.
  | Pobo=C4=8Dka Plze=C5=88
  | N=C3=A1m=C4=9Bst=C3=AD M=C3=ADru 10, 301 00 Plze=C5=88, CZ
  | Tel. +420 222 275 511
  | vaclav.ov...@i.cz
  | http://www.i.cz
  | 
  | 
  ...

Then this is maybe ready for a bug report. I will try to debug this a little...
I think previous versions of RT decoded MIME encodings where possible
into raw UTF-8 form, so that fulltext search can work.

My current RT 3.8.16 has this distribution of encodings:

nis=# select distinct contentencoding, count(contentencoding) from attachments 
group by contentencoding;
 contentencoding  | count  
--+
 none | 283405
 quoted-printable |547
 base64   |   1711
  |  0
(4 rows)

Maybe this is a regression or some ugly feature of RT 4.2.x.

-- 
Zito
-- 
RT Training - Dallas May 20-21
http://bestpractical.com/training


Re: [rt-users] FullTextSearch the parts of word

2014-04-11 Thread Václav Ovsík
I tried feeding the test message into the production RT instance 3.8.16,
and it ends up in the database as:

  interni=# select contenttype, contentencoding, content, trigrams from 
attachments where transactionid =254774;
  -[ RECORD 1 
]---+--
  contenttype | text/plain
  contentencoding | none
  content | Пример текста содержащего кирилицу и латиницу.
  | This is example of cyrillic and latin text in th body.
  | Encode as quoted-printable.
  | 
  | --
  | Arkady Glazov
  | 
  trigrams| '-pr' 'abl' 'ady' 'amp' 'and' 'ark' 'ati' 'azo' 'ble' 'bod' 
'cod' 'cyr' 'd-p' 'dy.' 'ed-' 'enc' 'est' 'exa' 'ext' 'for' 'gla' 'his' 'ill' 
'int' 'kad' 'lat' 'laz' 'le.' 'lic' 'lli' 'mpl' 'nco' 'nta' 'ode' 'ody' 'ote' 
'ple' 'pri' 'quo' 'ril' 'rin' 'rka' 'tab' 'ted' 'tes' 'tex' 'thi' 'tin' 'uot' 
'xam' 'yri' 'zov' 'ати' 'аще' 'дер' 'его' 'екс' 'ерж' 'жащ' 'или' 'име' 'ини' 
'ири' 'ицу' 'кир' 'кст' 'лат' 'лиц' 'мер' 'ниц' 'оде' 'при' 'ржа' 'рил' 'рим' 
'сод' 'ста' 'тек' 'тин' 'цу.' 'щег'
  
So I think this really is a problem with RT 4.2.3 :(.
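The 'trigrams' column above is what a pg_trgm-style index stores. A simplified sketch of trigram extraction and matching (real pg_trgm also pads word boundaries with spaces and indexes per word; this toy version just slides a 3-character window):

```python
# Extract 3-character substrings; a query matches when it shares
# trigrams with the stored content, which is what makes substring
# ("parts of word") search possible.

def trigrams(text: str) -> set:
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

def similar(query: str, content: str) -> float:
    # Jaccard-style similarity over the trigram sets.
    q, c = trigrams(query), trigrams(content)
    return len(q & c) / len(q | c) if q | c else 0.0

content = "Encode as quoted-printable."
print("ted" in trigrams(content))          # True
print(similar("printable", content) > 0)   # True
```

This is why the QP-encoded content above defeats the index: the stored trigrams come from '=D0=9F...' escape sequences, not from the decoded Cyrillic text.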
-- 
Zito

Re: [rt-users] FullTextSearch the parts of word

2014-04-11 Thread Václav Ovsík
I think I found the critical point.
The problem is in the method RT::Record::_EncodeLOB().
I ran a little script feeding the message into RT under the debugger;
the last action was to set a breakpoint with b RT::Record::_EncodeLOB,
and here are several steps:


  DB<45> v
788:    } elsif (!$RT::Handle->BinarySafeBLOBs
789         && $Body =~ /\P{ASCII}/
790         && !Encode::is_utf8( $Body, 1 ) ) {
791==>      $ContentEncoding = 'quoted-printable';
792     }
793 
794     #if the attachment is larger than the maximum size
795:    if ( ($MaxSize) and ( $MaxSize < length($Body) ) ) {
796 
797     # if we're supposed to truncate large attachments
  DB<45> x $Body
0  'Пример текста содержащего кирилицу и латиницу.
This is example of cyrillic and latin text in th body.
Encode as quoted-printable.

--
Arkady Glazov
'
  DB<46> p Encode::is_utf8( $Body, 1 ) ? "true" : "false"
false


For some reason Encode::is_utf8(...) returns false :(.
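As a rough analogy only (Python has no UTF8 flag, so this is not a claim about Perl internals): the body arrives as raw bytes that are perfectly valid UTF-8, but nothing has decoded them yet, so a "is this a decoded text string?" check says no even though the data itself is fine:

```python
# Valid UTF-8 bytes that have not been decoded: the "decoded?" check
# fails (like Encode::is_utf8 without the flag set), yet decoding
# succeeds, showing the bytes themselves are well-formed UTF-8.

raw = "Пример текста".encode("utf-8")   # bytes as they came off the wire

print(isinstance(raw, str))             # False: not decoded yet
print(raw.decode("utf-8")[:6])          # decodes cleanly anyway
```

This is essentially the distinction the patch below exploits: utf8::valid() asks "are these bytes well-formed?" rather than "has this string been marked as decoded?".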

Maybe the problem is with libmime-tools-perl (I'm running on Debian); I
have version 5.503-1.
-- 
Zito

Re: [rt-users] FullTextSearch the parts of word

2014-04-11 Thread Václav Ovsík
On Fri, Apr 11, 2014 at 01:47:53PM +0200, Václav Ovsík wrote:
> ... 
> Maybe the problem is with libmime-tools-perl (I'm running on Debian); I
> have version 5.503-1.
> ...

Correction: I have a locally installed version 5.505...

zito@rt2:~/migration/rt$ make testdeps |fgrep -i mime
MIME::Entity >= 5.504 ...found
zito@rt2:~/migration/rt$ perl -MMIME::Entity -e 'print "$MIME::Entity::VERSION\n";'
5.505

-- 
Zito


Re: [rt-users] FullTextSearch the parts of word

2014-04-11 Thread Václav Ovsík
FYI: the problem has a ticket:
http://issues.bestpractical.com/Ticket/Display.html?id=29735
I found a temporary workaround - patch attached...
-- 
Zito
diff --git a/lib/RT/Record.pm b/lib/RT/Record.pm
index 788aa42..304ac6a 100644
--- a/lib/RT/Record.pm
+++ b/lib/RT/Record.pm
@@ -787,7 +787,7 @@ sub _EncodeLOB {
     # Some databases (postgres) can't handle non-utf8 data
     } elsif (!$RT::Handle->BinarySafeBLOBs
            && $Body =~ /\P{ASCII}/
-          && !Encode::is_utf8( $Body, 1 ) ) {
+          && !utf8::valid($Body) ) {
               $ContentEncoding = 'quoted-printable';
     }
 

Re: [rt-users] FullTextSearch the parts of word

2014-04-09 Thread Václav Ovsík
Hi,

On Wed, Apr 09, 2014 at 05:53:35PM +0400, Arkady Glazov wrote:
> Hi,
> 
> I have installed RT 4.2 with PostgreSQL FullTextSearch. In the base I can
> search only whole words. Is it possible to search for parts of a word?

I did this in the past for RT 3.8.x, and I have the configuration ready to
use in 4.2.3 too. But this is:
 * a bit hacky
 * storing redundant information in the database

The wiki page is outdated:
http://requesttracker.wikia.com/wiki/PostgreSQLFullTextTrgm

Please give me some time to prepare up-to-date instructions.
-- 
Zito


Re: [rt-users] FullTextSearch the parts of word

2014-04-09 Thread Václav Ovsík
Please look at https://github.com/zito/rt-pgsql-fttrgm
I hope it will work, although I didn't try to run the script rt-mysql2pg on an
RT4 database (I did a simple upgrade of an RT3 database with the indexes
already set up). Let me know if it works.
Thanks
-- 
Zito


Re: [rt-users] Apache FastCGI and # of db connections

2014-01-13 Thread Václav Ovsík
On Fri, Jan 10, 2014 at 07:13:28PM -0500, Alex Vandiver wrote:
> On Fri, 2014-01-10 at 11:45 +0100, Václav Ovsík wrote:
> > I have a question regarding behaviour of FastCGI processes and database
> > connections. While preparing upgrade of RT 3.8.16 to RT 4.2.1 I noticed
> > a change of FastCGI usage in the RT.
> 
> Thanks for bringing this to our attention.  This is a bug which affects
> RT 4.0 and 4.2; the 4.0/fewer-active-handles branch resolves the issue.

I have applied the changes from your new branch, and the database handles
are OK now.


> > I did not study Plack too much - only look at
> > http://search.cpan.org/~miyagawa/Plack-1.0030/lib/Plack/Handler/FCGI.pm
> > and there is approach a bit different. One daemon is started standalone
> > (forking to a number of processes) and Apache is configured to connect
> > to this daemon
> >  FastCgiExternalServer /tmp/myapp.fcgi -socket /tmp/fcgi.sock
> 
> This is a valid alternate deployment strategy, which comes with the
> benefit that one can restart one RT server without affecting others, by
> restarting its fastcgi process.  However, it requires additional system
> configuration to ensure that these processes are started at server
> startup time, and so forth, which is why this is not the deployment
> suggested in RT's documentation, which we aim to keep as simple as
> possible.

I understand it.


> > Should I really increase PostgreSQL max_connections to 100 and don't
> > bother with it?
> 
> Until a version of RT is released with the above branch, increasing the
> max_connections is likely your best bet.  The additional connections
> will consume some small amount of memory each, but this is likely
> negligible, and will have no performance impact.
>  - Alex

I keep the installed RT files versioned in Git, so it's no problem to apply
changes directly to the installed RT instances.
Thanks for quick response and good job!
Best Regards
-- 
Zito


[rt-users] Apache FastCGI and # of db connections

2014-01-10 Thread Václav Ovsík
Hi,

Thanks very much for Request Tracker!

I have a question regarding the behaviour of FastCGI processes and database
connections. While preparing an upgrade from RT 3.8.16 to RT 4.2.1, I noticed
a change in FastCGI usage in RT.

I am using Debian Wheezy (with some packages from Jessie/Sid to satisfy
dependencies) and PostgreSQL as database backend on it.

I have four instances of RT connecting to one PostgreSQL server.
The FastCGI processes of my old instances (3.8.16) were configured in Apache:
Apache:

 FastCgiServer /opt/eu/bin/mason_handler.fcgi -idle-timeout 400 -processes 9 -init-start-delay 1
 FastCgiServer /opt/interni/bin/mason_handler.fcgi -idle-timeout 400 -processes 2 -init-start-delay 2
 FastCgiServer /opt/nis/bin/mason_handler.fcgi -idle-timeout 400 -processes 9 -init-start-delay 2
 FastCgiServer /opt/RT/bin/mason_handler.fcgi -idle-timeout 400 -processes 9 -init-start-delay 1

So I have a total number of 3 * 9 + 2 = 29 FastCGI processes and the
same number of database connections:

 rt=# select count(*) from pg_stat_activity where usename like 'rt_%';
  count 
 ---
 29
 (1 row)

Note: all RT DB accounts start with `rt_'.

I have a limited number of database connections on the server:
rt:~# grep ^max_connections /etc/postgresql/9.1/main/postgresql.conf
max_connections = 48# (change requires restart)

And this number (48) is sufficient so.


While testing RT 4.2.1 instances on the testing box with the configuration
modified according to web_deployment.pod, after a while I exhausted the
available database connections.
I noticed that every FastCGI process forks one child (Plack's default?),
which defers the database connection until HTTP request time, so I ended up
with twice as many DB connections as before.
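The doubling can be confirmed from the database side. A minimal sketch, assuming the `rt_%` role-name prefix used above:

```sql
-- Connections per RT role, to compare against the server-wide limit.
SELECT usename, count(*) AS connections
  FROM pg_stat_activity
 WHERE usename LIKE 'rt_%'
 GROUP BY usename
 ORDER BY connections DESC;

-- The configured ceiling (48 in the setup described here):
SHOW max_connections;
```

Running this before and after exercising the web interface shows whether each FastCGI worker pair really holds its own connection.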

I tried to configure one fresh instance:

 <Directory /opt/rt4-preview>
 Options FollowSymLinks ExecCGI
 AllowOverride None
 </Directory>
 FastCgiServer /opt/rt4-preview/sbin/rt-server.fcgi -processes 5 -idle-timeout 300
 ScriptAlias /rt4 /opt/rt4-preview/sbin/rt-server.fcgi

this is according to docs/web_deployment.pod I hope.

During Apache startup, it is possible for a moment to see processes:
 \_ /usr/bin/perl -w /opt/rt4-preview/sbin/rt-server.fcgi
which quickly transforms into a daemonized couple:
 \_ perl-fcgi-pm 
 |   \_ perl-fcgi

After a while the situation is (pstree):
 ├─apache2─┬─apache2─┬─5*[perl-fcgi-pm───perl-fcgi]
 │ │ └─3*[rt-index.fcgi]
 │ └─5*[apache2]

As long as no request has been handled, the connections are:
 zito=# select count(*) from pg_stat_activity where usename like 'rt_%';
  count 
 ---
  5
 (1 row)

The number of connections grows during requests on the web interface and ends at:
zito=# select count(*) from pg_stat_activity where usename like 'rt_%';
 count 
---
10
(1 row)

All child processes are connected to the database...

Is this setup really ok?

I did not study Plack too much - I only looked at
http://search.cpan.org/~miyagawa/Plack-1.0030/lib/Plack/Handler/FCGI.pm
and the approach there is a bit different. One daemon is started standalone
(forking into a number of processes) and Apache is configured to connect
to this daemon:
 FastCgiExternalServer /tmp/myapp.fcgi -socket /tmp/fcgi.sock

But I don't know this technology well.
Should I really increase PostgreSQL max_connections to 100 and don't
bother with it?
Best Regards
-- 
Zito


Re: [rt-users] RT Include substrings with Full-Text Indexing (PostgreSQL)

2012-07-27 Thread Václav Ovsík
On Thu, Jul 26, 2012 at 06:59:49PM +, Betz, Gregory wrote:
> Hello all,
>
> Is there any way to setup full-text indexing (with PostgreSQL) to index
> substrings?

I have setup this for RT 3.8.10:
http://requesttracker.wikia.com/wiki/PostgreSQLFullTextTrgm

This setup addresses exactly the problem you have.
My wiki page is marked as outdated now. I have not tried upgrading to RT
4.x yet, so I can't say (I haven't examined the RT 4.x data model to see
whether it changed). This setup wastes a bit of your database space, of
course: you need to extract all possible trigrams from the contents.
-- 
Zito
#!/usr/bin/env perl
use strict;
use DBI;
use Data::Dumper;
use DBD::Pg qw(PG_BYTEA);
use Encode qw(is_utf8);
use Encode::Guess qw(utf8 latin2 ascii);
use File::LibMagic;
use Getopt::Long;
use MIME::Base64 qw(encode_base64);

my $version='1.0';

my ($verbose, $dryrun);

my $tsvector_column = 'trigrams';

my $flm = File::LibMagic->new();

sub usage
{
my ($exitcode) = @_;
print STDERR <<EOF;
rt-mysql2pg [options]

Version: $version

Converts Request Tracker database from MySQL to PostgreSQL

Do a copy of every record in user tables from a source database to
a destination database. A destination schema must exist (same as the source
schema), so inserts can be done without error. Tables in the destination
database are deleted before doing a copy!

options:

 -c,--copy  copy data, --src-dsn & --dst-dsn must be present

 --[no]fulltext  setup/remove fulltext support, --dst-dsn must be
 present, can be combined with --copy

 --src-dsn dsn   perl DBI data source name (e.g. dbi:mysql:dbname=rt3)
 --src-user user perl DBI user name
 --src-password pass perl DBI password

 --dst-dsn dsn   perl DBI data source name (e.g. dbi:Pg:dbname=rt3)
 --dst-user user perl DBI user name
 --dst-password pass perl DBI password

 -n,--dry-run   dry run (no db modifications)

 -v,--verbose   run verbosly (incremental)
 -h,--help  help usage

EOF
exit($exitcode) if defined $exitcode;
}

sub user_tables
{
    my ($dbh) = @_;
    my $sth = $dbh->table_info(undef, undef, undef, 'TABLE');
    $sth->execute();
    my @user_tables;
    while ( my $r = $sth->fetchrow_arrayref() )
    {
        my ($table_cat, $table_schem, $table_name, $table_type, $remarks) = @$r;
        next unless $table_type eq 'TABLE';
        next if $dbh->{Driver}->{Name} eq 'Pg' && $table_schem ne 'public';
        push @user_tables, $table_name;
    }
    return \@user_tables;
}

sub user_seqs
{
    my ($dbh) = @_;
    return $dbh->selectcol_arrayref('SELECT sequence_name FROM information_schema.sequences');
}

sub column_info
{
    my ($dbh, $table) = @_;
    my $schema = $dbh->{Driver}->{Name} eq 'Pg' ? 'public' : undef;
    my $sth = $dbh->column_info(undef, $schema, $table, undef);
    $sth->execute();
    my (@coln, %colt);
    while ( my $r = $sth->fetchrow_arrayref() )
    {
        my ($table_cat, $table_schem, $table_name, $column_name, $data_type,
            $type_name, $column_size, $buffer_length, $decimal_digits,
            $num_prec_radix, $nullable, $remarks, $column_def,
            $sql_data_type, $sql_datetime_sub, $char_octet_length,
            $ordinal_position, $is_nullable, $type_name_and_size) = @$r;
        push @coln, lc($column_name);
        $colt{lc($column_name)} = {
            'data_type' => $data_type,
            'type_name' => lc($type_name),
            'type_name_and_size' => lc($type_name_and_size),
        };
    }
    return (\@coln, \%colt);
}

sub utf8_fix
{
    my $s = $_[0];
    return 0 if !defined $s || $s eq '';
    $s =~ s/\x0+$//;
    my $decoder = Encode::Guess->guess($s);
    if ( defined($decoder) )
    {
        if ( ref($decoder) )
        {
            return 0 if $decoder->name eq 'ascii' || $decoder->name eq 'utf8';
            $s = $decoder->decode($s);
            return -1 unless Encode::is_utf8($s, 1);
        }
        elsif ($decoder =~ /(\S+ or .+)/)
        {
            my %matched = map { $_ => 1 } split(/ or /, $1);
            if ( $matched{'utf8'} )
            {
                $s = Encode::decode('utf-8', $s, Encode::FB_DEFAULT);
                return -1 unless Encode::is_utf8($s, 1);
            }
            else
            {
                die "Multiple charsets? $decoder\n";
            }
        }
        elsif ($decoder =~ m/No appropriate encodings found!/ )
        {
            die "No appropriate encodings found!\n";
        }
        else
        {
            die "What? $decoder\n";
        }
        $_[0] = $s;
    }
    return 0;
}

sub conttype_guess
{
    my ($data) = @_;
    return undef if $data eq '';
    my $conttype_guess = $data ? $flm->checktype_contents($data) : '';
    $conttype_guess =~ s/;.*//;
    return $conttype_guess if $conttype_guess =~ m{^[-.a-z0-9]+/[-.a-z0-9]+};
    return undef;
}

sub copy_table
{
my ($table, $dbh_src, 

Re: [rt-users] RT 3.8.9 + PostgreSQL 8.4.7 - missing/bad indexes

2011-04-15 Thread Václav Ovsík
On Fri, Apr 15, 2011 at 12:09:58PM +1000, Jesse Vincent wrote:
> ...
> I'd actually consider this a code bug. We shouldn't be LOWER()ing either
> of those parameters.  Please open a ticket by mailing
> rt-b...@bestpractical.com

I did. It has received ticket [rt3 #17121].

About the second problem (selecting from ObjectCustomFieldValues), I'm
not certain that adding a new index is the best solution.
I read about multicolumn indexes in the Pg docs yesterday and learned
that a multicolumn index can be used for a query on fewer fields, but
those must be the leftmost fields. So we already have:

CREATE INDEX ObjectCustomFieldValues1 ON ObjectCustomFieldValues (CustomField,ObjectType,ObjectId,Content);
CREATE INDEX ObjectCustomFieldValues2 ON ObjectCustomFieldValues (CustomField,ObjectType,ObjectId);

So instead of adding a new index I did:
CREATE INDEX ObjectCustomFieldValues_zito1 ON ObjectCustomFieldValues (ObjectId);
An alternative approach could be to reorder the index columns of
ObjectCustomFieldValues2.
It depends on the shape of every other query on ObjectCustomFieldValues, and
the decision can be difficult (it means analyzing all possible query shapes).
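The reordering alternative mentioned above could look like this. A sketch only; because of the leftmost-prefix rule, the (ObjectType, ObjectId) prefix serves the problematic query, while equality lookups on all three columns can still use the full index:

```sql
-- Reorder so the leftmost columns match the problematic query;
-- queries filtering on all three columns remain fully indexable.
DROP INDEX ObjectCustomFieldValues2;
CREATE INDEX ObjectCustomFieldValues2 ON ObjectCustomFieldValues
    (ObjectType, ObjectId, CustomField);
```

Whether this beats a separate single-column index depends on the other queries hitting the table.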

Should I send it to rt-bugs also?

Best Regards
-- 
Zito


[rt-users] RT 3.8.9 + PostgreSQL 8.4.7 - missing/bad indexes

2011-04-14 Thread Václav Ovsík
Hi,
because of the horrible performance of spreadsheet export of tickets I found
some badly created or missing indexes. I didn't investigate whether other
RDBMS schemas have the same problem.
To discover the problems I rewrote Results.tsv into command-line shape and
finally ran it with DBI_TRACE=2.

For every row of Tickets the following additional problematic queries are
executed:



SELECT  * FROM Groups WHERE Instance = ? AND LOWER(Domain) = LOWER(?) AND 
LOWER(Type) = LOWER(?)

e.g. one set of my bind params: 106431 'RT::Ticket-Role' 'Requestor'

rt=# EXPLAIN ANALYZE SELECT * FROM Groups WHERE Instance = 106431 AND 
LOWER(Domain) = LOWER('RT::Ticket-Role') AND LOWER(Type) = LOWER('Requestor');
 QUERY PLAN 
 
-
 Seq Scan on groups  (cost=0.00..12925.34 rows=1 width=66) (actual 
time=64.672..64.904 rows=1 loops=1)
   Filter: ((instance = 106431) AND (lower((domain)::text) = 
'rt::ticket-role'::text) AND (lower((type)::text) = 'requestor'::text))
 Total runtime: 64.936 ms
(3 rows)

Existing indexes:
CREATE UNIQUE INDEX Groups1 ON Groups (Domain,Instance,Type,id, Name);
CREATE INDEX Groups2 On Groups (Type, Instance, Domain);

Both indexes above are case-sensitive. Either the problematic query should be
case-sensitive (without lower()) or a new index with lower() applied to the
fields should be created:

CREATE INDEX Groups_zito1 ON Groups (lower(Type), Instance, lower(Domain));

rt=# EXPLAIN ANALYZE SELECT * FROM Groups WHERE Instance = 106431 AND 
LOWER(Domain) = LOWER('RT::Ticket-Role') AND LOWER(Type) = LOWER('Requestor');
   QUERY PLAN   
 
-
 Index Scan using groups_zito1 on groups  (cost=0.01..8.38 rows=1 width=66) 
(actual time=0.084..0.084 rows=1 loops=1)
   Index Cond: ((lower((type)::text) = 'requestor'::text) AND (instance = 
106431) AND (lower((domain)::text) = 'rt::ticket-role'::text))
 Total runtime: 0.108 ms
(3 rows)




SELECT main.* FROM ObjectCustomFieldValues main  WHERE (main.Disabled = '0') 
AND (main.ObjectType = 'RT::Ticket') AND (main.ObjectId = '106431')

rt=# EXPLAIN ANALYZE SELECT main.* FROM ObjectCustomFieldValues main  WHERE 
(main.Disabled = '0') AND (main.ObjectType = 'RT::Ticket') AND (main.ObjectId = 
'106431');
   QUERY PLAN   


 Seq Scan on objectcustomfieldvalues main  (cost=0.00..12298.75 rows=3 
width=457) (actual time=142.497..142.887 rows=3 loops=1)
   Filter: ((disabled = 0) AND ((objecttype)::text = 'RT::Ticket'::text) AND 
(objectid = 106431))
 Total runtime: 142.924 ms
(3 rows)

Existing indexes:
CREATE INDEX ObjectCustomFieldValues1 ON ObjectCustomFieldValues 
(CustomField,ObjectType,ObjectId,Content);
CREATE INDEX ObjectCustomFieldValues2 ON ObjectCustomFieldValues 
(CustomField,ObjectType,ObjectId);

The above query selects based on the fields ObjectType and ObjectId;
unfortunately, ObjectCustomFieldValues2 starts with the additional field
CustomField. To speed up the search, an index with good selectivity
containing only the field ObjectId can suffice.

CREATE INDEX ObjectCustomFieldValues_zito1 ON ObjectCustomFieldValues 
(ObjectId);

rt=# EXPLAIN ANALYZE SELECT main.* FROM ObjectCustomFieldValues main  WHERE 
(main.Disabled = '0') AND (main.ObjectType = 'RT::Ticket') AND (main.ObjectId = 
'106431');
  QUERY 
PLAN  
--
 Index Scan using objectcustomfieldvalues_zito1 on objectcustomfieldvalues main 
 (cost=0.00..8.58 rows=3 width=457) (actual time=0.059..0.063 rows=3 loops=1)
   Index Cond: (objectid = 106431)
   Filter: ((disabled = 0) AND ((objecttype)::text = 'RT::Ticket'::text))
 Total runtime: 0.090 ms
(4 rows)

or an index on the two corresponding fields:
CREATE INDEX ObjectCustomFieldValues_zito2 ON ObjectCustomFieldValues (ObjectType, ObjectId);
My planner always uses objectcustomfieldvalues_zito1, because I have
only one ObjectType value:

rt=# select distinct objecttype from objectcustomfieldvalues;
 objecttype 

 RT::Ticket
(1 

Re: [rt-users] Certificate based access instead of username/pw

2011-02-22 Thread Václav Ovsík
On Mon, Feb 21, 2011 at 09:24:38AM +0100, Adrian Stel wrote:
> Hi,
> I would like to change standard access to RT from username/pw to
> certificate authorization. Is there any simple way to do that? Or
> any additions to RT?

I'm testing a simple alternative to External Auth.
I have a Callback placed under
local/html/Callbacks/ICZAuth/autohandler/Auth

-snip-
%# vim:set sw=4 sts=4 ft=mason:
<%init>
use Crypt::OpenSSL::X509;

unless ( $session{'CurrentUser'} && $session{'CurrentUser'}->id() || !$ENV{'SSL_CLIENT_CERT'} )
{
    my $x509 = Crypt::OpenSSL::X509->new_from_string($ENV{'SSL_CLIENT_CERT'});
    my $email = lc($x509->email());

    $session{'CurrentUser'} = RT::CurrentUser->new();
    $session{'CurrentUser'}->LoadByEmail($email) if $email;
    $RT::Logger->debug("ICZAuth Email from X509 cert: $email");
}
</%init>
-snip-

You must install Crypt::OpenSSL::X509 (e.g. libcrypt-openssl-x509-perl in
Debian). The email address is extracted from the Subject Alternative Name
of the X.509 certificate, and the corresponding user is loaded based on
that email address.

The interesting part of the Apache configuration:

SSLVerifyClient require
SSLOptions +ExportCertData +StdEnvVars

Best Regards
-- 
Zito


Re: [rt-users] see others tickets

2010-11-04 Thread Václav Ovsík
On Thu, Nov 04, 2010 at 12:22:11AM +0100, Josef wrote:
 
> Thank you very much,
> I have copied these files into my
> /usr/share/request-tracker3.6/html/SelfService/, but I'm having this


I forgot to mention that the files from the tarball should be extracted into
the installation directory of RT 3.8.8. So you should end up with:

local/html/SelfService
local/html/SelfService/QueueTickets.html
local/html/SelfService/Elements
local/html/SelfService/Elements/Tickets
local/html/SelfService/Elements/Tabs

in the directory where you install RT version 3.8.8.

The reason for creating this SelfService extension was that we don't
want to give customers the full UI. We only want each customer to be able
to view the tickets for their company. We have groups of users with
permissions to view the corresponding queues.

This code should replace the old GroupService extension that existed for
RT 2.x, once we move to the latest RT.

I wrote the code using loc(), so full localisation is possible, and I did
it for the Czech language. The menu option "Queue tickets" is not shown
when the user has no permission to view tickets in any queue.

Can someone review the changes to include this into share/html upstream?




 * SelfService/Elements/Tabs
   modification of original file

--- share/html/SelfService/Elements/Tabs	2010-05-10 15:36:53.0 +0200
+++ local/html/SelfService/Elements/Tabs	2010-07-01 15:07:01.0 +0200
@@ -58,14 +58,21 @@
 my $queues = RT::Queues->new($session{'CurrentUser'});
 $queues->UnLimit;
 
-my $queue_count = 0;
-my $queue_id = 1;
+my $queue_create_count = 0;
+my $queue_create_id = 1;
+my $queue_show_count = 0;
+my $queue_show_id = 1;
 
 while (my $queue = $queues->Next) {
-  next unless $queue->CurrentUserHasRight('CreateTicket');
-  $queue_id = $queue->id;
-  $queue_count++;
-  last if ($queue_count > 1);
+   if ( $queue->CurrentUserHasRight('CreateTicket') ) {
+       $queue_create_id = $queue->id;
+       $queue_create_count++;
+   }
+   if ( $queue->CurrentUserHasRight('ShowTicket') ) {
+       $queue_show_id = $queue->id;
+       $queue_show_count++;
+   }
+   last if $queue_create_count > 1 && $queue_show_count > 1;
 }
 
 if ($Title) {
@@ -83,13 +90,19 @@
     },
 };
 
-if ($queue_count > 1) {
+if ($queue_show_count) {
+    $tabs->{B2} = { title => loc('Queue tickets'),
+                    path  => 'SelfService/QueueTickets.html'
+                  };
+}
+
+if ($queue_create_count > 1) {
     $tabs->{C} = { title => loc('New ticket'),
                    path  => 'SelfService/CreateTicketInQueue.html'
                  };
 } else {
     $tabs->{C} = { title => loc('New ticket'),
-                   path  => 'SelfService/Create.html?Queue=' . $queue_id
+                   path  => 'SelfService/Create.html?Queue=' . $queue_create_id
                  };
 }



 * SelfService/Elements/Tickets
   is a modification of SelfService/Elements/MyRequests

--- share/html/SelfService/Elements/MyRequests	2010-05-10 15:36:53.0 +0200
+++ local/html/SelfService/Elements/Tickets	2010-07-08 12:13:04.0 +0200
@@ -45,41 +45,65 @@
 %# those contributions and any derivatives thereof.
 %#
 %# END BPS TAGGED BLOCK }}}
+<%once>
+#my $Format = RT->Config->Get('DefaultSelfServiceSearchResultFormat');
+my $Format = qq{
+   '<B><A HREF="__WebPath__/SelfService/Display.html?id=__id__">__id__</a></B>/TITLE:#',
+   QueueName,
+   '<B><A HREF="__WebPath__/SelfService/Display.html?id=__id__">__Subject__</a></B>/TITLE:Subject',
+   Status,
+   Requestors,
+   OwnerName};
+</%once>
 <&| /Widgets/TitleBox, title => $title &>
 <& /Elements/CollectionList, Title   => $title,
                              Format  => $Format,
                              Query   => $Query,
-                             Order   => \@Order,
-                             OrderBy => \@OrderBy,
+                             Order   => [@Order],
+                             OrderBy => [@OrderBy],
                              BaseURL => $BaseURL,
-                             GenericQueryArgs => $GenericQueryArgs,
-                             AllowSorting => $AllowSorting,
-                             Class   => 'RT::Tickets',
-                             Rows    => $Rows,
-                             Page    => $Page
+                             AllowSorting  => $AllowSorting,
+                             Class         => 'RT::Tickets',
+                             Rows          => $Rows,
+                             Page          => $Page,
+                             queue         => [@queue],
+                             status        => [@status],
+                             PassArguments => [qw(Page Order OrderBy queue status)],
 &>
 </&>
 
 <%INIT>
 my $id = $session{'CurrentUser'}->id;
+
+@queue = grep($_, @queue);
+
+unless ( @queue ) {
+    my $queues = RT::Queues->new($session{'CurrentUser'});
+    $queues->UnLimit;
+    while (my $queue = $queues->Next) {
+        push @queue, $queue->id
+            if
Re: [rt-users] see others tickets

2010-11-04 Thread Václav Ovsík
On Thu, Nov 04, 2010 at 09:37:42AM +0100, Richard Pijnenburg wrote:
> Wouldn't it be easier if you could create a group per customer and let
> all the users in that group see the tickets from each other?
> Of course there are different approaches possible to the same issue :-)

I hope that is exactly what I did. :)
Users from the same customer group can view each other's tickets.
The SelfService with this modification can be used for it; there is no need
for the full UI. Users can stay unprivileged, and end users don't have to
cope with the more complex UI.

-- 
Zito


Re: [rt-users] see others tickets

2010-11-03 Thread Václav Ovsík
Hi,

On Wed, Nov 03, 2010 at 09:46:36PM +0100, Josef wrote:
 
> Yes, but only privileged users see this... I don't want end users
> to change my settings. An unprivileged user sees only their own requests,
> or is there a way to add a Newest Unowned dashboard for them?

I have modified the SelfService a bit to show queue tickets. Attached is
an archive. I believe this could be added upstream too.
Regards
-- 
Zito


selfservice-queue.tar.gz
Description: Binary data


[rt-users] PostgreSQL fulltext based on trigrams

2010-07-09 Thread Václav Ovsík
Hi,
for those not satisfied with PostgreSQL full-text search (no substring
support)...
http://wiki.bestpractical.com/view/PostgreSQLFullTextTrgm
Limitations:
 * minimal word length of 3 chars,
 * possible false matches under some conditions
Any comments appreciated
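The 3-char minimum follows directly from how trigrams are formed. A sketch using pg_trgm's helper function (assuming the pg_trgm extension is available; the wiki setup builds its trigrams by hand, but the principle is the same):

```sql
-- Words are padded with blanks before being split into 3-grams, so a
-- search term shorter than 3 chars yields too few distinctive trigrams
-- to match reliably.
SELECT show_trgm('cisco');
```

Each trigram of the search term must appear among the indexed trigrams, which is also the source of the occasional false matches.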
Regards
-- 
Zito

Discover RT's hidden secrets with RT Essentials from O'Reilly Media.
Buy a copy at http://rtbook.bestpractical.com


Re: [rt-users] case insensitive searching in Content?

2010-06-08 Thread Václav Ovsík
... 3rd try to mail ...

On Sun, Jun 06, 2010 at 02:34:58PM -0500, Kenneth Marshall wrote:
 Hi Zito,
 
 You can look in the mailing list archives for several similar discoveries on
 how MySQL works with content searches and the longblob data type. I would
 definitely recommend using PostgreSQL for its full-text indexing available
 in the wiki. Of course, I am partial... :)

Hi Ken,
thanks for the suggestion. I have already read the wiki page
http://wiki.bestpractical.com/view/PostgreSQLFullText.
I'm definitely going to try this route; patching DBIx::SearchBuilder
is a bit of a hack, but ILIKE is too slow on Pg.
Regards
-- 
Zito



Re: [rt-users] case insensitive searching in Content?

2010-06-08 Thread Václav Ovsík
Hi Ken,

On Tue, Jun 08, 2010 at 07:48:15AM -0500, Kenneth Marshall wrote:
 ILIKE is fine on PostgreSQL as long as the search is anchored on
 the left and there is an index. Otherwise it will do a sequential
 scan of the attachments table to find matching records. The wiki
 entries for full-text indexing with PostgreSQL and Oracle can avoid
 the sequential scan through the use of the database's full-text

thanks for your work on the PostgreSQLFullText wiki page! A good starting
point. I'm already playing with the fulltext on one instance. I must
discuss the possibility of using the fulltext on Pg with my leader. The
results are different from those I get from ILIKE. For example, I tried to
find 'cisco' :). A normal search returned 25 tickets, but fulltext only 15.
The fulltext parser made tokens from the URL, so www.cisco.com was kept as
a lexeme as-is (not broken down further). I can't say whether this is fine
or not; it is a difference. Users must know this, i.e. how to write the
query correctly.
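The tokenization difference is visible directly in the parser output. A sketch using the default english text search configuration:

```sql
-- The host name is recognized as a single 'host' token, so it stays
-- one lexeme ('www.cisco.com'); a query for 'cisco' will not match it,
-- which is why fulltext finds fewer tickets than ILIKE here.
SELECT to_tsvector('english', 'see www.cisco.com for cisco gear');
```

Only the standalone word 'cisco' produces the lexeme 'cisco'; the URL form does not.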

> index support. I do agree that patching the actual DBIx::SearchBuilder
> is not preferable. Unfortunately, since that is the API that is used
> to provide a database abstraction layer and it does not yet include
> the ability to specify full-text indexing details, you have to patch
> it. If it is any consolation, the patch is much, much simpler (trivial)
> for the PostgreSQL support versus the version that is needed for
> OracleText. :)

I completely understand. I put the package libdbix-searchbuilder-perl
(Debian) on hold in aptitude, so the sysadmin will remember that this
package needs special handling. I made a copy of the SearchBuilder.pm file
in rt-prefix/local/lib/DBIx/ and modified it there. This perl path is
searched first, so the modification lives beside the other RT code...

I have prepared a script for converting the data from MySQL to Pg and for
adding/removing fulltext support based on your wiki page. Maybe it can
be added to the wiki for someone else to play with fulltext and Pg. I must
test it a bit and turn some fixed parts into conditional blocks
controlled by command-line options.

Regards
-- 
Zito



[rt-users] case insensitive searching in Content?

2010-06-04 Thread Václav Ovsík
Hi,
I'm preparing a migration from RT 3.2.2 + MySQL 3.x to RT 3.8.8
+ MySQL or Pg.
I just discovered a problem with case-insensitive searching in the
column Attachments.Content (longblob) while playing with a new RT
instance.

Ticket SQL query
Content like 'outlook'
on our current (old) RT instance returns some result set. The column Content
has data-type longtext in the old database schema, so every letter-case
combination is found.

On the new testing platform, RT 3.8.8 + MySQL, only a partial set of tickets
is returned by the same query. I must modify the query to
Content like 'outlook' or Content like 'Outlook'
at least to see most of the tickets.

On the other hand, RT 3.8.8 + Pg has the Content column with data-type text,
so case-insensitive searching works, but there is another issue with
non-UTF-8 attachments encoded in quoted-printable; that is a minor problem.
The Pg database behaves better than MySQL in this respect.

Is there any intention to detach non-textual data from the Content
column into e.g. ContentB in the future?

Does anyone have any knowledge or suggestions about this?

Best Regards
-- 
Zito



Re: [rt-users] Migration from MySQL to PostgreSQL - Pg datatypes for blobs?

2010-05-18 Thread Václav Ovsík
On Mon, May 17, 2010 at 02:10:43PM +0200, Emmanuel Lacour wrote:
> Maybe there is a better way as of recent postgres?

FYI: fwd

-- 
Zito
---BeginMessage---
* Václav Ovsík:

> Is it possible that binding of binary data-type (bytea) without
> data-type specification would behave like in MySQL in some future
> versions of DBD::Pg? Binding binary values works correctly without type
> specification in DBD::mysql.

PostgreSQL's on-the-wire protocol has issues which make this very
difficult to implement, unfortunately.

---End Message---


Re: [rt-users] Migration from MySQL to PostgreSQL - Pg datatypes for blobs?

2010-05-17 Thread Václav Ovsík
On Sat, May 15, 2010 at 01:35:36PM +0200, Václav Ovsík wrote:
> Hi Emmanuel,
> thanks for your reply.
>
> On Sat, May 15, 2010 at 07:23:01AM +0200, Emmanuel Lacour wrote:
> > With Pg, this content is handled specifically, see _EncodeLOB in
> > lib/RT/Record.pm.
> >
> > (see also mysql2Pg page on RTwiki).
>
> I saw Base64 encoding in script on wiki page and also elsewhere, but
> considered that as workaround to store binary data into Pg text
> type-field.
>
> Are you sure, that _EncodeLOB is used only for Pg?

Oh yes, I was blind: RT uses DBIx::SearchBuilder. There is
a subroutine BinarySafeBLOBs defined in
DBIx::SearchBuilder::Handle::Pg.pm and it returns undef. That is, Pg is
considered not capable of handling binary data in a safe manner.

Every value must be converted using base64 :(.

Regards
-- 
Zito



Re: [rt-users] Migration from MySQL to PostgreSQL - Pg datatypes for blobs?

2010-05-17 Thread Václav Ovsík
On Mon, May 17, 2010 at 02:10:43PM +0200, Emmanuel Lacour wrote:
> On Mon, May 17, 2010 at 01:53:43PM +0200, Václav Ovsík wrote:
> >
> > Oh yes, I was blind that RT uses DBIx::SearchBuilder. There is
> > a subroutine BinarySafeBLOBs defined in
> > DBIx::SearchBuilder::Handle::Pg.pm and returns undef. That is - Pg is
> > considered not capable handling binary in safe manner.
> >
> > Every value must be converted using base64 :(.
> >
>
> yes :(
>
> Maybe there is a better way as of recent postgres?

PostgreSQL can handle blobs, but special handling is needed during
bind_param. Alternatively, quoting of the value may be used.
This is the output from the attached test script:


z...@bobek:~/pokusy/devel/perl/dbi$ ./pg-blob 

synthetized binary value:0504030201000102030405

 WITHOUT bind_param type spec...
data from database:0504030201

 WITH bind_param type spec...
data from database:0504030201000102030405

 WITH quoted value...
quoted value: E'\\005\\004\\003\\002\\001\\000\\001\\002\\003\\004\\005'
data from database:0504030201000102030405


> AW, I did a migration from mysql to postgres 8.3 with base64 conversion
> and everything is ok.

Fine. I'm now testing with base64 encoding added.

Regards
-- 
Zito
#!/usr/bin/env perl
use strict;
use DBI;
use DBD::Pg qw(PG_BYTEA);

my $dbh = DBI->connect('dbi:Pg:dbname=zito', undef, undef,
    {
        'RaiseError' => 1,
        'AutoCommit' => 1,
    }
);
$dbh->do('CREATE TEMP TABLE b ( b bytea )');
my $v = pack('H*', '0504030201000102030405');

print "\nsynthetized binary value:", unpack('H*', $v), "\n\n";

print " WITHOUT bind_param type spec...\n";
$dbh->do('INSERT INTO b VALUES (?)', undef, $v);
my ($a) = $dbh->selectrow_array('SELECT * FROM b');
print "data from database:", unpack('H*', $a), "\n";
print "\n";

$dbh->do('DELETE FROM b');

print " WITH bind_param type spec...\n";
my $sth = $dbh->prepare('INSERT INTO b VALUES (?)');
$sth->bind_param(1, $v, { pg_type => PG_BYTEA });
$sth->execute();
($a) = $dbh->selectrow_array('SELECT * FROM b');
print "data from database:", unpack('H*', $a), "\n";
print "\n";

$dbh->do('DELETE FROM b');

print " WITH quoted value...\n";
my $qv = $dbh->quote($v, { pg_type => PG_BYTEA });
print "quoted value: $qv\n";
$dbh->do("INSERT INTO b VALUES ($qv)");
($a) = $dbh->selectrow_array('SELECT * FROM b');
print "data from database:", unpack('H*', $a), "\n";
print "\n\n";

$dbh->disconnect();



Re: [rt-users] Migration from MySQL to PostgreSQL - Pg datatypes for blobs?

2010-05-15 Thread Václav Ovsík
Hi Emmanuel,
thanks for your reply.

On Sat, May 15, 2010 at 07:23:01AM +0200, Emmanuel Lacour wrote:
> With Pg, this content is handled specifically, see _EncodeLOB in
> lib/RT/Record.pm.
>
> (see also mysql2Pg page on RTwiki).

I saw the Base64 encoding in the script on the wiki page and also elsewhere,
but considered it a workaround to store binary data into a Pg text-type
field.

Are you sure that _EncodeLOB is used only for Pg?
Running grep in lib/RT (git 3.8-trunk):

z...@bobek:/data/soft/rt/rt/lib/RT$ find . -name I18N -prune -o -type f -print | xargs fgrep Pg
./Handle.pm:elsif ( $db_type eq 'Pg' ) {
./Handle.pm:elsif ( $db_type eq 'Pg' ) {
./Interface/Web/Session.pm:Pg => 'Apache::Session::Postgres',
./Interface/Web/Handler.pm:unless ( RT->Config->Get('DatabaseType') =~ /(?:mysql|Pg)/ ) {
./Report/Tickets.pm:elsif ( $db_type eq 'Pg' ) {
./Report/Tickets.pm:# Pg 8.3 requires explicit casting
./Report/Tickets.pm:$func .= '::text' if $db_type eq 'Pg';
./Installer.pm:  } qw/mysql Pg SQLite Oracle/
./Installer.pm:Pg = 'PostgreSQL',#loc
./Test.pm:# Pg doesn't like if you issue a DROP DATABASE while still connected

I'm not sure.

I think that a Pg cluster initialized to ASCII can handle binary data in
the text data-type, but one initialized to UTF-8 cannot. The correct
solution would be to change the types of the fields holding binary data to
the data-type bytea. Maybe I'm simply not yet understanding the whole thing.


> also, are you sure that your mysql DB is using utf-8? If not, you
> should first convert it, or convert on the fly in your script.

I hope so. National characters are displayed fine in the web interface
and in mails. The Content field causing the problem is of the MySQL
LONGBLOB type. It can contain binary data.

BTW: with the old data I did the following in a bash script:

make_database()
{
    local pref=$1; shift
    local db=$1; shift
    local user=$1; shift
    local pass=$1; shift

    mysql -e "CREATE DATABASE $db CHARACTER SET binary;"
    ssh z...@aslan.i.cz "cat $aslan_db_dir/$db.sql.gz" | zcat | mysql --default-character-set=binary $db
    mysql -e "GRANT ALL ON $db.* TO '$user'@'$RT_DB_HOST' IDENTIFIED BY '$pass';" $db
    export RT_DB_NAME=$db
    export RT_DB_USER=$user
    export RT_DB_PASSWORD=$pass

    mysql $db < unaccent.sql
    mysql $db < update-duplic-emailaddress.sql
    mysql $db < update-duplic-name.sql

    { echo 3.2.2; echo 3.7.87; echo y; } \
        | $RT_SETUP_DATABASE --datadir $SCHEMA_PATH --action=upgrade

    $UPGRADE_MYSQL_SCHEMA $db > /tmp/queries
    mysql $db < /tmp/queries

    { echo 3.7.87; echo; echo y; } \
        | $RT_SETUP_DATABASE --datadir $SCHEMA_PATH --action=upgrade

    mysql $db < emailaddress-testing.sql

    mig_$db $pref
}


Regards
-- 
Zito



[rt-users] Migration from MySQL to PostgreSQL - Pg datatypes for blobs?

2010-05-14 Thread Václav Ovsík
Dear Users and Developers,
I'm facing a migration of four instances from RT 3.2.2 to RT 3.8.8 with a
number of customizations.
The back-end database is MySQL, and I have prepared a script for loading
database dumps from the old MySQL 3.23.58 into 5.0.51 (Debian Lenny)
and then upgrading the RT schema (rt-setup-database).
A next script will migrate from MySQL to PostgreSQL.
The PostgreSQL database cluster is initialized with the cs_CZ.UTF-8 locale.
A problem arises with the column Attachments.Content, whose Pg type is text.

DBD::Pg::st execute failed: ERROR:  invalid byte sequence for encoding UTF8: 0xed2066
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by client_encoding...

I think the correct data-type should be bytea, but this data type has
somewhat strange behavior. Nevertheless, the data copy finishes successfully
after the data-type change:

alter table attachments drop column content;
alter table attachments add column content bytea;

The problem with bytea (at least in my script) is that the values need
extra handling. Binding values to parameters of type bytea must be done
according to the manual page of DBD::Pg... (RT already uses bytea in the
sessions table).

Other fields: ObjectCustomFieldValues.LargeContent, Attributes.Content?
Any suggestions?

Regards
-- 
Zito
#!/usr/bin/env perl
use strict;
use DBI;
use Data::Dumper;
use Getopt::Long;
use DBD::Pg qw(PG_BYTEA);


my ($verbose, $dryrun);

sub usage
{
my ($exitcode) = @_;
print STDERR <<EOF;
rt-mysql2pg [options]

Converts Request Tracker database from MySQL to PostgreSQL

Do a copy of every record in user tables from a source database to
a destination database. A destination schema must exist (same as the source
schema), so inserts can be done without error. Tables in the destination
database are deleted before doing a copy!

options:

--src-dsn dsn   perl DBI data source name (e.g. dbi:Pg:dbname=ca)
--src-user user perl DBI user name
--src-password pass perl DBI password

--dst-dsn dsn   perl DBI data source name (e.g. dbi:Pg:dbname=ca)
--dst-user user perl DBI user name
--dst-password pass perl DBI password

 -n --dry-run   dry run (no db modifications)

 -v --verbose   run verbosely (incremental)
 -h --help  help usage

EOF
exit($exitcode) if defined $exitcode;
}

sub user_tables
{
my ($dbh) = @_;
my $sth = $dbh->table_info(undef, undef, undef, 'TABLE');
$sth->execute();
my @user_tables;
while ( my $r = $sth->fetchrow_arrayref() )
{
my ($table_cat, $table_schem, $table_name, $table_type, $remarks) = @$r;
next unless $table_type eq 'TABLE';
next if $dbh->{Driver}->{Name} eq 'Pg' && $table_schem ne 'public';
push @user_tables, $table_name;
}
return \@user_tables;
}

sub user_seqs
{
my ($dbh) = @_;
return $dbh->selectcol_arrayref('SELECT sequence_name FROM information_schema.sequences');
}

sub column_info
{
my ($dbh, $table) = @_;
my $schema = $dbh->{Driver}->{Name} eq 'Pg' ? 'public' : undef;
my $sth = $dbh->column_info(undef, $schema, $table, undef);
$sth->execute();
my (@coln, %colt);
while ( my $r = $sth->fetchrow_arrayref() )
{
my ($table_cat, $table_schem, $table_name, $column_name, $data_type,
$type_name, $column_size, $buffer_length, $decimal_digits,
$num_prec_radix, $nullable, $remarks, $column_def,
$sql_data_type, $sql_datetime_sub, $char_octet_length,
$ordinal_position, $is_nullable, $type_name_and_size) = @$r;
push @coln, lc($column_name);
$colt{lc($column_name)} = {
'data_type' => $data_type,
'type_name' => lc($type_name),
'type_name_and_size' => lc($type_name_and_size),
};
}
return (\@coln, \%colt);
}

sub copy_table
{
my ($table, $dbh_src, $dbh_dst) = @_;
my ($scoln, $scolt) = column_info($dbh_src, $table);
my ($dcoln, $dcolt) = column_info($dbh_dst, lc($table));
my ($s_nrows) = $dbh_src->selectrow_array("SELECT count(*) FROM $table");
my $ncols = @$dcoln;
my $slist = join(',', sort keys %$scolt);
my $dlist = join(',', sort keys %$dcolt);
if ( $slist ne $dlist )
{
die qq|\nerror: columns of $table on source and destination differ!\n|
.qq|src: $slist\ndst: $dlist\n|;
}
my $chunked = $scolt->{'id'}{'type_name'} eq 'int' ? 1 : 0;
my ($id_min, $id_max) = (0, 1);
my ($id_step) = 100;
if ( $chunked )
{
($id_min, $id_max) = $dbh_src->selectrow_array("SELECT min(id), max(id) FROM $table");
}
my $c_sel = "SELECT " . join(',', @$dcoln) . " FROM $table";
$c_sel .= " WHERE id >= ? AND id <= ?" if $chunked;
$verbose < 2 || print "\n\texecuting on source:\n\t$c_sel\n";
my @colattr = map {
$dcolt->{$_}{'type_name'} eq 'bytea' ? { pg_type => PG_BYTEA } :
undef;