[GENERAL] Strange situation on slave server

2015-07-01 Thread Condor


Hello,

I have a master-slave hot standby replication setup. Both servers are Linux 
Slackware64-current with PostgreSQL 9.4.4.


Today when I logged in to check some things on the slave server, I saw in 
top that 26% of memory was taken. That was strange to me, so I restarted 
the server. When I checked again after 40 minutes, I saw the replication 
process slowly growing again: about 1.0% every 10 minutes.



postgres 14162  0.0  0.7 6514656 190684 ?  S11:50   0:00 
/usr/bin/postgres -D /var/lib/postgres/database -F
postgres 14163  0.4  5.4 6517856 1341268 ? Ss   11:50   0:26  \_ 
postgres: startup process   recovering 00000001000005E50000005E
postgres 14166  0.1  5.4 6517876 1335980 ? Ss   11:50   0:09  \_ 
postgres: checkpointer process
postgres 14168  0.0  0.2 6517732 53276 ?   Ss   11:50   0:01  \_ 
postgres: writer process
postgres 14180  0.0  0.0  29888  2864 ?Ss   11:50   0:01  \_ 
postgres: stats collector process
postgres 14183  0.2  0.0 6522488 5584 ?Ss   11:50   0:15  \_ 
postgres: wal receiver process   streaming 5E5/5EE36000



Today our load is about average, but the slave server receives one WAL 
file every minute. As you can see, the checkpointer and startup 
(recovering) processes are at 5.4% of memory, and by the time I had 
written this email and checked them again, they were at 6.0%:


postgres 14162  0.0  0.7 6514656 190684 ?  S11:50   0:00 
/usr/bin/postgres -D /var/lib/postgres/database -F
postgres 14163  0.4  6.0 6517856 1499868 ? Ss   11:50   0:30  \_ 
postgres: startup process   recovering 00000001000005E500000072
postgres 14166  0.1  6.0 6517876 1497064 ? Ss   11:50   0:10  \_ 
postgres: checkpointer process
postgres 14168  0.0  0.2 6517732 53276 ?   Ss   11:50   0:01  \_ 
postgres: writer process
postgres 14180  0.0  0.0  29888  2864 ?Ss   11:50   0:01  \_ 
postgres: stats collector process
postgres 14183  0.2  0.0 6522488 5584 ?Ss   11:50   0:18  \_ 
postgres: wal receiver process   streaming 5E5/72B49718



I have free memory:
Mem:  24634464k total, 14674532k used,  9959932k free,   324108k buffers
Swap:  6589196k total, 1872k used,  6587324k free, 11819140k cached
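
One way to tell whether this growth is really private memory or just 
shared_buffers pages being counted as the process touches them is to sum 
the shared vs. private mappings of the process. A minimal sketch, assuming 
Linux and the startup process PID from the ps output above:

# sum shared vs. private mapping sizes for the startup process (PID 14163)
grep -E '^(Shared|Private)' /proc/14163/smaps \
    | awk '{sums[$1] += $2} END {for (k in sums) print k, sums[k], "kB"}'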


I tried strace on the process:

# strace -p 14163

lseek(9, 0, SEEK_END)   = 381943808
lseek(23, 0, SEEK_END)  = 376832
lseek(5, 3563520, SEEK_SET) = 3563520
read(5, 
"~\320\5\0\1\0\0\0\0`6C\345\5\0\0\344\7\0\0\0\0\0\0\0\0\0\0\333I\f\0"..., 
8192) = 8192
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(9, 0, SEEK_END)   = 381943808
lseek(5, 3571712, SEEK_SET) = 3571712
read(5, 
"~\320\5\0\1\0\0\0\0\2006C\345\5\0\0^\6\0\0\0\0\0\0Ja.\0\20\0\220h"..., 
8192) = 8192
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(10, 0, SEEK_END)  = 186392576
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(11, 0, SEEK_END)  = 182566912
read(3, 0x7ffd2d7a60d0, 16) = -1 EAGAIN (Resource 
temporarily unavailable)
poll([{fd=3, events=POLLIN}], 1, 5000)  = ? ERESTART_RESTARTBLOCK 
(Interrupted by signal)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=14183, 
si_uid=1000} ---

write(4, "\0", 1)   = 1
rt_sigreturn({mask=[]}) = -1 EINTR (Interrupted system 
call)

read(3, "\0", 16)   = 1
read(6, 0x7ffd2d7a611f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(5, 3579904, SEEK_SET) = 3579904
read(5, 
"~\320\5\0\1\0\0\0\0\2406C\345\5\0\0\232\5\0\0\0\0\0\0\0\0\0\0*\231\1\0"..., 
8192) = 8192
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(12, 0, SEEK_END)  = 203612160
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(13, 0, SEEK_END)  = 331071488
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(14, 0, SEEK_END)  = 193331200
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(15, 0, SEEK_END)  = 271171584
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(16, 0, SEEK_END)  = 187580416
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(17, 0, SEEK_END)  = 193257472
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(18, 0, SEEK_END)  = 277381120
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(19, 0, SEEK_END)  = 199884800
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

lseek(20, 0, SEEK_END)  = 193396736
read(6, 0x7ffd2d7a672f, 1)  = -1 EAGAIN (Resource 
temporarily unavailable)

Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-26 Thread Condor

On 26-08-2015 10:13, Allan Kamau wrote:

On Wed, Aug 26, 2015 at 5:23 AM, rob stone 
wrote:


On Tue, 2015-08-25 at 20:17 -0400, Melvin Davidson wrote:

I think a lot of people here are missing the point. I was trying to give 
examples of natural keys, but a lot of people are taking great delight in 
pointing out exceptions to examples, rather than understanding the point.
So for the sake of argument, a natural key is something that in itself is 
unique and the possibility of a duplicate does not exist.
Before ANYONE continues to insist that a serial id column is good, 
consider the case where the number of tuples will exceed a bigint. Don't 
say it cannot happen, because it can.
However, if you have an alphanumeric field, let's say varchar 50, and 
it's guaranteed that it will never have a duplicate, then THAT is a 
natural primary key and beats the hell out of a generic "id" field.

Further to the point, since I started this thread, I am holding to it and 
will not discuss "natural primary keys" any further.

Other suggestions for good PostgreSQL Developer database (not web app) 
guidelines are still welcome.



Funny how Melvin's attempt to bring order to the chaos ended up as a
discussion about primary keys.

We once hired a "genius" to design an application to handle fixed
assets. Every table had a primary key named "id". Some were integer
and
some were character. So the foreign key columns in child tables had
to
be named differently. Writing the joins was complex.

I also know of an airline reservation system where you are unable to
alter your e-mail address. It apparently needs a DBA type person to
make the change. I can only guess that your e-mail address is used
as a
foreign key in one or more tables. As well as assigning you a
frequent
flyer number they also assign another integer identifier. A bit of
common sense goes a long way when designing an application.

Cheers,
rob

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general [1]


I am in favour of using BIGINT "id" for the primary key in each table
I create.
I found out that fields in my tables that I thought would be unique end 
up not being so in the longer term.
Also, these values may need to be updated for some reason.

I have been using PRIMARY KEY(id) where id is of type BIGINT on each
table I create.
I use a sequence to provide a default value to this field.
I create one such sequence DB object per table and then use it in the
table definition.
For example, I may have a sequence "some_schema.some_table_seq" for
table "some_schema.some_table".
In the table definition of "some_schema.some_table" I have the field
"id" as follows:

id BIGINT NOT NULL DEFAULT NEXTVAL('some_schema.some_table_seq')

When I use this "id" field as a foreign key in another table, I would
prefix it with the name of its parent table followed by a couple of
underscores as shown below.
FOREIGN KEY (some_table__id) REFERENCES some_schema.some_table(id)
ON UPDATE CASCADE ON DELETE CASCADE

For the composite keys that are unique (for now) I create a unique
constraint.

Allan.



I recall the words of my professor in the last lecture of our Databases 
course: he told us that this model of thinking, nomenclature, is wrong 
and not good, and we should avoid it at any cost if we can.



Cheers,
Hristo




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Condor

On 26-07-2016 21:04, Dorian Hoxha wrote:

Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/

On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:


Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.

https://eng.uber.com/mysql-migration/

Thoughts?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



They are right about upgrades.
It's hard to shut down a 1 TB database and wait a couple of days for 
pg_upgrade to finish while the database is offline.
In some distros, after an upgrade of the PG version you no longer have 
the old binaries and libraries, so you need to do a full dump and 
restore, which takes time and disk space.



Regards,
Hristo S.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-29 Thread Condor

On 29-07-2016 20:33, Jerry Sievers wrote:

Condor  writes:


On 26-07-2016 21:04, Dorian Hoxha wrote:


Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/

On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  
wrote:



Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.

https://eng.uber.com/mysql-migration/

Thoughts?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



They are right about upgrades.
It's hard to shut down a 1 TB database and wait a couple of days for
pg_upgrade to finish while the database is offline.
In some distros, after an upgrade of the PG version you no longer have
the old binaries and libraries, so you need to do a full dump and
restore, which takes time and disk space.


Yeah, very hard indeed when done by unskilled DBAs :-)

I've done several ~7TB pg_upgrades and with the hard link option and a
framework that parallelizes the post-analyzer phase...

...45 minutes till completion.
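
For reference, a minimal sketch of that hard-link approach; the binary 
and data paths, versions, and job count below are hypothetical 
placeholders:

# upgrade in place via hard links instead of copying data files
pg_upgrade \
    -b /usr/lib/postgresql/9.4/bin -B /usr/lib/postgresql/9.6/bin \
    -d /var/lib/postgres/9.4/data  -D /var/lib/postgres/9.6/data \
    --link --jobs=4
# then rebuild planner statistics in stages instead of one long ANALYZE
vacuumdb --all --analyze-in-stages --jobs=4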




Regards,
Hristo S.


GL to you


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ALTER TABLE without long waiting is possibly ?

2016-10-11 Thread Condor


Hello,

today I need to alter one of our biggest tables to add two new columns 
with a default value of 0.
Server version: PostgreSQL 9.5.4 on x86_64-slackware-linux-gnu, compiled 
by x86_64-slackware-linux-gcc (GCC) 5.3.0, 64-bit


when I do: ALTER TABLE stocks ADD COLUMN promo INTEGER DEFAULT 0;

a long wait follows, so I am trying to find a way to avoid it. I know it 
isn't possible to alter the table without locking it, but I'm wondering 
whether it will be faster if I do:

ALTER TABLE stocks ADD COLUMN promo INTEGER;
UPDATE stocks SET promo = 0;
ALTER TABLE stocks ALTER COLUMN promo SET DEFAULT 0;


Unfortunately I can't test on the production servers, so I'm looking for 
some advice, or for someone to point me in the right direction, on how I 
can alter the table today without clients noticing that their queries 
are locked and have to wait.



Regards,
Hristo S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ALTER TABLE without long waiting is possibly ?

2016-10-11 Thread Condor

On 11-10-2016 15:59, Vitaly Burovoy wrote:

On 10/11/16, Condor  wrote:


Hello,

today I need to alter one of our biggest tables to add two new columns
with a default value of 0.
Server version: PostgreSQL 9.5.4 on x86_64-slackware-linux-gnu, compiled
by x86_64-slackware-linux-gcc (GCC) 5.3.0, 64-bit

when I do: ALTER TABLE stocks ADD COLUMN promo INTEGER DEFAULT 0;

a long wait follows,


The doc[1] explicitly explains why:
"Adding a column with a DEFAULT clause or changing the type of an
existing column will require the entire table and its indexes to be
rewritten."


so I try to find a way to avoid that wait. I know it isn't possible to
alter the table without locking it, but I'm wondering whether it will be
faster if I do:

ALTER TABLE stocks ADD COLUMN promo INTEGER;
UPDATE stocks SET promo = 0;
ALTER TABLE stocks ALTER COLUMN promo SET DEFAULT 0;


You are close to the best solution, but you should use "SET DEFAULT"
before the update and split the "UPDATE" into several commands that
update smaller parts of the table at a time; otherwise you end up with a
full rewrite of the table at once, just as the original "ALTER TABLE"
does.
All rows which have been updated stay locked until the UPDATE commits,
so when your code tries to update or delete them, those commands wait
until the UPDATE completes.

Usual solution looks like this (change _pk_column_ to a column(s) name
mentioned in the primary key of the table):

ALTER TABLE stocks ADD COLUMN promo INTEGER;
ALTER TABLE stocks ALTER COLUMN promo SET DEFAULT 0;  -- set for
inserted columns
CREATE INDEX CONCURRENTLY set_default_idx_tmp
ON stocks(_pk_column_) WHERE promo IS NULL;
-- repeat the next command (five lines!) until it returns 0 affected
rows (you can choose different LIMIT value):
UPDATE stocks s SET promo = 0 FROM (
SELECT _pk_column_ FROM stocks
WHERE promo IS NULL ORDER BY _pk_column_
FOR UPDATE LIMIT 1
)t WHERE s._pk_column_=t._pk_column_;


The "ORDER BY" clause allows you to decrease chance to block current
transactions by the UPDATE which sets the default value.

If you have PG 9.5 and higher, add "SKIP LOCKED" just after the "FOR
UPDATE" clause.

If your table is big enough, you may run:
VACUUM VERBOSE stocks;
when 1/2 or 1/3 (and 2/3) of the table is done, to mark old tuples as
free space and reuse it for the new tuples generated by the next UPDATEs
(and prevent the table from bloating).

P.S.: then DROP INDEX CONCURRENTLY set_default_idx_tmp;

P.P.S.: If you have to add two columns you can update both of them by
one UPDATE:

ALTER TABLE stocks ADD COLUMN promo INTEGER;
ALTER TABLE stocks ADD COLUMN column2 INTEGER;
ALTER TABLE stocks ALTER COLUMN promo SET DEFAULT 0;  -- set for
inserted columns
ALTER TABLE stocks ALTER COLUMN column2 SET DEFAULT 65536;  -- whatever 
you need


CREATE INDEX CONCURRENTLY set_default_idx_tmp
ON stocks(_pk_column_) WHERE promo IS NULL AND column2 IS NULL;

-- repeat the next command (six lines!) until it returns 0 affected
rows (you can choose different LIMIT value):
UPDATE stocks s SET promo = DEFAULT, column2 = DEFAULT FROM (
SELECT _pk_column_ FROM stocks
WHERE promo IS NULL AND column2 IS NULL
ORDER BY _pk_column_
FOR UPDATE LIMIT 1
)t WHERE s._pk_column_=t._pk_column_;



Unfortunately I can't test on the production servers, so I'm looking for
advice, or for someone to point me in the right direction, on how I can
alter the table today without clients noticing that their queries are
locked and have to wait.


[1] 
https://www.postgresql.org/docs/current/static/sql-altertable.html#AEN75605


--
Best regards,
Vitaly Burovoy




Thanks,
something like that was going around in my mind; I just wasn't sure I 
was headed in the right direction.


Thanks again.

Hristo S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Dump all the indexes/constraints/roles

2016-10-18 Thread Condor

On 18-10-2016 06:32, Patrick B wrote:

Hi guys,

I need to export an entire database to another server, for testing
purposes.

Is there any way to export all indexes and constraints?
Postgres 9.2
Patrick



Hello,

pg_dump of the database is what you need. But if you are asking whether 
you can export the data plus the indexes as binary data, so that you can 
restore quickly without waiting for all the indexes to be created / 
rebuilt on the other server, the answer is: NO, you can't export them.



Regards,
Hristo S


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Unable to match same value in field.

2016-03-10 Thread Condor


Hello,

I'm using PostgreSQL 9.5.1 and I have a problem matching a value in one 
field. In both tables the field is text:


=# \d list_cards_tbl;

  Column   |  Type   |  Modifiers
---+-+--
 recid | integer | not null default 
nextval('list_cards_tbl_recid_seq'::regclass)

 imsi  | text|
Indexes:
"imsi_list_cards_tbl" btree (imsi)


=# \d list_newcard_tbl;
   Column   |  Type   |   Modifiers
+-+---
 recid  | integer | not null default 
nextval('list_newcard_tbl_recid_seq'::regclass)

 serial | text|
 imsi   | text|
Indexes:
"list_newcard_tbl_pkey" PRIMARY KEY, btree (recid)
"list_newcard_ser_idx" btree (serial)



=# select imsi, md5(imsi), bit_length(imsi) from list_newcards_tbl where 
imsi = '28411123315';

  imsi   |   md5| bit_length
-+--+
 28411123315 | b438e984c97483bb942eaaed5c0147f3 |120
(1 row)



So far so good; the value in table list_newcard_tbl is fine. The problem 
is in table list_cards_tbl:


=# select imsi from list_cards_tbl where imsi = '28411123315';
 imsi
--
(0 rows)

No value; let's change to LIKE:

=# select imsi, md5(imsi), bit_length(imsi) from list_cards_tbl where 
imsi like '28411123315%';

  imsi   |   md5| bit_length
-+--+
 28411123315 | b438e984c97483bb942eaaed5c0147f3 |120
(1 row)


Both have the same MD5 sum and the same bit length.

With EXPLAIN:

=# explain analyse select imsi from list_cards_tbl where imsi = 
'28411123315';

  QUERY PLAN
--
 Index Only Scan using imsi_list_card_tbl on list_cards_tbl  
(cost=0.28..4.30 rows=1 width=16) (actual time=0.021..0.021 rows=0 
loops=1)

   Index Cond: (imsi = '28411123315'::text)
   Heap Fetches: 0
 Planning time: 0.080 ms
 Execution time: 0.045 ms
(5 rows)

I see an index-only scan, so I do:

=# reindex table list_cards_tbl;
REINDEX
=# vacuum list_cards_tbl;
VACUUM
=# select imsi, md5(imsi), bit_length(imsi) from list_cards_tbl where 
imsi = '28411123315';

 imsi | md5 | bit_length
--+-+
(0 rows)


It still can't find the value.

Some settings:

 enable_bitmapscan   | on
 | Enables the planner's use of bitmap-scan plans.
 enable_hashagg  | on
 | Enables the planner's use of hashed aggregation 
plans.
 enable_hashjoin | on
 | Enables the planner's use of hash join plans.
 enable_indexonlyscan| on
 | Enables the planner's use of index-only-scan 
plans.
 enable_indexscan| on
 | Enables the planner's use of index-scan plans.
 enable_material | on
 | Enables the planner's use of materialization.
 enable_mergejoin| on
 | Enables the planner's use of merge join plans.
 enable_nestloop | on
 | Enables the planner's use of nested-loop join 
plans.
 enable_seqscan  | on
 | Enables the planner's use of sequential-scan 
plans.
 enable_sort | on
 | Enables the planner's use of explicit sort steps.
 enable_tidscan  | on
 | Enables the planner's use of TID scan plans.
 client_encoding | UTF8  
 | Sets the client's character set encoding.
 lc_collate  | bg_BG.utf8
 | Shows the collation order locale.
 lc_ctype| bg_BG.utf8
 | Shows the character classification and case 
conversion locale.
 lc_messages | bg_BG.utf8
 | Sets the language in which messages are 
displayed.
 lc_monetary | bg_BG.utf8
 | Sets the locale for formatting monetary amounts.
 lc_numeric  | bg_BG.utf8

Re: [GENERAL] Unable to match same value in field.

2016-03-11 Thread Condor

On 10-03-2016 15:37, Adrian Klaver wrote:

On 03/10/2016 01:09 AM, Condor wrote:


Hello,

I'm using PostgreSQL 9.5.1 and I have a problem matching a value in one
field. In both tables the field is text:

=# \d list_cards_tbl;

   Column   |  Type   |  Modifiers
---+-+--

  recid | integer | not null default
nextval('list_cards_tbl_recid_seq'::regclass)
  imsi  | text|
Indexes:
 "imsi_list_cards_tbl" btree (imsi)


=# \d list_newcard_tbl;
Column   |  Type   |   Modifiers
+-+---

  recid  | integer | not null default
nextval('list_newcard_tbl_recid_seq'::regclass)
  serial | text|
  imsi   | text|
Indexes:
 "list_newcard_tbl_pkey" PRIMARY KEY, btree (recid)
 "list_newcard_ser_idx" btree (serial)



=# select imsi, md5(imsi), bit_length(imsi) from list_newcards_tbl 
where

imsi = '28411123315';
   imsi   |   md5| bit_length
-+--+
  28411123315 | b438e984c97483bb942eaaed5c0147f3 |120
(1 row)



So far so good; the value in table list_newcard_tbl is fine. The problem
is in table list_cards_tbl:

=# select imsi from list_cards_tbl where imsi = '28411123315';
  imsi
--
(0 rows)

No value; let's change to LIKE:

=# select imsi, md5(imsi), bit_length(imsi) from list_cards_tbl where
imsi like '28411123315%';
   imsi   |   md5| bit_length
-+--+
  28411123315 | b438e984c97483bb942eaaed5c0147f3 |120
(1 row)


Both have the same MD5 sum and the same bit length.

With EXPLAIN:

=# explain analyse select imsi from list_cards_tbl where imsi =
'28411123315';
   QUERY 
PLAN

--

  Index Only Scan using imsi_list_card_tbl on list_cards_tbl
(cost=0.28..4.30 rows=1 width=16) (actual time=0.021..0.021 rows=0 
loops=1)

Index Cond: (imsi = '28411123315'::text)
Heap Fetches: 0
  Planning time: 0.080 ms
  Execution time: 0.045 ms
(5 rows)

I see an index-only scan, so I do:

=# reindex table list_cards_tbl;
REINDEX
=# vacuum list_cards_tbl;
VACUUM
=# select imsi, md5(imsi), bit_length(imsi) from list_cards_tbl where
imsi = '28411123315';
  imsi | md5 | bit_length
--+-+
(0 rows)


It still can't find the value.



So is the above the only value that is hidden?

What happens if, for a session, you do:

SET enable_indexonlyscan=OFF;

Basically a variation of Karsten's idea

Is the same process populating both tables?

Where is the data coming from?

Lastly, what happens if you populate the field in list_cards_tbl with
the data from list_newcards_tbl?


--
Adrian Klaver
adrian.kla...@aklaver.com



=# SET enable_indexonlyscan=OFF;
SET
=# select imsi, md5(imsi), bit_length(imsi) from list_cards_tbl where 
imsi = '28411123315';

 imsi | md5 | bit_length
--+-+
(0 rows)


=# explain analyse select imsi, md5(imsi), bit_length(imsi) from 
list_cards_tbl where imsi = '28411123315';

   QUERY PLAN
-
 Index Scan using imsi_list_cards_tbl on list_cards_tbl  
(cost=0.28..8.30 rows=1 width=16) (actual time=0.015..0.015 rows=0 
loops=1)

   Index Cond: (imsi = '28411123315'::text)
 Planning time: 0.106 ms
 Execution time: 0.040 ms
(4 rows)


Same result.

 =# SET enable_indexscan = off;
SET
=# select imsi, md5(imsi), bit_length(imsi) from list_cards_tbl where 
imsi = '28411123315';

 imsi | md5 | bit_length
--+-+
(0 rows)

=# explain analyse select imsi, md5(imsi), bit_length(imsi) from 
list_cards_tbl where imsi = '28411123315';

QUERY PLAN
--
 Bitmap Heap Scan on list_cards_tbl  (cost=4.29..8.31 rows=1 width=16) 
(actual time=0.016..0.016 rows=0 loops=1)

   Recheck Cond: (imsi = '28411123315'::text)
   ->  Bitmap Index Scan on imsi_list_cards_tbl  (cost=0.00..4.29 rows=1 
width=0) (actual time=0.015..0.015 rows=0 loops=1)

 Index Cond: (imsi = '28411123315'::text)
 Planning time: 0.109 ms
 Execution time: 0.046 ms
(6 rows)


Finally.


=# SET enable_bitmapscan = off;

[GENERAL] How to access array element in pgsql after array_agg

2014-06-17 Thread Condor


Hello,

I want to ask how I can access an array element in the result of array_agg.

I do: select array_agg(ids) from x;
The ids column is int, and the result is:

   array_agg
  -
{3843,2,3543,33}

I want to access one element, e.g. the first one, directly in the SQL query, like:

select array_agg(ids)[1] from x;

and receive the int value 2.

Any hints how I can do it ?

Also, regarding performance: is it good to do this in SQL, or should I do 
it in the language that I use? The query will return about 2-3 million 
rows.



Thank you,

Cheers,
Hristo S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to access array element in pgsql after array_agg

2014-06-17 Thread Condor

On 17-06-2014 22:44, François Beausoleil wrote:

Le 2014-06-17 à 14:22, Condor  a écrit :


I do select array_agg(ids) from x;
in ids I have int and result is :

  array_agg
 -
   {3843,2,3543,33}

I want to access one element or first one direct in sql query like:

select array_agg(ids)[1] from x;

and to receive int value 2

Any hints how I can do it ?


This works for me in 9.1:

psql (9.1.13)
Type "help" for help.

svanalytics=> select (array_agg(x))[1] from
(values(3843),(2),(3543),(33)) t1(x) ;
 array_agg
---
  3843

Note the use of the extra parens around the array_agg call. This is
probably a parser issue more than anything else.

And also for performance did that is good I to do it in SQL or I 
should do it in language that I use ? The result in query will return 
about 2, 3 million rows.


Do you mean you will have an array of 2, 3 million elements, or 2, 3
million rows with a couple dozen elements each? I’m not sure which
will be easier / faster. I routinely work with million element result
sets (rows) and have no issues.

Hope that helps!
François



Ah, extra parentheses, and it works. I did not expect
select (array_agg(ids))[1] from x;
to work, but it does. I had tried:
select array_agg(ids)[1] from x;


Thank you


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Strange result using pg_dump gzip or split.

2014-11-10 Thread Condor


Hello,

I found a strange result when using pg_dump as described on the 
PostgreSQL site: http://www.postgresql.org/docs/9.3/static/backup-dump.html


I have a database with 30 GB of data and decided to archive it. 
PostgreSQL is 9.3.5 x86_64, ext4 file system, kernel 3.14.18, Slackware 
14.2 (current).



First I used gzip: pg_dump logdb | gzip > log.sql.gz

After a few minutes I have log.sql.gz with size 2 170 016 226.
Well, that seemed strange, so I dumped the database again with:

pg_dump logdb | split -b 1024m - log.sql

20 files are generated, and I zip them with:

zip -r log.sql.zip logdir (because I moved them into logdir)

file size is : 2 170 020 867

Almost the same, but if I check the sizes inside the archives there is a 
huge difference.



$ gzip -l log.sql.gz
 compresseduncompressed  ratio uncompressed_name
 2170016226  3060688725  29.1% log_to.sql

and


$ unzip -v log.sql.zip
*** snip ***
  ---  ------
20240557909 2170020867  89%20 files


Here is the difference: with gzip I have a 29.1% compression ratio and an 
uncompressed size of 3 060 688 725, which means 3 GB, while with zip I 
have an 89% compression ratio and an uncompressed size of 20 240 557 909, 
which means 20 GB. That is 7 times bigger.
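
A note on the arithmetic: gzip's trailer stores the uncompressed size in 
a 32-bit field, so gzip -l can only report the size modulo 4 GiB, while 
zip records the full sizes. The numbers above match that exactly:

# the real ~20 GB size modulo 2^32 gives exactly what gzip -l reported
echo $(( 20240557909 % 4294967296 ))    # prints 3060688725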


My question is: are there some special config parameters that are not 
described in the documentation here: 
http://www.postgresql.org/docs/9.3/static/backup-dump.html
Or does something need to be configured on my Linux?

And the most important question for me: is the database dump corrupt or 
not?




Regards,

Hristo Simeonov


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux Distribution Preferences?

2013-01-13 Thread Condor

On 2013-01-14 00:44, Gavin Flower wrote:

On 14/01/13 07:27, Shaun Thomas wrote:


Hey guys,

I'm not sure the last time I saw this discussion, but I was somewhat 
curious: what would be your ideal Linux distribution for a nice solid 
PostgreSQL installation? We've kinda bounced back and forth between 
RHEL, CentOS, and Ubuntu LTS, so I was wondering what everyone else 
thought.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ [1] for terms and 
conditions related to this email

 I would tend use Fedora for development, but would consider CentOS
(or RHEL, if we had the budget) for production - I avoid Ubuntu like
the plague.

 Cheers,
 Gavin


Links:
--
[1] http://www.peak6.com/email_disclaimer/



I use Slackware and for me it's the perfect one. A saying keeps going 
around in my mind: there is no good or bad Linux; there is only the one 
that you know and can work with.


Cheers,
Hristo


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem with records that disappear.

2013-01-14 Thread Condor

Hello,

for some time I have had a very strange problem with my PostgreSQL 9.2.2 
64-bit.

I make a few changes with a PL/pgSQL function:

BEGIN
  UPDATE table SET X = X where id = aid;
  UPDATE table_2 SET Y=Y where id = aid;
  IF aid > 0 THEN
SELECT INTO ids id FROM table_3 WHERE x = x;
IF aid IS NULL THEN
  INSERT INTO table_3 (id) VALUES (x);
ELSE
  UPDATE table_3 SET id = id + 1 WHERE x = X;
END IF;
  END IF;
  RETURN 200;
END;


When I call the function from PHP everything seems to work, but sometimes 
the modified records just disappear. For the moment I have no idea 
whether the problem is in my PostgreSQL or in my PHP code. I'm sure the 
record was made.

In the middle of the night I run VACUUM, REINDEX on the tables, and 
VACUUM ANALYZE. I checked my PostgreSQL logs, but there are no errors. I 
checked the PHP logs, no errors either. My VACUUM also did not log any 
error messages. This problem happens once or twice per month and I can't 
track it down; it does not happen very often. My question is: is there 
any chance this function stays open, or something like that, and that 
the changes are rolled back at night when the vacuum starts?

Also any other suggestions are welcome.

Cheers,
Hristo C.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with records that disappear.

2013-01-14 Thread Condor

On 2013-01-14 10:53, Vlad Arkhipov wrote:

On 01/14/2013 05:15 PM, Condor wrote:


Hello,

from some time I have a very strange problem with my postgresql 
9.2.2 64bit.

I make a few changes with an plp function:

BEGIN
  UPDATE table SET X = X where id = aid;
  UPDATE table_2 SET Y=Y where id = aid;
  IF aid > 0 THEN
    SELECT INTO ids id FROM table_3 WHERE x = x;
    IF aid IS NULL THEN
  INSERT INTO table_3 (id) VALUES (x);
    ELSE
  UPDATE table_3 SET id = id + 1 WHERE x = X;
    END IF;
  END IF;
  RETURN 200;
END;



This problem happened one or two times per month


 It's very likely that the main problem with your code is a race
condition inside the IF clause. Even if the "IF aid IS NULL" condition
is false, somebody can delete the record before the UPDATE of table_3.
There is an example of what you are trying to do in the documentation:

http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
[1].

Links:
--
[1]

http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE



Is it possible for that to happen without any errors?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with records that disappear.

2013-01-14 Thread Condor

On 2013-01-14 10:45, John R Pierce wrote:

On 1/14/2013 12:15 AM, Condor wrote:


When I call the function from php everything it's seems to work,
but some time modify records just disappear. I don't have any ideas 
for the moment
what is the problem in my postgresql or in my php code. I'm sure 
there was a recording was made.

I run in middle night vacuum, reindex on tables and vacuum analyze;
I check my postgresql logs, but no any errors. I check the php logs, 
but no errors.
My vacuum also did not log any error messages. This problem happened 
one or two times per month
and I can't track him. Is not happened very often. My question is: 
Is there any chance this function
to stay opened or some thing like that and on the night when vacuum 
is started to rollback changes ?

Also any other suggestions are welcome.


are you calling these functions within the context of a larger
transaction, or just as standalone statements without an epxlicit
BEGIN TRANSACTION ?if they are being called from within a
transaction, and something else in that transaction triggers a
rollback, then ALL of the changes made in that transaction go away.
once a transaction is committed, nothing can undo it, other than
restoring a backup or changing the data explicitly in another
transaction, or something.


They are standalone statements without BEGIN TRANSACTION. A disappearing 
update would be easy to explain by another update, but an insert with a 
missing row? There is no DELETE command in the whole code, and no one 
else has access to the server. There are no error logs; the access log 
only recorded that the user made the changes (with no info about what 
the changes were).





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with records that disappear.

2013-01-14 Thread Condor

On 2013-01-14 11:41, John R Pierce wrote:

On 1/14/2013 1:34 AM, Condor wrote:
It's a standalone statements without BEGIN TRANSACTION. An update is 
easy to explain with another
but insert with missing row and there is no DELETE command in whole 
code and no one other have access to server.
No any error logs, only access log system insert that user make the 
changes (no info what changes).



as I see it, that INSERT can't ever happen.  if AID is NULL, then the
first IF AID > 0 will be FALSE, so the 2nd IF AID IS NULL will never
get evaluated.   the two UPDATE's up front will not happen either if
AID is null.

and, what is SET X=X about?!?   that's a big noop anyways.

is this 'redacted' code that's been sanitized?   the more I look at
it, the more I'm cringing.   WHERE x=x on that SELECT INTO ids will
return the whole table, since X = X is always true unless X is NULL.



BEGIN
  UPDATE table SET X = X where id = aid;
  UPDATE table_2 SET Y=Y where id = aid;
  IF aid > 0 THEN
SELECT INTO ids id FROM table_3 WHERE x = x;
IF aid IS NULL THEN
  INSERT INTO table_3 (id) VALUES (x);
ELSE
  UPDATE table_3 SET id = id + 1 WHERE x = X;
END IF;
  END IF;
  RETURN 200;
END;



Everything after the IF aid > 0 THEN is logging/statistics information 
and is not important.

The first two updates are the important ones; they actually make the 
user's changes:
UPDATE table SET X = X where id = aid;
UPDATE table_2 SET Y = Y where id = aid;
Everything after them is just statistics and is not important.
I did change one line while sanitizing: IF aid IS NULL THEN should be 
IF ids IS NULL THEN,

meaning: if there is no record for ids in table_3 where x = X.

My mistake.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need advice to avoid ORDER BY

2013-04-04 Thread Condor

Hello,

I have one query on my PostgreSQL 9.2.3 that takes 137 ms to execute, and 
I am looking for a way to optimize it. I have one table with numbers 
generated from 1 to 1 000 000, and I need to get the first free id; 
meanwhile, taken ids can become free again (the data is deleted and the 
id is free for the next job). The table is simple:


id serial,
jobid text,
valids int default 0

(Yes, I have an index.)


my query is: SELECT jobid FROM mytable WHERE valids = 0 ORDER BY id ASC 
LIMIT 1


I need only the first id.

My question is: is there a way to avoid using ORDER BY and still get the 
first free id from mytable?



Cheers,
Condor



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need advice to avoid ORDER BY

2013-04-04 Thread Condor

On 2013-04-05 00:38, Merlin Moncure wrote:

On Thu, Apr 4, 2013 at 4:32 PM, Condor  wrote:

Hello,

I have one query on my PostgreSQL 9.2.3 that takes 137 ms to execute,
and I am looking for a way to optimize it. I have one table with numbers
generated from 1 to 1 000 000, and I need to get the first free id;
meanwhile, taken ids can become free again (the data is deleted and the
id is free for the next job). The table is simple:


id serial,
jobid text,
valids int default 0

(Yes, I have an index.)


my query is: SELECT jobid FROM mytable WHERE valids = 0 ORDER BY id ASC
LIMIT 1

I need only the first id.

My question is: is there a way to avoid using ORDER BY and still get the
first free id from mytable?


well, you can (via EXISTS()), but you can really optimize this with
partial index.

CREATE INDEX ON mytable (id) WHERE valids = 0;

then,

 SELECT jobid FROM mytable WHERE valids = 0 ORDER BY id ASC LIMIT 1;

should return in zero time since btree indexes can optimize order by
expressions and the partial index will bypass having to wade through
the rows you don't want.

merlin



Hm,
I can only say: thank you!
Your solution works, but I'm now a little confused. I had an index
CREATE INDEX ON mytable USING BTREE (valids), and the
query to find valids = 0 took 137 ms.

Why does your solution work? Yes, it really works.


Cheers,
Condor


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need advice to avoid ORDER BY

2013-04-05 Thread Condor

On 2013-04-05 01:54, Merlin Moncure wrote:

On Thu, Apr 4, 2013 at 5:15 PM, Tom Lane  wrote:

Merlin Moncure  writes:

problem is that you are looking for needles (valids = 0) in the
haystack.   the problem wasn't really the order, but the fact that 
you

had to scan an arbitrary amount of rows before finding a candidate
record.  so the partial index manages this problem by creating index
entries *only for records that match a criteria*, and the planner
recognizes this and prefers that index when the criteria is also
present in the query.  In other words, index only the needles.


The other way to fix it is a two-column index on (valids, id), which
will be more useful if sometimes you need the minimum/maximum id
for some nonzero value of valids.
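
(For reference, a minimal sketch of that two-column variant, reusing the
table and column names from the original post:)

psql -c "CREATE INDEX mytable_valids_id_idx ON mytable (valids, id);"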


right -- that's a more general solution -- here we are exploiting that
A: the OP only needs access to "=0" rows and especially B: "=0" rows
are a tiny fraction of the overall set (we know this because otherwise
the query would have returned quickly anyways).  So we get to squeak
out with a tiny index pointing to only the candidate rows.

Partial indexes are an underutilized trick -- the efficiency savings
can be enormous.  They are often useful when coding ad hoc queue
operations in the database where the queued items are intermixed with
items that have been resolved.

merlin



Thank you, everyone, for the suggestions. I'll try to make the
changes tomorrow night and see what happens.


Cheers,
Condor


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to convert US date format to European date format ?

2013-04-10 Thread Condor

Hello ppl,

I have a database where the previous owner used the US date format in 
the date fields:

2009-02-18

Is there a way to convert the fields to the European format, 18-02-2009?
I mean the existing dates in the records. What will happen if I change 
the format in postgresql.conf?


Cheers,
Hristo S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert US date format to European date format ?

2013-04-12 Thread Condor

On 2013-04-10 22:35, Thomas Kellerer wrote:

John R Pierce wrote on 10.04.2013 21:28:

On 4/10/2013 6:15 AM, Thomas Kellerer wrote:

psql (one of the possible client applications) uses the "datestyle"
parameter to decide on how to format a date column when displaying
it.

If you change the "datestyle" parameter in postgresql.conf, it will
influence the way psql displays the date values. Probably pgAdmin
will also check that setting (as I don't use pgAdmin I can't really
tell).


PSQL doesn't use that, postgres itself does.   it can be set on the
fly with SET on a per-connection basis, or with ALTER DATABASE on a
per-database basis.



But the *display* is done by the client.
And if Postgres (the server) did the conversion, I would not be able
to see a different date formatting in e.g. a JDBC based tool. So I
guess psql is reading that database/server setting.



Hello again,
what parameter should I use to have dates in the format dd-mm-yyyy?
I tried to use 'Postgres, DMY' and it seems to work, but not in my case, 
because I also have a field:

last_date timestamp without time zone default 
('now'::text)::timestamp(6) with time zone


With 'ISO, DMY' it shows me 2012-10-15 11:00:49.397908; if I use 
'Postgres, DMY' it shows me:

Mon 15 Oct 11:00:49.397908 2012

But I want it formatted as: 11:00:49 15-10-2012

Is it possible to do this?

Cheers,
Hristo S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert US date format to European date format ?

2013-04-12 Thread Condor

On 2013-04-12 10:59, John R Pierce wrote:

On 4/12/2013 12:42 AM, Condor wrote:
and ISO, DMY show me: 2012-10-15 11:00:49.397908 if I use Postgres, 
DMY show me

Mon 15 Oct 11:00:49.397908 2012

But I want to be formatted: 11:00:49 15-10-2012


use the date formatting functions, like...

select to_char(yourfield, 'HH:MI:SS DD-MM-YYYY') ...

see 
http://www.postgresql.org/docs/current/static/functions-formatting.html
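
Applied to the last_date field from the earlier message, a minimal 
sketch; "mytable" is a hypothetical table name and HH24 is assumed since 
a 24-hour clock seems to be wanted:

psql -c "SELECT to_char(last_date, 'HH24:MI:SS DD-MM-YYYY') FROM mytable LIMIT 1;"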





--
john r pierce  37N 122W
somewhere on the middle of the left coast


Yes, I saw this function, but if I need to select 100 000 rows, that 
means, I think, that the function will be run 100 000 times. What I 
meant when I asked the question was: is it possible to format the date 
the way I like without using functions, just something like:
set datestyle ('postgres with my custom format 00:00:00 dmy', DMY)
or something like that. Sorry that I did not explain it well.


Cheers,
Hristo S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to prevent clear screen when query finish ?

2013-08-07 Thread Condor

Hello,

sorry for the dumb question, but can anyone tell me how the hell I can 
stop the screen being cleared after an SQL query finishes in the 
console? There is probably some idea behind this, but to me it looks 
like a very... not good idea. When I run a query from the console like:

SELECT * FROM table_x;

I get the result ... with (END), and when I press the q key to quit, the 
result disappears and I can't scroll back to it. If I need to check 
something again (switch console, do something, get one line and need to 
scroll back) and want to see my old result again, I have to run the 
query again... I never had this problem on Slackware, but today one of 
my partners gave me a shell on his Ubuntu server to fix something in the 
database.



Cheers,
Hristo S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to prevent clear screen when query finish ?

2013-08-08 Thread Condor

On 2013-08-07 19:01, Adrian Klaver wrote:

On 08/07/2013 08:53 AM, Condor wrote:

Hello,

sorry for the dumb question, but can anyone tell me how the hell I can
stop the screen being cleared after an SQL query finishes in the
console? There is probably some idea behind this, but to me it looks
like a very... not good idea. When I run a query from the console like:

SELECT * FROM table_x;

I get the result ... with (END), and when I press the q key to quit, the
result disappears and I can't scroll back to it. If I need to check
something again (switch console, do something, get one line and need to
scroll back) and want to see my old result again, I have to run the
query again... I never had this problem on Slackware, but today one of
my partners gave me a shell on his Ubuntu server to fix something in the
database.




The pager is probably not set, see here for more info:

http://www.postgresql.org/docs/9.2/interactive/app-psql.html

pager
Controls use of a pager program for query and psql help output. If the
environment variable PAGER is set, the output is piped to the
specified program. Otherwise a platform-dependent default (such as
more) is used.

When the pager option is off, the pager program is not used. When the
pager option is on, the pager is used when appropriate, i.e., when the
output is to a terminal and will not fit on the screen. The pager
option can also be set to always, which causes the pager to be used
for all terminal output regardless of whether it fits on the screen.
\pset pager without a value toggles pager use on and off.
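
In practice, a minimal sketch of both routes (assuming a bash-like shell):

# start psql with the pager disabled for this session only
PAGER=cat psql -U postgres my_db
# or, inside an already-running psql session, toggle it off with:
#     \pset pager off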




Thank you.
Last question: how can I find out where this environment variable is set?
Because:

# env
TERM=xterm
SHELL=/bin/bash
SSH_CLIENT=192.68.1.111 52614 22
SSH_TTY=/dev/pts/2
USER=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;
35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
PWD=/root
LANG=en_US.UTF-8
PS1=\h:\w\$
SHLVL=1
HOME=/root
LANGUAGE=en_US
LS_OPTIONS=--color=auto
LOGNAME=root
SSH_CONNECTION=192.68.1.111 52614 192.68.1.121 22
HISTTIMEFORMAT=[%Y-%m-%d %T]
_=/usr/bin/env
OLDPWD=/root


and when I enter to db:

my_db=# \set
AUTOCOMMIT = 'on'
PROMPT1 = '%/%R%# '
PROMPT2 = '%/%R%# '
PROMPT3 = '>> '
VERBOSITY = 'default'
VERSION = 'PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc 
(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit'

DBNAME = 'my_db'
USER = 'postgres'
HOST = '127.0.0.1'
PORT = '5432'
ENCODING = 'UTF8'
my_db=#


I can't see this variable PAGER, but yes, \pset pager works for the 
connection.


Cheers,
Hristo S.


--
Adrian Klaver
adrian.kla...@gmail.com



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Copy database to another host without data from specific tables

2017-03-06 Thread Condor

On 07-03-2017 09:02, Panagiotis Atmatzidis wrote:

Hello,

I have 2 RDS instances on AWS running PSQL 9.4.7.

I want to make a clone of database1 which belongs to user1, to
database2 which belongs to user2. Database1 has 20+ tables. I want to
avoid copying the DATA sitting on 5 tables on database1 (many Gigs).

I've read one too many posts about how to perform the actions with
"pg_dump" and "pg_restore" but I'm trying to figure out the easiest
way to do this. The process I have in mind is this:

1) pg_dump the schema from DB1 to DB2 using --no-owner and pg_restore
with --role=user2
2) pg_dump -Fc --no-owner --data-only -t 'table1' from DB1 and then
restore with pg_restore -t 'table' --role=user2  to DB2

This procedure though is very time consuming (although it could be
scripted). Is there any better / faster / safer way to do this?

Thanks.

--
Panagiotis (atmosx) Atmatzidis

email:  a...@convalesco.org
URL:http://www.convalesco.org
GnuPG ID: 0x1A7BFEC5
gpg --keyserver pgp.mit.edu --recv-keys 1A7BFEC5

"Everyone thinks of changing the world, but no one thinks of changing
himself.” - Leo Tolstoy



Hello,

I do it with a shell script; here it is:

#!/bin/sh

for table in a_tbl ab_tbl some_other_tbl
do
echo $table
psql -U data -h 192.168.1.152 second_db -c "TRUNCATE $table;"
/usr/bin/pg_dump -U postgres --no-tablespaces --no-owner -a -b 
-t $table first_db | psql -U data -h 192.168.1.152 second_db

done


# the line below removes some data that should not be in the backup.
psql -U data -h 192.168.1.152 second_db -c 'UPDATE a_tbl SET upss = 
DEFAULT;'




My servers are on a local network. Of course, for this approach you need 
to create the same table structure on second_db as on the master. 
Because I use it for backup only, I don't have indexes on second_db, and 
the process is fast.



Regards,
Hristo S


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Postgres Data Encryption Using LUKS with dm-crypt ?

2017-06-18 Thread Condor

Hello ppl,

a few years ago I asked the same question but did not receive useful answers, and we used a different approach to realize the project.
Today I want to ask whether someone has done it and, most important for me, whether someone can share their experience?

What should I expect, and what are the good and bad things that can happen?

I'm thinking problems can occur if the server is restarted and the data is not synced, but the RAID cache battery is there for that.
Also, if the hard drive needs to be checked for bad sectors or broken indexes / files on the filesystem, what will happen to the data?
Because PostgreSQL does not support data-level encryption, I want to realize it with third-party tools.
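
For context, the basic dm-crypt/LUKS setup in question looks roughly like this (a sketch only; /dev/sdb1 and the mount point are hypothetical and must match your system):

    cryptsetup luksFormat /dev/sdb1           # initialize the encrypted partition
    cryptsetup luksOpen /dev/sdb1 pgdata      # unlock it; prompts for the passphrase
    mkfs.ext4 /dev/mapper/pgdata              # create a filesystem on the mapping
    mount /dev/mapper/pgdata /var/lib/postgres
    # then initdb / move the cluster there and start PostgreSQL as usual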



Regards,
Hristo S




Re: [GENERAL] PG and database encryption

2017-08-22 Thread Condor

On 22-08-2017 22:48, rakeshkumar464 wrote:
We have a requirement to encrypt the entire database. What is the best tool to accomplish this? Our primary goal is that it should be transparent to the application, with no change in the application, as compared to an un-encrypted database. Reading about the pgcrypto module, it seems it is good for a few columns only, and using it to encrypt the entire database is not a good use-case.

Is this best done by file-level encryption? What are the good tools on Linux (RHEL), preferably open-source?

Thanks






Hello,
I was also interested in this topic. My research led me to two problems that can happen if you encrypt the hard drive partition where the PostgreSQL data is stored. PostgreSQL does not support encryption the way Oracle or MSSQL do.

The problems I mentioned are two, and both are connected with the filesystem:

1. If something happens to your filesystem and the encrypted drive needs to be force-checked, that can damage your files.
2. If LUKS is used and a problem happens (a bad sector, a cluster problem) exactly where the LUKS encryption header is stored, you cannot mount your encrypted partition, and if you do not have the experience to know what to do, your data is lost forever.
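
For what it's worth, the header risk in point 2 can be reduced with a LUKS header backup (a sketch; /dev/sdb1 is a hypothetical device name):

    # Keep a copy of the LUKS header off the server; if the on-disk header
    # is damaged by a bad sector, it can be restored from this file.
    cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file /safe/sdb1-header.img
    cryptsetup luksHeaderRestore /dev/sdb1 --header-backup-file /safe/sdb1-header.img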

My data is too important, and because I don't have much time for more research, I decided not to use encryption.
I think there is also hardware for this, named TDS or maybe IDS, but it may be deprecated; I'm not sure.


If you realize encryption somehow, drop us (or me) an email with the information.



Regards,
Hristo S.




[GENERAL] How to check streaming replication status

2017-08-31 Thread Condor


Hello,

I have a question about master - slave replication.
My version on both servers is : PostgreSQL 9.6.4 on 
x86_64-slackware-linux-gnu, compiled by x86_64-slackware-linux-gcc (GCC) 
7.2.0, 64-bit


Here is the story:

Today I created a tablespace and moved all indexes to NVMe drives. So far so good.


The master server is configured for replication and started sending WAL files to the slave server; after an hour I got this error message on the slave server:

LOG:  restored log file "0001008B00DC" from archive
LOG:  restored log file "0001008B00DD" from archive
cp: can get attribute '/archive/0001008B00DE': No such file 
or directory

LOG:  started streaming WAL from primary at 8B/DD00 on timeline 1



The question that came to my mind: is my slave up to date?

I read https://wiki.postgresql.org/wiki/Streaming_Replication and know I can check the status with:


$ psql -c "select pg_last_xlog_receive_location()" -h192.168.0.20 
(standby host)


and after I did it, I got:

STATEMENT:  SELECT pg_current_xlog_location()
ERROR:  recovery is in progress
HINT:  WAL control functions cannot be executed during recovery.

My question is: how can I check the replication status when the slave does not accept connections?
I know that if there are certain differences in the configurations the slave will not accept connections, but in my case the slave has different hardware, so it is normal to have differences in the config files.



Regards,
Hristo S




Re: [GENERAL] How to check streaming replication status

2017-08-31 Thread Condor

On 31-08-2017 11:24, Glyn Astill wrote:

From: Condor 
To: "pgsql-general@postgresql.org" 
Sent: Thursday, 31 August 2017, 08:36:19 GMT+1

after a hour I get error message on slave server:

LOG:  restored log file "0001008B00DC" from archive
LOG:  restored log file "0001008B00DD" from archive
cp: can get attribute '/archive/0001008B00DE': No such

file or directory

LOG:  started streaming WAL from primary at 8B/DD00 on timeline

1

So it read all the log from the archive and then started streaming; if there are no further messages, you're OK.

...


and after I did it, got:

STATEMENT:  SELECT pg_current_xlog_location()
ERROR:  recovery is in progress
HINT:  WAL control functions cannot be executed during recovery.

My question is: How can I check the replication status when the slave does not accept connections?


That's right; for a server in recovery you need to call pg_last_xlog_receive_location() or pg_last_xlog_replay_location() to get the current xlog position.



Yes,
but my question is how to call them when I'm unable to connect to the slave, even when replication is over. How can I ask the slave server: are you in recovery mode?
What is the last WAL file sent from the master, and which file are you processing now?
How far behind are you?

As I asked, my question is: how can I check the replication status when the slave does not accept connections?





Re: [GENERAL] How to check streaming replication status

2017-08-31 Thread Condor

On 31-08-2017 12:14, Glyn Astill wrote:

From: Condor 
To: Glyn Astill 
Cc: "pgsql-general@postgresql.org" ;

"pgsql-general-ow...@postgresql.org"


Sent: Thursday, 31 August 2017, 09:42:17 GMT+1
Subject: Re: [GENERAL] How to check streaming replication status



My question is: How can I check the replication status when the slave does not accept connections?


That's right; for a server in recovery you need to call pg_last_xlog_receive_location() or pg_last_xlog_replay_location() to get the current xlog position.



Yes,
but my question is how to call them when I'm unable to connect to the slave, even when replication is over. How can I ask the slave server: are you in recovery mode?



Define "unable to connect", in your previous example you appeared to
be connected to the slave and attempting to call
pg_current_xlog_location() ...

If you want to know if postgres is in recovery call
pg_is_in_recovery()

https://www.postgresql.org/docs/current/static/functions-admin.html



What is the last WAL file sent from the master, and which file are you processing now?
How far behind are you?

As I asked, my question is: how can I check the replication status when the slave does not accept connections?


Again I think you need to define "the slave does not accept
connections".

If you've not configured the slave to be a hot standby, then try
setting hot_standby=on in postgresql.conf on the slave.  If you don't
want to do that you can run the pg_controldata executable on the slave
to see the cluster state.

You should also be able to see streaming replication slave lag on the
master by looking at pg_stat_replication and using
pg_xlog_location_diff()
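
A concrete sketch of those checks (the standby address is the one used earlier in this thread; the data directory path is an assumption):

    # On the slave: still in recovery, and what has been received/replayed?
    psql -h 192.168.0.20 -c "SELECT pg_is_in_recovery(),
                                    pg_last_xlog_receive_location(),
                                    pg_last_xlog_replay_location();"

    # On the master (9.6 naming): per-standby lag in bytes.
    psql -c "SELECT client_addr,
                    pg_xlog_location_diff(pg_current_xlog_location(), replay_location)
             FROM pg_stat_replication;"

    # If the slave really refuses connections, read the state from disk:
    pg_controldata /var/lib/postgres/data | grep 'cluster state'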

hth



Yes,
it seems it was my mistake. I did not change the first part of the IP address and was querying a test slave server which is connected to a different master server.
It would be a wonder if that worked ...

Sorry




[GENERAL] Question about memory usage of pg_dump

2017-10-09 Thread Condor


Hello,

I have a question .. okay, I know it's a dumb one, but I need to ask it because I will probably need to tell five of my colleagues some bad things... :)


My version is PostgreSQL 9.6.5 on x86_64-slackware-linux-gnu, compiled by x86_64-slackware-linux-gcc (GCC) 7.2.0, 64-bit, on the server with IP 10.1.1.3 (the IP of the server is relevant to the question).
I do a simple backup of a few small tables (9M rows each) with a bash shell script:


for table in table1 table2 table3
do
    pg_dump -U postgres --no-tablespaces --no-owner -a -b -t $table \
        my_db -h 10.1.1.2 | psql -U data -h 10.1.1.1 my_local_db
done

and after each table is done I see how many rows were copied: all of them. But my colleagues are trying to convince me otherwise: they used this approach to back up a few tables on another project, and because those tables contain a lot of data (20-30M rows, as they explain), pg_dump took too much memory and the process failed with an out-of-memory error, so they rewrote everything in PHP.


I think that is bullshit; they were probably smoking something, because with PHP I think they add one more layer over the whole process. They do:

run a system command to dump the table, like mine in the shell, and send the data to the backup server
read the output
explode the output into an array
walk to the end of the array with a foreach to find how many rows were copied

But with my shell script I do only the first step:
run the system command to dump the table and send the data to the backup server.


After this short introduction, my question is:

How much memory does pg_dump take, and on which server? 10.1.1.3 and/or 10.1.1.2? Let's say our data is 100 MB.
I know the dump process locks the table on the server while it is read, and that this takes memory on that server. After that, I need an explanation of what happens, in a few words, like: does server 10.1.1.2 load the 100 MB into memory and then start sending rows to server 10.1.1.3, which holds all 100 MB, or does it resend the data to 10.1.1.1 immediately as it receives it?
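
One way to settle the argument empirically is to measure pg_dump's peak memory directly (a sketch, assuming GNU time is installed as /usr/bin/time, not the shell builtin):

    /usr/bin/time -v pg_dump -U postgres -a -t table1 -h 10.1.1.2 my_db > /dev/null
    # "Maximum resident set size" in the report is the peak memory; for a
    # COPY-based data-only dump it should stay small regardless of table
    # size, because rows are streamed rather than accumulated.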



Thanks, and sorry for the dumb question, but I really need to know what happens in this process so I can tell them: my solution is okay and works like a charm.



Cheers,
Hristo S.




Re: [GENERAL] Log storage

2017-10-19 Thread Condor

On 18-10-2017 09:18, Ivan Sagalaev wrote:

Hello everyone,

An inaugural poster here, sorry if I misidentified a list for my 
question.


I am planning to use PostgreSQL as a storage for application logs
(lines of text) with the following properties:

- Ingest logs at high rate: 3K lines per second minimum, but the more
the better as it would mean we could use one Postgres instance for
more than one app.

- Only store logs for a short while: days, may be weeks.

- Efficiently query logs by an arbitrary time period.

- A "live feed" output, akin to `tail -f` on a file.

For context, I only used Postgres for a bog standard read-heavy web
apps, so I'm completely out of expertise for such a case. Here are my
questions:

- Is it even possible/advisable to use an actual ACID RDBMS for such a
load? Or put another way, can Postgres be tuned to achieve the
required write throughput on some mid-level hardware on AWS? May be at
the expense of sacrificing transaction isolation or something…

- Is there an efficient kind of index that would allow me to do `where
'time' between ... ` on a constantly updated table?

- Is there such a thing as a "live cursor" in Postgres for doing the
`tail -f` like output, or I should just query it in a loop (and skip
records if the client can't keep up)?

Thanks in advance for all the answers!



Hello,

not much on the topic, but I had the same problem and solved it by using a Redis server (memory is cheap and fast) to store the logs for an hour / a day, depending on the load average, and then dump them to a CSV or SQL file and insert that into the PostgreSQL database.
My Redis records are structured so that I can review the current actions of each user, like tail -f.
The hardware is nothing much: a Redis server with a lot of memory and a cheap server for the database that stores the logs. I am now even trying a different approach, to remove the database server, because I already store every day as a separate gzipped log file for backup.
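
A minimal sketch of that flush step, with hypothetical names (a Redis list "applog" holding CSV lines, a target table "applog_table"):

    # Atomically move the live list aside; new log lines recreate "applog".
    redis-cli RENAME applog applog:flush
    redis-cli --raw LRANGE applog:flush 0 -1 > /tmp/applog.csv
    redis-cli DEL applog:flush
    # Bulk-load the drained lines into PostgreSQL.
    psql logdb -c "\copy applog_table FROM '/tmp/applog.csv' WITH (FORMAT csv)"

RENAME is atomic, so no log line is lost between the export and the delete.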


Regards,
Hristo S




[GENERAL] How to log query's from servers ?

2011-06-06 Thread Condor

Hello ppl,
can anyone tell me how to make my PostgreSQL server log commands coming from another server? I have one PostgreSQL server and have given 2 other SQL servers access to my DB. I want to log all queries coming from only one of the servers. Is that possible?



--
Regards,
Condor



Re: [GENERAL] How to log query's from servers ?

2011-06-06 Thread Condor

On Mon, 06 Jun 2011 18:53:57 +0800, Craig Ringer wrote:

On 6/06/2011 4:14 PM, Condor wrote:

Hello ppl,
any one can tell me how i make my postgresql server to log commands
coming from another postgres server ? I have one postgresql and give 
to

2 other sql server to access my db. I want to log all query's coming
only from one of the servers. Is that possibly ?


If the different servers log in with different user accounts or log
into different databases you could use database-level or user-level
SET commands to set log_statement for only one of them.

Otherwise: just grep the logs.


Can you explain a little more how I can use database-level or user-level SET commands to set log_statement for only one of them?
From the server whose queries I want to log, I log in to the DB on the other server with: psql -U user -h 192.168.1.1 master_db. Here maybe I should say that I want to monitor the users I gave access to the DB.
Every server uses a different username to connect. I can't use grep on the logs, because the other two servers generate around 500 MB to 1 GB of log files per day.

Probably a little filter, or a patch on the PostgreSQL source file which manages logging, to write to the log file only if the IP is the one I want, would save me.

--
Regards,
Condor



Re: [GENERAL] How to log query's from servers ?

2011-06-07 Thread Condor

On Tue, 07 Jun 2011 06:44:02 +0800, Craig Ringer wrote:

On 06/06/2011 09:26 PM, Condor wrote:

Can you explain little more how i can use database-level or 
user-level

SET commands to set log_statement for only one of them ?


http://www.postgresql.org/docs/current/static/sql-set.html

http://www.postgresql.org/docs/current/static/runtime-config-logging.html
http://www.postgresql.org/docs/current/static/sql-alteruser.html
http://www.postgresql.org/docs/current/static/sql-alterdatabase.html

You might, for example:

ALTER USER user1 SET log_statement = 'all';
ALTER USER user2 SET log_statement = 'none';

or do the same with ALTER DATABASE ... SET if you wanted to log on a
per-database level.

Probably, a little filter or patch on postgresql source code file 
which

manage logs to write log file
only if ip is the ip that i want will save me.



Rather than patching PostgreSQL I would recommend configuring PostgreSQL to log through a smarter syslog daemon like rsyslogd or syslog-ng. You should then be able to use regular expression filters in the syslog daemon to discard log messages you are not interested in before they are written to disk.

--
Craig Ringer



Thank you to you and Scott for help.
--
Regards,
Condor



Re: [GENERAL] perl and php connect problems to pgsql all of a sudden

2011-06-07 Thread Condor
  

On Tue, 7 Jun 2011 08:21:27 -0700, Shad Keene wrote:

> I'm getting some errors all of a sudden when using PHP and Perl to connect to my postgresql database.
>
> I'm running Red Hat Enterprise Linux Client release 5.6 (Tikanga)
> And php-5.1.6-27.el5_5.3
> perl-5.8.8-32.el5_5.2
>
> Here's the php error when trying to connect:
> PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo_pgsql.so' - libpq.so.4: cannot open shared object file: No such file or directory in Unknown on line 0
> PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pgsql.so' - libpq.so.4: cannot open shared object file: No such file or directory in Unknown on line 0
>
> And the perl error:
> install_driver(Pg) failed: Can't locate DBD/Pg.pm in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3.
> Perhaps the DBD::Pg perl module hasn't been fully installed,
> or perhaps the capitalisation of 'Pg' isn't right.
> Available drivers: DBM, ExampleP, File, Proxy, Sponge, mysql.
> at pg_connect.pl line 9
>
> This is strange because the same exact scripts worked about a month ago. And I do have DBD/pg.pm installed, and pgsql.so and pdo_pgsql.so are also installed.
>
> Is there a simple fix to this? I have read some ideas about installing libpq.so.4 to fix this, but I wanted to run this specific problem by some experts before making any changes.
>
> Thanks for any help,
>
> S

It seems the library libpq.so.4 is missing on your server. This can happen if you upgrade your PostgreSQL.

For Perl use:

perl -MCPAN -e shell

and then:

install DBD::Pg

For Perl this happens when you update your Perl version, or when the PostgreSQL library is missing or not installed.

--
Regards,
Condor

Re: [GENERAL] insert a SYSTIMESTAMP value in postgres

2011-06-20 Thread Condor
  

On Mon, 20 Jun 2011 15:32:31 -0400, Leon Match wrote:

> Hello,
>
> I am trying to re-create a few objects from Oracle in Postgres.
>
> I have a problem inserting a timestamp value into the table:
>
> insert into request_queue (request_id, received_time)
> values (new.request_id, SYSTIMESTAMP);
>
> How can I insert a dynamic timestamp value in Postgres, please?
>
> Thank you,
>
> Leon
>
> leon.ma...@convergia.net

Maybe: insert into request_queue (request_id, received_time) values (new.request_id, (abstime(('now'::text)::timestamp(6) with time zone))::integer);
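
For readers, the idiomatic PostgreSQL equivalent of Oracle's SYSTIMESTAMP is one of the built-in current-timestamp functions; a sketch, assuming received_time is a timestamp-with-time-zone column:

    INSERT INTO request_queue (request_id, received_time)
    VALUES (new.request_id, CURRENT_TIMESTAMP);
    -- or clock_timestamp() for the wall-clock time inside a long transaction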

--
Regards,
Condor

[GENERAL] Real type with zero

2011-06-29 Thread Condor

Hello,
how can I tell my PostgreSQL to keep the trailing zero of a real value? I insert the value 2.30, and when I select that column I see 2.3, without the zero.



--
Regards,
Condor



[GENERAL] Difference in DB size with dump and pg_database_size

2011-07-05 Thread Condor

Hello,
Can anyone explain to me why I have a difference in DB size when I dump the DB? I see it is 5G, but when I run SELECT pg_size_pretty(pg_database_size('somedatabase')) AS fulldbsize; on my DB, PostgreSQL returns: 10 GB.

I run vacuum on the DB every night. Why is there such a huge difference in size?

--
Regards,
Condor



[GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread Condor

Hello ppl,
can I ask how to dump a large DB? I read the documentation, but I had a problem with split a year ago and have not used it since. The problem was that when I ran: pg_dump dbname | split -b 1G - filename, I was unable to restore it correctly. When I started restoring the DB I got an error from SQL; it did not like one line. I investigated, and the problem was that in the last line of the first file a value field was something like '"This is a ' and there the file ended. (I added the single quotes in the example to set off the phrase and make clear what I mean.) In the next file the sentence ended correctly, 'simple test"' (also without the single quotes), and this row was not inserted into the DB. For now I use gzip, but I don't know how safe that is, because when I check the DB size, uncompressed it is 5G and pg_database_size shows me 10 G
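
For the record, the documented way to restore such a split dump is to concatenate the chunks back into a single stream, so no line is cut at a chunk boundary as far as psql can see (file names here are illustrative):

    pg_dump dbname | split -b 1G - /backup/dbname.sql.
    cat /backup/dbname.sql.* | psql dbname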




--
Regards,
Condor



Re: [GENERAL] Difference in DB size with dump and pg_database_size

2011-07-05 Thread Condor

On Tue, 5 Jul 2011 10:43:38 +0200, Magnus Hagander wrote:

On Tue, Jul 5, 2011 at 10:38, Condor  wrote:

Hello,
Can anyone explain to me why I have a difference in DB size when I dump the DB? I see it is 5G, but when I run SELECT pg_size_pretty(pg_database_size('somedatabase')) AS fulldbsize; on my DB, PostgreSQL returns: 10 GB.

I run vacuum on the DB every night. Why is there such a huge difference in size?


The dump does not include indexes - for each index it just contains
the CREATE INDEX statement, not the actual data in the index.

And perhaps you've also compressed your dumps? Though in that case,
the difference would probably be bigger...

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Thank you for your fast reply. Yes, I gzip my DB, but that file size is after I uncompress the gzipped file. Anyway, I got the answer. Thank you.


--
Regards,
Condor



Re: [GENERAL] Dump large DB and restore it after all.

2011-07-05 Thread Condor

On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:

On 5/07/2011 5:00 PM, Condor wrote:

Hello ppl,
can I ask how to dump large DB ?


Same as a smaller database: using pg_dump . Why are you trying to
split your dumps into 1GB files? What does that gain you?

Are you using some kind of old file system and operating system that
cannot handle files bigger than 2GB? If so, I'd be pretty worried
about running a database server on it.


Well, I ran pg_dump on an ext3 fs with postgres 8.x and 9, and the sql file was truncated.



As for gzip: gzip is almost perfectly safe. The only downside with
gzip is that a corrupted block in the file (due to a hard
disk/dvd/memory/tape error or whatever) makes the rest of the file,
after the corrupted block, unreadable. Since you shouldn't be storing
your backups on anything that might get corrupted blocks, that should
not be a problem. If you are worried about that, you're better off
still using gzip and using an ECC coding system like par2 to allow
recovery from bad blocks. The gzipd dump plus the par2 file will be
smaller than the uncompressed dump, and give you much better
protection against errors than an uncompressed dump will.

To learn more about par2, go here:

  http://parchive.sourceforge.net/
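
A sketch of the par2 workflow described above (the 10% redundancy figure is an arbitrary choice for illustration):

    par2 create -r10 dump.sql.gz.par2 dump.sql.gz   # create recovery data
    par2 verify dump.sql.gz.par2                    # later: check for damage
    par2 repair dump.sql.gz.par2                    # reconstruct bad blocks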



Thank you for the info.


--
Craig Ringer



--
Regards,
Condor



[GENERAL] Postgresql problem with update double precision

2011-08-05 Thread Condor


Hello ppl,
for a few years I have had a problem when updating a double precision field. I have a table with a few double precision columns; here is an example:

sumall double precision,
sumin double precision,

My php script does:

$get = 2.40

and the sql code is:

UPDATE table1 SET sumall = sumall + $get WHERE id = 1 AND rd = CURRENT_DATE;

When I browse the table I sometimes see incorrect values like:

955.5998

It should be 955.60 after these updates ... some days it's fine, some days the value is incorrect.

I have had this problem since version 7 of PostgreSQL; I now use 9.0.3.

Does anyone know what the problem can be, and why sometimes the records are fine and sometimes they aren't?


--
Regards,
Condor



Re: [GENERAL] Postgresql problem with update double precision

2011-08-05 Thread Condor

On Fri, 05 Aug 2011 07:20:01 -0400, Jerry Sievers wrote:

Condor  writes:


Hello ppl,
for few years I have problem when update double precision field. I
have table and few double precision columns, here is example:

sumall double precision,
sumin double precision,

My php script do:

$get = 2.40

and sql code is:

UPDATE table1 SET sumall = sumall + $get WHERE id = 1 AND rd =
CURRENT_DATE;


When I browse the table some times i see incorrect values like:

955.5998

it's should be 955.60 after these updates ... some days is fine, 
some

days the value is incorrect.

I have this problem from version 7 of postgresql, Im now use 9.0.3

Anyone know what can be the problem and why some times records is
fine, some times isnt ?


That floating point data types are inexact is a well known problem
with them and not Postgres specific.

Consider switching those fields to type NUMERIC.

HTH




--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 305.321.1144


Thank you. Today I see all 3 rows are normal, but when I do select sum(sumall) I get 73.31 as the result.

Anyway, how can I convert the field to numeric without losing data?
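
For reference, such a conversion is a single in-place column change; a sketch, assuming two decimal places is the scale the application expects:

    ALTER TABLE table1
        ALTER COLUMN sumall TYPE numeric(12,2),
        ALTER COLUMN sumin  TYPE numeric(12,2);
    -- Existing float values are cast (and rounded) during the rewrite;
    -- already-distorted values keep their rounded form, but future
    -- additions are exact.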

--
Regards,
Condor



[GENERAL] securing the sql server ?

2011-08-22 Thread Condor

Hello ppl,
can anyone tell me how I can secure a Linux server with a postgres database, for example?
I'm thinking of making a cryptfs filesystem and deploying the database over the cryptfs. The problem here may be that when the front end needs any data, the CPUs of the server will always be decrypting/encrypting for every in/out, and performance will be very low.

I remember a few months ago someone asked a similar question about how to encrypt data that is stored in the database, and the problem was the key. The key is stored on the same server; if someone gets access, they can decrypt the data.

Does anyone have ideas how to make something like a crypt bubble and store the database there?

Or something else?

R.



Re: [GENERAL] securing the sql server ?

2011-08-22 Thread Condor

On Mon, 22 Aug 2011 09:32:39 -0500, Merlin Moncure wrote:

On Mon, Aug 22, 2011 at 3:40 AM, Condor  wrote:

Hello ppl,
can anyone tell me how I can secure a Linux server with a postgres database, for example?
I'm thinking of making a cryptfs filesystem and deploying the database over the cryptfs. The problem here may be that when the front end needs any data, the CPUs of the server will always be decrypting/encrypting for every in/out, and performance will be very low.

I remember a few months ago someone asked a similar question about how to encrypt data that is stored in the database, and the problem was the key. The key is stored on the same server; if someone gets access, they can decrypt the data.

Does anyone have ideas how to make something like a crypt bubble and store the database there?
Or something else?


Worrying about security without defining and understanding the threats you face is a pointless exercise.  If you are worried about physical loss of the drive, a better defense is to encrypt/decrypt sensitive data on the client so that the server is not exposed to the key. Obviously, this has downsides like not being able to index or ad hoc search the data in question.  So, who are you worried about -- what are the threats?

merlin



I'm not worried about hacker attacks; I'm worrying about how to secure the data if I physically lose the hard drives or the server. Let's just imagine that you have a server and on it you store important data like credit cards, bank accounts, passwords, client names, addresses, social security numbers, phone numbers and things like that ... very important information. The front end is secured; what worries me is losing the hard drives or the server. I think that if someone buys such an expensive server, it isn't necessary to have someone with a gun watching it.

R.



Re: [GENERAL] securing the sql server ?

2011-08-22 Thread Condor

On Mon, 22 Aug 2011 10:54:23 -0700, John R Pierce wrote:

On 08/22/11 10:34 AM, Condor wrote:


I did not worry about hackers attack, I worrying how to secure data
if I physical loss hard drives or server.


so when this remote server reboots, where does the encryption key 
come from?




--
john r pierceN 37, W 122
santa cruz ca mid-left coast



Mount over NFS ?

R.



Re: [GENERAL] securing the sql server ?

2011-08-22 Thread Condor

On Mon, 22 Aug 2011 14:20:00 -0400 (EDT), Gary Chambers wrote:
so when this remote server reboots, where does the encryption key 
come

from?


Why, from a file that resides in /root on the server, of course!  :-)
That's secure, right?

-- Gary Chambers


It isn't necessary for it to be on the same server. The key can be stored somewhere on NFS, for example; if you lose the server, they can't access the key, because the server won't have access to the local NFS storage.
Another example: it can be stored on flash memory, and when the server is restarted the monitoring guy inserts the flash memory. There are so many variants.
I did not understand the point of your comment.


R.



[GENERAL] COPY FROM how long should take ?

2011-08-26 Thread Condor

Hello,
today I tried to restore one of my tables with COPY from a file that I made earlier.
The file is 2.4 GB, only integers ... it took 3 hours and 30 minutes before I hit CTRL+C, on an i7 processor with 8 GB memory and a SATA 2 hard drive. I modified some postgresql.conf values and increased memory, work, wal, and temp settings, and checkpoint segments to 55.
Is that normal? The whole file is 37 million lines. When I hit enter it was, as the logs say, on line 26 million. I ran it twice, and the second time, after 45 minutes, I again hit CTRL+C, and in the logs I saw it was again on the 26 millionth line. Well, the line number is approximate. Is everything normal?

Postgresql 9.0.4

--
Regards,
Condor



Re: [GENERAL] COPY FROM how long should take ?

2011-08-26 Thread Condor
  

On Fri, 26 Aug 2011 17:49:35 +0530, Jayadevan M wrote:

>> Date: 08/26/2011 05:40 PM
>> Subject: [GENERAL] COPY FROM how long should take ?
>> Sent by: pgsql-general-ow...@postgresql.org
>>
>> Hello,
>> today I tried to restore one of my tables with COPY from a file that I
>> made earlier. The file is 2.4 GB, only integers ... it took 3 hours and
>> 30 min before I hit CTRL+C, on an i7 processor with 8 GB memory and a
>> sata 2 hard drive. I modified some psql conf file values and increased
>> memory, work, wal, temp, and checkpoint segments to 55.
>> Is that normal ? The whole file is 37 mil lines. When I hit enter it
>> was, as the logs say, on line 26 million. I ran it twice and the second
>> time after 45 min I again hit CTRL+C, and in the logs I saw it was
>> again on the 26 million line. Well, the line number is approximate. Is
>> everything normal ?
>>
>> Postgresql 9.0.4
>
> 'It was on 26 million line' - does this mean there were 26 million
> records in the table? I was migrating data from Oracle to PostgreSQL,
> using ora2pg with the COPY option, and in under 3 hours it easily copied
> a table that was 10 GB in size. Are you seeing any errors in the
> postgres log file? In my case, the number of records in the postgresql
> table stopped increasing after some time and I realized the data
> transfer was failing. I checked the log file for errors, corrected the
> data issues in the source table (Oracle) and it finished the load
> without issues.
>
> Regards,
> Jayadevan

The whole file is approximately 32 million lines = 32 million records. I checked the log file: no errors.

--
Regards,
Condor
  

[GENERAL] How to make replica and use it when master is down ?

2011-10-12 Thread Condor

Hello everyone,
for a few days I have wanted to ask how to make a replica server of my database, so that when my master server spontaneously goes down due to losing power, or has been offline for more than 5 minutes, the replica server becomes the master server and accepts all queries to the database (select, insert, update and so on), and when the original master server comes back online it syncs its DB, or just stays a slave / replica server until one of the servers goes down.

Well, I don't know if this is possible.



--
Regards,
Condor



Re: [GENERAL] How to make replica and use it when master is down ?

2011-10-15 Thread Condor
  

On 13.10.2011 22:47, Mark Keisler wrote:

> It is possible. See http://www.postgresql.org/docs/9.0/interactive/high-availability.html
>
> On Wed, Oct 12, 2011 at 3:31 AM, Condor wrote:
>
>> Hello everyone,
>> for a few days I have wanted to ask how to make a replica server of my
>> database, so that when my master server spontaneously goes down due to
>> losing power, or has been offline for more than 5 minutes, the replica
>> server becomes the master server and accepts all queries to the
>> database (select, insert, update and so on), and when the original
>> master server comes back online it syncs its DB, or just stays a
>> slave / replica server until one of the servers goes down.
>>
>> Well, I don't know if this is possible.
>>
>> --
>> Regards,
>> Condor

Yes, I saw this, but I actually want user comments and the problems that can arise.

--
Regards,
Condor
  



[GENERAL] Question about load balance

2012-06-07 Thread Condor

Hello ppl,

I have read on the internet and in this mailing list that when someone asks about load balancing, most of the answers are: pgpool.
I want to ask how stable pgpool is. How many queries can it handle? At what load average?

I'm looking for a multi-master solution.

Thanks,
Hristo



[GENERAL] Two databases on the same server, looking for idea how to restrict access.

2012-06-07 Thread Condor

Hello,

I have two PostgreSQL databases on different servers, one main database and one secondary, less important one. The problem is that they need to share some tables, both for reading and writing, and the secondary PostgreSQL should not have access to the whole main database. I'm thinking of combining both databases on the same server and splitting them into different schemas, also to allow load balancing, with some kind of permissions restricting the secondary database from reading the whole main database, or some kind of shared tables (files).
Does anyone have ideas how I can do this? Any ideas are welcome.



Thanks,
Hristo.



Re: [GENERAL] Question about load balance

2012-06-10 Thread Condor

On 2012-06-08 08:39, Craig Ringer wrote:

On 06/07/2012 10:26 PM, Condor wrote:

Hello ppl,

I read in internet and in this mailing list, when some one asking 
about load balance,

most of the answers is: pgpool.
I want to asking how stable is pgpool ? How much query can handle ? 
What load average ?

Im looking for something multi master solution.


PgPool-II doesn't offer mutli-master operation. Not much does.

Multi-master is very difficult to get right, and even harder to make 
fast.


Are you really sure it's what you want? Failover is often a much,
MUCH simpler and more efficient approach.

--
Craig Ringer


No,
I'm not sure; I'm just looking at how to do load balancing. I have a small database, around 20 GB, but I expect to join another database with a different schema to it, and I'm looking for a solution for load balancing or some caching mechanism. The bad part is that a given row from the DB is read once a month, in the worst scenario 3-4 times a month, so I think a cache is not a good option, but I don't have a clear idea how it would work.


H.



Re: [GENERAL] Question about load balance

2012-06-11 Thread Condor

On 2012-06-11 09:35, John R Pierce wrote:

On 06/10/12 11:26 PM, Condor wrote:
I'm not sure; I'm just looking at how to do load balancing. I have a small database, around 20 GB, but I expect to join another database with a different schema to it, and I'm looking for a solution for load balancing or some caching mechanism. The bad part is that a given row from the DB is read once a month, in the worst scenario 3-4 times a month, so I think a cache is not a good option, but I don't have a clear idea how it would work.



whats the problem you're trying to solve?   so far, doesn't sound
like you have anything that a decent database server couldn't handle
easily.



--
john r pierceN 37, W 122
santa cruz ca mid-left coast



Maybe it's my fault that I divided the issues into two separate e-mails, one for load balancing and one about how to bind together two databases with different schemas.
As I wrote, my database is small and the server keeps a small load average, but if I bind the two databases together, since both must write to and read only 3-4 tables, I am seeking advice on what I could do if the server load gets too high.
If I run load balancing with pgpool, how stable will my system be? I run streaming replication, but I see that when the master sends data to the slave and I query the slave at that moment, the slave server breaks the query. Will I have the same problems with pgpool?

Basically, I want to be prepared with what options I have if this happens.
H.



Re: [GENERAL] Question about load balance

2012-06-11 Thread Condor

On 2012-06-11 10:23, John R Pierce wrote:

On 06/11/12 12:11 AM, Condor wrote:
I am seeking advice on what I could do if the server load gets too high.


I recommend a faster server for this.  more CPU cores, more memory,
faster storage.   that will take you a LONG ways, much simpler than
complex and fragile database cluster schemes


--
john r pierceN 37, W 122
santa cruz ca mid-left coast



Yes, I know, but those parameters can't be increased forever. Well, they can, but it isn't cheap.
For that reason I am looking at other ways.

H.



Re: [GENERAL] Question about load balance

2012-06-11 Thread Condor

On 2012-06-11 21:03, John R Pierce wrote:

On 06/11/12 2:11 AM, Condor wrote:
Yes, I know, but those parameters can't be increased forever. Well, they can, but it isn't cheap.

For that reason I am looking at other ways.



why don't you worry about that when you get there, rather than before
you even start?


Maybe because sometimes, when someone starts a new business, they don't have $20k for a new server, and the server's resources are enough for the moment and, as I planned, enough for this year. My question was how stable pgpool is, what problems I can expect, and, out of pure curiosity, what the technique is for managing large databases: CPU and memory until the second coming, or are there other techniques for scattering applications across other servers?



I've got a 2U dual xeon X5660 server w/ 48GB ram, and built in 20 x
15k raid10 that can handle like 5000 TPS-C style transactions/second,
(pg_bench).   these are update transactions.


--
john r pierceN 37, W 122
santa cruz ca mid-left coast



H.



Re: [GENERAL] Question about load balance

2012-06-11 Thread Condor

On 2012-06-11 22:47, John R Pierce wrote:

On 06/11/12 12:17 PM, Condor wrote:
Maybe because sometimes, when someone starts a new business, they don't have $20k for a new server, and the server's resources are enough for the moment and, as I planned, enough for this year.


and when you start a new business, you don't lease a campus large
enough for 10,000 employees, you deal with that when you need it.

if your app actually ends up needing to scale to google size, plan on
having to redesign it a few times.



--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--- cut ---
My question was how stable pgpool is, what problems I can expect, and, out of pure curiosity, what the technique is for managing large databases: CPU and memory until the second coming, or are there other techniques for scattering applications across other servers?
--- cut ---

I think I'm trying to learn what the technique is for managing large databases, not to philosophize about my server.

H.



Re: [GENERAL] Question about load balance

2012-06-12 Thread Condor

On 2012-06-12 10:48, John R Pierce wrote:

On 06/11/12 11:29 PM, Condor wrote:
I think I'm trying to learn what the technique is for managing large databases, not to philosophize about my server.


you handle large databases with a lot of fast disk, and memory, this
gets you into the terabytes.

clustering/load balancing would not do for this, other than needing
MORE fast disk (N replicas require N times the disk system of one
database).   clustering can provide active/slave failover for high
availability, or it can provide replicas for balancing read queries.
updates have to be made to all the replicas, so they won't be any faster than a single server (in fact, they will be slower due to the overhead of replication).

--
john r pierceN 37, W 122
santa cruz ca mid-left coast



Thanks, I thought so, but I was not sure whether this is a better 
option.


H.




[GENERAL] Need help with SQL query and finding NULL array_agg

2012-08-01 Thread Condor

Hello ppl,

I have a problem with one SQL query; can someone help me? My query is:

SELECT array_agg(month) AS month, array_agg(status) AS status, array_agg(service) AS service, CASE WHEN array_upper(array_agg(phone), 1) IS NOT NULL THEN array_agg(phone) ELSE array_agg(mobile) END FROM bills WHERE status > 1 GROUP BY mobile

With this simple query I am trying to identify whether array_agg(phone) is empty or null; if it is, return the mobile field, and if it is not empty, return the phone. The mobile field always exists, but a phone may or may not exist for a given mobile.

One mobile can have a few services, like:

     mob      service    phone
 1321543434      64
 1321543434      66       1325

I tried a few things, but the SQL only returns records where phone is not empty, and I need them both.

Does anyone have ideas what I can do?



Re: [GENERAL] Need help with SQL query and finding NULL array_agg

2012-08-02 Thread Condor

On 2012-08-01 23:59, David Johnston wrote:

-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
ow...@postgresql.org] On Behalf Of Condor
Sent: Wednesday, August 01, 2012 4:16 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Need help with SQL query and finding NULL 
array_agg


Hello ppl,

I have a problem with one sql query, can some one help me. My query 
is:


SELECT array_agg(month) AS month, array_agg(status) AS status,
array_agg(service) AS service, case when 
array_upper(array_agg(phone),
1) is not null THEN array_agg(phone) else array_agg(mobile) END FROM 
bills

WHERE status > 1 GROUP BY mobile

I try with simple query to identify did array_agg(phone) is empty or 
null and if
is it to return me mobile field, if not empty to return me phone. 
Mobile field

exist always, but phone may exists for that mobile may not exists.
One mobile can have few services like:


   mob servicephone
1321543434  64
1321543434  66   1325


I try few thing but sql only return me records that phone is not 
empty, but I

need them both.

Any one has ideas what I can do ?


ARRAY_AGG() is never an empty array since there is always at least a
single record that is going to be aggregated.  In your case your 
array

will have NULL "values" when phone numbers are missing but the upper
bound will still show a positive number.

SELECT array_upper(ARRAY[NULL]::varchar[], 1) == 1

You would need to write a custom aggregation that ignores NULL and
thus could return an empty array if no valid phone numbers are
present.

The proper logic would be:

CASE WHEN array_upper(phone_agg, 1) = 1 AND phone_agg[1] IS NULL THEN 
... END


You also likely want to use:

ARRAY_AGG(DISTINCT column) -- this makes sure duplicates are only
present a single time and  ensure that an all-NULL situation results
in a single element instead of one NULL for each input record.

Hopefully this help because I couldn't make heads nor tails as to
what exactly your issue is.  The lack of input data, the current
output, and the desired output limits my ability to understand and
help.

One last comment:  I would generally avoid naming the output of an
ARRAY_AGG(column) the same name as the input column.  I generally, at
minimum, make the output column name plural to reflect the fact that
it contains multiple values of whatever is stored in the source
column.

David J.



Hello,
I understand what the problem is, but I can't combine your example with my case.

I wrote my example in the middle of the night, and maybe I missed explaining more about the structure:

    mob         service    phone    month
     132999        64                  1
    1321543434     66       1325       1
    1321543434     67       1325       2

The first record, where phone is empty and mob is 132999: the number is correct.
The second two records are also correct, but the phone is not empty, so I need that phone field; they have services and the month when each starts.
I'm unable to use phone_agg[1] IS NULL, because SQL returns an error that it can't use phone_agg[1].

I expect this result:

      month | status  | service | array_agg
    --------+---------+---------+-------------
     {07}   | {0}     | {64}    | {132999}
    {08,07} | {0,0}   | {66,67} | {1325,1325}

In the end I will have, for every phone, arrays of the services it will use.
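
For what it's worth, one sketch that produces that shape on 9.x, using the columns above: count(phone) ignores NULLs, so it detects the groups where every phone is missing.

    SELECT array_agg(month)   AS months,
           array_agg(status)  AS statuses,
           array_agg(service) AS services,
           CASE WHEN count(phone) > 0
                THEN array_agg(phone)   -- at least one real phone in the group
                ELSE array_agg(mob)     -- all phones NULL: fall back to mobile
           END AS numbers
    FROM bills
    GROUP BY mob;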




Re: [GENERAL] Need help with SQL query and finding NULL array_agg

2012-08-03 Thread Condor

On 2012-08-02 21:32, David Johnston wrote:

-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
ow...@postgresql.org] On Behalf Of Condor
Sent: Thursday, August 02, 2012 4:35 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help with SQL query and finding NULL
array_agg

On 2012-08-01 23:59, David Johnston wrote:
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
>> ow...@postgresql.org] On Behalf Of Condor
>> Sent: Wednesday, August 01, 2012 4:16 PM
>> To: pgsql-general@postgresql.org
>> Subject: [GENERAL] Need help with SQL query and finding NULL
>> array_agg
>>
>> Hello ppl,
>>
>> I have a problem with one sql query, can some one help me. My 
query

>> is:
>>
>> SELECT array_agg(month) AS month, array_agg(status) AS status,
>> array_agg(service) AS service, case when
>> array_upper(array_agg(phone),
>> 1) is not null THEN array_agg(phone) else array_agg(mobile) END 
FROM

>> bills WHERE status > 1 GROUP BY mobile
>>
>> I try with simple query to identify did array_agg(phone) is empty 
or
>> null and if is it to return me mobile field, if not empty to 
return

>> me phone.
>> Mobile field
>> exist always, but phone may exists for that mobile may not 
exists.

>> One mobile can have few services like:
>>
>>
>>mob servicephone
>> 1321543434  64
>> 1321543434  66   1325
>>
>>
>> I try few thing but sql only return me records that phone is not
>> empty, but I need them both.
>>
>> Any one has ideas what I can do ?
>
> ARRAY_AGG() is never an empty array since there is always at least 
a
> single record that is going to be aggregated.  In your case your 
array
> will have NULL "values" when phone numbers are missing but the 
upper

> bound will still show a positive number.
>
> SELECT array_upper(ARRAY[NULL]::varchar[], 1) == 1
>
> You would need to write a custom aggregation that ignores NULL and
> thus could return an empty array if no valid phone numbers are
> present.
>
> The proper logic would be:
>
> CASE WHEN array_upper(phone_agg, 1) = 1 AND phone_agg[1] IS NULL
THEN
> ... END
>
> You also likely want to use:
>
> ARRAY_AGG(DISTINCT column) -- this makes sure duplicates are only
> present a single time and  ensure that an all-NULL situation 
results

> in a single element instead of one NULL for each input record.
>
> Hopefully this help because I couldn't make heads nor tails as to 
what
> exactly your issue is.  The lack of input data, the current 
output,

> and the desired output limits my ability to understand and help.
>
> One last comment:  I would generally avoid naming the output of an
> ARRAY_AGG(column) the same name as the input column.  I generally, 
at
> minimum, make the output column name plural to reflect the fact 
that

> it contains multiple values of whatever is stored in the source
> column.
>
> David J.


Hello,
I understand what is the problem, but I can't combine your example 
with my

case.

I write my example in middle of the night and may be I miss to say 
explain

much more about the structure:


 mob servicephone month
  132999  64 1
  1321543434  66   1325  1
  1321543434  67   1325  2

First record when phone is empty and mob is 132999 the number is
correct.
Second two records also is correct, but the phone is not empty so I
need
that filed phone, they have services and month when to start.
I'm unable to use phone_agg[1] IS NULL  because sql return me error
that can't
use the phone_agg[1]

Im expect that result:
month | status  | service | array_agg
-+-+-+---
{07}  |  {0}|  {64}   | {132999}
  {08,07} | {0,0}   | {66,67} | {1325,1325}


In the end I will have arrays for every phone which service will 
use.





Hopefully this will help.

The first thing I did was break up the query into parts
0) data
1) aggregation
2) conditional return

Note I am using the ability for the CTE to provide column names so
the contained queries are not cluttered with "AS alias" constructs.

In order to make things simpler I avoid storing NULL in the "phones"
array and instead store "N/A" if the phone is missing.  This lets me
use " op ANY/ALL(array)" later on to check on the contents of the
array.  The result of that condition is called "final_phones" and it
either matches the "mobs" or the "phones" array depending on whether
all of the "phone" numbers are missing.

Another option i

[GENERAL] How to analyze load average ?

2012-08-06 Thread Condor

Hello,

can someone tell me how I can analyze where my server's load average comes from?


I have one server with 128 GB memory, 32 x86_64 CPUs, and RAID5 over 3 15k SAS HDDs with an ext4 fs. That is my production server; it is also configured to send WAL files over the net. Here is my configuration:



max_connections = 500
shared_buffers = 32GB
work_mem = 192MB
maintenance_work_mem = 6GB
max_stack_depth = 6MB
bgwriter_delay = 200ms
bgwriter_lru_maxpages = 100
bgwriter_lru_multiplier = 2.0
wal_level = hot_standby
fsync = on
synchronous_commit = on
wal_sync_method = fdatasync
full_page_writes = on
wal_buffers = -1
checkpoint_segments = 32
checkpoint_timeout = 5min
checkpoint_completion_target = 0.5
max_wal_senders = 5
wal_sender_delay = 1s
wal_keep_segments = 64

enable_bitmapscan = on
enable_hashagg = on
enable_hashjoin = on
enable_indexscan = on
enable_material = on
enable_mergejoin = on
enable_nestloop = on
enable_seqscan = on
enable_sort = on
enable_tidscan = on

seq_page_cost = 1.0
random_page_cost = 2.0
cpu_tuple_cost = 0.01
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025
effective_cache_size = 64GB

autovacuum = on


My onboard RAID cache write-through is OFF.

When I connect to the server I see only 2 queries in select * from pg_stat_activity; and they are not complicated: select rid from table where id = 1;
Both tables have indexes on the most frequently used columns. When I check my server, the load average is 0.88 0.94 0.87.
I'm trying to find out why that load avg is so high; only postgres 9.1.4 is running on that server.


Can someone point me to where I should start digging? I think my configuration for connections and shared buffers is right, as I read the documentation. I think this slowdown could be because the cache on the RAID card is OFF. As I read on the postgres wiki pages, if I turn that setting ON, on a failure I might lose some of my data; well, the company has a UPS and I also have streaming replication, so I won't lose much data.


My iostat show:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    1.06    0.00    0.00   98.04

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.92    0.00    1.06    0.00    0.00   97.02

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0


And my vmstat:

procs ---memory-- ---swap-- -io -system-- cpu
 r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs us sy id wa
 0  0  0 99307408 334300 3114470800 11810  1  1 98  0
 0  0  0 99303808 334300 3114471600 0 0  926  715  0  0 99  0
 0  0  0 99295232 334300 3114471600 0 0  602  532  0  0 99  0
 4  0  0 99268160 334300 3114471600 032  975  767  2  2 96  0
 1  0  0 99298544 334300 3114471600 0 0  801  445  3  2 95  0
 0  0  0 99311336 334300 3114471600 0 0  320  175  1  0 98  0
 2  0  0 99298920 334300 3114471600 0 0 1195  996  1  1 97  0
 0  0  0 99307184 334300 3114471600 0 0  843  645  0  1 98  0
 0  0  0 99301024 334300 3114471600 012 1346 1040  2  2 96  0


Can anyone tell me how I can find out why the load average is so high?


Thanks




Re: [GENERAL] How to analyze load average ?

2012-08-06 Thread Condor

On 2012-08-06 17:38, Tomas Vondra wrote:

On 6 Srpen 2012, 16:23, Condor wrote:

Hello,

can someone tell me how I can analyze where my server's load average comes from?

...

When I connect to the server I see only 2 queries in select * from pg_stat_activity; and they are not complicated: select rid from table where id = 1;
Both tables have indexes on the most frequently used columns. When I check my server, the load average is 0.88 0.94 0.87.

...

Can anyone tell me how I can find out why the load average is so high?


Errr, what? Why do you think the load average is high?

Load average is defined as the number of processes in the run queue (i.e. using or waiting for a CPU). So the load average "0.88 0.94 0.87" means there was less than one process waiting for a CPU most of the time. I wouldn't call that a "high load average", especially not on a 32-core system.
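
A quick way to see what is actually sitting in that run queue (a sketch; pidstat comes from the sysstat package):

    pidstat -u 5 3                        # per-process CPU usage, 3 samples of 5 s
    pidstat -d 5 3                        # per-process disk I/O
    ps -eo state,pid,cmd | grep '^[RD]'   # runnable (R) and uninterruptible-I/O (D) tasks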

Tomas



I think the load avg is high because before I changed servers, my production
server had 16 CPUs and 24 GB memory, and the load avg on that server was
0.24. The database is the same, the users that use the server are the same,
nothing has changed. I dumped the DB from the old server and imported it
into the new one a few days ago, and because this is the new server with
more resources I monitor its load avg, and I think it is too high. For that
reason I'm asking whether there is a way to find out why my load avg is
0.88. When I run select * from pg_stat_activity;
I don't see more than 3-4 queries, none of them very complicated, and I have
already tried them with
explain to see what the result is.

I know what load average means; I was an OpenBSD user for a few years, now I
use Slackware with kernel 3.5.



Hristo

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to analyze load average ?

2012-08-17 Thread Condor

On , Tomas Vondra wrote:
I think the load avg is high because before I changed servers, my production
server had 16 CPUs and 24 GB memory, and the load avg on that server was
0.24. The database is the same, the users that use the server are the same,
nothing has changed. I dumped the DB from the old server and imported it
into the new one a few days ago, and because this is the new server with
more resources I monitor its load avg, and I think it is too high. For that
reason I'm asking whether there is a way to find out why my load avg is
0.88. When I run select * from pg_stat_activity;
I don't see more than 3-4 queries, none of them very complicated, and I have
already tried them with
explain to see what the result is.


Well, the load average is a bit difficult to analyze because of the
exponential damping. Also, I find it a bit artificial, and if there are
no sudden peaks or slowdowns I wouldn't bother analyzing it.

A wild guess is that the new server has more CPUs but at a lower
frequency, so the tasks run longer and impact the load average
accordingly. There are other such effects (e.g. maintenance of larger
shared buffers takes more time).

Have you verified that the performance of the new hardware matches
expectations and that it's actually faster than the old server?


I know what load average means; I was an OpenBSD user for a few years, now I
use Slackware with kernel 3.5.


So you do have 3.5 on production? Wow, you're quite adventurous.


Yep, that's me :)



Tomas



Hello to everyone again,
sorry for my late reply, but I found the problem (I think).
I changed the default IO scheduler from noop to deadline and
my load average dropped down to 0.23.
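
For the archives, the scheduler can be checked and switched at runtime
through sysfs (a sketch, assuming the disk is sda):

cat /sys/block/sda/queue/scheduler               # the active scheduler is shown in brackets
echo deadline > /sys/block/sda/queue/scheduler   # switch it (as root; not persistent across reboots)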



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Result from Having count

2012-08-23 Thread Condor

Hello ppl,

I'm trying to write a query to see how many ids have more than one row.

A few sample records:

ids | val | some
 a  | 1   | x
 a  | 1   | v
 b  | 1   | x
 b  | 2   | c


I focus on ids and val with:

SELECT ids, val FROM table WHERE ids = 'a' GROUP BY ids, val HAVING 
COUNT(ids) > 1;


and result is:

ids | val
 a  |   1

Well, in this condition pgsql should not return a positive result, because
in the documentation I read that HAVING COUNT works on the GROUP clause,
and when I group these two records based on ids = 'a' they become one row,
and my condition is that the result after grouping is greater than 1.


I use postgresql 9.1.4 x64


Can anyone tell me what I'm missing?


Regards,
Hristo Simeonov



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Result from Having count

2012-08-23 Thread Condor

On , Frank Lanitz wrote:

Am 23.08.2012 09:52, schrieb Condor:

Hello ppl,

I'm trying to write a query to see how many ids have more than one row.

A few sample records:

ids | val | some
 a  | 1   | x
 a  | 1   | v
 b  | 1   | x
 b  | 2   | c


I focus on ids and val with:

SELECT ids, val FROM table WHERE ids = 'a' GROUP BY ids, val HAVING
COUNT(ids) > 1;

and result is:

ids | val
 a  |   1

Well, in this condition pgsql should not return a positive result, because
in the documentation I read that HAVING COUNT works on the GROUP clause,
and when I group these two records based on ids = 'a' they become one row,
and my condition is that the result after grouping is greater than 1.


I use postgresql 9.1.4 x64


Can anyone tell me what I'm missing?



Not sure I understand you correctly, but maybe count() is working for you.

Maybe you would need some primary key for good values.

cheers,
Frank



Sorry for the noise;
after some thinking I understood my error and changed the query to:

SELECT COUNT(DISTINCT val), ids FROM table WHERE ids = 'a' GROUP BY ids
HAVING COUNT(DISTINCT val) > 1;


and it works.
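
For the archives, a minimal self-contained sketch (hypothetical table name t)
showing both behaviours: the original query matched because the group
(ids = 'a', val = 1) contains two rows, so COUNT(ids) = 2 there, while
counting DISTINCT val answers the actual question.

CREATE TABLE t (ids text, val int, some text);
INSERT INTO t VALUES ('a',1,'x'), ('a',1,'v'), ('b',1,'x'), ('b',2,'c');

-- both ('a',1) rows fall into one group, so COUNT(ids) = 2 and HAVING passes:
SELECT ids, val FROM t GROUP BY ids, val HAVING COUNT(ids) > 1;
-- returns: a | 1

-- "which ids have more than one different val":
SELECT ids, COUNT(DISTINCT val) FROM t GROUP BY ids HAVING COUNT(DISTINCT val) > 1;
-- returns: b | 2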

Thanks.
Hristo C.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Strange problem with string and select

2012-08-30 Thread Condor

Hello,
may I ask whether there is some kind of automatic string escaping in PostgreSQL?
I use pgsql 9.1.5 and I have a very interesting problem: I have a field with
a text string that I can't find normally.

Encoding settings:

 server_encoding | WIN1251
 lc_collate  | bg_BG.CP1251
 lc_ctype| bg_BG.CP1251
 lc_messages | bg_BG.CP1251
 lc_monetary | bg_BG.CP1251
 lc_numeric  | bg_BG.CP1251
 lc_time | bg_BG.CP1251
 client_encoding | WIN1251

Here is an example (I have transliterated the Cyrillic in the example
because most people don't have the cp1251 encoding):


select * from postcodes where namejr LIKE 'LULIN V%';
id | namejr
21 | LULIN VIII
22 | LULIN VII
23 | LULIN VIII
24 | LULIN VI
25 | LULIN VII
26 | LULIN V

but if I do:
select * from postcodes where namejr LIKE 'LULIN VII%';
result is:
22 | LULIN VII
25 | LULIN VII


As you can see, ids 21 and 23, which should be LULIN VIII, are missing.
I dumped those records to a text file and did a hex compare;
both ids 21 and 23 are the same (equal).

Can anyone give me a little help?


Hristo C.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange problem with string and select

2012-08-30 Thread Condor

On , Alban Hertroys wrote:

On 30 August 2012 10:12, Condor  wrote:

Hello,
may I ask whether there is some kind of automatic string escaping in
PostgreSQL?
I use pgsql 9.1.5 and I have a very interesting problem: I have a field
with a text string that I can't find normally.


Here is an example (I have transliterated the Cyrillic in the example
because most people don't have the cp1251 encoding):

select * from postcodes where namejr LIKE 'LULIN V%';
id | namejr
21 | LULIN VIII
22 | LULIN VII
23 | LULIN VIII
24 | LULIN VI
25 | LULIN VII
26 | LULIN V

but if I do:
select * from postcodes where namejr LIKE 'LULIN VII%';
result is:
22 | LULIN VII
25 | LULIN VII


As you can see, ids 21 and 23, which should be LULIN VIII, are missing.
I dumped those records to a text file and did a hex compare;
both ids 21 and 23 are the same (equal).

Can anyone give me a little help?


Perhaps the records with ids 21 and 23 have a lower-case 'L' ('l')
instead of an upper-case 'i' ('I'), or something similar? Are the
hex codes for ids 21, 22, 23 and 25 the same for the substring reading
'VII'?



Ah, you are absolutely right; after an exhausting search I forgot to
compare the hex values against the other strings.

In LULIN VIII the last three hex bytes are 0xb2 0xb2 0xb2,
and in LULIN VII the last two hex bytes are 0x49 0x49.
(In CP1251, 0xb2 is the Cyrillic letter 'І', which looks identical to the
Latin 'I', 0x49.)

Thank you for the idea.
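
For anyone hitting the same trap, a sketch (using the table from this
thread) that dumps the bytes so lookalike characters become visible:

SELECT id, namejr, encode(convert_to(namejr, 'WIN1251'), 'hex') AS hex_bytes
FROM postcodes
WHERE namejr LIKE 'LULIN V%'
ORDER BY id;
-- a Latin 'I' shows up as 49, the Cyrillic lookalike as b2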


Hristo C.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question about permissions on database.

2012-09-22 Thread Condor

Hello,
I want to ask: is there a short way to give one user permission to
select/insert (all privileges) on a whole database?
I created a user and tried to give him all permissions on an existing
database, but when I try to select I always get:

ERROR:  permission denied for relation table_name

I simply do:
GRANT ALL PRIVILEGES ON DATABASE my_db TO my_user;
My idea, and what I'm trying to do, is to give one user all privileges
for select, insert, update ..., using sequences, and executing functions,
but when I try to select I receive the error message:
ERROR:  permission denied for relation table_name

I looked at the documentation and remained a little scared about how many
grants I would have to do for tables, sequences, and execution.

I'm using postgresql 9.2


C.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about permissions on database.

2012-09-22 Thread Condor

On 2012-09-22 23:47, Ryan Kelly wrote:

On Sat, Sep 22, 2012 at 11:35:00PM +0300, Condor wrote:

Hello,
I want to ask: is there a short way to give one user permission to
select/insert (all privileges) on a whole database?
I created a user and tried to give him all permissions on an existing
database, but when I try to select I always get:
ERROR:  permission denied for relation table_name

I simply do:
GRANT ALL PRIVILEGES ON DATABASE my_db TO my_user;
My idea, and what I'm trying to do, is to give one user all privileges
for select, insert, update ..., using sequences, and executing functions,
but when I try to select I receive the error message:
ERROR:  permission denied for relation table_name

I looked at the documentation and remained a little scared about how many
grants I would have to do for tables, sequences, and execution.
I'm using postgresql 9.2
You don't want to GRANT on the database. That doesn't do what you think
it does. You, however, can do:

GRANT ALL ON ALL TABLES IN SCHEMA public TO your_user;


Before I wrote the first email I tried this query:
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO my_user;
but the result was the same error:
ERROR:  permission denied for relation table_name



This is documented clearly here:
http://www.postgresql.org/docs/9.2/static/sql-grant.html

This is generally a bad idea.

You can alternatively make the user a super user:

ALTER ROLE your_user WITH SUPERUSER;

But this is an even worse idea.

If one role owns all the tables in that database, you can make your role
a member of that role:

GRANT owner_role TO your_role;

But are you really sure that your user needs permissions on everything?


Yes, well, not for everything, but for 90% of the tables. The person before
me used the postgres user in the apache front end for storing and fetching
data. The front end works with the whole database, inserting new data into
tables and fetching it. I want to replace this usage of the postgres
superuser. Last night when I read the documentation I understood it exactly
as David J explained in the second mail on the list. Why did I say "a little
scared"? Because the database has over 150 tables plus sequences ... almost
200 objects. Giving separate permissions on every single table to the user
seems like a bad idea to me, because in the future, whenever I change
something or add/remove tables, I would always have to grant permissions
again, and if I forget, the front end will display errors.
On the other hand, it is very good that I can say which user has permissions
on which tables and fields, but it seems I can't use that here.
The last thing that comes to my mind is to change the owner of all tables in
database my_db from the current user (postgres) to my new user (my_user).

Any advice on what I can do?
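
For the archives, the usual answer to "new tables keep needing grants" is
ALTER DEFAULT PRIVILEGES, available since 9.0. A sketch, assuming everything
lives in schema public and the application role is my_user:

GRANT USAGE ON SCHEMA public TO my_user;
GRANT ALL ON ALL TABLES IN SCHEMA public TO my_user;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO my_user;

-- for objects created in the future (run as the role that will create them):
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO my_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO my_user;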

Cheers,
C


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade from 8.4.13 to 9.2.0.1 successful but it still displays 8.4.13 if SELECT version();

2012-09-23 Thread Condor

On 2012-09-23 14:47, a...@hsk.hk wrote:

Hi,

I have upgraded postgresql 8.4.13 to 9.2.0.1 (O/S Ubuntu) and restarted
postgresql; it said my postgresql is 9.2, but when I log into postgresql
and show the version it is still 8.4.13, see a) and b) below,

a)
* Restarting PostgreSQL 9.2 database server
   ...done.


AFAIK, postgres itself does not print any message like:

 * Restarting PostgreSQL 9.2 database server
   ...done.

This message probably comes from your OS upgrade tool, not from postgres.


b)
psql (8.4.13)
Type "help" for help.
postgres=# SELECT version();
   version


--
 PostgreSQL 8.4.13 on i486-pc-linux-gnu, compiled by GCC gcc-4.4.real
(Ubuntu 4.4.3-4ubuntu5.1) 4.4.3, 32-bit
(1 row)




That is the version that is actually running.





Is there anything I have missed in my upgrade? Please help.



Check the logs of your installation tool; the problem could be one of
several things, like a bug in the installation tool itself, or a bug in its
restart step that left the old server running.
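
A quick way to see what is actually installed and running (a sketch;
pg_lsclusters is Ubuntu-specific, shipped with postgresql-common):

pg_lsclusters                             # lists clusters: version, port, status
psql --version                            # the client binary your PATH picks up
psql -p 5432 -c 'SHOW server_version;'    # the server actually listening on that port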


Cheers,
C.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to raise index points when equal and like is used with gist ?

2012-10-11 Thread Condor

Hello,
I have a problem with a query and an index scan based on the pg_trgm module.

Here is few examples:
First example is with equal:


 explain analyze SELECT * FROM table WHERE firstname = 'OLEG' AND 
middlename || lastname LIKE '%KUZNICOV%IGORU%';

 QUERY PLAN
-
 Bitmap Heap Scan on abonats_tbl  (cost=34.42..6043.65 rows=1 
width=601) (actual time=2.885..14.062 rows=1 loops=1)

   Recheck Cond: (firstname = 'OLEG'::text)
   Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text)
   Rows Removed by Filter: 1731
   ->  Bitmap Index Scan on table_firstname_idx  (cost=0.00..34.42 
rows=1690 width=0) (actual time=0.699..0.699 rows=1732 loops=1)

 Index Cond: (firstname = 'OLEG'::text)
 Total runtime: 14.126 ms
(7 rows)

But if I add one more LIKE that uses the gist index, the result is not
filtered that way; here is an example:



explain analyze SELECT * FROM table WHERE phone LIKE '12%' AND 
firstname = 'OLEG' AND middlename || lastname LIKE '%KUZNICOV%IGORU%';
 
QUERY PLAN

-
 Bitmap Heap Scan on abonats_tbl  (cost=1638.89..1816.65 rows=1 
width=601) (actual time=219.793..219.793 rows=0 loops=1)
   Recheck Cond: ((firstname = 'OLEG'::text) AND (phone ~~ 
'12%'::text))

   Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text)
   Rows Removed by Filter: 65
   ->  BitmapAnd  (cost=1638.89..1638.89 rows=45 width=0) (actual 
time=219.197..219.197 rows=0 loops=1)
 ->  Bitmap Index Scan on table_firstname_idx  
(cost=0.00..34.42 rows=1690 width=0) (actual time=0.867..0.867 rows=1732 
loops=1)

   Index Cond: (firstname = 'OLEG'::text)
 ->  Bitmap Index Scan on table_phonegist_idx  
(cost=0.00..1604.22 rows=33995 width=0) (actual time=217.639..217.639 
rows=33256 loops=1)

   Index Cond: (phone ~~ '12%'::text)
 Total runtime: 220.426 ms


My question is: is there any way to make PostgreSQL search first on the
field with the equality condition (I have an index there) and then filter
the result by the other conditions, the gist one last?

I think maybe I should play with the index costs.


I'm using postgresql 9.2.1 x86_64
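
For context, the table_phonegist_idx in the plan above was presumably built
with pg_trgm, something like this sketch:

CREATE EXTENSION pg_trgm;
CREATE INDEX table_phonegist_idx ON abonats_tbl USING gist (phone gist_trgm_ops);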


Regards,
C.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to raise index points when equal and like is used with gist ?

2012-10-12 Thread Condor

On 2012-10-12 01:14, Sergey Konoplev wrote:

On Thu, Oct 11, 2012 at 2:23 AM, Condor  wrote:
explain analyze SELECT * FROM table WHERE phone LIKE '12%' AND 
firstname =

'OLEG' AND middlename || lastname LIKE '%KUZNICOV%IGORU%';
 
QUERY

PLAN

-
 Bitmap Heap Scan on abonats_tbl  (cost=1638.89..1816.65 rows=1 
width=601)

(actual time=219.793..219.793 rows=0 loops=1)
   Recheck Cond: ((firstname = 'OLEG'::text) AND (phone ~~ 
'12%'::text))

   Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text)
   Rows Removed by Filter: 65
   ->  BitmapAnd  (cost=1638.89..1638.89 rows=45 width=0) (actual
time=219.197..219.197 rows=0 loops=1)
 ->  Bitmap Index Scan on table_firstname_idx  
(cost=0.00..34.42

rows=1690 width=0) (actual time=0.867..0.867 rows=1732 loops=1)
   Index Cond: (firstname = 'OLEG'::text)
 ->  Bitmap Index Scan on table_phonegist_idx  
(cost=0.00..1604.22
rows=33995 width=0) (actual time=217.639..217.639 rows=33256 
loops=1)

   Index Cond: (phone ~~ '12%'::text)
 Total runtime: 220.426 ms


My question is: is there any way to make PostgreSQL search first on the
field with the equality condition (I have an index there) and then filter
the result by the other conditions, the gist one last?
I think maybe I should play with the index costs.


What about dropping table_phonegist_idx index? Is it used somewhere 
else?



I'll try tonight; no, it isn't used anywhere else.


ps. BTW how do you cope with the pg_trgm ASCII alphanumeric
restriction? Transliteration?


The tel field holds alphanumeric values, and sometimes md5 hashes of a
phone number.

The server is set up and started with CP1251 encoding.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to raise index points when equal and like is used with gist ?

2012-10-12 Thread Condor

On 2012-10-12 03:27, Tom Lane wrote:

Condor  writes:

explain analyze SELECT * FROM table WHERE phone LIKE '12%' AND
firstname = 'OLEG' AND middlename || lastname LIKE 
'%KUZNICOV%IGORU%';



QUERY PLAN

-
  Bitmap Heap Scan on abonats_tbl  (cost=1638.89..1816.65 rows=1
width=601) (actual time=219.793..219.793 rows=0 loops=1)
Recheck Cond: ((firstname = 'OLEG'::text) AND (phone ~~
'12%'::text))
Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text)
Rows Removed by Filter: 65
->  BitmapAnd  (cost=1638.89..1638.89 rows=45 width=0) (actual
time=219.197..219.197 rows=0 loops=1)
  ->  Bitmap Index Scan on table_firstname_idx
(cost=0.00..34.42 rows=1690 width=0) (actual time=0.867..0.867 
rows=1732

loops=1)
Index Cond: (firstname = 'OLEG'::text)
  ->  Bitmap Index Scan on table_phonegist_idx
(cost=0.00..1604.22 rows=33995 width=0) (actual 
time=217.639..217.639

rows=33256 loops=1)
Index Cond: (phone ~~ '12%'::text)
  Total runtime: 220.426 ms


You sure that server is 9.2?  Because that looks like a planner bug 
we

squelched some time ago, wherein it was way too enthusiastic about
adding more indexes to a BitmapAnd.


Yes, Im sure:
 PostgreSQL 9.2.1 on x86_64-slackware-linux-gnu, compiled by 
x86_64-slackware-linux-gcc (GCC) 4.7.1, 64-bit




If it is 9.2, please send a self-contained test case, that is some 
test

data (and settings, if you're using nondefault ones) that makes it do
this.



Hm ... a strange problem I've caught. When I tried to reproduce the problem
with a test table, I made a very little table:


http://pastebin.com/nEK3cRr2

When I run the same type of query results is different:
 Seq Scan on users  (cost=0.00..1.12 rows=1 width=26) (actual 
time=0.014..0.016 rows=1 loops=1)
   Filter: ((tel ~~ '09%'::text) AND (firstname = 'GREG'::text) AND 
((middlename || lastname) ~~ '%%'::text))

   Rows Removed by Filter: 5
 Total runtime: 0.042 ms
(4 rows)


Okay, maybe the problem is because I use cp1251 encoding ... let's change
the data values: drop the table, insert cp1251 values, run vacuum. The
result was the same speed, Total runtime: 0.052 ms, and the same type of
scan was used:


 Seq Scan on users  (cost=0.00..1.14 rows=1 width=132) (actual 
time=0.019..0.021 rows=1 loops=1)
   Filter: ((tel ~~ '09%'::text) AND (firstname = 'CP1251 CHARS 
HERE'::text) AND ((middlename || lastname) ~~ '%%'::text))

   Rows Removed by Filter: 6
 Total runtime: 0.052 ms

Even without the tel field, the result and the type of scan are the same (Seq Scan).

The first name is written in Cyrillic and means "GREG" (I replaced it with
CP1251 CHARS HERE because some people don't have the Cyrillic encoding).
When I run the same query on the same database but on the other table, it
gives the strange Bitmap Heap Scan result. The indexed field is the same as
in the test table from the pastebin, no difference.



And here I must mention the history of the table: it was created on psql
version 7.3 and migrated across every major upgrade of the server that
required a dump/restore of the database, if that information is valuable.


Does anyone have any idea what is going wrong with that table? Why does the
same query on two different tables with the same data give different scan
results?


Regards,
C




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to raise index points when equal and like is used with gist ?

2012-10-14 Thread Condor

On 2012-10-12 11:30, Sergey Konoplev wrote:

On Fri, Oct 12, 2012 at 1:20 AM, Condor  wrote:
Even without the tel field, the result and the type of scan are the same
(Seq Scan).


This is because your table has too few rows and it is easier to seq
scan. Add more rows, e.g. 100 000, then ANALYZE the table and run the
tests. Use random() and generate_series() to generate the data.



You were right. When I read the pg_trgm documentation I saw how much time a
search over 100 000 rows would take, but I misled myself because I did not
expect the scan type to change from Seq to Bitmap. I understood my mistake
and changed the query to:

EXPLAIN ANALYZE WITH ab AS (SELECT * FROM tables WHERE firstname = 'OLEG'
AND middlename || lastname LIKE '%KUZNICOV%IGORU%')
SELECT * FROM ab WHERE tel LIKE '12%';

 CTE Scan on ab  (cost=6490.15..6531.14 rows=9 width=965) (actual 
time=2.256..20.017 rows=43 loops=1)

   Filter: (tel ~~ '12%'::text)
   Rows Removed by Filter: 1690
   CTE ab
 ->  Bitmap Heap Scan on tables (cost=39.87..6490.15 rows=1822 
width=600) (actual time=1.789..17.817 rows=1733 loops=1)

   Recheck Cond: (firstname = 'OLEG'::text)
   Filter: ((middlename || lastname) ~~ 
'%KUZNICOV%IGORU%'::text)
   ->  Bitmap Index Scan on tables_firstname_idx  
(cost=0.00..39.42 rows=1823 width=0) (actual time=1.178..1.178 rows=1733 
loops=1)

 Index Cond: (firstname = 'OLEG'::text)
 Total runtime: 20.278 ms



Now it is much better: 20 ms vs 220 ms.
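
For the archives: on 9.2 a CTE is an optimization fence (it is materialized
before the outer query runs), which is exactly why this rewrite works. The
generic pattern, with the column names from this thread and a hypothetical
table name:

WITH narrowed AS (
    SELECT * FROM some_tbl
    WHERE firstname = 'OLEG'                              -- selective, index-backed
      AND middlename || lastname LIKE '%KUZNICOV%IGORU%'
)
SELECT * FROM narrowed WHERE tel LIKE '12%';              -- cheap filter on the small result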


Thanks for your help.


Cheers,
C


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] return query execute SQL-problem

2012-10-16 Thread Condor

On 2012-10-16 10:44, Maximilian Tyrtania wrote:

Hi there,

here is something I don't quite grasp (PG 9.1.3): This function:

CREATE OR REPLACE FUNCTION f_aliastest()
  RETURNS setof text AS
$BODY$
declare sql text;
begin
  sql:='SELECT ''sometext''::text as alias';
  return query execute SQL;
end;
$BODY$
LANGUAGE plpgsql VOLATILE;

returns its result as:

contactking=# select * from f_aliastest();

 f_aliastest
-
 sometext
(1 row)

I was hoping I'd get the data back as 'alias', not as 'f_aliastest'. 
If I do:


contactking=# select alias from f_aliastest();
ERROR:  column "alias" does not exist
LINE 1: select alias from f_aliastest();

Is there a way that I can make my function return the field aliases?

Best wishes from Berlin,

Maximilian Tyrtania
http://www.contactking.de


You can use AS:

select f_aliastest() AS alias;
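
Another option, a sketch not tested against the original function, is to
declare the alias in the signature with RETURNS TABLE, so the column keeps
its name:

CREATE OR REPLACE FUNCTION f_aliastest()
  RETURNS TABLE (alias text) AS
$BODY$
begin
  return query execute 'SELECT ''sometext''::text';
end;
$BODY$
LANGUAGE plpgsql VOLATILE;

SELECT alias FROM f_aliastest();   -- the column really is called "alias" now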


Regards,
C


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] character encoding

2012-12-06 Thread Condor

On 2012-12-06 17:30, Adrian Klaver wrote:

On 12/06/2012 07:20 AM, Doug Kunzman wrote:
I'm trying to support an automatic character encoding to UNICODE so 
Java

strings with none ASCII character can be stored in a table.

I've edited my postgressql.conf with the following command,
PGCLIENTENCODING=UNICODE

And I'm getting this error message,

FATAL:  unrecognized configuration parameter "PGCLIENTENCODING"

Any ideas?  your help would be appreciated.



http://www.postgresql.org/docs/9.2/interactive/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-FORMAT

client_encoding (string)
Sets the client-side encoding (character set). The default is to
use the database encoding. The character sets supported by
the PostgreSQL server are described in Section 22.3.1.

I believe PGCLIENTENCODING is the env setting.



Thanks, doug



--
Adrian Klaver
adrian.kla...@gmail.com


PGCLIENTENCODING is an environment setting. You can set it before logging in
to the database, like:


PGCLIENTENCODING=win1251
export PGCLIENTENCODING

and then: psql -U x database
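
If it has to be set server-side instead, the postgresql.conf parameter is
client_encoding; PGCLIENTENCODING is only the environment variable name,
which is why the original line was rejected. A sketch:

client_encoding = 'UTF8'         # in postgresql.conf: default for all sessions

or per session:

SET client_encoding TO 'UTF8';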


Regards,
Hristo Simeonov




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Query and index ... unexpected result need advice.

2012-12-08 Thread Condor

Hello,
yesterday while reading email I came across an issue and got a very
interesting result. The topic was: "Table with million rows - and
PostgreSQL 9.1 is not using the index". I decided to try it, because going
from 250 ms with millions of rows down to 15 ms would be very good, but I
did not get the expected result; rather a worse one. I am interested to
know where my mistake is, or whether something is wrong with the server,
which I doubt. Here is my current query with explain:
(I changed the names to XXX YYY ZZZ because the original names are written
in CP1251 and most people on the list can't read them)


db=# explain analyze SELECT *,COALESCE(firstname,'') || ' ' || 
COALESCE(middlename,'') || ' ' || COALESCE(lastname, '') AS name FROM 
clients_tbl WHERE firstname = 'XXX' AND middlename || lastname LIKE 
'%YYY%ZZZ%';
   QUERY 
PLAN


 Bitmap Heap Scan on clients_tbl  (cost=230.90..32648.50 rows=1 
width=602) (actual time=12.649..35.919 rows=1 loops=1)

   Recheck Cond: (firstname = 'XXX'::text)
   Filter: ((middlename || lastname) ~~ '%YYY%ZZZ%'::text)
   Rows Removed by Filter: 11727
   ->  Bitmap Index Scan on clients_tbl_firstname_idx  
(cost=0.00..230.90 rows=11886 width=0) (actual time=5.415..5.415 
rows=11728 loops=1)

 Index Cond: (firstname = 'XXX'::text)
 Total runtime: 35.988 ms
(7 rows)

35 ms isn't bad, but it would be good if I could optimize it more.
firstname, middlename, lastname are declared as TEXT;

create index clients_tbl_firstname_idx on clients_tbl using btree 
(firstname);
create index clients_tbl_middlename_idx on clients_tbl using btree 
(middlename);
create index clients_tbl_lastname_idx on clients_tbl using btree 
(lastname);


I dropped these indexes and created new ones:

create index clients_tbl_firstname_idx on clients_tbl using btree 
(firstname COLLATE "bg_BG" text_pattern_ops);
create index clients_tbl_middlename_idx on clients_tbl using btree 
(middlename COLLATE "bg_BG" text_pattern_ops);
create index clients_tbl_lastname_idx on clients_tbl using btree 
(lastname COLLATE "bg_BG" text_pattern_ops);


My server is in CP1251 encoding:

 
                               List of databases
 Name |  Owner   | Encoding |   Collate    |    Ctype     | Access privileges |  Size  | Tablespace | Description
------+----------+----------+--------------+--------------+-------------------+--------+------------+-------------
  db  | postgres | WIN1251  | bg_BG.CP1251 | bg_BG.CP1251 | =Tc/postgres +    | 121 GB | pg_default |


I run the same query again:

db=# explain analyze SELECT *,COALESCE(firstname,'') || ' ' || 
COALESCE(middlename,'') || ' ' || COALESCE(lastname, '') AS name FROM 
clients_tbl WHERE firstname = 'XXX' AND middlename || lastname LIKE 
'%YYY%ZZZ%';

  QUERY PLAN
---
 Seq Scan on clients_tbl  (cost=0.00..105444.47 rows=1 width=602) 
(actual time=56.343..381.068 rows=1 loops=1)
   Filter: ((firstname = 'XXX'::text) AND ((middlename || lastname) ~~ 
'%YYY%ZZZ%'::text))

   Rows Removed by Filter: 1279568
 Total runtime: 381.137 ms
(4 rows)

381 ms ...

Does anyone have ideas?

Thanks,
H.S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query and index ... unexpected result need advice.

2012-12-09 Thread Condor

On 2012-12-10 00:31, Jeff Janes wrote:

On Sat, Dec 8, 2012 at 5:54 AM, Condor  wrote:


I am interested to know where is my mistake or something wrong
with server which I doubt. Here is my current query with explain:
(I change names to XXX YYY ZZZ because original names is written on 
CP1251

and most ppl in list can't read them)

db=# explain analyze SELECT *,COALESCE(firstname,'') || ' ' ||
COALESCE(middlename,'') || ' ' || COALESCE(lastname, '') AS name 
FROM

clients_tbl WHERE firstname = 'XXX' AND middlename || lastname LIKE
'%YYY%ZZZ%';


What is the meaning/purpose of the "middlename || lastname LIKE 
'%YYY%ZZZ%'" ?


At least in my culture, that doesn't seem like a sensible thing to 
do.

 Is it trying to compensate for some known dirtiness in the data that
has not yet been cleaned up?

In any event, in order to benefit from an index on that query, you
would need to create an index on the concatenated columns, not on the
individual columns.

create index on clients_tbl ((middlename||lastname) 
text_pattern_ops);


But that still won't work because your patterns starts with a wild
card, and that type of pattern cannot benefit from btree indexes.


...


The point is that the server should first satisfy the equality condition,
and then move on to the rest of the conditions. I can use it as in the
example above, or a query like the one below:

SELECT *,COALESCE(firstname,'') || ' ' || COALESCE(middlename,'') || ' ' ||
COALESCE(lastname, '') AS name FROM clients_tbl WHERE firstname = 'XXX' AND
middlename || lastname LIKE '%ZZZ%';

In this case I don't know whether ZZZ is actually in the middle name or in
the last name, because that is just the input. It can also be:

SELECT *,COALESCE(firstname,'') || ' ' || COALESCE(middlename,'') || ' ' ||
COALESCE(lastname, '') AS name FROM clients_tbl WHERE firstname = 'XXX' AND
middlename || lastname LIKE '%Y%ZZZ%';

i.e. only the first part of the middle name, Y instead of the full YYY.

And it works fine.




I dropped both indexes and create new one:

create index clients_tbl_firstname_idx on clients_tbl using btree 
(firstname

COLLATE "bg_BG" text_pattern_ops);


I don't understand why that is legal.  I would think that
text_pattern_ops implies something that contradicts COLLATE "bg_BG".
In any event, the inclusion of both of those seems to prevent the
index from being used for equality, while the inclusion of just one 
or

the other property does not.  (That is why the query got slower.)



I was thinking that adding COLLATE "bg_BG" text_pattern_ops would help the
indexer understand that the data is in a specific encoding and would speed
up the LIKE clause. When I make an index like:

create index on clients_tbl (middlename text_pattern_ops);
or
create index on clients_tbl (firstname text_pattern_ops);

the result is no different ... 35 ms, but I expected it to drop from 35 to
20 or 10 ms :)




Since firstname is used as equality in your example, there is no
reason to change this index to "text_pattern_ops" in order to support
your example.



Understood; but if I need to do a LIKE on firstname as well, what is the
solution? To make two indexes, one with "text_pattern_ops" and one without
it?
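
For the archives, a sketch of that two-index approach with the column names
from this thread; a plain btree serves equality, a text_pattern_ops btree
serves prefix LIKE, and neither helps a pattern starting with '%':

create index clients_tbl_firstname_eq_idx on clients_tbl (firstname);                    -- firstname = 'XXX'
create index clients_tbl_firstname_like_idx on clients_tbl (firstname text_pattern_ops); -- firstname LIKE 'XXX%'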


Cheers,

Jeff



Regards,
H.S.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to add conversion between LATIN1 and WIN1251 ?

2011-11-25 Thread Condor


Hello,
earlier PostgreSQL versions (< 9.1) did not show this error message:
FATAL conversion between LATIN1 and WIN1251 is not supported
and connected to the db anyway. I access the server over the network, and
because I work in latin1, every time I try to log in I have to export
LANG="ru_RU.CP1251", for example. With a few terminals on a few different
databases that is pretty unpleasant.

Is there any way I can avoid it?


--
Regards,
Condor

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to add conversion between LATIN1 and WIN1251 ?

2011-11-25 Thread Condor

On 26.11.2011 00:42, Adrian Klaver wrote:

On Friday, November 25, 2011 1:19:29 am Condor wrote:

Hello,
early postgresql versions (< 9.1) did not show this error message:
FATAL conversion between LATIN1 and WIN1251 is not supported
and connect to db. I access server over the network and every time 
when

I try to login because I work on latin1 I should export
LANG="ru_RU.CP1251"
for example. With few terminals on few different databases that is
pretty disgustingly.
Any way how I can avoid it ?


Did the 9.1 database get created with a different character set than
on previous versions?

See here for automatic conversions:

http://www.postgresql.org/docs/9.1/interactive/multibyte.html#AEN32070

Going back to 8.3 at least I do not see that it has changed.

Going to release notes:
http://www.postgresql.org/docs/9.1/interactive/release-9-1.html

I see:
"
Have psql set the client encoding from the operating system locale by
default
(Heikki Linnakangas)

This only happens if the PGCLIENTENCODING environment variable is not 
set.

"
This led me to:
http://www.postgresql.org/docs/9.1/interactive/runtime-config-
client.html#RUNTIME-CONFIG-CLIENT-FORMAT
"
client_encoding (string)

Sets the client-side encoding (character set). The default is to 
use the
database encoding. The character sets supported by the PostgreSQL 
server are

described in Section 22.3.1.
"


--
Adrian Klaver
adrian.kla...@gmail.com


No, the charset of the databases is the same. I used the same ENV when I
upgraded the sql servers and recreated the psql database directory.

About client encoding: I have never before had a configured postgresql on
the workstation from which I connect to the servers. Even the postgres user
and config file did not exist, and this worked fine in psql versions below
9.1.


--
Regards,
Condor

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to add conversion between LATIN1 and WIN1251 ?

2011-11-28 Thread Condor

On 26.11.2011 22:33, Adrian Klaver wrote:

On Friday, November 25, 2011 11:28:06 pm Condor wrote:



No, charset of databases is the same. I use the same ENV when I 
upgrade

sql servers
and recreate psql database directory.

About client encoding, I never ever has before a configured 
postgresql

on my work station
where I connect to servers. Even postgres user and config file did 
not

exists and this
worked fine in psql versions below 9.1


That is why I included a link to the Release Notes. There has been a
change in behavior in 9.1.
 I am assuming that you are using psql to connect.

If you want the details here is the commit:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=02e14562a806a96f38120c96421d39dfa7394192

--
Adrian Klaver
adrian.kla...@gmail.com



Sorry, my bad. I read it now.

--
Regards,
Condor

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?

2012-04-09 Thread Condor

On 09.04.2012 13:33, 乔志强 wrote:

I use postgresql-9.1.3-1-windows-x64.exe on windows 2008 R2 x64.

1 master and 1 standby. The standby is a synchronous standby using
streaming replication (synchronous_standby_names = '*', archive_mode =
off); the master outputs:
   standby "walreceiver" is now the synchronous standby with priority 1

the standby outputs:
   LOG:  streaming replication successfully connected to primary

Then I run the test program, writing and committing large blobs (10 to 1000
MB random size) to the master server using 40 threads (40 sessions) in a
loop.
The master and standby run on the same machine, and the client runs on
another machine over a 100 mbps network.


But after some minutes the master outputs:
   requested WAL segment XXX has already been removed
and the standby outputs:
   FATAL:  could not receive data from WAL stream: FATAL:
requested WAL segment XXX
has already been removed


Question:
Why does the master delete the WAL segment before sending it to the standby
in synchronous mode? Is this a streaming replication bug?


I see that if no standby is connected to the master when
synchronous_standby_names = '*', all commits are delayed until a standby
connects. That is good.

Use a bigger wal_keep_segments? But I think the master should keep all WAL
segments not yet sent to an online standby (sync or async);
wal_keep_segments should only matter for offline standbys.

If synchronous_standby_names is used for a sync standby and there is no
online standby, all commits are delayed until a standby connects, so
wal_keep_segments is actually only for offline async standbys.





master server output:
LOG:  database system was interrupted; last known up at 2012-03-30
15:37:03 HKT
LOG:  database system was not properly shut down; automatic recovery
in progress

LOG:  redo starts at 0/136077B0
LOG:  record with zero length at 0/17DF1E10
LOG:  redo done at 0/17DF1D98
LOG:  last completed transaction was at log time 2012-03-30 
15:37:03.148+08

FATAL:  the database system is starting up
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
   / the standby is a synchronous standby
 LOG:  standby "walreceiver" is now the synchronous standby with
priority 1
   /
LOG:  checkpoints are occurring too frequently (16 seconds apart)
HINT:  Consider increasing the configuration parameter 
"checkpoint_segments".

LOG:  checkpoints are occurring too frequently (23 seconds apart)
HINT:  Consider increasing the configuration parameter 
"checkpoint_segments".

LOG:  checkpoints are occurring too frequently (24 seconds apart)
HINT:  Consider increasing the configuration parameter 
"checkpoint_segments".

LOG:  checkpoints are occurring too frequently (20 seconds apart)
HINT:  Consider increasing the configuration parameter 
"checkpoint_segments".

LOG:  checkpoints are occurring too frequently (22 seconds apart)
HINT:  Consider increasing the configuration parameter 
"checkpoint_segments".

FATAL:  requested WAL segment 00010032 has already
been removed
FATAL:  requested WAL segment 00010032 has already
been removed
FATAL:  requested WAL segment 00010032 has already
been removed
LOG:  checkpoints are occurring too frequently (8 seconds apart)
HINT:  Consider increasing the configuration parameter 
"checkpoint_segments".

FATAL:  requested WAL segment 00010032 has already
been removed




standby server output:
LOG:  database system was interrupted while in recovery at log time
2012-03-30 1
4:44:31 HKT
HINT:  If this has occurred more than once some data might be
corrupted and you
might need to choose an earlier recovery target.
LOG:  entering standby mode
LOG:  redo starts at 0/16E4760
LOG:  consistent recovery state reached at 0/12D984D8
LOG:  database system is ready to accept read only connections
LOG:  record with zero length at 0/17DF1E68
LOG:  invalid magic number  in log file 0, segment 50, offset 
6946816

LOG:  streaming replication successfully connected to primary
FATAL:  could not receive data from WAL stream: FATAL:  requested WAL
segment 00
010032 has already been removed



Well,
that is not a bug; just activate archive_mode = on on the master server and
also set, for example, wal_keep_segments = 1000 to avoid that situation. I
had the same situation, and after digging through search engines those were
the recommended settings. I forget the real reason why, maybe
sending/receiving data between master and slave was too slow, but this
fixed the problem.
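
A sketch of those settings on the master (the archive path is hypothetical):

wal_keep_segments = 1000                        # keep ~16 GB of WAL (1000 x 16 MB segments)
archive_mode = on
archive_command = 'cp %p /path/to/archive/%f'   # needs a real, standby-reachable location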



Regards,
Condor

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem with reading data from standby server ?

2012-04-20 Thread Condor

Hello,

when I read the binary replication tutorial
(http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial) I see under
Hot Standby: "Hot Standby is identical to Warm Standby, except that the
Standby is available to run read-only queries."
I set up a hot standby server as described in the tutorial and it's working
fine, no problem with that. I do have a problem when I start a script that
reads a whole table; the error message from php is:


PHP Warning:  pg_query(): Query failed: ERROR:  canceling statement due 
to conflict with recovery
DETAIL:  User query might have needed to see row versions that must be 
removed. in dump.php on line 68
PHP Warning:  pg_fetch_array() expects parameter 1 to be resource, 
boolean given in dump.php on line 69
PHP Warning:  pg_query(): Query failed: ERROR:  canceling statement due 
to conflict with recovery
DETAIL:  User query might have needed to see row versions that must be 
removed. in dump.php on line 235
PHP Warning:  pg_fetch_array() expects parameter 1 to be resource, 
boolean given in dump.php on line 236



Script actually start sql query:

SELECT abs.id, array_accumulate(abs.status) AS status, 
array_accumulate(abs.service) AS service, stb.model FROM statuses abs, 
stb_tbl stb WHERE abs.id = stb.ser
 AND abs.service != 8 AND abs.id LIKE '432%' GROUP BY abs.id, stb.model 
ORDER BY abs.id



array_accumulate aggregate is:


CREATE AGGREGATE array_accumulate (
sfunc = array_append,
basetype = anyelement,
stype = anyarray,
initcond = '{}'
);


When the data is fetched it's saved into a file after some modifications.
This script works for 30-40 min until all the data is parsed. Well, I think
the problem starts when the master server sends a new wal file to the
slave, but how can I resolve that problem?




Any solutions, or can someone point me to how I can resolve this problem?



Regards,
Condor

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with reading data from standby server ?

2012-04-20 Thread Condor

On 20.04.2012 22:01, Merlin Moncure wrote:

On Fri, Apr 20, 2012 at 3:39 AM, Condor  wrote:

Hello,

when I read binary replication tutorial
(http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial) I see 
on Hot
Standby: Hot Standby is identical to Warm Standby, except that the 
Standby

is available to run read-only queries.
I setup hot standby server described in tutorial and it's working 
fine, no
problem with that. I have a problem when I try to start a script 
that should

read whole table, error message from php is:

PHP Warning:  pg_query(): Query failed: ERROR:  canceling statement 
due to

conflict with recovery
DETAIL:  User query might have needed to see row versions that must 
be

removed. in dump.php on line 68
PHP Warning:  pg_fetch_array() expects parameter 1 to be resource, 
boolean

given in dump.php on line 69
PHP Warning:  pg_query(): Query failed: ERROR:  canceling statement 
due to

conflict with recovery
DETAIL:  User query might have needed to see row versions that must 
be

removed. in dump.php on line 235
PHP Warning:  pg_fetch_array() expects parameter 1 to be resource, 
boolean

given in dump.php on line 236


Script actually start sql query:

SELECT abs.id, array_accumulate(abs.status) AS status,
array_accumulate(abs.service) AS service, stb.model FROM statuses 
abs,

stb_tbl stb WHERE abs.id = stb.ser
 AND abs.service != 8 AND abs.id LIKE '432%' GROUP BY abs.id, 
stb.model

ORDER BY abs.id


array_accumulate aggregate is:


CREATE AGGREGATE array_accumulate (
   sfunc = array_append,
   basetype = anyelement,
   stype = anyarray,
   initcond = '{}'
);


When data is fetched it's saved into a file after some 
modifications. This
script is work a 30-40 min until all data is parsed. Well, I think 
problem
is started when master server send new wal file to slave, but how I 
can

resolve that problem ?



Any solutions or some one to can point me how I can resolve this 
problem ?


The big trade-off with HS/SR is that you have to choose between the
standby being up to date and being able to service long running
queries.  The timeouts (expressed via max_standby_archive_delay and
max_standby_streaming_delay) are the main knob to control which way
you want the replica to behave.  Basically, if your read only data
touched a page that is holding back replication for longer than
$timeout, the query gets bounced.

If your replica is mainly going to serve big reporting queries and/or
dumps, you'll need to significantly relax the timeout or disable it
completely -- just understand that this can cause your replica to be
significantly behind the master.

merlin



Mhm, it seems my logic was wrong. I thought that when I have a replica, the
replica server has all the wal files sent from the master and also a
permanent connection to the master. The slave could check whether the
master is down (something like select ping; reply pong;) and, if no
response is given, the slave server would terminate all queries and apply
the wal files.
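
For the archives, the knobs Merlin describes, as a sketch for the standby's
postgresql.conf; -1 trades replication lag for never cancelling standby
queries:

max_standby_streaming_delay = -1   # never cancel queries because of streamed WAL
max_standby_archive_delay = -1     # same, for WAL restored from the archive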



condor


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with reading data from standby server ?

2012-04-21 Thread Condor

On 21.04.2012 12:59, Andres Freund wrote:

On Friday, April 20, 2012 10:39:25 AM Condor wrote:

array_accumulate aggregate is:


CREATE AGGREGATE array_accumulate (
 sfunc = array_append,
 basetype = anyelement,
 stype = anyarray,
 initcond = '{}'
);

Btw, if you replace that with array_agg, which is built in on any version
supporting HS/SR, the whole query might finish faster and thus be less
likely to cause conflicts.


No, the speed is the same, I just checked with explain; but yes, I will
switch to array_agg.




If you're already on 9.1 you might also want to look into
hot_standby_feedback, which can also reduce the likelihood of conflicts by
informing the master about what is going on on the standby.



This hot_standby_feedback option looks interesting, but when I read about
it more questions come up, like:

"This can cause database bloat on the primary for some workloads." Well, if
I run one query every hour and it takes 30 min to finish the job, by how
much will my database on the primary grow, and will the bloat go away when
I run vacuumdb on the master? I watched one thread here about a guy with a
540 GB database with huge bloat, and I'm a little scared that I will end up
with the same result. Also, if something happens on the master server and
it goes down, will I have all the wal files in the archive directory on the
standby server, and will postgres apply them automatically after the query
finishes, or do I need to do it manually?



Greetings,

Andres


Greetings,
Condor

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Vacuum does not show in pg_stat_all_tables

2012-05-15 Thread Condor

Hello everyone,

today when I ran select relname, last_autovacuum, last_vacuum from
pg_stat_all_tables I saw that the last_vacuum and last_autovacuum fields
are empty. This seems strange to me, because every night at 01:10 am
crontab starts a vacuum script that does:

reindex, vacuum full and vacuum analyze. I run vacuumdb, not vacuum, from
the command line. I use postgresql 9.1.3.

My question is: is this a bug, or does my vacuum maybe never complete?

Here is my vacuum script if someone wants a look at it.

#!/bin/sh
PSQL="/usr/bin/psql -U postgres my_db -h 10.0.0.1"

# plain vacuum first
su - postgres -c "/usr/bin/vacuumdb --dbname=my_db --host=10.0.0.1 >> /var/lib/postgres/vacuum.log 2>&1";

# reindex every user table
tables=`$PSQL -t -c "SELECT DISTINCT c.oid::pg_catalog.regclass FROM pg_catalog.pg_index x JOIN pg_catalog.pg_class c ON c.oid = x.indrelid JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid WHERE nspname NOT LIKE 'pg_%'"`

for table in $tables; do
    # psql -c already runs each command in its own autocommitted session;
    # the old "SET autocommit TO 'on';" prefix is obsolete and errors on
    # modern servers, so it is dropped here
    $PSQL -c "REINDEX TABLE $table"
    [ "$?" -ne 0 ] && exit 1
done

# vacuum full, then analyze
/usr/bin/vacuumdb --full --dbname=my_db --host=10.0.0.1 >> /var/lib/postgres/vacuum.log 2>&1
su - postgres -c "/usr/bin/vacuumdb --analyze --dbname=my_db --host=10.0.0.1 >> /var/lib/postgres/vacuum.log 2>&1"



Can anyone tell me why this is happening?


Regards,
Condor





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum does not show in pg_stat_all_tables

2012-05-15 Thread Condor

On 15.05.2012 14:07, Tom Lane wrote:

Condor  writes:

today when I ran select relname, last_autovacuum, last_vacuum from
pg_stat_all_tables I saw that the last_vacuum and last_autovacuum fields
are empty. This seems strange to me, because every night at 01:10 am
crontab starts a vacuum script that does:
reindex, vacuum full and vacuum analyze. I run vacuumdb, not vacuum, from
the command line. I use postgresql 9.1.3.


I think last_vacuum tracks regular vacuums, not vacuum full.

This maintenance procedure seems like something that would have been
appropriate back with postgres 7.something, anyway.  Do you have any
evidence that you need it at all?  autovacuum works reasonably well
for most people, and in any case it seems unlikely that you need a
daily vacuum full or reindex.

regards, tom lane




I use vacuum full because I have huge tables that are emptied (truncated)
every night, and I want my space back. I don't use autovacuum because in
the past I sometimes lost data as it was being inserted; once every month
or two some record would just go missing.


Cheers,
 C.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general