[GENERAL] Multiple instances with same version?

2011-04-20 Thread durumdara

Dear Everybody!

I would like to ask: is there some way to install PostgreSQL 9.0 as two 
instances on one machine?


The most important question first: the OS can be either Windows or Linux.

I ask this because we previously heard a story, which I cite from 
memory:


   We have a product, and the customer's only server (Windows) already
   has a PG 9.0 version installed.
   But the developer company has vanished; only the software remains,
   and it must keep working...

   We don't know the admin password, and we don't want to hack it (the
   system must keep working).
   But we needed to install our own version of PG, which is also 9.0
   (because of the new functions)...
   ...

We want to prepare for situations like this by learning more about PG.

With Firebird and MS SQL this case is not a problem, because on the 
same machine we can install additional instances of the same version.


But I don't know whether PG supports multiple instances of the same 
version or not.


Another interesting question is users.

In our systems we create a database user for every real user. If there 
are 100 of them, we have the same number of DB users.


But if we want to serve more than one database in one place, we may get 
conflicts between users. For example: all of the databases have a user JohnM.


If we can run multiple instances, the problem vanishes, because each 
instance has its own user list.


If we cannot, then the only idea we have is to prefix usernames with a 
short database name.

For example:
offer_db users: off_JohnM, off_MaryK
press_db users: prs_JohnM, prs_TomR

Please help a little so that I can learn the limitations of PG.

Thanks for it!

Regards:
dd


Re: [GENERAL] SSDs with Postgresql?

2011-04-20 Thread Toby Corkindale

On 14/04/11 23:25, Vick Khera wrote:

On Thu, Apr 14, 2011 at 12:19 AM, Benjamin Smith
li...@benjamindsmith.com wrote:

I was wondering if anybody here could comment on the benefits of SSD
in similar, high-demand rich schema situations?


For the last several months, I've been using Texas Memory Systems RamSAN
620 drives on my main DB servers.  Having near zero seek times has been
a tremendous boon to our performance, and will have pretty much paid for
themselves within the next couple of months.  I.e., the "throw hardware
at it" solution worked really well :)



hey, I wonder - could you, or someone else with some SSD drives running 
their DBs in production - check the SMART attributes for their drives?


In particular, the Media_Wearout_Indicator - this starts at 100 and goes 
down towards 1 as the erase cycles add up.


So you can calculate the total estimated lifetime by looking at how much 
has been used up over how long you've been using the drive in production.



I have a very cheap 64GB consumer SSD used in a personal server (so not 
in serious production use, but it does see some traffic), and I note 
that after a year it's still on 100%!
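
For anyone who wants to check: a sketch of how to read that attribute with 
smartmontools (the device name /dev/sda is an assumption, and vendors other 
than Intel label the wear-out counter differently):

```sh
# Dump all SMART attributes for the drive (needs root)
smartctl -A /dev/sda

# Or just the wear-out line; on many Intel SSDs it is attribute 233,
# Media_Wearout_Indicator
smartctl -A /dev/sda | grep -i wearout
```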


Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple instances with same version?

2011-04-20 Thread Magnus Hagander
2011/4/20 durumdara durumd...@gmail.com:
 Dear Everybody!

 I want to ask that have some way to install PGSQL 9.0 as two instances in
 one machine?

 Most important question. The OS is can be Windows or Linux.

 I asked this, because formerly we heard about a story. I cite this as I
 remember:

 We have some product, and in  the only one server of the customer (Win) have
 a PG9.0 version installed.
 But the developer company lost in the space, only the software (must)
 working...

 We don't know the root password, and we don't want to hack it (the system
 must work).
 But we needed to install the our version of the PG what is also 9.0 (because
 of the new functions)...
 ...

 We want to prepare to same situations with learn about PG.

 With Firebird and MS-SQL this case is not problem, because on same machine
 we can install another instances with same version.

 But I don't know that is PG supports multiple instances with same version or
 not?

It is fully supported. They need to run on different ports (even if
they are on different IPs the port needs to be different as well), and
in different data directories, but other than that, it's fully
supported.

The graphical installer on Windows will, IIRC, only set up the first
instance, but you can use the commandline tools (initdb and pg_ctl
register) to set up the second instance yourself.
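
A minimal sketch of that on Windows (the data directory, service name, and 
port are illustrative assumptions, not from the thread):

```sh
# Create a second cluster in its own data directory
initdb -D "C:\pgdata\instance2" -U postgres -W

# In C:\pgdata\instance2\postgresql.conf pick a free port, e.g.:
#   port = 5433

# Register the second cluster as its own Windows service and start it
pg_ctl register -N postgresql-9.0-instance2 -D "C:\pgdata\instance2"
net start postgresql-9.0-instance2
```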


 Also interesting question are the users.

 In our systems we create user for every real user. If they are 100, we have
 same number of db users.

 But if we want to server more database in one place, we may do conflict on
 users. For example: all of the databases have user JohnM.

 If we can do multiple instances, the problem is vanishing, because all have
 it's own user list.

 If we cannot, then only idea if have if we make prefix on usernames based on
 short dbname.
 For example:
 offer_db users: off_JohnM, off_MaryK
 press_db users: prs_JohnM, prs_TomR

You might want to look at the config parameter db_user_namespace - but
I wouldn't recommend using it; I've seen too many cases where it has
confused systems.
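
For reference, a sketch of what db_user_namespace does (names here are 
illustrative): with the setting on, per-database users are stored as 
user@dbname, and the server appends @dbname to the user name a client 
supplies when connecting.

```sql
-- postgresql.conf:  db_user_namespace = on
-- Create a user that exists only for offer_db:
CREATE USER "JohnM@offer_db" WITH PASSWORD 'secret';
-- A client connecting to offer_db as plain "JohnM" is then looked up
-- as JohnM@offer_db, so the same short name can exist per database.
```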

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] Using column aliasses in the same query

2011-04-20 Thread Tore Halvorsen
On Wed, Apr 20, 2011 at 12:13 AM, Andrej andrej.gro...@gmail.com wrote:
 That's an accurate observation, but has nothing to do w/ what
 the original poster was looking for, nor does it refute Tom's
 argument against the OP's suggestion.

You're right, I jumped in without thinking enough. Sorry.

I had just written some queries where a shortcut like the above
would have made it slightly easier on the eyes, and I misinterpreted
the discussion.

-- 
Eld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]
demo 2011 Tore Halvorsen || +052 0553034554



Re: [GENERAL] Help - corruption issue?

2011-04-20 Thread Phoenix Kiula
 On a fast network it should only take a few minutes.  Now rsyncing
 live 2.4 TB databases, that takes time. :)  Your raptors, if they're
 working properly, should be able to transfer at around 80 to
 100Megabytes a second.  10 to 15 seconds a gig.  30 minutes or so via
 gig ethernet.  I'd run iostat and see how well my drive array was
 performing during a large, largely sequential copy.


OK. An update.

We have changed all the hardware except disks.

REINDEX still gave this problem:

--
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
--


So I rebooted and logged back in a single user mode. All services
stopped. All networking stopped. Only postgresql started. I tried the
REINDEX again.

Same problem :(

This means the problem is likely with the data?

I do have a pg_dumpall dump from 1 day before. Will lose some data,
but should have most of it.

Is it worth it for me to try and restore from there? What's the best
thing to do right now?



Re: [GENERAL] Auto Adjust Age

2011-04-20 Thread Jasen Betts
On 2011-04-06, Carlos Mennens carlos.menn...@gmail.com wrote:
 I've only been using PostgreSQL since Oct 2010 and it's my first
 experience with SQL or any ORDBMS. I've searched on the web and been
 creating my own database users, databases, tables from scratch which
 has been interesting to say the least but now I would like to know if
 this is possible in SQL or PostgreSQL since I can't find anything
 online that shows me how to do so. I've created a table called 'users'
 and I have it configured as follows:

 CREATE TABLE users
 (
id integer PRIMARY KEY UNIQUE NOT NULL, --ID
fname character varying(40) NOT NULL, --First name
lname character varying(40) NOT NULL, --Last name
email character varying NOT NULL, --email address
office integer NOT NULL, --Office number
dob date NOT NULL, --Date of birth
age integer NOT NULL --Age
 )
 ;

 Is there a way in SQL I can have the users 'age' be auto adjusted
 based on the 'id' & 'dob'? 

I'm not sure how id can be used here.

Running this each morning will keep it current, for the normal
definition of human age:

  update users set age = extract('year' from age(dob))
   where age is distinct from extract('year' from age(dob));

A cleaner way is to create a view and present a computed age
column in the view.
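
A minimal sketch of that view, reusing the column names from the users 
table above (the view name is made up):

```sql
CREATE VIEW users_with_age AS
SELECT id, fname, lname, email, office, dob,
       extract(year FROM age(dob))::int AS age
  FROM users;
```

With this, age is computed at query time and can never go stale, and the 
stored age column can eventually be dropped.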

-- 
⚂⚃ 100% natural




Re: [GENERAL] Auto Adjust Age

2011-04-20 Thread Jasen Betts
On 2011-04-06, Jerry Sievers gsiever...@comcast.net wrote:
 Carlos Mennens carlos.menn...@gmail.com writes:

 CREATE TABLE users
 (
id integer PRIMARY KEY UNIQUE NOT NULL, --ID
fname character varying(40) NOT NULL, --First name
lname character varying(40) NOT NULL, --Last name
email character varying NOT NULL, --email address
office integer NOT NULL, --Office number
dob date NOT NULL, --Date of birth
age integer NOT NULL --Age
 )
 ;

 Is there a way in SQL I can have the users 'age' be auto adjusted
 based on the 'id'  'dob'? I would assume this is possible because if
 you have 100 employees, I doubt someone has time to sit and change
 everyone's age from 31  32 on their birthday. Can someone please help
 explain how this works or what the SQL code would look like assuming
 that it's possible? I have not advanced far enough to see what triggers
 and views are so perhaps it's just my level with SQL in general.

 I'd suggest not storing age but instead wrapping with a view that calls
 date_trunc('year', now() - dob).

unfortunately that doesn't work.

now() - dob gives you a number of days, and there's no reliable way to 
convert a number of days into a number of years. 

Someone born 365 days ago today is 1 year old,
but in a year's time someone then 365 days old would not be, because of
the leap year.

-- 
⚂⚃ 100% natural




Re: [GENERAL] Auto Adjust Age

2011-04-20 Thread Bill Moran
In response to Jasen Betts ja...@xnet.co.nz:

 On 2011-04-06, Jerry Sievers gsiever...@comcast.net wrote:
  Carlos Mennens carlos.menn...@gmail.com writes:
 
  CREATE TABLE users
  (
 id integer PRIMARY KEY UNIQUE NOT NULL, --ID
 fname character varying(40) NOT NULL, --First name
 lname character varying(40) NOT NULL, --Last name
 email character varying NOT NULL, --email address
 office integer NOT NULL, --Office number
 dob date NOT NULL, --Date of birth
 age integer NOT NULL --Age
  )
  ;
 
  Is there a way in SQL I can have the users 'age' be auto adjusted
  based on the 'id'  'dob'? I would assume this is possible because if
  you have 100 employees, I doubt someone has time to sit and change
  everyone's age from 31  32 on their birthday. Can someone please help
  explain how this works or what the SQL code would look like assuming
  that it's possible? I have no advanced far enough to see what triggers
  and views are so perhaps it's just my level with SQL in general.
 
  I'd suggest not storing age but instead wrapping with a view that calls
  date_trunc('year', now() - dob).
 
 unfortunately that doesn't work.
 
 now() - dob gives you a number of days, and there's no reliable way to 
 convert a number of days into a number of years. 
 
 someone born 365 days ago today is 1 year old.
 but in a years time someone then 365 days old would not be because of
 the leap year.

Use age() instead.  I assume that's what it's designed for (note the
later timestamp goes first, or the result is negative):

SELECT age(now(), '4/29/1972');

Aside from that minor tweak, _DEFINITELY_ get rid of the age column and
make the view.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



[GENERAL] How to uninstall PostgreSQL 8.4 on both Windows XP and Windows 7

2011-04-20 Thread Mlondolozi Ncapayi
Hi there

I installed PostgreSQL 8.4 and now I want to delete/uninstall it
completely to start a fresh installation.
Can you please give me clear instructions on how to do that, or maybe a
script that I can run?
I am doing an assignment using postgres; your help would be highly appreciated.

Mlo


Re: [GENERAL] How to uninstall PostgreSQL 8.4 on both Windows XP and Windows 7

2011-04-20 Thread raghu ram
On Wed, Apr 20, 2011 at 12:32 PM, Mlondolozi Ncapayi mlon...@gmail.com wrote:

 Hi there

 I installed PostgreSql 8.4 and now I  want to delete/ uninstall it
 completely to start a new fresh installation.
 Can you please give me clear instructions on how to do that or maybe a
 script that I can run.
 I am doing assignment using postgres your help would be highly appreciated.



I) Go to the PostgreSQL server installation directory; there you will
find an uninstall-postgresql file. Double-click it to run the
uninstaller, which will remove PostgreSQL.


II) You can also uninstall it manually. Below are the steps to
uninstall PostgreSQL 8.4 by hand:



i) Remove the postgresql server installation directory. (rd /s /q
C:\Program Files\PostgreSQL\8.4) Assuming default location.

ii) Delete the user 'postgres' (net user postgres /delete)

iii) Remove the Registry entries. (HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL
\Installations\postgresql-8.4) and (HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL
\Services\postgresql-8.4)

iv) Remove the postgresql-8.4 service. (sc delete postgresql-8.4)



This will completely remove your PostgreSQL 8.4 installation including user
and installation directories.

--Raghu Ram




 Mlo





[GENERAL] Join query including two generate_series calls causes big memory growth and crash

2011-04-20 Thread Jorge Arévalo
Hello,

I'm executing this query:

SELECT x, y, another_field FROM generate_series(1, 1000) x,
generate_series(1, 1000) y, my_table

The field 'another_field' belongs to 'my_table'. And that table has
36 entries. On a 64-bit machine with 4GB RAM, Ubuntu 10.10 and
postgres 8.4.7, the query works fine. But on a 32-bit machine with
1GB RAM, Ubuntu 9.10 and postgres 8.4.7, the query process is killed
after taking about 80% of available memory. On the 64-bit machine the
query takes about 60-70% of the available memory too, but it finishes.
And this happens even if I simply get x and y:

SELECT x, y FROM generate_series(1, 1000) x, generate_series(1, 1000) y, my_table

Is it normal? I mean, postgres has to deal with millions of rows, ok,
but shouldn't it start swapping memory instead of crashing? Is it a
question of postgres configuration?

Thanks in advance,

-- 
Jorge Arévalo
Internet & Mobility Division, DEIMOS
jorge.arev...@deimos-space.com
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g



Re: [GENERAL] Join query including two generate_series calls causes big memory growth and crash

2011-04-20 Thread Tom Lane
Jorge Arévalo jorge.arev...@deimos-space.com writes:
 I'm executing this query:

 SELECT x, y, another_field FROM generate_series(1, 10) x,
 generate_series(1, 10) y, my_table

 The field 'another_field' belongs to 'my_table'. And that table has
 36 entries. In a 64 bits machine, with 4GB RAM, Ubuntu 10.10 and
 postgres 8.4.7, the query works fine. But in a 32 bits machine, with
 1GB RAM, Ubuntu 9.10 and postgres 8.4.7, the query process is killed
 after taking about 80% of available memory. In the 64 bits machine the
 query takes about 60-70% of the available memory too, but it ends.

You mean the backend, or psql?  I don't see any particular backend bloat
when I do that, but psql eats memory because it's trying to absorb and
display the whole query result.  

 Is it normal? I mean, postgres has to deal with millions of rows, ok,
 but shouldn't it start swapping memory instead of crashing? Is a
 question of postgres configuration?

Try \set FETCH_COUNT 1000 or so.
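
In psql that is simply (1000 is just the suggested batch size):

```
=> \set FETCH_COUNT 1000
=> SELECT ...;   -- rerun the big query
```

With FETCH_COUNT set, psql fetches and prints the result a batch at a time 
through a cursor instead of buffering the entire result set in memory first.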

regards, tom lane



Re: [GENERAL] Join query including two generate_series calls causes big memory growth and crash

2011-04-20 Thread tv
 Hello,

 I'm executing this query:

 SELECT x, y, another_field FROM generate_series(1, 1000) x,
 generate_series(1, 1000) y, my_table

Well, do you realize this is a cartesian product that gives

1,000 x 1,000 x 36 = 36,000,000

rows in the end. Not sure how wide the third table is (how many columns
etc.), but this may occupy a lot of memory.

 The field 'another_field' belongs to 'my_table'. And that table has
 36 entries. In a 64 bits machine, with 4GB RAM, Ubuntu 10.10 and
 postgres 8.4.7, the query works fine. But in a 32 bits machine, with
 1GB RAM, Ubuntu 9.10 and postgres 8.4.7, the query process is killed
 after taking about 80% of available memory. In the 64 bits machine the
 query takes about 60-70% of the available memory too, but it ends.
 And this happens even if I simply get x and y:

 SELECT x, y FROM generate_series(1, 10) x, generate_series(1, 10) y,
 my_table

The result is still 36 million rows, so there's not a big difference I guess.

 Is it normal? I mean, postgres has to deal with millions of rows, ok,
 but shouldn't it start swapping memory instead of crashing? Is a
 question of postgres configuration?

I guess that's the OOM killer, killing one of the processes. See this

http://en.wikipedia.org/wiki/Out_of_memory

so it's a matter of the system, not PostgreSQL - the kernel decides
there's not enough memory, chooses one of the processes and kills it.
PostgreSQL is a victim in this case.

Tomas




[GENERAL] How to realize ROW_NUMBER() in 8.3?

2011-04-20 Thread Emi Lu

Hello,

ROW_NUMBER() is only available in 8.4. For 8.3, is there a simple way to 
get row_number?


select row_number(), col1, col2...
FROM   tableName

Thanks a lot!
丁叶



Re: [GENERAL] Join query including two generate_series calls causes big memory growth and crash

2011-04-20 Thread Jorge Arévalo
On Wed, Apr 20, 2011 at 5:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jorge Arévalo jorge.arev...@deimos-space.com writes:
 I'm executing this query:

 SELECT x, y, another_field FROM generate_series(1, 10) x,
 generate_series(1, 10) y, my_table

 The field 'another_field' belongs to 'my_table'. And that table has
 36 entries. In a 64 bits machine, with 4GB RAM, Ubuntu 10.10 and
 postgres 8.4.7, the query works fine. But in a 32 bits machine, with
 1GB RAM, Ubuntu 9.10 and postgres 8.4.7, the query process is killed
 after taking about 80% of available memory. In the 64 bits machine the
 query takes about 60-70% of the available memory too, but it ends.

 You mean the backend, or psql?  I don't see any particular backend bloat
 when I do that, but psql eats memory because it's trying to absorb and
 display the whole query result.


Yes, the memory eater is psql, not backend.

 Is it normal? I mean, postgres has to deal with millions of rows, ok,
 but shouldn't it start swapping memory instead of crashing? Is a
 question of postgres configuration?

 Try \set FETCH_COUNT 1000 or so.

                        regards, tom lane


Thanks for the tip.


Best regards,

-- 
Jorge Arévalo
Internet & Mobility Division, DEIMOS
jorge.arev...@deimos-space.com
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g



Re: [GENERAL] How to uninstall PostgreSQL 8.4 on both Windows XP and Windows 7

2011-04-20 Thread raghu ram
On Wed, Apr 20, 2011 at 8:39 PM, Mlondolozi Ncapayi mlon...@gmail.com wrote:

 Thank Sir, I have managed to uninstall it.
 Now I installed PostgreSQL 8.4.4 and PostGIS 1.4.2.

 Can you please send me command prompts to load shapefiles using Windows 7.

 I want to load cities.shp into template_postgis14 [dbname]

 This command does not work

 shp2pgsql cities.shp public.cities > cities.sql followed by psql -d
 template_postgis14 -f cities.sql -U postgres





1) Convert shp to sql:

user> shp2pgsql -s 4326 test_AK.shp mp_census_block > mp_census_block.sql

2) Insert into the database:

user> psql -U test -d DatabaseName -f mp_census_block.sql

--Raghu Ram





 On Wed, Apr 20, 2011 at 4:31 PM, raghu ram raghuchenn...@gmail.com wrote:



   On Wed, Apr 20, 2011 at 12:32 PM, Mlondolozi Ncapayi mlon...@gmail.com
  wrote:

 Hi there

 I installed PostgreSql 8.4 and now I  want to delete/ uninstall it
 completely to start a new fresh installation.
 Can you please give me clear instructions on how to do that or maybe a
 script that I can run.
 I am doing assignment using postgres your help would be highly
 appreciated.



 Please go to the postgresql server installation directory and here you
 will find one uninstall-postgresql file.

 -double click on Uninstall-postgresql file to run the un-installer- it
 will un-install postgresql.


 II)Also you can un-install it manually.Below are the steps to un-install
 postgresql 8.4 manually-



 i) Remove the postgresql server installation directory. (rd /s /q
 C:\Program Files\PostgreSQL\8.4) Assuming default location.

 ii) Delete the user 'postgres' (net user postgres /delete)

 iii) Remove the Registry entries. (HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL
 \Installations\postgresql-8.4) and (HKEY_LOCAL_MACHINE\SOFTWARE\
 PostgreSQL\Services\postgresql-8.4)

 iv) Remove the postgresql-8.4 service. (sc delete postgresql-8.4)



 This will completely remove your PostgreSQL 8.4 installation including
 user and installation directories.

 --Raghu Ram




 Mlo








Re: [GENERAL] How to realize ROW_NUMBER() in 8.3?

2011-04-20 Thread raghu ram
On Wed, Apr 20, 2011 at 9:21 PM, Emi Lu em...@encs.concordia.ca wrote:

 Hello,

 ROW_NUMBER() is only ready in 8.4. For 8.3, is there a simple way to get
 row_number

 select row_number(), col1, col2...
 FROM   tableName



The link below demonstrates how to simulate ROW_NUMBER in PostgreSQL pre-8.4:

http://www.postgresonline.com/journal/archives/79-Simulating-Row-Number-in-PostgreSQL-Pre-8.4.html
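
One classic pre-8.4 approach is a correlated count over a unique, orderable 
key; a sketch (the id column is an assumption about tableName):

```sql
SELECT (SELECT count(*)
          FROM tableName t2
         WHERE t2.id <= t1.id) AS row_number,
       t1.col1, t1.col2
  FROM tableName t1
 ORDER BY t1.id;
```

Note this is O(n^2), so for large tables a temporary sequence is usually 
the faster simulation.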


--Raghu Ram




 Thanks a lot!
 丁叶




Re: [GENERAL] Join query including two generate_series calls causes big memory growth and crash

2011-04-20 Thread Jorge Arévalo
2011/4/20  t...@fuzzy.cz:
 Hello,

 I'm executing this query:

 SELECT x, y, another_field FROM generate_series(1, 10) x,
 generate_series(1, 10) y, my_table

 Well, do you realize this is a cartesian product that gives

 10 x 10 x 36 = 36.000.000

 rows in the end. Not sure how wide is the third table (how many columns
 etc.) but this may occupy a lot of memory.


Yes, I know. But I would expect memory swapping in this situation, not crashing.


 The field 'another_field' belongs to 'my_table'. And that table has
 36 entries. In a 64 bits machine, with 4GB RAM, Ubuntu 10.10 and
 postgres 8.4.7, the query works fine. But in a 32 bits machine, with
 1GB RAM, Ubuntu 9.10 and postgres 8.4.7, the query process is killed
 after taking about 80% of available memory. In the 64 bits machine the
 query takes about 60-70% of the available memory too, but it ends.
 And this happens even if I simply get x and y:

 SELECT x, y FROM generate_series(1, 10) x, generate_series(1, 10) y,
 my_table

 The result is still 36 million rows, so there's not a big difference I guess.


Yes, silly example. I only wanted to remove my table's field from the equation.


 Is it normal? I mean, postgres has to deal with millions of rows, ok,
 but shouldn't it start swapping memory instead of crashing? Is a
 question of postgres configuration?

 I guess that's the OOM killer, killing one of the processes. See this

 http://en.wikipedia.org/wiki/Out_of_memory

 so it's a matter of the system, not PostgreSQL - the kernel decides
 there's not enough memory, chooses one of the processes and kills it.
 PostgreSQL is a victim in this case.

 Tomas



Ok, I got it. Anyway, my question again: could I expect some help from
the postgres backend to avoid this situation? Something like "I don't
want to be killed by the OOM killer because of one of my threads". I'll
try this...

Maybe it is my responsibility, changing some configuration parameters,
like the \set FETCH_COUNT 1000 Tom Lane has suggested...

Thanks again,

-- 
Jorge Arévalo
Internet & Mobility Division, DEIMOS
jorge.arev...@deimos-space.com
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g



[GENERAL] Trouble loading Perl modules from postgresql.conf

2011-04-20 Thread Chris Greenhill
Hello,

 

I'm having difficulty pre-loading Perl modules and my own libraries. When I
edit something like this into the postgresql.conf:

 

custom_variable_classes = 'plperl'

plperl.on_init = 'use MyModule;'

 

and restart the server it doesn't seem to load the modules (they don't
appear in %INC and my functions error out). Is there something else I need
to do? I'm using server 9.0.4 on a WinXP system with Perl 5.10. 

 

Thanks for any help

-Chris

 

 



[GENERAL] Different views of remote server

2011-04-20 Thread Bob Pawley
Hi

This is probably going to turn out to be me doing something stupid, but-

I have two computers, one of which I use as a remote server for my database. 

When I connect to the remote database through my interface there are errors 
that suggest a problem with the data in one of the tables.

Using PgAdmin to view the remote table there is indeed some information missing 
(5 out of 16 geoms).

When I use the PGAdmin on the remote computer, using a local connection, this 
information is not missing it is intact.

I don’t think it is a PgAdmin problem, because when I do a ‘select the_geom’ as 
a remote query the information is missing, whereas it is not missing in the 
PgAdmin on the other computer as a local query.

Also connecting to the remote DB using Quantum GIS shows the “missing” 
information as being present and normal.

I have also checked with a server display app and the appropriate number of 
servers are present (only one present on the remote computer).

I’ve run out of ideas - Would anyone have any thoughts of what might be going 
on???

Bob

PS – I have tried one other thing - dumping the database from the remote server 
and restoring it to the local server and the information (the 5 geoms) goes 
missing sometime during this transaction.

Re: [GENERAL] REASSIGN OWNED doesn't for all objects, ALTER FUNCTION seems to fix it

2011-04-20 Thread Tom Lane
Frank van Vugt ftm.van.v...@foxi.nl writes:
 mmm, indeed it seems that some things are out of sync here
 ...
 This confirms that these 60 functions do not have a 'o' (owner) record in 
 pg_shdepend; it therefore matches what you seemed to expect: no records in 
 pg_shdepend, so reassign owned does not do anything.

 Our obvious questions now are:
   - how did we get into this
   and
   - how do we get out

I wonder whether the pg_shdepend data is actually wrong, or just the
indexes on it are at fault.  Did you try forcing that query to be done
with a seqscan, or see if reindexing pg_shdepend fixes things up?

The reason I'm wondering is that I've just found a failure mechanism
that could account for significant lossage of index entries for a system
catalog:
http://archives.postgresql.org/pgsql-hackers/2011-04/msg01070.php

To explain your problem that way would require assuming that somebody
was REINDEX'ing pg_shdepend at approximately the same time that somebody
else was rolling back DDL that had modified these same pg_shdepend
entries --- which in this case would probably mean a failed REASSIGN
OWNED for this same user ID.  Have you got background tasks that try to
REINDEX everything in sight?

regards, tom lane



[GENERAL] Needs Suggestion

2011-04-20 Thread SUBHAM ROY
By doing \timing in psql, we enable the timing and then when we type the
query we are able to see its execution time.
Similarly, is there any way to view the number of I/Os and memory usage
by a particular query?

And also the timing result that gets displayed, in which log file does it
get recorded?

-- 
Thank You,
Subham Roy,
CSE IIT Bombay.


Re: [GENERAL] Help - corruption issue?

2011-04-20 Thread Tomas Vondra
On 20.4.2011 12:56, Phoenix Kiula wrote:
 On a fast network it should only take a few minutes.  Now rsyncing
 live 2.4 TB databases, that takes time. :)  Your raptors, if they're
 working properly, should be able to transfer at around 80 to
 100Megabytes a second.  10 to 15 seconds a gig.  30 minutes or so via
 gig ethernet.  I'd run iostat and see how well my drive array was
 performing during a large, largely sequential copy.
 
 
 OK. An update.
 
 We have changed all the hardware except disks.

OK, so the card is working and the drives are fine. Have you run the
tw_cli tool to check the drives? Because it's probably the last thing
that might be faulty and was not replaced.

 REINDEX still gave this problem:
 
 --
 server closed the connection unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.
 The connection to the server was lost. Attempting reset: Failed.
 --

Hm, have you checked if there's something else in the logs? More details
about the crash or something like that.

I'd probably try to run strace on the backend, to get more details about
where it crashes. Just find out the PID of the backend dedicated to your
psql session, do

$ strace -p PID > crash.log 2>&1

and then run the REINDEX. Once it crashes you can see the last few lines
from the logfile.

 So I rebooted and logged back in a single user mode. All services
 stopped. All networking stopped. Only postgresql started. I tried the
 REINDEX again.
 
 Same problem :(
 
 This means the problem is likely with data?

Well, maybe. It might be a problem with the data, it might be a bug in
postgres ...

 I do have a pg_dumpall dump from 1 day before. Will lose some data,
 but should have most of it.
 
 Is it worth it for me to try and restore from there? What's the best
 thing to do right now?

So have you done the file backup? That's the first thing I'd do.

Anyway, what's best depends on how important the missing piece of data
is. We still don't know how to fix the problem, but it sure seems like
corrupted data.

I think you already know which table is corrupted, right? In that case
you may actually try to find the bad block and erase it (and maybe do a
copy so that we can see what's wrong with it and how it might happen).
There's a very nice guide on how to do that

http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html

It sure seems like the problem you have (invalid alloc request etc.).
The really annoying part is locating the block, as you have to scan
through the table (which sucks with such a big table).
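For reference, once the bad block number is known, the dd-based surgery in that guide needs the block translated into a segment file and an in-segment offset. A minimal sketch of the arithmetic, assuming the default 8 kB page size and 1 GB segment files (the block number is hypothetical):

```shell
# Map a damaged block number to the 1 GB segment file and the block
# offset inside it (defaults: 8192-byte pages, 131072 blocks/segment).
block=200000                      # hypothetical damaged block number
seg=$(( block / 131072 ))         # which segment file: <relfilenode>.<seg>
seek=$(( block % 131072 ))        # block offset within that segment
echo "segment=$seg seek=$seek"
# The guide then zeroes it with something like (DON'T run without a backup):
#   dd if=/dev/zero of=<relfilenode>.$seg bs=8192 seek=$seek count=1 conv=notrunc
```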

And yes, if there's corruption, there might be more corrupted blocks.

regards
Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Needs Suggestion

2011-04-20 Thread raghu ram
On Wed, Apr 20, 2011 at 10:41 PM, SUBHAM ROY subham@gmail.com wrote:

 By doing \timing in psql, we enable the timing and then when we type the
 query we are able to see its execution time.
 Similarly, is there any way to view the number of I/Os and memory usage by a
 particular query.

The pgsnap module describes I/O statistics for tables, indexes, and sequences;
a sample report is here:

http://pgsnap.projects.postgresql.org/fluxbb_snap_20100513/bgwriter.html

The pg_buffercache module shows memory usage for particular tables.
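For example, the classic pg_buffercache query (adapted from the module's documentation; the relation-name join here is simplified and ignores relfilenode-mapped catalogs) shows which relations occupy the most shared-buffer pages. A sketch that emits the SQL, to be piped into psql:

```shell
# Emit the pg_buffercache query: top 10 relations by shared-buffer pages.
sql=$(cat <<'SQL'
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
     JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 10;
SQL
)
echo "$sql"
```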


--Raghu Ram





 Also, in which log file does the timing result that gets displayed
 get recorded?

 --
 Thank You,
 Subham Roy,
 CSE IIT Bombay.




Re: [GENERAL] REASSIGN OWNED doesn't for all objects, ALTER FUNCTION seems to fix it

2011-04-20 Thread Frank van Vugt
Hi,

Op woensdag 20 april 2011, schreef Tom Lane:
 I wonder whether the pg_shdepend data is actually wrong, or just the
 indexes on it are at fault.  Did you try forcing that query to be done
 with a seqscan

Just did, by setting enable_indexscan to false and running EXPLAIN first to 
verify that only seq scans are used.

Both queries return exactly the same result, so it seems the indexes are not the 
problem in this case?

 or see if reindexing pg_shdepend fixes things up?

Didn't do that now, given the above result, but should you prefer it, just let 
me know.

 The reason I'm wondering is that I've just found a failure mechanism
 that could account for significant lossage of index entries for a system
 catalog:
 http://archives.postgresql.org/pgsql-hackers/2011-04/msg01070.php
 
 To explain your problem that way would require assuming that somebody
 was REINDEX'ing pg_shdepend at approximately the same time that somebody
 else was rolling back DDL that had modified these same pg_shdepend
 entries --- which in this case would probably mean a failed REASSIGN
 OWNED for this same user ID.  Have you got background tasks that try to
 REINDEX everything in sight?

Nope, nothing like that running in the background. We basically never reindex 
manually. The only DDL-related activity that gets used a fair bit is 
creating / using / dropping temp tables. Since the last major postgresql 
update, numerous functions have been updated at numerous points in time, but 
this is mainly done during maintenance windows. Recently we started a cleanup 
to 'correct wrong ownership and/or permissions', which is basically what made 
this show up.





-- 
Best,




Frank.



Re: [GENERAL] REASSIGN OWNED doesn't for all objects, ALTER FUNCTION seems to fix it

2011-04-20 Thread Tom Lane
Frank van Vugt ftm.van.v...@foxi.nl writes:
 Op woensdag 20 april 2011, schreef Tom Lane:
 To explain your problem that way would require assuming that somebody
 was REINDEX'ing pg_shdepend at approximately the same time that somebody
 else was rolling back DDL that had modified these same pg_shdepend
 entries --- which in this case would probably mean a failed REASSIGN
 OWNED for this same user ID.  Have you got background tasks that try to
 REINDEX everything in sight?

 Nope, nothing like that running in the background.

Actually, now that I think about it, 8.4 didn't allow on-the-fly
reindexing of shared catalogs anyway.  So that couldn't be your problem
even if the test had shown the indexes didn't match the catalog.  But
it seems the rows actually disappeared from the catalog, and I have no
idea what would've caused that.

regards, tom lane



Re: [GENERAL] REASSIGN OWNED doesn't for all objects, ALTER FUNCTION seems to fix it

2011-04-20 Thread Frank van Vugt
Hi,

Op woensdag 20 april 2011, schreef Tom Lane:
 Actually, now that I think about it, 8.4 didn't allow on-the-fly
 reindexing of shared catalogs anyway.  So that couldn't be your problem
 even if the test had shown the indexes didn't match the catalog.  But
 it seems the rows actually disappeared from the catalog, and I have no
 idea what would've caused that.

ok, clear

should we even try to get them back by generating them somehow?

we're planning to upgrade to v9 in a month or so


also: is there any reason you can think of why using ALTER FUNCTION in the 
current version to correct this situation would have a negative side-effect?






-- 
Best,




Frank.



Re: [GENERAL] REASSIGN OWNED doesn't for all objects, ALTER FUNCTION seems to fix it

2011-04-20 Thread Tom Lane
Frank van Vugt ftm.van.v...@foxi.nl writes:
 Op woensdag 20 april 2011, schreef Tom Lane:
 Actually, now that I think about it, 8.4 didn't allow on-the-fly
 reindexing of shared catalogs anyway.  So that couldn't be your problem
 even if the test had shown the indexes didn't match the catalog.  But
 it seems the rows actually disappeared from the catalog, and I have no
 idea what would've caused that.

 should we even try to get them back by generating them somehow?

It's not really important --- the only difficulty with not having them
is the one you already hit, that REASSIGN OWNED misses things it should
do.  If you're planning an update or dump-and-reload shortly, I wouldn't
worry about it.  What's much more worrisome is the question of whether
the same kind of data loss happened anywhere else, and that isn't
something that the available info tells me anything about.

 also: is there any reason you can think of why using ALTER FUNCTION in the 
 current version to correct this situation would have a negative side-effect?

If you do want to go around and do retail ALTER OWNER commands, you
certainly could.  I thought for a moment that changeDependencyOnOwner
would complain about the lack of a pre-existing pg_shdepend entry, but I
see it won't, so it should work fine.
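A hedged sketch of how those retail commands could be generated (the role names 'old_owner' and 'new_owner' are placeholders; run the emitted query with psql -At and review its output before executing it):

```shell
# Emit a query that prints one ALTER FUNCTION ... OWNER TO statement per
# function still owned by the old role; oid::regprocedure renders the
# function name with its argument types.
sql=$(cat <<'SQL'
SELECT 'ALTER FUNCTION ' || p.oid::regprocedure || ' OWNER TO new_owner;'
FROM pg_proc p
JOIN pg_roles r ON r.oid = p.proowner
WHERE r.rolname = 'old_owner';
SQL
)
echo "$sql"
```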

regards, tom lane



Re: [GENERAL] Different views of remote server

2011-04-20 Thread Adrian Klaver
On Wednesday, April 20, 2011 11:09:59 am Bob Pawley wrote:
 Hi
 
 This is probably going to turn out to be me doing something stupid, but-
 
 I have two computers, one of which I use as a remote server for my
 database.
 
 When I connect to the remote database through my interface there are errors
 that suggest a problem with the data in one of the tables.
 
 Using PgAdmin to view the remote table there is indeed some information
 missing (5 out of 16 geoms).
 
 When I use the PGAdmin on the remote computer, using a local connection,
 this information is not missing it is intact.
 
 I don’t think it is a PgAdmin problem, because when I do a ‘select the_geom’
 as a remote query the information is missing, whereas it is not missing in
 PgAdmin on the other computer as a local query.
 
 Also connecting to the remote DB using Quantum GIS shows the “missing”
 information as being present and normal.
 
 I have also checked with a server display app and the appropriate number of
 servers are present (only one present on the remote computer).
 
 I’ve run out of ideas - Would anyone have any thoughts of what might be
 going on???

What is the schema for the table? What exactly is the data and do the 5 
'missing' data differ markedly from the other data?

 
 Bob

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Help - corruption issue?

2011-04-20 Thread Tomas Vondra
Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
 There's a very nice guide on how to do that
 
 http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html
 
 It sure seems like the problem you have (invalid alloc request etc.).
 The really annoying part is locating the block, as you have to scan
 through the table (which sucks with such a big table).
 
 And yes, if there's corruption, there might be more corrupted blocks.

BTW, there's a setting 'zero_damaged_pages' that might help with this

http://www.postgresql.org/docs/8.2/static/runtime-config-developer.html

see this talk for more details how to use it

http://www.casitconf.org/casitconf11/Tech_track_2_files/cascadia_postgres_rbernier.pdf

Anyway don't play with this without the file backup, as this will zero
the blocks.
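A sketch of the session this implies ('bigtable' is a placeholder for the corrupted table; a superuser is required, and again only after the file-level backup):

```shell
# Emit the session commands: with zero_damaged_pages on, a full scan
# replaces any unreadable page with zeros instead of aborting the query.
sql=$(cat <<'SQL'
SET zero_damaged_pages = on;
SELECT count(*) FROM bigtable;  -- forces every heap page to be read
SET zero_damaged_pages = off;
SQL
)
echo "$sql"
```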

Tomas



Re: [GENERAL] pg_reorg

2011-04-20 Thread Scott Mead
On Tue, Apr 19, 2011 at 9:48 AM, Jens Wilke j...@wilke.org wrote:

 On Tue, Apr 19, 2011 at 04:02:01AM +0530, Vibhor Kumar wrote:

   IIRC vacuum full mode rewrites the indexes as well.
 
  Till 8.4 no. From 9.0 onwards yes. However VACUUM FULL still locks the
 table.

 Don't be confused by the vacuum full term.
 This has nothing to do with the postgresql VACUUM FULL command.
 Both pg_reorg's vacuum full and cluster modes do pretty much the same thing.
 They rewrite the table and all of its indexes. They use triggers to update
 the new table during the reorganisation.
 The only difference is that cluster mode does an additional ORDER BY.
 Both of them lock the original table at the end of the reorganisation just
 for the switch.
 If the lock is not granted within -T seconds, the backends holding locks
 are canceled.

 If you run out of diskspace, it's possible to reorg table by table.
 And yes, pg_reorg does only work with tables with a primary key.
 This will change in future releases, IIRC


How does it do with tables that have huge amounts (50-100 GB) of TOASTed
data?





 regards, Jens




Re: [GENERAL] Different views of remote server

2011-04-20 Thread Scott Marlowe
On Wed, Apr 20, 2011 at 1:09 PM, Bob Pawley rjpaw...@shaw.ca wrote:
 Hi

 This is probably going to turn out to be me doing something stupid, but-

 I have two computers, one of which I use as a remote server for my database.

 When I connect to the remote database through my interface there are errors
 that suggest a problem with the data in one of the tables.

 Using PgAdmin to view the remote table there is indeed some information
 missing (5 out of 16 geoms).

 When I use the PGAdmin on the remote computer, using a local connection,
 this information is not missing it is intact.

 I don’t think it is a PgAdmin problem, because when I do a ‘select the_geom’
 as a remote query the information is missing, whereas it is not missing in
 PgAdmin on the other computer as a local query.

 Also connecting to the remote DB using Quantum GIS shows the “missing”
 information as being present and normal.

 I have also checked with a server display app and the appropriate number of
 servers are present (only one present on the remote computer).

 I’ve run out of ideas - Would anyone have any thoughts of what might be
 going on???

You're likely connecting to a different database than you think you
are.  What do your connection credentials look like in each case?



Re: [GENERAL] Help - corruption issue?

2011-04-20 Thread Phoenix Kiula
On Thu, Apr 21, 2011 at 7:27 AM, Tomas Vondra t...@fuzzy.cz wrote:
 Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
 There's a very nice guide on how to do that

 http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html

 It sure seems like the problem you have (invalid alloc request etc.).
 The really annoying part is locating the block, as you have to scan
 through the table (which sucks with such a big table).

 And yes, if there's corruption, there might be more corrupted blocks.

 BTW, there's a setting 'zero_damaged_pages' that might help with this

 http://www.postgresql.org/docs/8.2/static/runtime-config-developer.html

 see this talk for more details how to use it

 http://www.casitconf.org/casitconf11/Tech_track_2_files/cascadia_postgres_rbernier.pdf

 Anyway don't play with this without the file backup, as this will zero
 the blocks.

 Tomas






Thanks Tomas. Very handy info.

FIRST: is there anyone on this list who offers PG admin support?
Please write to me directly.

Second, for the strace, which process should I use?


ps auxwww|grep ^postgres
postgres  4320  0.0  0.1 440192 10824 ?  Ss   08:49   0:00
/usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres  4355  0.0  0.0 11724  964 ?Ss   08:49   0:00
postgres: logger process
postgres  4365  0.0  0.0 440396 3268 ?   Ss   08:49   0:00
postgres: writer process
postgres  4366  0.0  0.0 11860 1132 ?Ss   08:49   0:00
postgres: stats collector process
postgres 15795  0.0  0.0  7136 1440 pts/0S22:44   0:00 -bash
postgres 15900  0.0  0.0  7860 1956 pts/0S+   22:44   0:00 psql -h
localhost MYDOMAIN -E MYDOMAIN_MYDOMAIN
postgres 15901  0.0  0.0 441124 3072 ?   Ss   22:44   0:00
postgres: MYDOMAIN_MYDOMAIN MYDOMAIN 127.0.0.1(34346) idle
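
From that listing, the process to strace is the per-connection backend, not the postmaster or the helper processes. A sketch of picking it out (the snapshot below is abridged from the listing above; matching on "idle" is an assumption about how the session's state appears in the process title):

```shell
# Pick the client backend's PID out of a saved ps snapshot; the helper
# processes (logger/writer/stats) never show a database name and
# connection state, so they don't match the pattern.
ps_snapshot='postgres  4355  postgres: logger process
postgres  4365  postgres: writer process
postgres 15901  postgres: MYDOMAIN_MYDOMAIN MYDOMAIN 127.0.0.1(34346) idle'

backend_pid=$(echo "$ps_snapshot" | grep 'postgres: .*idle' | awk '{print $2}')
echo "$backend_pid"
# then, as root:  strace -p "$backend_pid" > crash.log 2>&1
```

Inside the psql session itself, SELECT pg_backend_pid(); gives the same answer directly.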


Third, I have the backup in two ways:

1. I took a backup of the entire /pgsql/data folder. PG was shut down
at the time.
2. I have a pg_dumpall file but it is missing one day's data (still
useful as last resort).

Will #1 have corrupt data in it?



Re: [GENERAL] Help - corruption issue?

2011-04-20 Thread Phoenix Kiula

Tomas,

I captured a crash log with strace on the PID of the reindex session, as you
suggested.

Here's the output:
http://www.heypasteit.com/clip/WNR

Also included below, but because it will wrap etc., you can look at
the link above.

Thanks for any ideas or pointers!



Process 15900 attached - interrupt to quit
read(0, "r", 1) = 1
write(1, "r", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "e", 1) = 1
write(1, "e", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "i", 1) = 1
write(1, "i", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "n", 1) = 1
write(1, "n", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "d", 1) = 1
write(1, "d", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "e", 1) = 1
write(1, "e", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "x", 1) = 1
write(1, "x", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, " ", 1) = 1
write(1, " ", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "l", 1) = 1
write(1, "l", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "i", 1) = 1
write(1, "i", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "n", 1) = 1
write(1, "n", 1) = 1
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(0, "\177", 1)  = 1
write(1, "\10\33[K", 4) = 4
rt_sigprocmask(SIG_BLOCK, NULL,