[GENERAL] Trigger to update table info

2013-08-12 Thread Arvind Singh
Hello friends,

I have a table with the following structure



Create table postablestatus
(
    tablename  varchar(30) NOT NULL,
    updatetime timestamp,
    reccount   int,
    size       int,
    CONSTRAINT postablestatus_pkey PRIMARY KEY (tablename)
)
WITH (oids = false);



where
a) tablename - the name of the table where a record is inserted, updated or deleted
b) updatetime - the time the table was last updated (without time zone)
c) reccount - the number of records in the table
d) size - the size of the table in bytes

--
for example, if a table called item_master had 1 record, a
new record is inserted, and its size is 2000 bytes,

postablestatus must have an entry like

item_master  08/12/2013 12:40:00  2  2000
--

I request to know how to write a trigger in PL/pgSQL so that postablestatus
is updated whenever a record is inserted, updated or deleted in item_master.

PLEASE NOTE : a new record must be created if item_master is not yet present in
postablestatus.
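A minimal sketch of such a statement-level trigger (added for illustration, not from the thread; it recomputes count(*) on every statement, which is expensive on large tables, and uses update-then-insert because INSERT ... ON CONFLICT does not exist in 9.x):

```sql
CREATE OR REPLACE FUNCTION track_table_status() RETURNS trigger AS $$
DECLARE
    n bigint;
BEGIN
    -- TG_TABLE_NAME / TG_RELID let one function serve any table
    EXECUTE format('SELECT count(*) FROM %I', TG_TABLE_NAME) INTO n;

    UPDATE postablestatus
    SET    updatetime = now(),
           reccount   = n,
           size       = pg_relation_size(TG_RELID)
    WHERE  tablename = TG_TABLE_NAME;

    IF NOT FOUND THEN
        -- table not yet tracked: create its row
        INSERT INTO postablestatus (tablename, updatetime, reccount, size)
        VALUES (TG_TABLE_NAME, now(), n, pg_relation_size(TG_RELID));
    END IF;

    RETURN NULL;  -- return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_master_status
AFTER INSERT OR UPDATE OR DELETE ON item_master
FOR EACH STATEMENT EXECUTE PROCEDURE track_table_status();
```

format() requires 9.1 or later; on 9.0, concatenating with quote_ident(TG_TABLE_NAME) achieves the same thing.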

thanks in advance
arvind


  

[GENERAL] Postgres 9.3 beta log

2013-08-10 Thread Arvind Singh
Hello and namaste,

I have installed the latest version and am running tests on it.

Since i analyze logs, i have found three new entries in the log for each insert
statement
which aren't present in the postgres 9.0 log.

---
2013-08-10 11:59:27.031 
IST,postgres,aeppos,2780,192.168.10.19:50935,5205dd5b.adc,352787,idle,2013-08-10
 11:57:39 IST,5/129358,0,LOG,0,PARSER STATISTICS,! system usage stats:
!0.00 elapsed 0.00 user 0.00 system sec
![43.227877 user 19.250523 sys total],insert into loginmaster 
   (loginid,fullname,empid,passwd) values (
'login88193', 'fullname88193', 'empid88193', 'passwd88193'),,,
2013-08-10 11:59:27.032 
IST,postgres,aeppos,2780,192.168.10.19:50935,5205dd5b.adc,352788,INSERT,2013-08-10
 11:57:39 IST,5/129358,0,LOG,0,PARSE ANALYSIS STATISTICS,! system usage 
stats:
!0.00 elapsed 0.00 user 0.00 system sec
![43.227877 user 19.250523 sys total],insert into loginmaster 
   (loginid,fullname,empid,passwd) values (
'login88193', 'fullname88193', 'empid88193', 'passwd88193'),,,
2013-08-10 11:59:27.032 
IST,postgres,aeppos,2780,192.168.10.19:50935,5205dd5b.adc,352789,INSERT,2013-08-10
 11:57:39 IST,5/129358,0,LOG,0,REWRITER STATISTICS,! system usage stats:
!0.00 elapsed 0.00 user 0.00 system sec
![43.227877 user 19.250523 sys total],insert into loginmaster 
   (loginid,fullname,empid,passwd) values (
'login88193', 'fullname88193', 'empid88193', 'passwd88193'),,,

---

As you can see, for a single record (login88193) it places:

PARSER STATISTICS
PARSE ANALYSIS STATISTICS
REWRITER STATISTICS

I know these appear because i have enabled the log_parser_stats option.

But what i request to know is the meaning of these statistics and how useful
they can be for a server load evaluation.

thanking you
greetings from india
arvind





  

[GENERAL] new instance of postgres on windows

2013-07-18 Thread Arvind Singh


Hello and greetings,


I want to install postgresql for use as the backend to a Windows application.




This seems to be no problem if postgresql is NOT already installed on the
system, which is not the case here.





postgresql is already installed, and unless the command line
parameters contain the superuser password etc. of the existing installation, the
install fails.
I will likely never know the superuser password or other account details
of any pre-existing postgresql instances, and the machine owners may not
either.


How, then, do I proceed and install a new instance that can be used by our
application?


Windows 7
Postgres 9.3 beta




regards


  

[GENERAL] error postgresql remote access windows 7

2013-05-26 Thread Arvind Singh
I have installed postgres server 9.3 on a windows 7 network for the first time.

The systems are connected over a wifi network.

The server is installed and working normally.

configured pg_hba.conf with
host    all    all    0.0.0.0/0    md5
host    all    all    ::1/128      md5

configured postgresql.conf with
listen_addresses = '*'


I am using pgAdmin III to access the server:
- From the server it works fine.
- But from the client it cannot find the server.

The client can access shared folders on the server, and even ping is OK from both
sides.

Any hint or a checklist will help.

regards
arvind
  

Re: [GENERAL] PG Stats Collector

2013-05-13 Thread Arvind Singh
oh thanks a lot, we desperately needed to work on the stats collector, and since
we were using the windows platform, there is not even a decent application that
reports such information.
 
regards and greetings from india
 



Date: Sun, 12 May 2013 20:33:14 +0900
Subject: Re: [GENERAL] PG Stats Collector
From: michael.paqu...@gmail.com
To: arvin...@hotmail.com
CC: pgsql-general@postgresql.org


For such things documentation is your best friend. Please see inline.




On Sun, May 12, 2013 at 4:30 PM, Arvind Singh arvin...@hotmail.com wrote:



I need help in understanding the purpose of the following columns produced by 
PG stats collector in version 9.1
 
From  pg_stat_database
-
blks_read,
blks_hit,
tup_returned,
tup_fetched

Meaning of columns of pg_stat_database: 

http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW
 


 
Reported In pg_stat_user_tables
-
idx_tup_fetch,
n_tup_hot_upd,
n_live_tup,
n_dead_tup

pg_stat_user_tables is similar to pg_stat_all_tables, except that it contains 
only user tables:
http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW


Thanks,
-- 
Michael
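As an illustration of how these counters are typically combined (a sketch based on the documented column meanings, not part of the original reply): blks_hit versus blks_read gives a rough buffer-cache hit ratio per database, and tup_returned versus tup_fetched hints at how much scanning the executor does:

```sql
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
           AS cache_hit_pct,   -- share of block reads served from shared_buffers
       tup_returned,           -- rows returned by queries in this database
       tup_fetched             -- rows fetched by queries in this database
FROM   pg_stat_database;
```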
  

[GENERAL] PG Stats Collector

2013-05-12 Thread Arvind Singh
Hello and greetings,
 
I need help in understanding the purpose of the following columns produced by 
PG stats collector in version 9.1
 
From  pg_stat_database
-
blks_read,
blks_hit,
tup_returned,
tup_fetched
 
Reported In pg_stat_user_tables
-
idx_tup_fetch,
n_tup_hot_upd,
n_live_tup,
n_dead_tup
 
 
thank you,
arvind
  

[GENERAL] Update latest column in master table from transaction table

2012-11-03 Thread Arvind Singh

hello,
 
i have two tables 
customer_master
 cname
 lastreceiptdate
 lastreceiptamt
accounts
 cname
 date
 amount
 
i need help constructing a single update query, where the customer_master
table is updated with the latest receipt date and receipt amount for a single
customer code (cname like FRUITX) from the accounts table.
 
so far we are using a select command to retrieve the record with max(date) and
then another update command to apply the results from that select.
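A sketch of folding this into one statement (assuming the column names listed above; DISTINCT ON keeps only the newest accounts row per customer):

```sql
UPDATE customer_master m
SET    lastreceiptdate = a.date,
       lastreceiptamt  = a.amount
FROM  (SELECT DISTINCT ON (cname) cname, date, amount
       FROM   accounts
       ORDER  BY cname, date DESC) a   -- latest row per cname
WHERE  m.cname = a.cname
  AND  m.cname = 'FRUITX';  -- drop this line to refresh every customer at once
```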
 
thanks

  

Re: [GENERAL] Improve MMO Game Performance

2012-10-26 Thread Arvind Singh

the cyclic change idea is really marvellous, thank you
 
but we store each round, because we need player actions for further analysis 
about game trends
 
returnstrikeHP is a random number in a range, calculated when a player
strikes back. Although, as you write,
we can calculate the difference between rows, that may increase the number
of selects.
 
 
yours truly
arvind

 

 Date: Fri, 19 Oct 2012 13:54:02 +0200
 From: f...@meisterderspiele.de
 To: arvin...@hotmail.com
 Subject: Re: Improve MMO Game Performance
 
 Hello Arvind,
 
 i worked on MMO (and especially their performance) some time ago. I 
 really like(d) them. :)
 
  we are all aware of the popular trend of MMO games. where players face
 
  To understand my query, please consider the following scenario
  /we store game progress in a postgres table./
  A tournament starts with four players and following activity
 
  1. Each player starts with 100hitpoints
  2. player 1 makes a strike (we refer to a chart to convert blows to
  hitpoints with random-range %)
  3. player 2 has 92HP, and returns a light blow, so player1 has 98hp
 
 That is a very simple description of a very simple system.
 
  The above two rounds will now be in the Game Progress Table, as
 
  ROW  Player1HP  Player2HP  Strikefrom  StrikeTo  ReturnStrikeHP  Round  TimeStamp  StrikeMethod
  1    100        100        0           0         0               0
  2    98         92         P1          P2        2               1
 
 At this point you should reconsider if you need this much data. It is 
 faster to store less data.
 
 For example you do not need the column ReturnStrikeHP. You can 
 calculate the damage from the difference between the rows.
 
 If there is a cyclic change of the attacker (P1-P2-P1-P2-P1-P2 ...) 
 you just need to store the starting player and could drop the strike 
 from and strike to columns.
 If you need them and there are just 2 players in one fight, reduce the 
 columns to attacker and store whether it is player one or two. The one which 
 is not in the column is the defender.
 
 Also why do you store each round? Most of the time there are just reports, 
 charts or animations about the fight. You can generate them in a more 
 compact form and just store 1 row with this information. In this way you 
 will reduce the number of needed INSERT-operations a lot. And you will 
 speed-up the SELECT because less data must be read.
 
 I hope this will help you. Greetings from Germany,
 Torsten
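The compact single-summary-row form described in the quoted advice could be sketched as follows (a hypothetical schema, just to illustrate one row per fight instead of one row per round):

```sql
CREATE TABLE fight_summary (
    fight_id   bigint PRIMARY KEY,
    starter    char(2),     -- 'P1' or 'P2'; with cyclic attacks this fixes each round's attacker
    hp_series  integer[],   -- remaining hit points after each round, players interleaved
    ended_at   timestamp
);
```

One INSERT per fight replaces dozens of per-round INSERTs, and reports or replay animations can be rebuilt from hp_series on demand.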
  

Re: [GENERAL] Improve MMO Game Performance

2012-10-19 Thread Arvind Singh

ok,
 
thanks, i will look into wal_buffers
 
synchronous_commit=off is a doubt for a responsive environment (pg manual). for 
ex.
 
would it help if
 
a game player has to perform the next move on the basis of his/her previous move plus 
other players' moves?
all of it is sent to the table, and for each player a particular set is retrieved with a 
minimalistic data portfolio.
 

 

 Date: Thu, 18 Oct 2012 11:24:00 -0300
 From: alvhe...@2ndquadrant.com
 To: laurenz.a...@wien.gv.at
 CC: arvin...@hotmail.com; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Improve MMO Game Performance
 
 Albe Laurenz wrote:
  Arvind Singh wrote:
   Are there any particular settings or methods available to improve Just
  insert_table operations
  
  The obvious ones:
  - Wrap several of them in a transaction.
  - Increase checkpoint_segments.
  - Set fsync=off and hope you don't crash.
 
 I think it would work to set synchronous_commit=off for the
 transactions that insert moves. That way, the fsync flushes happen in
 the background and are batched. Raising wal_buffers is probably a good
 idea, and keep an eye on how the walwriter is doing.
 
 -- 
 Álvaro Herrera http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
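A sketch of applying this per transaction, so that only the move-inserting transactions relax durability (the game_progress table and its columns are hypothetical, for illustration only):

```sql
BEGIN;
SET LOCAL synchronous_commit TO off;  -- commit returns before the WAL flush; flushes are batched in the background
INSERT INTO game_progress (round, strikefrom, striketo, returnstrikehp)
VALUES (1, 'P1', 'P2', 2);
COMMIT;
```

On a crash the last few asynchronously committed transactions can be lost, but the database itself stays consistent, which is often an acceptable trade-off for game-move data.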
  

Re: [GENERAL] Improve MMO Game Performance

2012-10-14 Thread Arvind Singh

Oh no, it is the average. I am sorry, i did not mention that Average was 
calculated as (PeakUsage + MinimumUsage) / PeriodOfUsage.

it is not that 500 users are always there per hour;
 
in a real game scenario there are peak times within an hour, so it means there 
can be 500 users making a simultaneous move,
and there could be a surge of 500 inserts in an instant. And if there are 4000 
users logged in, the figure could be 4000 inserts in a millisecond; at 
LowTide, when there are 100 users, the number could be as low as 10 inserts / ms.
 
thanks
 

 Date: Sat, 13 Oct 2012 10:18:28 -0600
 Subject: Re: [GENERAL] Improve MMO Game Performance
 From: scott.marl...@gmail.com
 To: arvin...@hotmail.com
 CC: pgsql-general@postgresql.org; laurenz.a...@wien.gv.at
 
 On Sat, Oct 13, 2012 at 5:52 AM, Arvind Singh arvin...@hotmail.com wrote:
  we are all aware of the popular trend of MMO games. where players face each
  other live.
 
  My area of concern, is storage of player moves and game results.
  Using Csharp and PostgreSql
  The game client is browser based ASP.NET and calls Csharp functions for all
  database related processing
 
 
  To understand my query, please consider the following scenario
  we store game progress in a postgres table.
  A tournament starts with four players and following activity
 
  Each player starts with 100hitpoints
  player 1 makes a strike (we refer to a chart to convert blows to hitpoints
  with random-range %)
  player 2 has 92HP, and returns a light blow, so player1 has 98hp
 
  The above two round will now be in Game Progress Table, as
 
  ROW Player1HP Player2HP Strikefrom StrikeTo ReturnStrikeHP Round TimeStamp
  StrikeMethod
  1 100 100 0 0 0 0
  2 98 92 P1 P2 2 1
 
  There is a tremendous flow of sql queries,
 
  There are average/minimum 100 tournaments online per 12 minutes or 500
  players / hour
  In Game Progress table, We are storing each player move
  a 12 round tournament of 4 players can produce 48 records
  plus around the same number for spells or special items
  a total of 96 per tournament or 48000 record inserts per hour (500
  players/hour)
 
 That's only about 13 inserts per second, and if you're batching them
 up in transactions then you could easily be doing only one insert per
 second or so. My laptop could handle that load easily.
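A sketch of the batching this refers to: grouping several moves into one transaction (or one multi-row INSERT) pays the per-commit WAL flush once instead of once per row (table and columns are hypothetical):

```sql
BEGIN;
INSERT INTO game_progress (round, player1hp, player2hp, strikefrom, striketo)
VALUES (1, 98, 92, 'P1', 'P2'),
       (2, 95, 92, 'P2', 'P1'),
       (3, 95, 88, 'P1', 'P2');  -- one statement, one commit for three moves
COMMIT;
```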
  

[GENERAL] Improve MMO Game Performance

2012-10-13 Thread Arvind Singh

we are all aware of the popular trend of MMO games. where players face each 
other live.

My area of concern, is storage of player moves and game results. 
Using Csharp and PostgreSql 
The game client is browser based ASP.NET and calls Csharp functions for all 
database related processing
 
To understand my query, please consider the following scenario
 we store game progress in a postgres table.
A tournament starts with four players and following activity

Each player starts with 100hitpoints
player 1 makes a strike (we refer to a chart to convert blows to hitpoints with 
random-range %)
player 2 has 92HP, and returns a light blow, so player1 has 98hp
The above two rounds will now be in the Game Progress Table, as

ROW  Player1HP  Player2HP  Strikefrom  StrikeTo  ReturnStrikeHP  Round  TimeStamp  StrikeMethod
1    100        100        0           0         0               0
2    98         92         P1          P2        2               1
There is a tremendous flow of sql queries.
There are on average/minimum 100 tournaments online per 12 minutes, or 500 
players / hour.
In the Game Progress table, we are storing each player move:
a 12 round tournament of 4 players can produce 48 records,
plus around the same number for spells or special items,
a total of 96 per tournament, or 48000 record inserts per hour (500 
players/hour).

Are there any particular settings or methods available to improve just 
insert operations?

thanks
arvind
   

[GENERAL] Game Server Lags

2012-10-02 Thread Arvind Singh

we are all aware of the popular trend of MMO games. where players face each 
other live.
My questions are focussed on reducing load on the Game database or Sql queries.
 
a) How to control the surge of records into the GameProgress table, so that 
players get responses quicker? The Server starts to lag at peak hours or when 
1000 players are online.
 
There is a tremendous flow of sql queries; for ex., even in the current game 
version:
There are on average/minimum 100 tournaments online per 12 minutes, or 500 players 
/ hour.
In the Game Progress table, we are storing each player move:
a 12 round tournament of 4 players can produce 48 records,
plus around the same number for spells or special items,
a total of 96 per tournament, or 48000 record inserts per hour (500 
players/hour).
 
I am also considering using a background process in Csharp that keeps moving 
expired tournament records from the GameProgress table to another database where we 
have a server free of gameplay load.
 
b) How often should we run VACUUM FULL on postgres?
 
c) Can we set a table to be present in some kind of cache or quick buffer for 
quicker access? For ex., we often have to authenticate user credentials or 
look up tournament status in a table.

thanks
arvind
  

[GENERAL] PG Log MultiLine Records

2012-09-21 Thread Arvind Singh

we have been working on a CSV Log parser application in csharp.
 
we recently discovered that certain log entries or records can span 
multiple lines,
 
in the sense that the same activity has more detail in subsequent lines.
 
For ex, a select,insert query has
A query entry
A Duration entry
A Stats entry
 
and that is why there was a serious coding mistake in counting select entries, 
because the same select statement 
is displayed in the Query column twice and once in the Detail column.
 
So our application counted them as three queries.
 
My query is: has anyone come across more such examples, or
:: is there a link or page explaining the types of Log entries
:: how many log lines a log-entry type occupies
:: or other finer details that we may overlook
 
it will be helpful, as we wish to release this application to all pg users at 
no price.
 
thanks arvind
 
--What charm in words, a charm no 
words could give?
O dying words, can Music make you live
Far--far--away?
- (Alfred, Lord Tennyson in Far-Far-Away)   
  

Re: [GENERAL] application for postgres Log

2012-09-20 Thread Arvind Singh

thanx Laurenz,
 
But
 
our CSV Log contains a lot of statements like the following THREE lines. They 
appear exactly one after the other.
 
And they number in the thousands for a Session (more than ten thousand).
 
 
2011-11-11 12:41:31.484 
IST,agent1,pem,524,localhost:2141,4ebccaa2.20c,754,idle,2011-11-11 
12:41:30 IST,2/308,0,LOG,0,statement: INSERT INTO pemdata.settings 
(name, setting, unit, server_id) VALUES ('xmlbinary', 'base64', 
NULL, '1')exec_simple_query, .\src\backend\tcop\postgres.c:900,

2011-11-11 12:41:31.484 
IST,agent1,pem,524,localhost:2141,4ebccaa2.20c,755,INSERT,2011-11-11 
12:41:30 IST,2/0,0,LOG,0,duration: 0.000 msexec_simple_query, 
.\src\backend\tcop\postgres.c:1128,

2011-11-11 12:41:31.484 
IST,agent1,pem,524,localhost:2141,4ebccaa2.20c,756,INSERT,2011-11-11 
12:41:30 IST,2/0,0,LOG,0,QUERY STATISTICS,! system usage stats:
! 0.00 elapsed 0.00 user 0.00 system sec
! [0.25 user 0.156250 sys total],INSERT INTO pemdata.settings 
(name, setting, unit, server_id) VALUES ('xmlbinary', 'base64', 
NULL, '1'),,ShowUsage, .\src\backend\tcop\postgres.c:4305,
 
Is there anything that we enabled? Because they don't appear after that 
particular session.
 
The log file is uploaded at 
http://dl.dropbox.com/u/71964910/pg_log_with_lot_of_session.zip

 
 
arvind ps 
 

 It would indeed be divine intervention if fifty thousand had
 five zeros.
 
 Other than that, I don't see anything special about that.
 A session can last pretty long.
 Maybe you can solve your mystery by looking at the log entries.
 They should tell you what was going on.
 
  Although it never reoccurs and luckily we had csv option on during
 that period.
  
  Where should i report such findings
 
 I don't think there is anything wrong.
 At least nothing database related.
 
  I have uploaded that Part of Log at
 http://dl.dropbox.com/u/71964910/pg_log_with_lot_of_session.zip
 
 That looks like somebody turned on log_statement_stats for a spell.
 
 Yours,
 Laurenz Albe
 
  

Re: [GENERAL] application for postgres Log

2012-09-15 Thread Arvind Singh

Thanks again,
 
I have come across a very peculiar situation. 
 
We have a postgres installation 9.0. It was installed last year.
 
But we started implementation on it, just recently.
 
And therefore the need to develop a Logparser application.
 
During our preliminary parsing, what we discovered is just beyond the grasp of 
my knowledge.
 
It seems that during a certain period last year in November, it created a 
Session entry that holds more than
Fifty thousand records for a SINGLE SESSION (4ebccaa2.20c). Yes, that is 5 with 
five zeros.
 
 
One does look with wonder at the dawn of November, when such an occurrence took 
place.
Is it divine intervention? Because we never parsed the log in those days, we 
cannot ascertain the reasons for such verbosity.
 
Although it never reoccurred, and luckily we had the csv option on during that period.
 
Where should i report such findings
 
I have uploaded that Part of Log at 
http://dl.dropbox.com/u/71964910/pg_log_with_lot_of_session.zip
 
arvind
 
--
The woods are lovely, dark, and deep,
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep. 
(Robert Frost, Stopping by Woods on a Snowy Evening)

 

 Subject: RE: [GENERAL] application for postgres Log
 Date: Fri, 14 Sep 2012 13:16:33 +0200
 From: laurenz.a...@wien.gv.at
 To: arvin...@hotmail.com; pgsql-general@postgresql.org
 
 Arvind Singh wrote:
  I have enabled the csvlog option.
  
  There seem to be 24 columns in the Log. I have difficulty in
 understanding the following columns
  
  - virtual_transaction_id
 
 Such a transaction ID is given to every transaction.
 Can be used to track transactions in the log.
 
  - transaction_id
 
 Such a transaction ID is given to a transaction
 that modifies the database.
 
  - hint
 
 Part of the error message (like detail and context).
 
  - internal_query
  - internal_query_pos
 
 These are used a handful of times in the code, I'm not
 100% sure what for.
 You may never encounter one of those.
 
  - context
 
 Describes where the error occurred.
 
 Yours,
 Laurenz Albe
  

Re: [GENERAL] application for postgres Log

2012-09-14 Thread Arvind Singh

Thanx for the reply,
 
I have enabled the csvlog option.
 
There seem to be 24 columns in the Log. I have difficulty in understanding the 
following columns
 
- virtual_transaction_id
- transaction_id
- hint
- internal_query
- internal_query_pos
- context
 
I have correlated the column names from log structure mentioned in documentation
 
arvind
 

 Subject: RE: [GENERAL] application for postgres Log
 Date: Mon, 10 Sep 2012 09:31:29 +0200
 From: laurenz.a...@wien.gv.at
 To: arvin...@hotmail.com; pgsql-general@postgresql.org
 
 Arvind Singh wrote:
  I am in a project that uses PostgreSQL v9.0. We are developing an
 application in CSharp to parse the
  PG server activity log installed on Windows 2003/XP or higher.
  
  Our application will :
  Scan the Log for a given search text and Post rows found
  Produce statistics related to SQL queries tracked in a time frame
  Will allow selection of log columns, that can be displayed in a
 DataGrid
  
  I need some help or even a simple link that is related to this
 subject. I am already checking Google
 queries of the Postgres Log, though it gives mostly results for the postgres
 Documentation.
 
 Maybe pgreplay can give you some inspiration:
 http://pgreplay.projects.postgresql.org/
 
 Check out parse.c which parses a log file.
 
 One bit of advice I'd like to give you is a recommendation
 to use the CSV log format. The regular error log is much
 harder to parse, and moreover everything that gets printed
 to stderr by the server will end up there. This could be
 dumps when the server runs out of memory or stuff that loaded
 modules produce. All this will screw up your log file
 parsing.
 
 Yours,
 Laurenz Albe
  

[GENERAL] application for postgres Log

2012-09-10 Thread Arvind Singh

Hello and namaste
 
I am in a project that uses PostgreSQL v9.0. We are developing an application 
in CSharp to parse the PG server activity log installed on Windows 2003/XP or 
higher.
 
Our application will :
Scan the Log for a given search text and Post rows found
Produce statistics related to SQL queries tracked in a time frame
Will allow selection of log columns, that can be displayed in a DataGrid

I need some help or even a simple link related to this subject. I am 
already checking Google queries of the Postgres Log, though it gives mostly results for 
the postgres Documentation.
 
thanks
 
arvind
  

[GENERAL] More PG Log

2012-03-29 Thread Arvind Singh

Oh, thanx for the answer on PG Log.
 
our postgres is on windows, version 9.0.5. There are a variety of 
applications, like CRM, stock-related,
and now there is an online game based on it.
 
We are providing application development, statistics, and datametrics support. For 
simple applications and user settings
we use SQLite, and the main application is on Postgres.
 
The PG Log is our only window into all the activities on the DB Server, and therefore it 
becomes very important, especially to track 
abnormal queries or activities that are causing frequent errors. At any time we 
have around 10k entries waiting to be analyzed,
for which we are building a custom application in c sharp that matches our 
keywords.
 
For which, 
 
Query 1
---
do we have a standard list of the following Log Codes?
- command_tag, ex. IDLE, SELECT ..
- error_severity, ex. FATAL, LOG ..
- sql_state_code, ex. 0, 08P01 ..
 
 
Query 2
--

I have my CSV Log with a lot of occurrences of a certain logged select statement.
-
2012-03-28 19:25:48.015 
IST,postgres,stock_apals,2388,localhost:1898,4f731863.954,7,idle,2012-03-28
 19:25:47 IST,2/98,0,LOG,0,statement: SELECT typname, oid FROM pg_type 
WHERE typname IN ('oidvector', '_oidvector', 'unknown', '_unknown', 
'refcursor', '_refcursor', 'char', '_char', 'bpchar', '_bpchar', 'varchar', 
'_varchar', 'text', '_text', 'name', '_name', 'bytea', '_bytea', 'bit', '_bit', 
'bool', '_bool', 'int2', '_int2', 'int4', '_int4', 'int8', '_int8', 'oid', 
'_oid', 'float4', '_float4', 'float8', '_float8', 'numeric', '_numeric', 
'inet', '_inet', 'money', '_money', 'point', '_point', 'lseg', '_lseg', 'path', 
'_path', 'box', '_box', 'circle', '_circle', 'polygon', '_polygon', 'uuid', 
'_uuid', 'xml', '_xml', 'interval', '_interval', 'date', '_date', 'time', 
'_time', 'timetz', '_timetz', 'timestamp', '_timestamp', 'abstime', '_abstime', 
'timestamptz', '_timestamptz')exec_simple_query, 
.\src\backend\tcop\postgres.c:900,
-
 
Is it a performance concern?
Is there anything that i can do to keep this statement from recurring?

 
 
Regards
arvind 
 
 

 

 Subject: RE: [GENERAL] PG Log
 Date: Thu, 29 Mar 2012 10:49:20 +0200
 From: laurenz.a...@wien.gv.at
 To: arvin...@hotmail.com; pgsql-general@postgresql.org
 
 Arvind Singh wrote:
  I have queries regarding columns in Postgres CSV Log.
  
  Following is a sample Logline
 
 
  2012-03-28 19:25:47.968
 IST,postgres,stock_apals,2388,localhost:1898,4f731863.954,6,SET,
 2012-
  03-28 19:25:47 IST,2/0,0,LOG,0,QUERY STATISTICS,! system usage
 stats:
  ! 0.047000 elapsed 0.00 user 0.00 system sec
  ! [0.078125 user 0.031250 sys total],Select * from stock_apals
  ,,ShowUsage, .\src\backend\tcop\postgres.c:4305,
 
 
  
  I am aware of all the data segments except the following
 
 
  ! system usage stats:
  ! 0.047000 elapsed 0.00 user 0.00 system sec
  ! [0.078125 user 0.031250 sys total],
 
 
  
  What do the number mean, it seems to appear only with a Logline for
 SQL statements
 
 These lines are emitted if you set log_statement_stats to on.
 
 They contain execution statistics for the query. The user and sys
 times are acquired by the getrusage(2) or times(2) system call,
 depending
 on your operating system (on Windows, GetProcessTimes is used).
 You can look at the man page for your system for details.
 
 The values you see are:
 wall time, CPU user time and kernel CPU time it took to execute the
 query.
 The values in brackets are the values accumulated for this database
 session.
 
 Yours,
 Laurenz Albe
  

[GENERAL] PG Log

2012-03-28 Thread Arvind Singh


I have queries regarding columns in Postgres CSV Log.
 
Following is a sample Logline

2012-03-28 19:25:47.968 
IST,postgres,stock_apals,2388,localhost:1898,4f731863.954,6,SET,2012-03-28
 19:25:47 IST,2/0,0,LOG,0,QUERY STATISTICS,! system usage stats:
! 0.047000 elapsed 0.00 user 0.00 system sec
! [0.078125 user 0.031250 sys total],Select * from stock_apals
,,ShowUsage, .\src\backend\tcop\postgres.c:4305,

 
I am aware of all the data segments except the following

! system usage stats:
! 0.047000 elapsed 0.00 user 0.00 system sec
! [0.078125 user 0.031250 sys total],


What do the numbers mean? They seem to appear only with a log line for SQL 
statements.
 
thank you
arvind

  

[GENERAL] pgcon 2012

2012-03-23 Thread Arvind Singh

Dear sir,
 
we are recent users of Postgres and wish to be more aware of and soak up the 
activities around the DB.
we used to develop more on the MySql + Sql Server combo.
 
Now we are using PG 9.0.5 , NPGSQL + Visual CSharp , Java
 
Is http://www.pgcon.org/2012/?2nd
 
a valid and certified Postgres Event, or is it just a marketing strategy by a 
private Event Management firm?
 
Especially related to:
- latest PG developments
- learning about PG Stats, Logs, and the pg_settings table
- hiring postgres-related manpower
 
because we are based in Delhi, and it will take effort to send candidates to 
the event.
 
thanking you
yours truly
arvind

[GENERAL] Help in Parsing PG log usings CSV format

2012-03-22 Thread Arvind Singh

Help needed in parsing PostgreSQL CSV Log
Hello friends,
I am working an a section of application which needs to Parse CSV Logs 
generated by PostgreSql server.
- The Logs are stored C:\Program Files\PostgreSQL\9.0\data\pg_log
- The Server version in 9.0.4
- The application is developed in C Sharp 
* The basic utility after Parse the Log is to show contents in a 
DataGridView. 
* There are other filter options like to view log contents for a particular 
range of Time for a Day.
 
**However, the main problem is that the Log format is not readable** 
 
 
A Sample Log data line 
 2012-03-21 11:59:20.640 
 IST,postgres,stock_apals,3276,localhost:1639,4f697540.ccc,10,idle,2012-03-21
  11:59:20 IST,2/163,0,LOG,0,statement: SELECT 
 version()exec_simple_query, .\src\backend\tcop\postgres.c:900,
 
As you can see, the columns in the Log are comma separated, but 
individual values are not Quote Enclosed.
 
For instance the 1st, 4th, 6th .. columns.
 
**Is there a setting in the PostgreSQL configuration to place quotes around all 
columns in a Logfile?**
 
 
I just want to update the columns so that all are within Quotes
 
what goes wrong is when it reaches the column where the sql statement is placed: 
it also has commas for the table columns. The log line is a mixed bunch of 
quote-enclosed and non-quote-enclosed columns. Is there a configuration or 
utility to convert the non-quoted columns to quoted columns?
 
PS : the copy utility or any other utility cannot be used, as we have to parse 
the log within a C Sharp application.

Thanks for any advice and help

Re: [GENERAL] Parsing PG log usings CSV format

2012-03-22 Thread Arvind Singh

Thank you sir,
 
i have sorted out the problem based on 
 The columns that are not quoted are guaranteed not to contain a comma. 
 
But i have another query: the structure of the PG CSV Log, as mentioned in the manual 
and as below, has 24 columns
http://www.postgresql.org/docs/9.1/static/runtime-config-logging.html
---


log_time timestamp,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  application_name text,
  PRIMARY KEY (session_id, session_line_num)
---
 
However every Log line added contains only 22 columns, a sample log line is 
given below
---

2012-03-22 15:29:53.546 
IST,postgres,stock_apals,2396,localhost:2766,4f6af819.95c,9,SELECT,2012-03-22
 15:29:53 IST,3/0,0,LOG,0,QUERY STATISTICS,! system usage stats:
! 0.015000 elapsed 0.00 user 0.015625 system sec
! [0.078125 user 0.062500 sys total],SELECT SUBSTR(itemname, 1, 15) as 
Product,
avg(rate) as Avgrate
FROM   issue_stock WHERE extract(year from issue_stock.edate) = '2011'
GROUP BY itemname
order by itemname,,ShowUsage, .\src\backend\tcop\postgres.c:4305,
---
 
 the last column of the Log is not the Primarykey?
 the last column is as yet unknown, because at every occurrence it is an empty 
 quoted column?
 the column numbers don't match the generated log.
 
Is this a different Log format? Can you guide us to a page where the column 
specifications can be matched?
 
Yrs truly
arvind pal singh
 

 Subject: RE: [GENERAL] Help in Parsing PG log usings CSV format
 Date: Thu, 22 Mar 2012 16:47:11 +0100
 From: laurenz.a...@wien.gv.at
 To: arvin...@hotmail.com; pgsql-general@postgresql.org
 
 Arvind Singh wrote:
  Help needed in parsing PostgreSQL CSV Log
 
 [...]
 
  **However the main problem that is, the Log format is not readable**
 
  A Sample Log data line
   2012-03-21 11:59:20.640
 
 IST,postgres,stock_apals,3276,localhost:1639,4f697540.ccc,10,idle
 ,2012-03-21 11:59:20
  IST,2/163,0,LOG,0,statement: SELECT
 version()exec_simple_query,
  .\src\backend\tcop\postgres.c:900,
  
  As you can see the columns in the Log are comma separated , But
 however individual values are not
  Quote Enclosed.
  
  For instance the 1st,4rth,6th .. columns
  
  **Is there a setting in PostgreSQL configuration to place quotes
 around all columns in a Logfili
  
  I just want to update the columns so that all are within Quotes
  
  what happens wrong is when it reaches the column where sql statement
 is place. it also has commas set
  for table columns. The log line is a mix bunch of quote-enclosed and
 non-quote-enclosed column. is
  there is a configuration or utility to convert the non-quoted column
 to quoted column
 
 The columns that are not quoted are guaranteed not to contain a comma.
 So it shouldn't be a problem to parse them.
 
 In fact, it is quite easy. As an example, see here:
 http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgreplay/pgreplay/parse.c?rev=1.14&content-type=text/x-cvsweb-markup
 In the function parse_csvlog_entry, after the comment
 read next line after start timestamp from log file
 you can find code that parses such a line.
 
 The code is in C, so it should be easy to port it to C#, which
 is essentially Java, which has C-like syntax.
 
 Yours,
 Laurenz Albe
  

[GENERAL] question related to pg_stat_database

2011-10-29 Thread Arvind Singh

Hello pg_users,
 
we have just migrated an application from mysql to postgresql.
 
we use csharp + npgsql to develop the application.
 
I need to monitor Postgresql statistics at regular intervals.
 
I came across pg_stat_database and especially the columns tup_fetched and 
tup_returned.
 
Is there anything to worry about when comparing both figures?
 
For ex. my server reports
 
tup_fetched = 736555240
 
tup_returned = 99888791 (almost 10% of fetched)
 
 
 
thanx.