Re: [GENERAL] Improve MMO Game Performance

2012-10-14 Thread Arvind Singh

Oh no, it is the average. I am sorry, I did not mention that the average was 
calculated as (PeakUsage + MinimumUsage) / PeriodOfUsage

 
it is not that 500 users arrive evenly across the hour, 
 
in a real game scenario there are peak times within an hour, so it means there 
can be 500 users making simultaneous moves,
and there could be a surge of 500 inserts in an instant; if there are 4000 
users logged in, the figure could be 4000 inserts in a millisecond, and at 
low tide, when there are 100 users, the number could be as low as 10 inserts/ms
 
thanks
 

 Date: Sat, 13 Oct 2012 10:18:28 -0600
 Subject: Re: [GENERAL] Improve MMO Game Performance
 From: scott.marl...@gmail.com
 To: arvin...@hotmail.com
 CC: pgsql-general@postgresql.org; laurenz.a...@wien.gv.at
 
 On Sat, Oct 13, 2012 at 5:52 AM, Arvind Singh arvin...@hotmail.com wrote:
  we are all aware of the popular trend of MMO games, where players face each
  other live.
 
  My area of concern is storage of player moves and game results.
  Using C# and PostgreSQL.
  The game client is browser-based ASP.NET and calls C# functions for all
  database-related processing.
 
 
  To understand my query, please consider the following scenario:
  we store game progress in a Postgres table.
  A tournament starts with four players and the following activity.
 
  Each player starts with 100 hitpoints.
  Player 1 makes a strike (we refer to a chart to convert blows to hitpoints
  with a random-range %).
  Player 2 now has 92 HP and returns a light blow, so player 1 has 98 HP.
 
  The above two rounds will now be in the Game Progress table, as
 
  ROW | Player1HP | Player2HP | StrikeFrom | StrikeTo | ReturnStrikeHP | Round | TimeStamp | StrikeMethod
  1   | 100       | 100       | 0          | 0        | 0              | 0     |           |
  2   | 98        | 92        | P1         | P2       | 2              | 1     |           |
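For concreteness, a hypothetical DDL sketch of such a Game Progress table. Column names follow the layout above; the types and constraints are assumptions, not something given in the original post:

```sql
-- Hypothetical sketch only: types, key, and default are assumed;
-- column names follow the table layout quoted above.
CREATE TABLE game_progress (
    row_id           bigserial PRIMARY KEY,
    player1_hp       integer NOT NULL,
    player2_hp       integer NOT NULL,
    strike_from      text,
    strike_to        text,
    return_strike_hp integer,
    round            integer NOT NULL,
    ts               timestamptz NOT NULL DEFAULT now(),
    strike_method    text
);
```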
 
  There is a tremendous flow of sql queries,
 
  There are on average/minimum 100 tournaments online per 12 minutes, or 500
  players / hour.
  In the Game Progress table, we store each player move.
  For a 12-round tournament of 4 players there can be 48 records,
  plus around the same number for spells or special items,
  a total of 96 per tournament, or 48000 record inserts per hour (500
  players/hour)
 
 That's only about 13 inserts per second, and if you're batching them
 up in transactions then you could easily be doing only one insert per
 second or so. My laptop could handle that load easily.
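The arithmetic behind that estimate can be checked with the figures quoted in the thread:

```python
# Back-of-the-envelope insert rates from the figures in this thread.
tournaments_per_hour = 100 * 60 // 12    # 100 tournaments per 12 minutes -> 500
records_per_tournament = 96              # 48 moves + ~48 spell/item records
inserts_per_hour = tournaments_per_hour * records_per_tournament
inserts_per_second = inserts_per_hour / 3600
print(inserts_per_hour, round(inserts_per_second, 1))  # 48000 13.3
```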
  

[GENERAL] Using incorrect default-value type

2012-10-14 Thread Arturo Pie Joa
Hello,

I have found a strange behavior in PostgreSQL when adding columns with defaults 
of a domain type in a schema other than public. Here is an example that 
reproduces it:

CREATE SCHEMA schema_1;
CREATE DOMAIN schema_1.text AS text;

SET search_path TO schema_1, pg_catalog;

CREATE TABLE test
(
  col1 text DEFAULT 'some value'
);

SELECT a.attname,
       pg_get_expr(d.adbin, d.adrelid) AS default
FROM pg_attribute a LEFT JOIN pg_attrdef d
           ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE a.attrelid = 'schema_1.test'::regclass
                AND a.attnum > 0
ORDER BY a.attnum;

The last query returns: 
col1     |     'some value'::pg_catalog.text

I don't understand why it is using pg_catalog.text when it should be using 
schema_1.text, or, in this case, why the query doesn't return just text, 
since the search path uses schema_1. 

Furthermore, if I open pgAdmin and select col1 in test table, SQL pane will 
show:
ALTER TABLE schema_1.test ALTER COLUMN col1 SET DEFAULT 'some value'::text;


but I believe it should show:
ALTER TABLE schema_1.test ALTER COLUMN col1 SET DEFAULT 'some 
value'::schema_1.text;


Is this a bug or am I missing something?

Thanks,

Arturo


Re: [GENERAL] Using incorrect default-value type

2012-10-14 Thread David Johnston
On Oct 14, 2012, at 15:49, Arturo Pie Joa arturot...@yahoo.ca wrote:

 Hello,
 
 I have found a strange behavior in postgreSQL when adding columns with 
 defaults of a domain type in a schema different from public. This is the 
 example to reproduce it:
 
 CREATE SCHEMA schema_1;
 CREATE DOMAIN schema_1.text AS text;
 
 SET search_path TO schema_1, pg_catalog;
 
 CREATE TABLE test
 (
   col1 text DEFAULT 'some value'
 );
 
 SELECT a.attname,
pg_get_expr(d.adbin, d.adrelid) AS default
 FROM pg_attribute a LEFT JOIN pg_attrdef d
ON a.attrelid = d.adrelid AND a.attnum = d.adnum
 WHERE a.attrelid = 'schema_1.test'::regclass
 AND a.attnum > 0
 ORDER BY a.attnum;
 
 Last query will return: 
 col1 | 'some value'::pg_catalog.text
 
 I don't understand why it is using pg_catalog.text, when it should be using 
 schema_1.text, or in this case the query should return just text since 
 the search path is using schema_1. 
 
 Furthermore, if I open pgAdmin and select col1 in test table, SQL pane 
 will show:
 ALTER TABLE schema_1.test ALTER COLUMN col1 SET DEFAULT 'some value'::text;
 
 but I believe, it should show:
 ALTER TABLE schema_1.test ALTER COLUMN col1 SET DEFAULT 'some 
 value'::schema_1.text;
 
 Is this a bug or am I missing something?
 
 Thanks,
 
 Arturo
 

What you are doing (name overloading) seems inadvisable regardless of whether 
it should work as described.  My guess is that the system searches for an 
actual type first, and only if it fails to find a matching type does it look 
for a matching domain.

David J.



Re: [GENERAL] Using incorrect default-value type

2012-10-14 Thread Tom Lane
Arturo Pie Joa arturot...@yahoo.ca writes:
 I have found a strange behavior in postgreSQL when adding columns with
 defaults of a domain type in a schema different from public.

As the other respondent said, you're confusing the issue by using
conflicting names.  If you hadn't done that, you'd see this:

regression=# create domain mytext as text;
CREATE DOMAIN
regression=# create table test (co1 mytext default 'some value');
CREATE TABLE
regression=# \d+ test
                                 Table "public.test"
 Column |  Type  |         Modifiers          | Storage  | Stats target | Description
--------+--------+----------------------------+----------+--------------+-------------
 co1    | mytext | default 'some value'::text | extended |              |
Has OIDs: no

which makes it at least slightly clearer what's going on: pg_get_expr is
just reporting the base type of the constant.  The constant does get
cast to the domain type, but that's an implicit cast that's not shown in
the decompiled output.  You can convince yourself of that by looking
directly at the pg_attrdef entry:

regression=# select * from pg_attrdef where adrelid = 'test'::regclass;
 adrelid | adnum | adbin | adsrc
---------+-------+-------+-------
   67889 |     1 | {COERCETODOMAIN :arg {CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 38 :constvalue 14 [ 0 0 0 14 115 111 109 101 32 118 97 108 117 101 ]} :resulttype 67888 :resulttypmod -1 :resultcollid 100 :coercionformat 2 :location -1} | 'some value'::text
(1 row)

The constant has type 25 (text).  The CoerceToDomain node does have
mytext (with OID 67888 in this test) as its result type, but it isn't shown
because it has coercionformat 2 (COERCE_IMPLICIT_CAST).

BTW, the reason it's done like this is that the CoerceToDomain has to be
evaluated at runtime: if you were to do an ALTER DOMAIN ADD CONSTRAINT,
it's not clear whether or not the constant would still be a valid value
of the domain.  So we can't just generate a Const node that claims that
'some value' is a value of type mytext.
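A minimal sketch of why the coercion must stay a runtime check. The table, domain, and constraint names here are hypothetical, assuming a fresh session:

```sql
-- Sketch: a later ALTER DOMAIN can invalidate a stored default,
-- so the cast to the domain has to be re-checked at runtime.
CREATE DOMAIN mytext AS text;
CREATE TABLE test2 (col mytext DEFAULT 'some value');
ALTER DOMAIN mytext ADD CONSTRAINT shortish CHECK (length(VALUE) < 5);
INSERT INTO test2 DEFAULT VALUES;
-- ERROR:  value for domain mytext violates check constraint "shortish"
```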

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory issues

2012-10-14 Thread Shiran Kleiderman
Hi
I've returned the memory configs to the default, erased data from my db and
am testing the system again.

This is the output of *cat /proc/meminfo*
Thanks

root@ip-10-194-167-240:~# cat /proc/meminfo
MemTotal:7629508 kB
MemFree:  170368 kB
Buffers:   10272 kB
Cached:  6220848 kB
SwapCached:0 kB
Active:  3249748 kB
Inactive:3936960 kB
Active(anon): 971336 kB
Inactive(anon):  2103844 kB
Active(file):2278412 kB
Inactive(file):  1833116 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:524284 kB
SwapFree: 522716 kB
Dirty: 83068 kB
Writeback:  3080 kB
AnonPages:955856 kB
Mapped:  2132564 kB
Shmem:   2119424 kB
Slab: 157200 kB
SReclaimable: 144488 kB
SUnreclaim:12712 kB
KernelStack:1184 kB
PageTables:21092 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 4339036 kB
Committed_AS:3637424 kB
VmallocTotal:   34359738367 kB
VmallocUsed:   26152 kB
VmallocChunk:   34359710052 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k: 7872512 kB
DirectMap2M:   0 kB


On Thu, Sep 27, 2012 at 8:59 AM, Andres Freund and...@2ndquadrant.com wrote:

 On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:
  Hi,
  I'm using an Amazon EC2 instance with the following spec, and the
  application that I'm running uses a Postgres 9.1 DB.
  The app has 3 main cron jobs.
 
  *Ubuntu 12, High-Memory Extra Large Instance
  17.1 GB of memory
  6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
  420 GB of instance storage
  64-bit platform*
 
  I've changed the main default values under file *postgresql.conf* to:
  shared_buffers = 4GB
  work_mem = 16MB
  wal_buffers = 16MB
  checkpoint_segments = 32
  effective_cache_size = 8GB
 
  When I run the app, after an hour or two, free -m looks like below and the
  crons can't run due to memory loss or similar (I'm new to Postgres and DB
  admin).
  Thanks!
 
  free -m, errors:
 
  total used free shared buffers cached
  Mem: 17079 13742 3337 0 64 11882
  -/+ buffers/cache: 1796 15283
  Swap: 511 0 511
 
  total used *free* shared buffers cached
  Mem: 17079 16833 *245 *0 42 14583
  -/+ buffers/cache: 2207 14871
  Swap: 511 0 511
 
  **free above stays low even when nothing is running.
 
  **errors:
  *DBI connect('database=---;host=localhost','postgres',...) failed: could
  not fork new process for connection: Cannot allocate memory*
  could not fork new process for connection: Cannot allocate memory
 
  and
  execute failed: ERROR: out of memory
  DETAIL: Failed on request of size 968. [for Statement 
  SELECT DISTINCT
 could you show cat /proc/meminfo?

 Greetings,

 Andres
 --
  Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




-- 
Best,
Shiran Kleiderman
+972  - 542380838
Skype - shirank1


[GENERAL] Mapping PostgreSQL data types to DB2 Federated Server

2012-10-14 Thread Alexander Gataric
Hello,

I'm on a project which requires adding PostgreSQL tables to DB2 Federated
Server. I'm getting an error with PostgreSQL data types boolean, text,
bytea, and XML. I believe this can be solved with the CREATE TYPE MAPPING in
Fed Server. Does anyone know which values to use? I'm not that familiar with
Fed Server.

Also, the Postgres data is being extracted and inserted from the same table
using Optim Archive. Does this pose an additional challenge with setting up
the mapping?

Thank you in advance

Alex

 



Re: [GENERAL] Mapping PostgreSQL data types to DB2 Federated Server

2012-10-14 Thread John R Pierce

On 10/14/12 5:52 PM, Alexander Gataric wrote:


I'm on a project which requires adding PostgreSQL tables to DB2 
Federated Server. I'm getting an error with PostgreSQL data types 
boolean, text, bytea, and XML. I believe this can be solved with the 
CREATE TYPE MAPPING in Fed Server. Does anyone know which values to 
use? I'm not that familiar with Fed Server.


Also, the Postgres data is being extracted and inserted from the same 
table using Optim Archive. Does this pose an additional challenge with 
setting up the mapping?




I suggest you talk to your IBM support contacts for these issues, they 
really have little to do with Postgres and are completely beyond 
postgres' control.




--
john r pierce                          N 37, W 122
santa cruz ca                       mid-left coast





[GENERAL] Re: [GENERAL] Mapping PostgreSQL data types to DB2 Federated Server

2012-10-14 Thread Alexander Gataric
The IBM people aren't being helpful so I thought I'd ask here.

Sent from my smartphone

- Reply message -
From: John R Pierce pie...@hogranch.com
To: pgsql-general@postgresql.org
Subject: [GENERAL] Mapping PostgreSQL data types to DB2 Federated Server
Date: Sun, Oct 14, 2012 8:05 pm


On 10/14/12 5:52 PM, Alexander Gataric wrote:

 I'm on a project which requires adding PostgreSQL tables to DB2  Federated 
 Server. I'm getting an error with PostgreSQL data types  boolean, text, 
 bytea, and XML. I believe this can be solved with the  CREATE TYPE MAPPING 
 in Fed Server. Does anyone know which values to  use? I'm not that familiar 
 with Fed Server.

 Also, the Postgres data is being extracted and inserted from the same  table 
 using Optim Archive. Does this pose an additional challenge with  setting up 
 the mapping?


I suggest you talk to your IBM support contacts for these issues, they really 
have little to do with Postgres and are completely beyond postgres' control.



-- 
john r pierce                          N 37, W 122
santa cruz ca                       mid-left coast





Re: [GENERAL] Re: [GENERAL] Mapping PostgreSQL data types to DB2 Federated Server

2012-10-14 Thread Craig Ringer

On 10/15/2012 09:37 AM, Alexander Gataric wrote:

The IBM people aren't being helpful so I thought I'd ask here.


Try dba.stackexchange.com .

--
Craig Ringer





Re: [GENERAL] Memory issues

2012-10-14 Thread Shiran Kleiderman
Hi
This is the output of meminfo when the system is under some stress.
Thanks

cif@ip-10-194-167-240:/tmp$ cat /proc/meminfo
MemTotal:7629508 kB
MemFree:   37820 kB
Buffers:2108 kB
Cached:  5500200 kB
SwapCached:  332 kB
Active:  4172020 kB
Inactive:3166244 kB
Active(anon):1864040 kB
Inactive(anon):  1568760 kB
Active(file):2307980 kB
Inactive(file):  1597484 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:524284 kB
SwapFree:  0 kB
Dirty: 23336 kB
Writeback: 0 kB
AnonPages:   1835716 kB
Mapped:  1610460 kB
Shmem:   1596916 kB
Slab: 136168 kB
SReclaimable: 123820 kB
SUnreclaim:12348 kB
KernelStack:1176 kB
PageTables:23148 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 4339036 kB
Committed_AS:4517524 kB
VmallocTotal:   34359738367 kB
VmallocUsed:   26152 kB
VmallocChunk:   34359710052 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k: 7872512 kB
DirectMap2M:   0 kB
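One detail worth flagging in the dump above: Committed_AS (4517524 kB) now exceeds CommitLimit (4339036 kB) and SwapFree is 0. Under strict overcommit accounting (this assumes vm.overcommit_memory=2 on the host, which is not stated in the thread) that is exactly the condition under which fork() and allocations fail with ENOMEM, matching the "Cannot allocate memory" errors reported earlier. A quick sketch of the check:

```python
# Sketch: flag commit-limit exhaustion from /proc/meminfo-style text.
# Figures are copied from the dump above; the strict-overcommit
# interpretation assumes vm.overcommit_memory=2 on the host.
sample = """\
CommitLimit:     4339036 kB
Committed_AS:    4517524 kB
SwapFree:              0 kB
"""

def parse_meminfo(text):
    """Return {field: kB value} for lines like 'Name:   1234 kB'."""
    out = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        out[name.strip()] = int(rest.split()[0])
    return out

m = parse_meminfo(sample)
overcommitted = m["Committed_AS"] > m["CommitLimit"]
print(overcommitted)  # True for the figures shown above
```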




-- 
Best,
Shiran Kleiderman
+972  - 542380838
Skype - shirank1


Re: [GENERAL] Re: [GENERAL] Mapping PostgreSQL data types to DB2 Federated Server

2012-10-14 Thread John R Pierce

On 10/14/12 6:37 PM, Alexander Gataric wrote:

On 10/14/12 5:52 PM, Alexander Gataric wrote:

 I'm on a project which requires adding PostgreSQL tables to DB2  
Federated Server. I'm getting an error with PostgreSQL data types  
boolean, text, bytea, and XML. I believe this can be solved with the  
CREATE TYPE MAPPING in Fed Server. Does anyone know which values to  
use? I'm not that familiar with Fed Server.


I googled db2 data types.
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.db2z10.doc.intro%2Fsrc%2Ftpc%2Fdb2z_datatypes.htm

Postgres' TEXT would map to VARCHAR(n), but you'll have to decide on a 
suitable max length for your n value.  There's nothing like boolean, so 
you'd probably map it either to a SMALLINT where 't' is 1 and 'f' is 0, or 
to a char(1) and just store it as t/f for true/false.  BYTEA is equivalent 
to a BLOB, or whatever DB2 uses for that.  And DB2 is supposed to have an 
XML type, but I suppose you could just map it to VARCHAR(n) again, as XML 
is really just a bunch of text formatted with HTML-like entities.
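As a rough illustration of what such mappings might look like on the DB2 side. The mapping names are hypothetical and the exact clause order of CREATE TYPE MAPPING is an assumption from the DB2 Federated Server documentation, so verify it there before use:

```sql
-- Rough sketch only: clause order and server-type keywords are
-- assumptions; check the DB2 CREATE TYPE MAPPING reference.
CREATE TYPE MAPPING pg_bool  FROM SERVER TYPE POSTGRESQL TYPE BOOLEAN TO SYSIBM.SMALLINT;
CREATE TYPE MAPPING pg_text  FROM SERVER TYPE POSTGRESQL TYPE TEXT    TO SYSIBM.VARCHAR(32000);
CREATE TYPE MAPPING pg_bytea FROM SERVER TYPE POSTGRESQL TYPE BYTEA   TO SYSIBM.BLOB;
```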






--
john r pierce                          N 37, W 122
santa cruz ca                       mid-left coast





Re: [GENERAL] How to raise index points when equal and like is used with gist ?

2012-10-14 Thread Condor

On 2012-10-12 11:30, Sergey Konoplev wrote:

On Fri, Oct 12, 2012 at 1:20 AM, Condor con...@stz-bg.com wrote:
Even without the tel field, the result and the type of scan are the same
(Seq Scan).


This is because your table has too few rows and it is easier to seq
scan. Add more rows, e.g. 100 000, then ANALYZE the table and run the
tests. Use random() and generate_series() to generate the data.
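The data-generation advice above might look like the following sketch. The table and column names are assumed from the query later in the thread, and the generated values are arbitrary:

```sql
-- Hypothetical: populate ~100k rows so the planner prefers the
-- index; table/column names assumed from the thread's query.
INSERT INTO tables (firstname, middlename, lastname, tel)
SELECT 'NAME' || (random() * 1000)::int,
       md5(random()::text),
       md5(random()::text),
       (random() * 1000000000)::bigint::text
FROM generate_series(1, 100000);
ANALYZE tables;
```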



You were right.
When I read the pg_trgm documentation I saw how much time a search over
100 000 rows would take, but I misled myself because I did not expect the
scan type to change from Seq to Bitmap. I understand my mistake and changed
the query to:

EXPLAIN ANALYZE WITH ab AS (SELECT * FROM tables WHERE firstname = 
'OLEG' AND middlename || lastname LIKE '%KUZNICOV%IGORU%')
SELECT * FROM ab WHERE tel LIKE '12%';

 CTE Scan on ab  (cost=6490.15..6531.14 rows=9 width=965) (actual time=2.256..20.017 rows=43 loops=1)
   Filter: (tel ~~ '12%'::text)
   Rows Removed by Filter: 1690
   CTE ab
     ->  Bitmap Heap Scan on tables  (cost=39.87..6490.15 rows=1822 width=600) (actual time=1.789..17.817 rows=1733 loops=1)
           Recheck Cond: (firstname = 'OLEG'::text)
           Filter: ((middlename || lastname) ~~ '%KUZNICOV%IGORU%'::text)
           ->  Bitmap Index Scan on tables_firstname_idx  (cost=0.00..39.42 rows=1823 width=0) (actual time=1.178..1.178 rows=1733 loops=1)
                 Index Cond: (firstname = 'OLEG'::text)
 Total runtime: 20.278 ms



Now it is much better: 20 ms vs 220 ms.


Thanks for your help.


Cheers,
C

