Hello,

  I am planning to implement a system where there is one master database running on a 
Linux box with as many resources as necessary, and one or more client PCs, each with a 
100 MHz processor, 32-64 MB of memory, and a 10 Mbit/s network card.

  The main requirement is that the clients work on the current state of the data, 
meaning the basic database (data the clients cannot change) plus some statistical data 
about the other clients, and that they do not stop working when there is no network 
connection to the master PC. When connected, they should send their detailed 
transactions on the basic data back to the master PC.
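
  To make the intended data flow a bit more concrete, here is a minimal client-side 
sketch. It assumes psycopg2 as the driver and hypothetical names (a local 
transaction_log table, a master_transactions table on the master, made-up connection 
strings); none of these come from my actual schema. The idea is just that detailed 
transactions are always recorded locally, and pushed to the master whenever it is 
reachable.

# Sketch only: record detailed transactions locally, push them to the master
# when the network is available. Table and connection names are assumptions.
import psycopg2

LOCAL_DSN  = "dbname=clientdb"              # local Postgres on the client PC
MASTER_DSN = "host=master dbname=masterdb"  # master database on the Linux box

def record_transaction(local_conn, item_id, amount):
    """Record a detailed transaction locally; works with or without the network."""
    with local_conn.cursor() as cur:
        cur.execute(
            "INSERT INTO transaction_log (item_id, amount, pushed) VALUES (%s, %s, false)",
            (item_id, amount),
        )
    local_conn.commit()

def push_pending_transactions(local_conn):
    """If the master is reachable, forward all unpushed transactions and mark them."""
    try:
        master_conn = psycopg2.connect(MASTER_DSN)
    except psycopg2.OperationalError:
        return  # no network connection to the master; keep working locally
    try:
        with local_conn.cursor() as lcur, master_conn.cursor() as mcur:
            lcur.execute("SELECT id, item_id, amount FROM transaction_log WHERE NOT pushed")
            for row_id, item_id, amount in lcur.fetchall():
                mcur.execute(
                    "INSERT INTO master_transactions (client_item_id, amount) VALUES (%s, %s)",
                    (item_id, amount),
                )
                lcur.execute("UPDATE transaction_log SET pushed = true WHERE id = %s", (row_id,))
            master_conn.commit()
            local_conn.commit()
    finally:
        master_conn.close()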

  For this reason I am considering running Postgres on the client computers as well, 
but I am quite concerned about the system overhead.

  The clients will only do basic database work: 
   - selects from the database, without nested selects (or with nested selects of at 
most 1-2 levels) 
   - writing their transactions into the database, with commit/rollback functionality
   - updating some tables to synchronize with the master database
   - updating some tables to summarize the value of the transactions (this could be 
done with triggers, but if triggers cost too many resources, there is an existing 
solution that uses only basic operations; see the sketch after this list)

  Size of the database: the basic data consists of 50,000-100,000 rows in each of 3-4 
tables, and much less data in the other tables. The total number of tables is around 100.

  I would like to hear the opinions of experienced Postgres users: can I embark on 
this road, or should I choose a different approach with another database system that 
has lower resource needs?

  Thanks in advance,

  Gogulus
