Re: [GENERAL] Question about performance - Postgres 9.5

2016-08-16 Thread Sameer Kumar
On Wed, 17 Aug 2016, 1:36 p.m. Venkata B Nagothi,  wrote:

> On Mon, Jun 13, 2016 at 8:37 AM, Patrick B 
> wrote:
>
>> Hi guys,
>>
>> In the db I'm working on, there will be three tables:
>>
>> visits, work, others.
>>
>> Everything the customer does will be logged: all inserts, updates, and
>> deletes.
>>
>> Option 1: Each table would have its own log table.
>> visits_logs, work_logs, others_logs
>>
>> Option 2: All the logs would be stored here...
>> log_table
>>
>> Can you guys please tell me which option would be faster, in your opinion,
>> and why?
>>
>
> Did you mean that you will be auditing the activities happening on those
> 3 tables? If yes, can you clarify what exactly you will be logging?
>
> What volume of transactions will the 3 tables be receiving over a
> business day? If the volume is manageable, then one table for logging
> all the actions across the 3 tables would be good.
>

It will also depend on what you would be using the log entries for. What
kind of queries? What retention period? If you will query most often on a
date range and also purge by date, then it is better to log everything in
one table and partition by date.
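As a side note, 9.5 predates declarative partitioning (added in PostgreSQL 10), so date partitioning there is typically done with table inheritance and CHECK constraints. A rough sketch, with hypothetical table and column names:

```sql
-- Parent table for the audit log (names are illustrative only).
CREATE TABLE audit_log (
    logged_at  timestamptz NOT NULL,
    payload    jsonb
);

-- One child per month; the CHECK constraint lets constraint_exclusion
-- skip irrelevant partitions for date-range queries.
CREATE TABLE audit_log_2016_08 (
    CHECK (logged_at >= '2016-08-01' AND logged_at < '2016-09-01')
) INHERITS (audit_log);

-- Purging a month then becomes a cheap DROP instead of a bulk DELETE:
-- DROP TABLE audit_log_2016_08;
```

A routing trigger (or inserting directly into the correct child) is still needed to direct rows into the right partition.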

You can log old and new records in JSON format in one column; that way you
don't need to worry about changes to the structure of the underlying tables.

In the triggers you use for auditing, you can transform rows into a JSON
document.
You can have columns for tableName, action (insert/update/delete),
newRecord (json), oldRecord (json), and datetime.
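A minimal sketch of such a trigger-based audit setup (the table, column, and trigger names here are hypothetical; to_jsonb() is available from 9.5 onward):

```sql
-- Single audit table for all audited tables.
CREATE TABLE audit_log (
    id          bigserial   PRIMARY KEY,
    table_name  text        NOT NULL,
    action      text        NOT NULL
                CHECK (action IN ('INSERT', 'UPDATE', 'DELETE')),
    new_record  jsonb,
    old_record  jsonb,
    logged_at   timestamptz NOT NULL DEFAULT now()
);

-- One generic trigger function; TG_TABLE_NAME and TG_OP identify
-- the source table and the operation.
CREATE OR REPLACE FUNCTION audit_trigger() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (table_name, action, new_record, old_record)
    VALUES (TG_TABLE_NAME,
            TG_OP,
            CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END,
            CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END);
    RETURN NULL;  -- return value of an AFTER row trigger is ignored
END;
$$ LANGUAGE plpgsql;

-- Attach the same function to each audited table, e.g. visits:
CREATE TRIGGER visits_audit
    AFTER INSERT OR UPDATE OR DELETE ON visits
    FOR EACH ROW EXECUTE PROCEDURE audit_trigger();
```

Because the row images are stored as jsonb, later ALTERs on visits/work/others don't require changing the audit table.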


> If you are auditing and the size of the data is manageable, then even one
> table would be good. Otherwise, a separate audit table for each table
> would generally be a good idea, as it makes it easy to track activities.
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com


Re: [GENERAL] Question about performance - Postgres 9.5

2016-08-16 Thread Venkata B Nagothi
On Mon, Jun 13, 2016 at 8:37 AM, Patrick B  wrote:

> Hi guys,
>
> In the db I'm working on, there will be three tables:
>
> visits, work, others.
>
> Everything the customer does will be logged: all inserts, updates, and
> deletes.
>
> Option 1: Each table would have its own log table.
> visits_logs, work_logs, others_logs
>
> Option 2: All the logs would be stored here...
> log_table
>
> Can you guys please tell me which option would be faster, in your opinion,
> and why?
>

Did you mean that you will be auditing the activities happening on those 3
tables? If yes, can you clarify what exactly you will be logging?

What volume of transactions will the 3 tables be receiving over a business
day? If the volume is manageable, then one table for logging all the
actions across the 3 tables would be good.

If you are auditing and the size of the data is manageable, then even one
table would be good. Otherwise, a separate audit table for each table would
generally be a good idea, as it makes it easy to track activities.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Question about performance - Postgres 9.5

2016-06-12 Thread Patrick B
Also...

If something is changed inside the visits table (delete/insert/update), the
visits_logs table will log the change.

However, some joins between those three tables will become necessary at
some point, as the visits and work tables are related somehow...


Re: [GENERAL] question about performance

2008-07-21 Thread Torsten Zühlsdorff

A. Kretschmer wrote:

if I have a table, the_table, with a DATE field, I'll call it 'day', and
I'd like to find all rows whose day falls within a given month, which of the
following methods is faster/costs less:


1.

SELECT * FROM the_table WHERE day LIKE '2008-01-%';

2.

SELECT * FROM the_table
WHERE ( day BETWEEN '$month_begin' AND '$month_end' );

# where $month_begin is '2008-01-01' and $month_end is '2008-01-31';


Probably the second one, but it depends on whether you have a proper index.

Other solution: create a functional index based on date_trunc and
rewrite your WHERE condition accordingly.


Can you give an example?

Greetings,
Torsten

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question about performance

2008-07-21 Thread A. Kretschmer
On Mon, 21.07.2008, at 9:40:19 +0200, Torsten Zühlsdorff wrote the following:
 A. Kretschmer wrote:
 
 if I have a table, the_table, with a DATE field, I'll call it 'day', and
 I'd like to find all rows whose day falls within a given month, which of
 the following methods is faster/costs less:
 
 1.
 
 SELECT * FROM the_table WHERE day LIKE '2008-01-%';
 
 2.
 
 SELECT * FROM the_table
 WHERE ( day BETWEEN '$month_begin' AND '$month_end' );
 
 # where $month_begin is '2008-01-01' and $month_end is '2008-01-31';
 
 Probably the second one, but it depends on whether you have a proper index.
 
 Other solution: create a functional index based on date_trunc and
 rewrite your WHERE condition accordingly.
 
 Can you give an example?

Sure. For instance, I have a table called 'frachtschein'; this table
contains a field 'ts' of type timestamp. First, there is no index:

scholl=# explain analyse select * from frachtschein where 
date_trunc('month',ts::timestamp)='2008-02-01 00:00:00'::timestamp;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Seq Scan on frachtschein  (cost=0.00..243.89 rows=29 width=84) (actual time=4.967..10.398 rows=524 loops=1)
   Filter: (date_trunc('month'::text, ts) = '2008-02-01 00:00:00'::timestamp without time zone)
 Total runtime: 11.468 ms
(3 rows)


Now I create an index:


scholl=*# create index idx_e on frachtschein(date_trunc('month',ts));
CREATE INDEX


The same select, using the index:


scholl=*# explain analyse select * from frachtschein where 
date_trunc('month',ts::timestamp)='2008-02-01 00:00:00'::timestamp;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on frachtschein  (cost=2.10..76.12 rows=29 width=84) (actual time=0.336..1.570 rows=524 loops=1)
   Recheck Cond: (date_trunc('month'::text, ts) = '2008-02-01 00:00:00'::timestamp without time zone)
   ->  Bitmap Index Scan on idx_e  (cost=0.00..2.10 rows=29 width=0) (actual time=0.316..0.316 rows=524 loops=1)
         Index Cond: (date_trunc('month'::text, ts) = '2008-02-01 00:00:00'::timestamp without time zone)
 Total runtime: 2.624 ms
(5 rows)




Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] question about performance

2008-07-20 Thread A. Kretschmer
On Sun, 20.07.2008, at 20:08:21 +0200, Robert Urban wrote the following:
 Hi PostgreSQLer,
 
 if I have a table, the_table, with a DATE field, I'll call it 'day', and
 I'd like to find all rows whose day falls within a given month, which of
 the following methods is faster/costs less:
 
 1.
 
   SELECT * FROM the_table WHERE day LIKE '2008-01-%';
 
 2.
   
   SELECT * FROM the_table
   WHERE ( day BETWEEN '$month_begin' AND '$month_end' );
   
   # where $month_begin is '2008-01-01' and $month_end is '2008-01-31';

Probably the second one, but it depends on whether you have a proper index.

Other solution: create a functional index based on date_trunc and
rewrite your WHERE condition accordingly.

Andreas



Re: [GENERAL] question about performance

2008-07-20 Thread Alan Hodgson
On Sunday 20 July 2008, Robert Urban [EMAIL PROTECTED] wrote:
 Hi PostgreSQLer,

 if I have a table, the_table, with a DATE field, I'll call it 'day', and
 I'd like to find all rows whose day falls within a given month, which of
 the following methods is faster/costs less:

 1.

   SELECT * FROM the_table WHERE day LIKE '2008-01-%';

 2.

   SELECT * FROM the_table
   WHERE ( day BETWEEN '$month_begin' AND '$month_end' );

   # where $month_begin is '2008-01-01' and $month_end is '2008-01-31';

2. Particularly if you have an index on the day field. Doing text operations 
on date fields is not conducive to performance.
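For illustration, a sketch of option 2 with a plain btree index, using the table and column names from the question; the half-open range form is my own suggestion (not from the original post) and avoids hard-coding the month's last day:

```sql
CREATE INDEX the_table_day_idx ON the_table (day);

-- Half-open range: works for months of any length, and the planner
-- can use the btree index on "day" directly.
SELECT * FROM the_table
WHERE day >= DATE '2008-01-01'
  AND day <  DATE '2008-02-01';
```

The LIKE variant, by contrast, forces a date-to-text conversion and cannot use a plain index on the date column.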

-- 
Alan



Re: [GENERAL] question about performance of libpq

2006-06-20 Thread Lee Riquelmei
I am so sorry. I sent a mail draft.

On 6/20/06, Lee Riquelmei [EMAIL PROTECTED] wrote:
hi, all. A strange question is as follows:
I have two PCs:
machine A: FreeBSD 5.4 with PostgreSQL 8.1.2
machine B: Windows XP with PostgreSQL 8.1.2
A and B have the same hardware configuration and are in a 100Mbit LAN.
A large table lineitem of about 600572 rows exists on both A and B.
On machine B, I run psql and issue a SQL "select * from lineitem" to A: the execution time is about 10s.
On machine A, I run psql and issue