Re: [GENERAL] PostgreSQL and Crystal Report

2007-09-05 Thread Rob Kirkbride
On 05/09/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Hi All.
> I need to use Crystal Reports in my C++ project to produce PDF reports
> from some PostgreSQL tables.
> Any idea how to implement this functionality in my C++ project, or where I
> can find some useful CR documentation?
> All the Crystal Reports documentation I've found covers the Visual Basic
> or C# APIs.
>
> Thanks in advance.
>
> Luca.
>

Luca,

Good luck getting a license from Crystal Reports. They sent us a license
which didn't work (it gave an invalid id error). They spent over a week
insisting that the license was valid while we told them that it wasn't. We
gave up on them and went to Jasper Reports, which in many ways is a better
product. Crystal Reports also couldn't give us any assurance that they
would support PostgreSQL; Jasper works great with it. We are using Jasper
Server as well for delivery of our reports, and I think it's a great
product.

I've used the Java API into Jasper; I've no idea what a C++ API to it
would look like, though.

Rob


Re: [GENERAL] Data Warehousing

2007-09-03 Thread Rob Kirkbride

Andrej Ricnik-Bay wrote:
> On 9/3/07, Rob Kirkbride <[EMAIL PROTECTED]> wrote:
> > We're using Hibernate to write to the database, so partitioning looks
> > like too much of a re-architecture. In reply to Andrej: we do have a
> > logged_time column in the required tables. That being the case, how
> > does that help me with the tools provided?
> >
> > Might I have to write a custom JDBC application to do the data
> > migration?
>
> That would be one option :}
>
> If the server is on a Unix/Linux platform you should be able to achieve
> the result with a reasonably simple shell script and cron, I'd say.

I am on a Linux platform, but I'm going to need some pointers regarding
the cron job. Are you suggesting that I parse the dump file? I assume I
would need to switch to dumping with inserts and then parse the dump,
looking for where I need to start from?
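
Or, rather than parsing a dump at all, would something along these lines
work, run hourly from cron? This is a rough sketch only: the connection
details, table definition and column list are made up, it assumes the
contrib/dblink module is installed on the reports server, and a fixed
one-hour window like this could miss or duplicate rows across runs, so a
stored high-water mark would presumably be safer:

  -- append_new.sql, run from cron with:  psql reportsdb -f append_new.sql
  -- (connection string, table and columns invented for the sketch)
  insert into logged_data
  select *
    from dblink('host=clusterhost dbname=livedb',
                'select id, logged_time, payload
                   from logged_data
                  where logged_time > now() - interval ''1 hour''')
      as new_rows(id integer, logged_time timestamptz, payload text);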







Re: [GENERAL] Data Warehousing

2007-09-03 Thread Rob Kirkbride
On 03/09/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
> On 9/3/07, Rob Kirkbride <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I've got a postgres database collecting logged data. This data I have
> > to keep for at least 3 years. The data in the first instance is being
> > recorded in a postgres cluster. It then needs to be moved to a reports
> > database server for analysis. Therefore I'd like a job to dump data on
> > the cluster, say every hour, and record it in the reports database. The
> > clustered database could be purged of, say, data more than a week old.
> >
> > So basically I need a dump/restore that only appends new data to the
> > reports server database.
> >
> > I've googled but can't find anything. Can anyone help?
>
> You might find an answer in partitioning your data.  There's a section
> in the docs on it.  Then you can just dump the old data from the newest
> couple of partitions if you're partitioning by week, and delete anything
> older with a simple "delete where date < now() - interval '1 week'", or
> something like that.



We're using Hibernate to write to the database, so partitioning looks like
too much of a re-architecture. In reply to Andrej: we do have a
logged_time column in the required tables. That being the case, how does
that help me with the tools provided?

Might I have to write a custom JDBC application to do the data migration?

Rob


[GENERAL] Data Warehousing

2007-09-03 Thread Rob Kirkbride
Hi,

I've got a postgres database collecting logged data. This data I have to
keep for at least 3 years. The data in the first instance is being
recorded in a postgres cluster. It then needs to be moved to a reports
database server for analysis. Therefore I'd like a job to dump data on the
cluster, say every hour, and record it in the reports database. The
clustered database could be purged of, say, data more than a week old.
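
The purge side seems easy enough; assuming a timestamp column, something
like this (table and column names made up):

  delete from logged_data where logged_time < now() - interval '1 week';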

So basically I need a dump/restore that only appends new data to the reports
server database.

I've googled but can't find anything. Can anyone help?

Thanks

Rob


Re: [GENERAL] Optimising Union Query.

2005-04-25 Thread Rob Kirkbride
Jim C. Nasby wrote on 25/04/2005 01:28:
On Sat, Apr 23, 2005 at 10:39:14PM +, Patrick TJ McPhee wrote:
> In article <[EMAIL PROTECTED]>,
> Rob Kirkbride <[EMAIL PROTECTED]> wrote:
> % I've done an EXPLAIN ANALYZE and as I expected the database has to
> % check every row in each of the three tables below, but I'm wondering
> % if I can do it much quicker by use of an index or something.
>
> This is because you're returning a row for every row in the three
> tables.
>
> % select l.name,l.id from pa i,locations l where i.location=l.id union
> % select l.name,l.id from andu i,locations l where i.location=l.id union
> % select l.name,l.id from idu i,locations l where i.location=l.id;
>
> You might get some improvement from
>
> select name,id from locations
>  where id in (select distinct location from pa union
>               select distinct location from andu union
>               select distinct location from idu);

Note that SELECT DISTINCT is redundant with a plain UNION; by definition,
UNION does a DISTINCT. In fact, this is going to hurt you: you'll end up
doing four distinct operations (one for each SELECT DISTINCT and one for
the overall UNION). Unless some of those tables have a lot of duplicated
location values, you should either use UNION ALLs or drop the DISTINCTs.
Note that going with DISTINCTs is different from what your original query
does.

You should also consider this:

SELECT name, id FROM locations l
 WHERE EXISTS (SELECT * FROM pa p WHERE p.location=l.id)
    OR EXISTS (SELECT * FROM andu a WHERE a.location=l.id)
    OR EXISTS (SELECT * FROM idu i WHERE i.location=l.id);

This query would definitely be helped by having indexes on
(pa|andu|idu).location.

Thanks for that. I tried a few things, including using DISTINCTs and
UNION ALLs, but none made a big difference. However, your query above
sped things up by a factor of more than two.
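
For completeness, the indexes Jim suggests would look something like this
(index names invented):

  create index pa_location_idx on pa (location);
  create index andu_location_idx on andu (location);
  create index idu_location_idx on idu (location);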

Thanks very much!
Rob


[GENERAL] Optimising Union Query.

2005-04-22 Thread Rob Kirkbride
Hi,
I've got a query that takes quite some time to complete. I'm not an SQL
expert, so I'm not sure how to improve things.
I've done an EXPLAIN ANALYZE and as I expected the database has to check
every row in each of the three tables below, but I'm wondering if I can
do it much quicker by use of an index or something. Each of the three
tables could have several thousand entries in them. Basically the tables
contain data recorded against time, and every hour a script deletes
entries that are more than so many hours old.

select l.name,l.id from pa i,locations l where i.location=l.id union 
select l.name,l.id from andu i,locations l where i.location=l.id union 
select l.name,l.id from idu i,locations l where i.location=l.id;

Thanks for any help,
Rob



[GENERAL] Time

2005-04-04 Thread Rob Kirkbride
Hi,
I'm trying to fetch out the epoch value of a time; the data type is
'timestamp with time zone'. When I do select extract(epoch from time) it
returns a fractional part as well.
Am I doing this the correct way? Is the fractional part microseconds?
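
For example, this is roughly what I'm running (table and column names
changed), plus what I'd try if I only wanted whole seconds:

  -- logged_at is a timestamp with time zone column; names are made up
  select extract(epoch from logged_at) from events;
  -- gives e.g. 1112616000.123456

  -- truncating to whole seconds, if that's the right approach:
  select floor(extract(epoch from logged_at))::bigint from events;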

Thanks for any help,
Rob