Jonah,
Thank you for the answer. Good to know about this enterprise DB feature.
I'll follow up using pgloader.
Regards.
Adonias Malosso
On Sat, Apr 26, 2008 at 10:14 PM, Jonah H. Harris [EMAIL PROTECTED]
wrote:
On Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso [EMAIL PROTECTED]
wrote:
Hi All,
I'd like to know the best practice for loading a 70 million row, 101 column
table from Oracle to PostgreSQL.
The current approach is to dump the data to CSV and then COPY it into
PostgreSQL.
Does anyone have a better idea, or know if there's any way to optimize huge
data load operations like these?
Regards,
Adonias Malosso
Hi all,
The following query takes about 4s to run on a 16GB RAM server. Any ideas
why it doesn't use the indexes on the primary keys in the join conditions?
select i.inuid, count(*) as total
from cte.instrumentounidade i
inner join cte.pontuacao p on p.inuid = i.inuid
inner join cte.acaoindicador ai
Setting random_page_cost = 2 solved the problem. Thanks.
On Thu, Feb 21, 2008 at 6:16 PM, Claus Guttesen [EMAIL PROTECTED] wrote:
why it doesn't use the indexes on the primary keys in the join
conditions?
Maybe random_page_cost is set too high? What version are you using?
Postgresql v. 8.2.1
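For anyone hitting the same symptom: the fix above lowers the planner's estimate of the cost of a random page fetch, which makes index scans look cheaper relative to sequential scans. A minimal sketch (the value 2 is the one reported in this thread; the default in that era was 4):

```sql
-- Lower random_page_cost for the current session only, so index
-- scans are costed more favourably than with the default of 4.
SET random_page_cost = 2;
-- To make the change permanent on 8.2, edit random_page_cost in
-- postgresql.conf and reload the server configuration.
```

It is worth verifying with EXPLAIN ANALYZE that the plan actually switches to the expected index scans before changing the setting globally.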