Re: [SQL] [ADMIN] Data insert

2005-08-20 Thread Aldor

1st
---

If you use PHP, you can escape the binary data with the function:

"pg_escape_bytea()"

When you retrieve the data from the db, you have to use:

"pg_unescape_bytea()"

to unescape it again.

See the PHP documentation on these functions for more information.
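A minimal sketch of both directions (the connection string, table, and column names here are made up, and error handling is omitted):

```php
<?php
// hypothetical connection - adjust to your setup
$conn = pg_connect("dbname=test user=postgres");

// escape the raw/encrypted data before inserting it into a bytea column
$raw     = file_get_contents("secret.bin");
$escaped = pg_escape_bytea($raw);
pg_query($conn, "INSERT INTO files (data) VALUES ('".$escaped."')");

// unescape it again when reading it back
$res  = pg_query($conn, "SELECT data FROM files LIMIT 1");
$row  = pg_fetch_row($res);
$back = pg_unescape_bytea($row[0]);   // $back now holds the raw bytes
?>
```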

2nd
---

If you want to insert big data volumes, try using COPY instead
of INSERT - it will run much, much faster.

You can implement a function like the following somewhere, to make COPY
easy to use from PHP:

/***
$tableName - name of the table the data has to be copied into

$copyArr   - array of "n" rows to be inserted; sample element:
             array(
                 "col1" => "foo",
                 "col2" => "bar"
             )
***/

function copyInto($tableName, $copyArr) {
	$thisDBConn = $this->getThisDBConnection(); // replace this with however
	                                            // you retrieve your db connection

	// build the column list from the keys of the first row
	$columnList = implode(",", array_keys($copyArr[0]));
	$queryCopyStart = "COPY ".$tableName." (".$columnList.") FROM STDIN;";

	// build one tab-delimited line per row; COPY text format wants no
	// quoting, tab as delimiter, \N for NULL, and backslash escapes
	$queryCopyData = array();
	foreach ($copyArr as $lineValue) {
		$thisLine = array();
		foreach ($lineValue as $dataValue) {
			if ($dataValue === null) {
				$thisLine[] = "\\N";
			} else {
				$thisLine[] = str_replace(
					array("\\", "\t", "\n"),
					array("\\\\", "\\t", "\\n"),
					$dataValue);
			}
		}
		array_push($queryCopyData, implode("\t", $thisLine)."\n");
	}

	pg_query($thisDBConn, $queryCopyStart); // puts the connection into COPY IN mode
	foreach ($queryCopyData as $line) {
		pg_put_line($thisDBConn, $line);
	}
	pg_put_line($thisDBConn, "\\.\n");      // end-of-data marker
	pg_end_copy($thisDBConn);
}

Be aware of the "max_stack_depth" value in postgresql.conf - maybe you
will need to increase it.
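For example (the value here is only an illustration, not a recommendation - check the documentation for your version):

```
# postgresql.conf (value in kilobytes; must stay below the OS stack
# limit shown by "ulimit -s")
max_stack_depth = 4096
```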


Postgres Admin wrote:

Sorry for the attachment, but copying and pasting this data does not work.

I don't have any idea how to insert this type of data into PostgreSQL.
Basically, it's encrypted data which I would like to keep in raw format.


Thanks for any help,
J





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly




[SQL] POSIX Regular Expression question

2005-09-05 Thread Aldor
Hi,

I want to extract from a string only the characters A-Za-z.

I have tried a lot of things with substring() and read many POSIX docs;
I'm also familiar with Perl regexes, but right now I'm giving up... ;-(

Any idea how to do this in Postgres with POSIX Regex?
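For reference, one way this can be done (as a sketch - regexp_replace() only exists in newer PostgreSQL versions, and substring() with a POSIX pattern returns just the first match):

```sql
-- strip everything that is not A-Za-z; the 'g' flag replaces all matches
SELECT regexp_replace('f1o2o-bar!', '[^A-Za-z]', '', 'g');   -- foobar

-- substring() only returns the first contiguous run of letters
SELECT substring('f1o2o-bar!' from '[A-Za-z]+');             -- f
```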

Regards,

Aldor



Re: [SQL] [ADMIN] COPY TO / COPY FROM

2005-09-23 Thread Aldor
> I know I can use pg_dump to perform that "back up" but I wanted to
> test performance.

pg_dump does the same - if you use the custom format it makes a binary
dump (WITH BINARY), but with compression.

Performance, fastest first:

1. TOP: INSERT INTO ... SELECT ...
2. TOP: COPY / pg_restore with uncompressed binary
3. TOP: COPY / pg_restore with uncompressed text
4. TOP: pg_restore with compressed binary (custom format in pg_dump)
5. TOP: normal INSERTs...

This is my experience.
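As a rough sketch of how variants 3 and 4 above look on the command line (database and file names are made up):

```shell
# variant 4: compressed binary (custom format)
pg_dump -Fc -f mydb.dump mydb
pg_restore -d mydb_copy mydb.dump

# variant 3: plain-text dump with COPY statements
pg_dump -f mydb.sql mydb
psql -d mydb_copy -f mydb.sql
```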

> PATH = /home/postgres//carax_exec.out

Be aware that the postgres user needs read/write access to this
file/dir - I think the double "//" is a typo.

Cédric Buschini wrote:
> Aldor wrote:
> 
>> What is the name of the table you want to dump?
>>  
>>
> I know I can use pg_dump to perform that "back up" but I wanted to test
> performance.
> 
> The table is carax_exec
> 
>> What is the exact path you try to dump it to?
>>  
>>
> PATH = /home/postgres//carax_exec.out
> 
>> What version of PostgreSQL do you have?
>>  
>>
> uhmm 8.0
> 
> 



Re: [SQL] VACUUM FULL vs dump & restore

2005-09-24 Thread Aldor
Hello Ilya,

you have to check for yourself which method is faster - just test it
with a stopwatch ;-)

Take care, because when you run VACUUM FULL, it also vacuums the system
tables, etc. of Postgres.

I'm not sure whether plain VACUUM goes through all objects the same way,
but I'd build a customized vacuum, which first finds every object that
should be vacuumed by:

select  relname
from    pg_class;

You can filter out unwanted objects in the query, or when processing the
"VACUUM FULL [object]" or plain "VACUUM [object]" statements.

That way I can decide for myself what I want to vacuum, and what I will
handle by dump-truncate-restore.
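A minimal sketch of such a filtered list (the relkind/name filter is only an assumption - adjust it to whatever you want to exclude):

```sql
-- only ordinary user tables, skipping the system catalogs
SELECT  relname
FROM    pg_class
WHERE   relkind = 'r'
  AND   relname NOT LIKE 'pg_%';

-- then run, for each returned name:
-- VACUUM FULL [object];
```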

In many cases a normal VACUUM was even faster than the primitive
dump-truncate-restore process. In my experience the bottleneck of a
VACUUM is on tables with long strings inside and row counts in the
hundreds of millions.

Regards,

Aldor

Ilya A. Kovalenko wrote:
>  Greetings,
> 
>   What advantages I lose, when using dump-truncate-restore (table
> or whole DB) instead of performing VACUUM FULL ?
>   In both cases I have no access to data, but first is much faster
> (by subjective estimate).
> 
> Thank you,
> 
> Ilya A. Kovalenko   (mailto:[EMAIL PROTECTED])
> 



Re: [SQL] VACUUM FULL vs dump & restore

2005-09-24 Thread Aldor
Another way to do it, while keeping access to the data at the same time,
is to create a new table with a slightly different name and do:

insert  into [table]2
select  *
from    [table];

Then switch over to the second table.

After that you have to run TRUNCATE and DROP on the first table.

To find out which table is the current one, you can create a table which
holds the original table name and the current table name. When using
plpgsql you can look the table name up before building the queries and
then run them with EXECUTE.
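A rough plpgsql sketch of this lookup-then-EXECUTE pattern (the lookup table and function names are invented):

```sql
-- hypothetical lookup table: logical name -> table currently in use
CREATE TABLE tablemap (orig_name text, actual_name text);

-- resolve the current table name, then build the query with EXECUTE
CREATE OR REPLACE FUNCTION count_rows(p_orig text) RETURNS bigint AS '
DECLARE
    v_actual text;
    r        record;
BEGIN
    SELECT INTO v_actual actual_name
        FROM tablemap WHERE orig_name = p_orig;
    FOR r IN EXECUTE ''SELECT count(*) AS n FROM '' || v_actual LOOP
        RETURN r.n;
    END LOOP;
    RETURN NULL;
END;
' LANGUAGE plpgsql;
```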

Be aware that you cannot do:

SELECT  col1, col2
FROM    gettablename('[table]');

Also be aware that you have to switch back when you repeat the process,
so that you then copy the data from [table]2 back to [table].

In my experience this way was faster than dump-truncate-restore on the
table.

Regards,

Aldor

Ilya A. Kovalenko wrote:
>  Greetings,
> 
>   What advantages I lose, when using dump-truncate-restore (table
> or whole DB) instead of performing VACUUM FULL ?
>   In both cases I have no access to data, but first is much faster
> (by subjective estimate).
> 
> Thank you,
> 
> Ilya A. Kovalenko   (mailto:[EMAIL PROTECTED])
> 
> 
