Re: [GENERAL] PG 8.2 install on Win2k3 - unable to connect to test network socket

2009-08-26 Thread Craig Ringer
On Wed, 2009-08-26 at 18:43 -0300, geoff wrote:

> > Sounds like you already have something listening on 5432. Perhaps
> > another version of PostgreSQL already installed?
> >
> That's what I thought, but a port scan tells me that the port is closed.
> 


Try using the `netstat' command (or one of the numerous GUI
wrappers/replacements) to look at your open port list instead. That'll
show you what the OS thinks is in use.

--
Craig Ringer


Re: [GENERAL] It looks like transaction, but it isn't transaction

2009-08-26 Thread Tom Lane
Sergey Samokhin  writes:
> I should rejig my question: is it normal for a query made up of
> several commands to be completely discarded if an error occurs?

Yes, if it's submitted as a single Query message, that's what happens.
Also, the implicit transaction is around the whole Query message not
just single SQL statements, which probably explains your confusion
on other points.

There's been some discussion of disallowing multiple statements per
Query, partially to avoid these sorts of corner cases and partially
as a defense against SQL-injection attacks.  But so far the conclusion
has been that it would break a lot of code while not buying much.
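
For illustration, a minimal sketch of the behaviour (hypothetical table name;
it assumes psql, where statements joined with a backslash-semicolon are sent
to the server as a single simple-Query message):

  CREATE TABLE batch_demo (i int) \;
  INSERT INTO batch_demo VALUES (1) \;
  INSERT INTO batch_demo VALUES ('oops');
  -- the last INSERT fails with an invalid-input error, and because the whole
  -- batch ran inside one implicit transaction, the CREATE TABLE and the
  -- first INSERT are rolled back as well:
  SELECT * FROM batch_demo;   -- relation does not exist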

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] It looks like transaction, but it isn't transaction

2009-08-26 Thread Sergey Samokhin
Hello, Sam.

Thanks for the answer.

> This behavior seems to be what libpq exposes by default:
>
>  http://www.postgresql.org/docs/current/static/libpq-exec.html

The driver I was talking about isn't written in C and doesn't use
libpq behind the scenes like the interfaces for Perl/Python/Tcl do, so
what I see more likely reflects how PostgreSQL itself executes queries.
I haven't found a good explanation of this behaviour yet.

> I always tend to bracket things in an explicit BEGIN+COMMIT, why
> wouldn't you do this?

Probably that is what I will end up with. But I would still like to
know for sure what causes queries to be executed in an "almost
transactional" context.

I should rejig my question: is it normal for a query made up of
several commands to be completely discarded if an error occurs? If the
answer is "no, that seems unusual", I should write a bug report to the
author of the driver.

-- 
Sergey Samokhin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Skytools-users] WAL Shipping + checkpoint

2009-08-26 Thread Mark Kirkwood

Sébastien Lardière wrote:

On 26/08/2009 04:46, Mark Kirkwood wrote:

Sébastien Lardière wrote:

Hi All,

I've a cluster ( Pg 8.3.7 ) with WAL Shipping, and a few hours ago, 
the master had to restart.


I use walmgr from Skytools, which works very well.

I have already restarted the master without any problem, but today
the slave doesn't behave the way I want. The "Time of latest
checkpoint" field from pg_controldata on the slave keeps the same
value, but WAL files are processed correctly.


I tried to restart the slave, but after it reprocessed all the WAL
since "Time of latest checkpoint", it does nothing else; the latest
checkpoint stays at the same value.


I don't know if it's important (I think so), and I can't fix it.

It is normal for it to lag behind somewhat on the slave (depending on 
what your checkpoint timeout etc settings are).


However, I've noticed what you are seeing as well - particularly when 
there are no actual data changes coming through in the logs - the 
slave checkpoint time does not change even though there have been 
checkpoints on the master (I may have a look in the code to see what 
the story really is...if I have time).




Yes, but the delay between the last checkpoint on the master and the
slave is very high now (100,000 sec), because the last checkpoint
on the slave was yesterday (as far as pg_controldata is correct).


Here is a graph from our munin plugin:
http://seb.ouvaton.org/tmp/bdd-pg_walmgr-week.png


The blue line represents the average time between two WALs processed on
the slave, and the green line the delay between the last checkpoint on
the master and the slave.


Maybe it's not a good indicator, but the green line makes me think
there is a problem.



Do you have archive_timeout set? If so, then what *could* be happening 
is this:


There are actually no "real" data changes being made on your master for
some reason. So every time archive_timeout is reached, a log containing no
changes is shipped to your slave and applied - and no checkpoint times
are changed, for the reasons I mentioned above.


A way to test this would be to do something that makes real data changes
on the master. A good thing to try would be to:


- create a new database
- create tables and add a reasonable amount of data (e.g. an initialized
pgbench database at scale 100).


Then see if your checkpoint time gets updated a few minutes or so later.
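
Purely in SQL, a minimal sketch of generating that kind of "real" WAL
traffic might look like this (database and table names are just placeholders):

  CREATE DATABASE walcheck;
  -- then, connected to walcheck:
  CREATE TABLE wal_traffic AS
    SELECT g AS id, md5(g::text) AS payload
    FROM generate_series(1, 1000000) g;
  CHECKPOINT;
  -- a few minutes later, compare "Time of latest checkpoint" in the
  -- pg_controldata output on the slave.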


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG 8.2 install on Win2k3 - unable to connect to test network socket

2009-08-26 Thread geoff
> Sounds like you already have something listening on 5432. Perhaps
> another version of PostgreSQL already installed?
>
That's what I thought, but a port scan tells me that the port is closed.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create a multi-column index with 2 dates using 'gist'?

2009-08-26 Thread Jeff Davis
On Wed, 2009-08-26 at 23:08 +0800, Fred Janon wrote:
> Thanks Gerhard, interesting, but I wonder if it is a maintained
> project; the files date from May 2008 and there is not much forum
> activity. I'll put it on my list of "To be investigated".

Well, it's maintained in the sense of "I don't know of any problems with
it." Right now all it does is implement the PERIOD data type, which is
indexable so that you can do searches on predicates like
"&&" (overlaps).

It may get a little more exciting when more features like temporal keys
(which I'm planning to make possible in the next commitfest) or temporal
joins (no serious plans yet, but seems doable) are implemented. 

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from XML file

2009-08-26 Thread Bill Bartlett
We've used Pentaho Data Integration (aka Kettle) at http://kettle.pentaho.org to
do this in the past.  (Kettle is the free / open source version, although there
is a bigger commercial version of PDI that does more.) It reads XML nicely and
talks natively to PostgreSQL databases, so you can load data directly from your
XML file right into your PostgreSQL database.

If you're a Microsoft shop, you can also use SQL Server Integration Services
[SSIS] to do the same thing but much better and faster, although I almost
hesitate to mention a Microsoft tool on this forum even though it's just going
from XML to PostgreSQL and CSV with no trace of SQL Server anywhere to be seen.
(If you do go the SSIS route, the "dotConnect for PostgreSQL" ADO.NET driver
from Devart [ http://www.devart.com/dotconnect/postgresql/ ] works wonderfully
to connect from SSIS to PostgreSQL.  Unfortunately, the Npgsql driver doesn't
really work very well with SSIS...)

- Bill


> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Erwin Brandstetter
> Sent: Wednesday, August 26, 2009 12:10 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Import data from XML file
> 
> Hi!
> 
> How do you import data from an xml-file?
> For instance, if I have a file like this:
> 
> 
>  
>
>  
>Sonstiges
>5
>  
>  
>Buehne
>   2
> 
> 
>   Konzerte
>   1
> 
>   
>   
> 
>   Reggae
>   1
>   45
> 
> 
>   sonstige
>   5
>   44
> 
> 
> 
> 
> 
> ... and I want a CSV file like this:
> 
> main_category_namemain_category_id
> Sonstiges5
> Buehne2
> 
> category_namemain_category_id   category_id
> Reggae145
> sonstige544
> 
> 
> Or is there a way to import directly into tables in a postgres
> database?
> 
> 
> Your help would be appreciated!
> Regards
> Erwin
> 
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Aggregate function with Join stop working under certain condition

2009-08-26 Thread Naoko Reeves
The solution provided by Sam is unbelievably fast and works 100%
accurately. Thank you very much.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Sam Mason
Sent: Wednesday, August 26, 2009 10:40 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Aggregate function with Join stop working under
certain condition

On Wed, Aug 26, 2009 at 11:17:10AM -0400, Naoko Reeves wrote:
> I am joining 4 tables (doc, bpt, con, emp) - the emp and doc relationship
> is one to many. I want only one doc record per emp, per the condition
> shown below:

[...]

> However, I want to add one more doc column, but as soon as I add one, it
> tries to return all unique doc records. Could you tell me what I am
> doing wrong here, please?

Descriptions of the problem are normally easier to understand than code;
but I *guess* what you want to do is to get the subject of the last
document created by each person and when it was created.  If that's the
case then DISTINCT ON is normally the easiest way.  Maybe something
like:

  SELECT b.bpt_key, e.emp_full_name, c.con_full_name,
d.doc_date_created, d.doc_subject
  FROM bpt b, emp e
LEFT JOIN con c ON e.emp_con_key = c.con_key
LEFT JOIN (
SELECT DISTINCT ON (doc_emp_key) doc_emp_key,
  doc_date_created, doc_subject
FROM doc
ORDER BY doc_emp_key, doc_date_created DESC) d
  ON e.emp_key = d.doc_emp_key
  WHERE b.bpt_emp_key = e.emp_key
AND b.bpt_com_key = 22
AND b.bpt_status  <> -1;

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from XML file

2009-08-26 Thread Martin Gainty

Erwin

did you try ems-data?
http://ems-data-import-2007-for-postgresql.software.informer.com/3.0/

Martin Gainty 




> Date: Wed, 26 Aug 2009 11:54:23 -0700
> From: arta...@comcast.net
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Import data from XML file
> 
> > Hi!
> > 
> > How do you import data from an xml-file?
> > For instance, if I have a file like this:
> > 
> > 
> >  
> >
> >  
> >Sonstiges
> >5
> >  
> >  
> >Buehne
> >   2
> > 
> > 
> >   Konzerte
> >   1
> > 
> >   
> >   
> > 
> >   Reggae
> >   1
> >   45
> > 
> > 
> >   sonstige
> >   5
> >   44
> > 
> > 
> > 
> > 
> > 
> > ... and I want a CSV file like this:
> > 
> > main_category_namemain_category_id
> > Sonstiges5
> > Buehne2
> > 
> > category_namemain_category_id   category_id
> > Reggae145
> > sonstige544
> > 
> > 
> > Or is there a way to import directly into tables in a postgres
> > database?
> > 
> > 
> > Your help would be appreciated!
> > Regards
> > Erwin
> > 
> 
> Not sure why you are mentioning a CSV export. I ASSUME you want to 
> import into database tables and not go directly to csv. (If that's the 
> case, use another tool, not a database.)
> 
> INSERT INTO main_categories(name, id)
> SELECT extract_value('//main_category_name', x) AS name,
>extract_value('//main_category_id', x)::int AS id
>-- without extract_value
>-- CAST(CAST(xpath('//main_category_id/text()', x)[0] AS varchar) AS 
> int) AS id
> FROM unnest(xpath('//main_category', xml('...your xml here...'))) x
> 
> INSERT INTO categories(name, main_id, id)
> SELECT extract_value('//category_name', x) AS name,
>extract_value('//main_category_id', x)::int AS main_id,
>extract_value('//category_id', x)::int AS id
> FROM unnest(xpath('//category', xml('...your xml here...'))) x
> 
> Unnest isn't included until pg 8.4. And extract_value() is a function I 
> borrowed from Oracle to make life easier. I have a write up about it on 
> my blog.
> 
> http://scottrbailey.wordpress.com/2009/06/19/xml-parsing-postgres/
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Magnus Hagander
On Wed, Aug 26, 2009 at 21:15, Sam Mason wrote:
> On Wed, Aug 26, 2009 at 08:58:17PM +0200, Magnus Hagander wrote:
>> On Wed, Aug 26, 2009 at 20:45, Sam Mason wrote:
>> > On Wed, Aug 26, 2009 at 08:11:58PM +0200, Thomas Kellerer wrote:
>> >> If one goes directly http://www.enterprisedb.com/products/pgbindownload.do
>> >> this is not necessary (which is what I expect), but as far as I can tell,
>> >> there is no direct link to that page.
>> >
>> > hum,
>> >
>> >   http://www.postgresql.org/download/windows
>> >
>> > seems to point there.
>>
>> Really? When I click that link, I get to the page that doesn't require
>> registration...
>
> um, that's what I said isn't it?  What I meant to say anyway was that
> the link I gave contained a pretty prominent link to the page that
> Thomas gave that doesn't require registration.

I'm probably too tired to multitask this much :-) Sorry, misread
things completely.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Thomas Kellerer

Magnus Hagander wrote on 26.08.2009 20:37:


You can certainly get it without registering. The link from the
PostgreSQL download page points to
http://www.enterprisedb.com/products/pgdownload.do, where no
registration is required.

How did you end up at the page above? Perhaps we have a link that
points to the wrong EDB page somewhere?


Ah, sorry, I did not follow the link from the Postgres page, but came from 
http://www.enterprisedb.com/products/download.do

Sorry for the confusion, I didn't realise the link from the postgresql.org site 
bypassed the registration screen.

Thomas







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Creating index for convert text to integer

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 07:13:41AM -0700, xaviergxf wrote:
>   How can I create an index to index all the fields that have the type
> INTEGER in the following table:
> 
> create type properties_types as enum('INTEGER', 'STRING', 'FLOAT');
> 
> create table properties_types(
>value text NOT NULL,
>value_type properties_types NOT NULL
> );

You can't create an enum that has the same name as a table can you?

> How do I create an index for the integer types?
> 
> create index properties_types_index on properties_types ((value ::integer)) 
> where value_type='INTEGER'

Yup, that should work.

> Can I use this select with the index?
> select value from properties_types where value::integer < 3

You need the where clause in there:

  SELECT value
  FROM properties_types
  WHERE value_type = 'INTEGER'
AND value::integer < 3;

This is generally considered pretty bad form though; there are lots
of discussions about "EAV" style designs that this seems similar to.
Slightly better would be creating your original table as:

  CREATE TABLE properties_types (
value_type properties_type,
value_int  INTEGER
  CHECK ((value_type = 'INTEGER') = (value_int IS NOT NULL)),
value_text TEXT
  CHECK ((value_type = 'STRING') = (value_text IS NOT NULL)),
value_float FLOAT8
  CHECK ((value_type = 'FLOAT') = (value_float IS NOT NULL))
  );

You can then just build a normal index on the appropriate columns and
run your queries the naive way.  Something like:

  SELECT *
  FROM properties_types
  WHERE value_int < 3;
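
The "normal index" mentioned above could then be as simple as this sketch
(the index name is just a placeholder):

  CREATE INDEX properties_types_value_int_idx
    ON properties_types (value_int);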

Arranging things this way shouldn't take much (if any) more space and it
should run faster as it doesn't need to go converting between datatypes
the whole time.

This is still pretty bad form though and you'll get much more
leverage/help from PG if you arrange the tables so they reflect the
structure of the data you're really putting in.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from XML file

2009-08-26 Thread Will Rutherdale (rutherw)
One solution is to use Perl DBI.  DBD::AnyData will read xml.  DBD::Pg
will write to Postgres.

-Will
 

> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of 
> Erwin Brandstetter
> Sent: 26 August 2009 12:10
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Import data from XML file
> 
> Hi!
> 
> How do you import data from an xml-file?
> For instance, if I have a file like this:
> 
> 
>  
>
>  
>Sonstiges
>5
>  
>  
>Buehne
>   2
> 
> 
>   Konzerte
>   1
> 
>   
>   
> 
>   Reggae
>   1
>   45
> 
> 
>   sonstige
>   5
>   44
> 
> 
> 
> 
> 
> ... and I want a CSV file like this:
> 
> main_category_namemain_category_id
> Sonstiges5
> Buehne2
> 
> category_namemain_category_id   category_id
> Reggae145
> sonstige544
> 
> 
> Or is there a way to import directly into tables in a postgres
> database?
> 
> 
> Your help would be appreciated!
> Regards
> Erwin
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 08:58:17PM +0200, Magnus Hagander wrote:
> On Wed, Aug 26, 2009 at 20:45, Sam Mason wrote:
> > On Wed, Aug 26, 2009 at 08:11:58PM +0200, Thomas Kellerer wrote:
> >> If one goes directly http://www.enterprisedb.com/products/pgbindownload.do
> >> this is not necessary (which is what I expect), but as far as I can tell,
> >> there is no direct link to that page.
> >
> > hum,
> >
> >   http://www.postgresql.org/download/windows
> >
> > seems to point there.
> 
> Really? When I click that link, I get to the page that doesn't require
> registration...

um, that's what I said isn't it?  What I meant to say anyway was that
the link I gave contained a pretty prominent link to the page that
Thomas gave that doesn't require registration.

> Exactly which link do you click to get to the page that requires
> registration?

I didn't get any, that was the question I was trying to ask.  Maybe I
should have been clearer somehow? :)

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Creating index for convert text to integer

2009-08-26 Thread xaviergxf
Hi,

  How can I create an index to index all the fields that have the type
INTEGER in the following table:

create type properties_types as enum('INTEGER', 'STRING', 'FLOAT');

create table properties_types(
   value text NOT NULL,
   value_type properties_types NOT NULL
);

insert into properties_types values('1', 'INTEGER');
insert into properties_types values('2', 'INTEGER');
insert into properties_types values('3', 'INTEGER');
insert into properties_types values('4', 'INTEGER');

How do I create an index for the integer types?

create index properties_types_index on properties_types
((value ::integer)) where value_type='INTEGER'


Can I use this select with the index?
select value from properties_types where value::integer < 3



Thanks!



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG 8.2 install on Win2k3 - unable to connect to test network socket

2009-08-26 Thread Magnus Hagander
On Wed, Aug 26, 2009 at 19:12, imageguy wrote:
> When I tried to install pg8.2 on Win2k3 service pack 2 using the
> installer, I received the error message window
>
> Unable to connect to the test network socket : 100013
>
> If I change the port from the default '5432' to 5433, it works fine.

Sounds like you already have something listening on 5432. Perhaps
another version of PostgreSQL already installed?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Magnus Hagander
On Wed, Aug 26, 2009 at 20:45, Sam Mason wrote:
> On Wed, Aug 26, 2009 at 08:11:58PM +0200, Thomas Kellerer wrote:
>> If one goes directly http://www.enterprisedb.com/products/pgbindownload.do
>> this is not necessary (which is what I expect), but as far as I can tell,
>> there is no direct link to that page.
>
> hum,
>
>  http://www.postgresql.org/download/windows
>
> seems to point there.  Maybe the link there from:

Really? When I click that link, I get to the page that doesn't require
registration...


>  http://www.postgresql.org/download/
>
> could be clearer somehow?

The link from there, for "Postgres Plus", also brings me to a page
that doesn't require registration..

Exactly which link do you click to get to the page that requires registration?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Rainer Bauer
Thomas Kellerer schrieb:

>If one goes directly http://www.enterprisedb.com/products/pgbindownload.do 
>this is not necessary (which is what I expect), but as far as I can tell, 
>there is no direct link to that page. 

You get there from the official PostgreSQL download page:


Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from XML file

2009-08-26 Thread Scott Bailey

Hi!

How do you import data from an xml-file?
For instance, if I have a file like this:


 
   
 
   Sonstiges
   5
 
 
   Buehne
  2


  Konzerte
  1

  
  

  Reggae
  1
  45


  sonstige
  5
  44





... and I want a CSV file like this:

main_category_namemain_category_id
Sonstiges5
Buehne2

category_namemain_category_id   category_id
Reggae145
sonstige544


Or is there a way to import directly into tables in a postgres
database?


Your help would be appreciated!
Regards
Erwin



Not sure why you are mentioning a CSV export. I ASSUME you want to 
import into database tables and not go directly to csv. (If that's the 
case, use another tool, not a database.)


INSERT INTO main_categories(name, id)
SELECT extract_value('//main_category_name', x) AS name,
  extract_value('//main_category_id', x)::int AS id
  -- without extract_value:
  -- CAST(CAST((xpath('//main_category_id/text()', x))[1] AS varchar) AS int) AS id
FROM unnest(xpath('//main_category', xml('...your xml here...'))) x;

INSERT INTO categories(name, main_id, id)
SELECT extract_value('//category_name', x) AS name,
  extract_value('//main_category_id', x)::int AS main_id,
  extract_value('//category_id', x)::int AS id
FROM unnest(xpath('//category', xml('...your xml here...'))) x;

Unnest isn't included until pg 8.4. And extract_value() is a function I 
borrowed from Oracle to make life easier. I have a write up about it on 
my blog.


http://scottrbailey.wordpress.com/2009/06/19/xml-parsing-postgres/
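
For anyone stuck on 8.3, a commonly used stand-in for unnest() is a small
SQL function like this sketch:

  CREATE OR REPLACE FUNCTION unnest(anyarray) RETURNS SETOF anyelement AS $$
    SELECT $1[i]
    FROM generate_series(array_lower($1, 1), array_upper($1, 1)) AS i;
  $$ LANGUAGE sql IMMUTABLE;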


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 08:11:58PM +0200, Thomas Kellerer wrote:
> If one goes directly http://www.enterprisedb.com/products/pgbindownload.do 
> this is not necessary (which is what I expect), but as far as I can tell, 
> there is no direct link to that page. 

hum,

  http://www.postgresql.org/download/windows

seems to point there.  Maybe the link there from:

  http://www.postgresql.org/download/

could be clearer somehow?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Magnus Hagander
On Wed, Aug 26, 2009 at 20:11, Thomas Kellerer wrote:
> Hi,
>
> it seems that you cannot download PostgreSQL from the EnterpriseDB website
> without first registering with EnterpriseDB. I can somehow understand this
> cumbersome requirement for their own products, but I do not like the taste
> of it for the "plain" PostgreSQL builds. It doesn't seem right to me that
> I have to reveal my company, email *and* telephone number in order to
> download it.
> If one goes directly http://www.enterprisedb.com/products/pgbindownload.do
> this is not necessary (which is what I expect), but as far as I can tell,
> there is no direct link to that page.
> I would appreciate it very much if the download link for the "plain"
> PostgreSQL builds from http://www.enterprisedb.com/products/download.do
> would lead to the download page directly without the registration screen.

You can certainly get it without registering. The link from the
PostgreSQL download page points to
http://www.enterprisedb.com/products/pgdownload.do, where no
registration is required.

How did you end up at the page above? Perhaps we have a link that
points to the wrong EDB page somewhere?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] It looks like transaction, but it isn't transaction

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 05:06:27AM +0400, Sergey Samokhin wrote:
> There is one thing I find especially interesting: queries I pass to
> the pgsql:squery() are executed with some properties specific to
> transactions!

This behavior seems to be what libpq exposes by default:

  http://www.postgresql.org/docs/current/static/libpq-exec.html

Not sure if it's actually PG imposing these semantics or the libpq driver
itself.  I'd guess it's PG and that's why you're seeing the strange lack
of support for savepoints.

> I'm going to rely on this behaviour as "transactions without
> SAVEPOINT/ROLLBACK TO" in some cases but I'm a bit afraid of troubles
> that may appear in the future.

I always tend to bracket things in an explicit BEGIN+COMMIT, why
wouldn't you do this?
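
A minimal sketch of that style, with a savepoint thrown in since that was
the missing piece mentioned above (hypothetical table):

  BEGIN;
  INSERT INTO t VALUES (1);
  SAVEPOINT before_risky;
  INSERT INTO t VALUES ('oops');        -- fails
  ROLLBACK TO SAVEPOINT before_risky;   -- recover just this statement
  COMMIT;                               -- the first INSERT is kept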

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Thomas Kellerer

Hi,

it seems that you cannot download PostgreSQL from the EnterpriseDB website without first registering with EnterpriseDB. I can somehow understand this cumbersome requirement for their own products, but I do not like the taste of it for the "plain" PostgreSQL builds. It doesn't seem right to me that I have to reveal my company, email *and* telephone number in order to download it.

If one goes directly http://www.enterprisedb.com/products/pgbindownload.do this is not necessary (which is what I expect), but as far as I can tell, there is no direct link to that page. 


I would appreciate it very much if the download link for the "plain" PostgreSQL 
builds from http://www.enterprisedb.com/products/download.do would lead to the download 
page directly without the registration screen.

Regards
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Aggregate function with Join stop working under certain condition

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 11:17:10AM -0400, Naoko Reeves wrote:
> I am joining 4 tables (doc, bpt, con, emp) - the emp and doc relationship is
> one to many. I want only one doc record per emp, per the condition shown
> below:

[...]

> However, I want to add one more doc column, but as soon as I add one, it
> tries to return all unique doc records. Could you tell me what I am doing
> wrong here, please?

Descriptions of the problem are normally easier to understand than code;
but I *guess* what you want to do is to get the subject of the last
document created by each person and when it was created.  If that's the
case then DISTINCT ON is normally the easiest way.  Maybe something
like:

  SELECT b.bpt_key, e.emp_full_name, c.con_full_name,
d.doc_date_created, d.doc_subject
  FROM bpt b, emp e
LEFT JOIN con c ON e.emp_con_key = c.con_key
LEFT JOIN (
SELECT DISTINCT ON (doc_emp_key) doc_emp_key,
  doc_date_created, doc_subject
FROM doc
ORDER BY doc_emp_key, doc_date_created DESC) d
  ON e.emp_key = d.doc_emp_key
  WHERE b.bpt_emp_key = e.emp_key
AND b.bpt_com_key = 22
AND b.bpt_status  <> -1;
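
If that subquery turns out to be slow, here is a sketch of an index that lets
the DISTINCT ON be satisfied in sorted order (assuming 8.3 or later for DESC
index columns, and the column names above):

  CREATE INDEX doc_emp_key_date_idx
    ON doc (doc_emp_key, doc_date_created DESC);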

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PG 8.2 install on Win2k3 - unable to connect to test network socket

2009-08-26 Thread imageguy
When I tried to install pg8.2 on Win2k3 service pack 2 using the
installer, I received the error message window

Unable to connect to the test network socket : 100013

If I change the port from the default '5432' to 5433, it works fine.

Any suggestions? Google showed me an email regarding this message in 8.1,
but there was no resolution.

g.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from XML file

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 09:10:25AM -0700, Erwin Brandstetter wrote:
> How do you import data from an xml-file?

If they're all that small, put the file into the database as is and then
use xpath[1] to pull it apart and turn it into something a database
understands.

-- 
  Sam  http://samason.me.uk/

 [1] 
http://www.postgresql.org/docs/current/static/functions-xml.html#FUNCTIONS-XML-PROCESSING
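
A minimal sketch of that approach, using the element names and the
main_categories table from elsewhere in this thread (the staging table is
hypothetical, and unnest() needs 8.4):

  CREATE TABLE xml_staging (doc xml);
  -- load the file's contents into xml_staging.doc with whatever client-side
  -- means is convenient, then:

  INSERT INTO main_categories (name, id)
  SELECT (xpath('//main_category_name/text()', x))[1]::text,
         (xpath('//main_category_id/text()',   x))[1]::text::int
  FROM (SELECT unnest(xpath('//main_category', doc)) AS x
        FROM xml_staging) AS sub;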

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from XML file

2009-08-26 Thread Edwin Plauchu
I think you'll need to parse that one...
Where do you have the data types for your XML document?

2009/8/26 Erwin Brandstetter 

> Hi!
>
> How do you import data from an xml-file?
> For instance, if I have a file like this:
>
> 
>  
>   
> 
>   Sonstiges
>   5
> 
> 
>   Buehne
>  2
>
>
>  Konzerte
>  1
>
>  
>  
>
>  Reggae
>  1
>  45
>
>
>  sonstige
>  5
>  44
>
> 
> 
>
>
> ... and I want a CSV file like this:
>
> main_category_namemain_category_id
> Sonstiges5
> Buehne2
>
> category_namemain_category_id   category_id
> Reggae145
> sonstige544
>
>
> Or is there a way to import directly into tables in a postgres
> database?
>
>
> Your help would be appreciated!
> Regards
> Erwin
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Import data from XML file

2009-08-26 Thread John R Pierce

Erwin Brandstetter wrote:

Hi!

How do you import data from an xml-file?
For instance, if I have a file like this:


 
   
 
   Sonstiges
   5
 
 
   Buehne
  2


  Konzerte
  1

  
  

  Reggae
  1
  45


  sonstige
  5
  44





... and I want a CSV file like this:

main_category_namemain_category_id
Sonstiges5
Buehne2

category_namemain_category_id   category_id
Reggae145
sonstige544


Or is there a way to import directly into tables in a postgres
database?
  


isn't it amazing how redundantly wordy XML is, yet it doesn't provide
sufficient information to perform this simple task without more knowledge
(for instance, there are no data types, but we sure know the names of the
fields as they are spelled out twice for each row!)





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Import data from XML file

2009-08-26 Thread Erwin Brandstetter
Hi!

How do you import data from an xml-file?
For instance, if I have a file like this:

<categories>
  <main_category>
    <main_category_name>Sonstiges</main_category_name>
    <main_category_id>5</main_category_id>
  </main_category>
  <main_category>
    <main_category_name>Buehne</main_category_name>
    <main_category_id>2</main_category_id>
  </main_category>
  <main_category>
    <main_category_name>Konzerte</main_category_name>
    <main_category_id>1</main_category_id>
  </main_category>
  <category>
    <category_name>Reggae</category_name>
    <main_category_id>1</main_category_id>
    <category_id>45</category_id>
  </category>
  <category>
    <category_name>sonstige</category_name>
    <main_category_id>5</main_category_id>
    <category_id>44</category_id>
  </category>
</categories>

... and I want a CSV file like this:

main_category_name   main_category_id
Sonstiges            5
Buehne               2

category_name   main_category_id   category_id
Reggae          1                  45
sonstige        5                  44


Or is there a way to import directly into tables in a postgres
database?


Your help would be appreciated!
Regards
Erwin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] best practise/pattern for large OR / LIKE searches

2009-08-26 Thread Christophe Pettus


SELECT * FROM tbl WHERE datanumber LIKE '%12345%' OR LIKE '%54321%'  
OR LIKE '%8766%' OR LIKE '%009%', ..


The number of OR/LIKES are in the order of 50-100 items...
the table tbl is a couple of million rows.


Are the fixed strings in the wildcards "words" (i.e., are they  
completely arbitrarily embedded in the text, or are they delimited in  
some regular way)? If they are "words," you might consider using the  
full text functionality to create an index of them, and searching  
using that.
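
A minimal sketch of that approach, assuming the embedded strings really are
delimited "words" inside datanumber:

  CREATE INDEX tbl_datanumber_fts
    ON tbl USING gin (to_tsvector('simple', datanumber));

  SELECT *
  FROM tbl
  WHERE to_tsvector('simple', datanumber)
        @@ to_tsquery('simple', '12345 | 54321 | 8766 | 009');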


--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to simulate crashes of PostgreSQL?

2009-08-26 Thread Vick Khera
On Tue, Aug 25, 2009 at 4:55 PM, Tom Lane wrote:
> I've always thought that the fd.c layer is more about not having to
> configure the code explicitly for max-files-per-process limits.  Once
> you get into ENFILE conditions, even if Postgres manages to stay up,
> everything else on the box is going to start falling over.  So the
> sysadmin is likely to have to resort to a reboot anyway.

In my case, all sorts of processes were complaining about being unable
to open files.  Once Pg panicked and closed all its files, everything
came back to normal.  I didn't have to reboot because most everything
was written to retry and/or restart itself, and nothing critical like
sshd croaked.

I think we'll be adding a nagios check to track maxfiles vs. openfiles
from the kernel and alarm when they get close.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Aggregate function with Join stop working under certain condition

2009-08-26 Thread Naoko Reeves
I am joining 4 tables (doc, bpt, con, emp) - the emp and doc relationship is
one to many. I want only one doc record per emp, per the condition shown
below:

The query below successfully returns the desired result (returns 4 records):

 

Select bpt_key, emp_full_name, con_full_name, max(doc_date_created) as
doc_date_created
from bpt
LEFT OUTER JOIN doc ON bpt_emp_key = doc_emp_key
INNER JOIN emp on bpt_emp_key = emp_key
LEFT OUTER JOIN con ON emp_con_key = con_key
WHERE bpt_com_key = 22 and bpt_status <> -1
GROUP BY bpt_key, emp_full_name, con_full_name

 

However, I want to add one more doc column, but as soon as I add one, it
tries to return all unique doc records. Could you tell me what I am doing
wrong here, please?

As soon as I add one more column, it returns 6 records:

 

Select bpt_key, emp_full_name, con_full_name, max(doc_date_created) as
doc_date_created, doc_subject
from bpt
LEFT OUTER JOIN doc ON bpt_emp_key = doc_emp_key
INNER JOIN emp on bpt_emp_key = emp_key
LEFT OUTER JOIN con ON emp_con_key = con_key
WHERE bpt_com_key = 22 and bpt_status <> -1
GROUP BY bpt_key, emp_full_name, con_full_name, doc_subject

 

 

Kindest Regards,

Naoko



Re: [GENERAL] Tagged types module and varlena changes

2009-08-26 Thread Tom Lane
Alban Hertroys  writes:
> I changed it to:
>> SET_VARSIZE(tv->vl_len_, a);

This is just wrong; use SET_VARSIZE(tv, ...).

>> memcpy( tv->vl_dat, &typev->val, b );

And I wouldn't recommend referencing vl_dat directly either.
Use VARDATA().

In general you're supposed to apply VARSIZE() and VARDATA() and
friends to pointers not Datums.  Although it would usually work
to be sloppy about this, I can't recommend it.  I would extract
the typev pointer first and then apply the VARSIZE macro to it.

> (gdb) print *tv
> $1 = {vl_len_ = "\000\000\000", vl_dat = ""}
> (gdb) print a
> $2 = 0
> (gdb) print b
> $3 = -4
> (gdb) print *typev
> $4 = {len = "\020\000\000", tag = 68899, val = "!\000\000"}

Where did the input come from?  On a little-endian machine that len
value means 4 bytes, so it's wrong right off the bat if it's
supposed to include the tag.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ETL software and training

2009-08-26 Thread Alvaro Herrera
Lew wrote:

> I used to work for a company called "MyWebOS" (later "WebOS").
> Spanish speakers thought that a very amusing name.

I completely agree :-D

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tagged types module and varlena changes

2009-08-26 Thread Alban Hertroys

On 26 Aug 2009, at 16:55, Alban Hertroys wrote:


With the SET_VARSIZE the above should work *as long as datum is not
toasted (or packed)*. If it's been detoasted then that's good, or if
it was freshly generated and not stored in a tuple then it should be
good too.


I changed it to:
    struct varlena* tv = (struct varlena*)tt_palloc( VARSIZE( datum ) );

    struct taggedtypev *typev =
        (struct taggedtypev*) DatumGetPointer( datum );
    int a = VARSIZE(datum) - sizeof(Oid),
        b = VARSIZE_ANY_EXHDR(datum) - sizeof(Oid);

    SET_VARSIZE(tv->vl_len_, a);
    memcpy( tv->vl_dat, &typev->val, b );

    return PointerGetDatum( tv ) ;


But still I get a segfault on the memcpy line. The backtrace shows  
the following (line 0 is the memcpy itself, nothing useful to see  
there):


#1  0x29806f74 in ExtractTaggedTypeDatum (tti=0x2980c560,  
datum=726659176)

   at taggedtypes.c:249
249 memcpy( tv->vl_dat, &typev->val, b );
(gdb) print *tv
$1 = {vl_len_ = "\000\000\000", vl_dat = ""}
(gdb) print a
$2 = 0
(gdb) print b
$3 = -4
(gdb) print *typev
$4 = {len = "\020\000\000", tag = 68899, val = "!\000\000"}

Obviously passing a negative value as the size to copy is what's  
causing the segfault, but how come it's negative? Could it be that  
my table doesn't have Oid's and that subtracting sizeof(Oid) is what  
makes the length become negative?



To follow up on this:

One of the failing queries is:
select * from taggedtypes.currency_test ;

development=# \d+ taggedtypes.currency_test
  Table "taggedtypes.currency_test"
 Column |   Type| Modifiers | Description
+---+---+-
 c1 | taggedtypes.currency  |   |
 c2 | taggedtypes.currencyint   |   |
 c3 | taggedtypes.currencyfloat |   |
Has OIDs: no

I changed the test script to create the table WITH OIDS, and now the  
code works!


Is there some way to check whether a Datum is from a table with OIDs?  
I think the code could do with a check for that and error out if the  
table doesn't have those...


Alban Hertroys

--
Screwing up is the correct approach to attaching something to the  
ceiling.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create a multi-column index with 2 dates using 'gist'?

2009-08-26 Thread Fred Janon
Thanks Gerhard, interesting, but I wonder if it is a maintained project; the
files date from May 2008 and there is not much forum activity. I'll put it
on my list of "To be investigated".

Fred

On Wed, Aug 26, 2009 at 22:26, Gerhard Heift <
ml-postgresql-20081012-3...@gheift.de> wrote:

> But currently there is no way to avoid overlapping of such periods
>


Re: [GENERAL] Tagged types module and varlena changes

2009-08-26 Thread Alban Hertroys

On 26 Aug 2009, at 15:33, Greg Stark wrote:


On Wed, Aug 26, 2009 at 1:14 PM, Alban
Hertroys wrote:

struct varlena* tv = (struct varlena*)tt_palloc( VARSIZE( datum ) );

tv->vl_len = VARSIZE( datum ) - sizeof(Oid);
memcpy( tv->vl_dat,
   &((struct taggedtypev*)DatumGetPointer( datum ))->val,
   VARSIZE(datum) - sizeof(Oid) - VARHDRSZ );
return PointerGetDatum( tv ) ;



This doesn't compile anymore as the vl_len member of struct varlena no
longer exists and we're supposed to use the SET_VARSIZE macro instead now.
I tried that, but then the memcpy bails out due to the size calculation
being wrong. I don't know enough about varlena usage to figure out what the
correct way of calculating the size for memcpy is, or whether that approach
is at all feasible with the current varlena implementation. What should the
above read?



With the SET_VARSIZE the above should work *as long as datum is not
toasted (or packed)*. If it's been detoasted then that's good, or if
it was freshly generated and not stored in a tuple then it should be
good too.


I changed it to:
struct varlena* tv = (struct varlena*)tt_palloc( VARSIZE( datum ) );

struct taggedtypev *typev =
    (struct taggedtypev*) DatumGetPointer( datum );
int a = VARSIZE(datum) - sizeof(Oid),
    b = VARSIZE_ANY_EXHDR(datum) - sizeof(Oid);

SET_VARSIZE(tv->vl_len_, a);
memcpy( tv->vl_dat, &typev->val, b );

return PointerGetDatum( tv ) ;


But still I get a segfault on the memcpy line. The backtrace shows the  
following (line 0 is the memcpy itself, nothing useful to see there):


#1  0x29806f74 in ExtractTaggedTypeDatum (tti=0x2980c560,  
datum=726659176)

at taggedtypes.c:249
249 memcpy( tv->vl_dat, &typev->val, b );
(gdb) print *tv
$1 = {vl_len_ = "\000\000\000", vl_dat = ""}
(gdb) print a
$2 = 0
(gdb) print b
$3 = -4
(gdb) print *typev
$4 = {len = "\020\000\000", tag = 68899, val = "!\000\000"}

Obviously passing a negative value as the size to copy is what's  
causing the segfault, but how come it's negative? Could it be that my  
table doesn't have Oid's and that subtracting sizeof(Oid) is what  
makes the length become negative?


I did some reading up on TOASTing and how to use those macros, but
the manual wasn't very detailed in this particular area... I doubt my
values are TOASTed though, they're rather short values; not quite 2k
anyway.


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create a multi-column index with 2 dates using 'gist'?

2009-08-26 Thread Gerhard Heift
On Mon, Aug 24, 2009 at 05:24:59PM +0800, Fred Janon wrote:
> Hi,
> 
> I am using 8.3 and pgAdmin III. I have a couple of tables using 2 DATE columns
> like 'startdate' and 'enddate' (just date, not interested in time in these
> columns). I have some queries (some using OVERLAPS) involving both 'startdate'
> and 'enddate' columns. I tried to create a multi column index using pgAdmin 
> and
> it comes back with this error:
> 
> ERROR: data type date has no default operator class for access method "gist"
> HINT: You must specify an operator class for the index or define a default
> operator class for the data type.
> 
> I searched the PDF docs and online without finding what an "operator class" for
> DATE would be. Would a multi-column index help in that case (OVERLAPS and date
> comparisons) anyway? Or should I just define an index for each of the dates?
> 
> Below are the table and index definitions.

Have a look at http://pgfoundry.org/projects/temporal

But currently there is no way to avoid overlapping of such periods :(

> Thanks
> 
> Fred

Regards,
  Gerhard




Re: [GENERAL] Tagged types module and varlena changes

2009-08-26 Thread Greg Stark
On Wed, Aug 26, 2009 at 1:14 PM, Alban
Hertroys wrote:
>> struct varlena* tv = (struct varlena*)tt_palloc( VARSIZE( datum ) );
>>
>> tv->vl_len = VARSIZE( datum ) - sizeof(Oid);
>> memcpy( tv->vl_dat,
>>        &((struct taggedtypev*)DatumGetPointer( datum ))->val,
>>        VARSIZE(datum) - sizeof(Oid) - VARHDRSZ );
>> return PointerGetDatum( tv ) ;
>
>
> This doesn't compile anymore as the vl_len member of struct varlena no
> longer exists and we're supposed to use the SET_VARSIZE macro instead now. I
> tried that, but then the memcpy bails out due to the size calculation being
> wrong. I don't know enough about varlena usage to figure out what the
> correct way of calculating the size for memcpy is, or whether that approach
> is at all feasible with the current varlena implementation. What should the
> above read?
>

With the SET_VARSIZE the above should work *as long as datum is not
toasted (or packed)*. If it's been detoasted then that's good, or if
it was freshly generated and not stored in a tuple then it should be
good too.

If it's not guaranteed to be detoasted then you probably should be
adding detoast calls since the Oid will have alignment requirements.
Otherwise you could just use VARSIZE_ANY_EXHDR()

It's generally a good thing to rename the vl_len field to something
like _vl_len to catch anyplace else that's referring directly to it.
That will never work any more, neither for setting it nor for reading
it. They all need to be converted to use VARSIZE and SET_VARSIZE

I haven't looked at the rest of the code to see if the general
approach is still reasonable. The varlena changes shouldn't be fatal
though.
-- 
greg
http://mit.edu/~gsstark/resume.pdf

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] best practise/pattern for large OR / LIKE searches

2009-08-26 Thread Ries van Twisk

The wildspeed function seems to be what I was looking for.

And I remember that I have seen it before on the list... I just
couldn't remember names or anything...


Ries


On Aug 26, 2009, at 7:28 AM, Pavel Stehule wrote:


2009/8/26  :

Hi Pavel,

can you provide some link or other directions to the proposal? I  
guess it

was posted to this list or somewhere else?


Please, ask to Oleg Bartunov

http://www.sai.msu.su/~megera/wiki/wildspeed

regards
Pavel Stehule



Tomas


Hello

one year ago there was proposal for index support for LIKE %some%.  
The

problem was extreme size of index size.

I thing so you can write own C function, that can check string  
faster

than repeated LIKE

some like

SELECT * FROM tbl WHERE contains(datanumber, '12345','54321',)

regards
Pavel Stehule

2009/8/26 Ries van Twisk :

Hey All,
I am wondering if there is a common pattern for these sort of  
queries :
SELECT * FROM tbl WHERE datanumber LIKE '%12345%' OR LIKE  
'%54321%' OR

LIKE
'%8766%' OR LIKE '%009%', ..
The number of OR/LIKES are in the order of 50-100 items...
the table tbl is a couple of million rows.
The datanumber is a string that are maximum 10 characters long, no
spaces
and can contain numbers and letters.
Apart from creating a couple of index table to make the LIKE left
anchored
something like this :
tbl <> tbl_4letters
tbl <> tbl_5letters
tbl <> tbl_3letters
or creating a functional index 'of some sort' are there any other
brilliant
ideas out there to solve such a problem (GIN/GIS???) ?
Searches are currently taking to long and we would like to optimize
them,
but before we dive into our own solution we
where wondering if there already common solutions for this...
Kind Regards,
Ries van Twisk







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general








regards, Ries van Twisk

-
tags: Freelance TYPO3 Glassfish JasperReports JasperETL Flex Blaze-DS  
WebORB PostgreSQL DB-Architect
email: r...@vantwisk.nlweb:   http://www.rvantwisk.nl/ 
skype: callto://r.vantwisk
Phone: +1-810-476-4196Cell: +593 9901 7694   SIP:  
+1-747-690-5133









--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] best practise/pattern for large OR / LIKE searches

2009-08-26 Thread Pavel Stehule
2009/8/26  :
> Hi Pavel,
>
> can you provide some link or other directions to the proposal? I guess it
> was posted to this list or somewhere else?

Please, ask to Oleg Bartunov

http://www.sai.msu.su/~megera/wiki/wildspeed

regards
Pavel Stehule

>
> Tomas
>
>> Hello
>>
>> One year ago there was a proposal for index support for LIKE '%some%'. The
>> problem was the extreme size of the index.
>>
>> I think you could write your own C function that can check the string faster
>> than repeated LIKEs.
>>
>> Something like:
>>
>> SELECT * FROM tbl WHERE contains(datanumber, '12345','54321',)
>>
>> regards
>> Pavel Stehule
>>
>> 2009/8/26 Ries van Twisk :
>>> Hey All,
>>> I am wondering if there is a common pattern for these sort of queries :
>>> SELECT * FROM tbl WHERE datanumber LIKE '%12345%' OR LIKE '%54321%' OR
>>> LIKE
>>> '%8766%' OR LIKE '%009%', ..
>>> The number of OR/LIKES are in the order of 50-100 items...
>>> the table tbl is a couple of million rows.
>>> The datanumber is a string that are maximum 10 characters long, no
>>> spaces
>>> and can contain numbers and letters.
>>> Apart from creating a couple of index table to make the LIKE left
>>> anchored
>>> something like this :
>>> tbl <> tbl_4letters
>>> tbl <> tbl_5letters
>>> tbl <> tbl_3letters
>>> or creating a functional index 'of some sort' are there any other
>>> brilliant
>>> ideas out there to solve such a problem (GIN/GIS???) ?
>>> Searches are currently taking too long and we would like to optimize
>>> them,
>>> but before we dive into our own solution we
>>> were wondering if there are already common solutions for this...
>>> Kind Regards,
>>> Ries van Twisk
>>>
>>>
>>>
>>>
>>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Tagged types module and varlena changes

2009-08-26 Thread Alban Hertroys

Hello all,

I'm trying to get Marcel's tagged types to work, but I ran into some  
problems due to changes to the varlena type since he wrote that code.  
I tried contacting him personally earlier, but he's having some  
hardware issues apparently and can't help me; hence me asking here ;)


The problem is that the tagged types module uses this bit of code:


struct varlena* tv = (struct varlena*)tt_palloc( VARSIZE( datum ) );

tv->vl_len = VARSIZE( datum ) - sizeof(Oid);
memcpy( tv->vl_dat,
&((struct taggedtypev*)DatumGetPointer( datum ))->val,
VARSIZE(datum) - sizeof(Oid) - VARHDRSZ );
return PointerGetDatum( tv ) ;



This doesn't compile anymore as the vl_len member of struct varlena no  
longer exists and we're supposed to use the SET_VARSIZE macro instead  
now. I tried that, but then the memcpy bails out due to the size  
calculation being wrong. I don't know enough about varlena usage to  
figure out what the correct way of calculating the size for memcpy is,  
or whether that approach is at all feasible with the current varlena 
implementation. What should the above read?


I'm also slightly worried whether DatumGetPointer might return a
null pointer in some cases. Is that possible?


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] best practise/pattern for large OR / LIKE searches

2009-08-26 Thread tv
Hi Pavel,

can you provide some link or other directions to the proposal? I guess it
was posted to this list or somewhere else?

Tomas

> Hello
>
> One year ago there was a proposal for index support for LIKE '%some%'. The
> problem was the extreme size of the index.
>
> I think you could write your own C function that can check the string faster
> than repeated LIKEs.
>
> Something like:
>
> SELECT * FROM tbl WHERE contains(datanumber, '12345','54321',)
>
> regards
> Pavel Stehule
>
> 2009/8/26 Ries van Twisk :
>> Hey All,
>> I am wondering if there is a common pattern for these sort of queries :
>> SELECT * FROM tbl WHERE datanumber LIKE '%12345%' OR LIKE '%54321%' OR
>> LIKE
>> '%8766%' OR LIKE '%009%', ..
>> The number of OR/LIKES are in the order of 50-100 items...
>> the table tbl is a couple of million rows.
>> The datanumber is a string that are maximum 10 characters long, no
>> spaces
>> and can contain numbers and letters.
>> Apart from creating a couple of index table to make the LIKE left
>> anchored
>> something like this :
>> tbl <> tbl_4letters
>> tbl <> tbl_5letters
>> tbl <> tbl_3letters
>> or creating a functional index 'of some sort' are there any other
>> brilliant
>> ideas out there to solve such a problem (GIN/GIS???) ?
>> Searches are currently taking too long and we would like to optimize
>> them,
>> but before we dive into our own solution we
>> were wondering if there are already common solutions for this...
>> Kind Regards,
>> Ries van Twisk
>>
>>
>>
>>
>>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Skytools-users] WAL Shipping + checkpoint

2009-08-26 Thread Sébastien Lardière

On 26/08/2009 04:46, Mark Kirkwood wrote:

Sébastien Lardière wrote:

Hi All,

I've a cluster ( Pg 8.3.7 ) with WAL Shipping, and a few hours ago, 
the master had to restart.


I use walmgr from Skytools, which works very well.

I have already restarted the master without any problem, but today the
slave doesn't behave the way I want. The "Time of latest checkpoint" field
from pg_controldata on the slave keeps the same value, but WAL
files are processed correctly.


I tried to restart the slave, but after it reprocessed all the WAL
since "Time of latest checkpoint", it does nothing else; the latest
checkpoint stays at the same value.


I don't know if it's important (I think so), and I can't fix it.

It is normal for it to lag behind somewhat on the slave (depending on 
what your checkpoint timeout etc settings are).


However, I've noticed what you are seeing as well - particularly when 
there are no actual data changes coming through in the logs - the 
slave checkpoint time does not change even though there have been 
checkpoints on the master (I may have a look in the code to see what 
the story really is...if I have time).




Yes, but the delay between the last checkpoint on the master and the
slave is very high now (100,000 sec), because the last checkpoint on
the slave was yesterday (as far as pg_controldata is correct).


Here is a graph from our munin plugin:
http://seb.ouvaton.org/tmp/bdd-pg_walmgr-week.png


The blue line represents the average time between two WALs processed on
the slave, and the green line the delay between the last checkpoint on
the master and the slave.


Maybe it's not a good indicator, but the green line makes me think
there is a problem.


Thanks,

--
Sébastien Lardière



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general