Re: [ODBC] [GENERAL] ODBC constructs

2013-05-20 Thread Heikki Linnakangas

On 21.05.2013 08:11, Dev Kumkar wrote:

On Mon, May 20, 2013 at 9:12 PM, Atri Sharma  wrote:



If you wish to work in C, then I would suggest libpq. I would wait for more
replies on this, as I have little knowledge
about psqlODBC.



Thanks for the comments. Yes, the objective is to work in C; I found libpq
useful, but I am not sure about psqlODBC.
It would be really great to get the community's comments here comparing the
two, including from a performance perspective. Based on my reading so far,
libpq is good from a performance perspective.


Generally speaking, libpq is better than psqlODBC. The advantage of ODBC 
is that if you have a suitable driver, you can connect to other DBMSs as 
well, while libpq will only work with PostgreSQL. Unless you need such 
portability, go with libpq.


- Heikki


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ODBC constructs

2013-05-20 Thread Guy Rouillier

On 5/21/2013 1:11 AM, Dev Kumkar wrote:


Thanks for the comments. Yes, the objective is to work in C; I found libpq
useful, but I am not sure about psqlODBC.
It would be really great to get the community's comments here comparing the
two, including from a performance perspective. Based on my reading so far,
libpq is good from a performance perspective.


One advantage of an abstraction layer is that it allows you to switch 
the underlying DBMS with few, if any, changes to your code.  ODBC is just 
one of many available abstraction layers.  So, one thing to consider is 
whether your application is intended to be at all generic, or whether it is 
specifically targeted at PostgreSQL.  If the former, then you should 
consider an abstraction layer; if not, then libpq is fine.


Another determinant for many people is what they already know.  If I'm 
doing a quick, one-time app and I know ODBC but I don't know libpq, I 
might choose to just stick with what I know and use ODBC.


So, really, the first decision you should make is how general you want 
your app to be, and how long you expect to use it.  Other decisions flow 
from there.


--
Guy Rouillier




Re: [GENERAL] ODBC constructs

2013-05-20 Thread John R Pierce

On 5/20/2013 11:04 PM, Atri Sharma wrote:

Also, is it only me, or are we all a bit wary of ODBC?


It's clumsy. Getting software working well with it requires more 
work.  Its promise of portability only holds true if you restrict 
yourself to SQL that works the same on different servers, and if you do 
that, odds are you'll be optimal on none of them.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Cross compile custom data types for Linux and windows

2013-05-20 Thread John R Pierce

On 5/20/2013 10:58 PM, Daniel Migowski wrote:
I intend to create some native data types for PostgreSQL, but since 
our servers run on Windows and Linux, I would have to compile the 
libraries for both Windows and Linux (they will need nothing other than 
the standard C libs). Our build server is a Debian 6 Linux system. Is 
there a way to also create the Windows libraries on it, or do I need 
a Windows system with Visual C++ for that?


Mixing C code from different compilers can be problematic.

If you want your extension modules to run with the Windows binary 
versions built by EnterpriseDB, you'll need to use Visual C (plain C, no 
C++); even the free Express edition is fine for this.


You could, in theory, set up a cross-compile environment on Linux that 
would let you use gcc to build both Postgres and your extensions, but 
then you'd be dealing with your own build of Postgres for Windows.  
Also, setting up that environment would probably take a lot of 
tinkering, and PostgreSQL on Windows with GCC is not considered a 
supported platform at present (although people do successfully build it).
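As an aside, on the Linux side such extension modules are normally built with PGXS; a minimal Makefile sketch looks like this (the module name is illustrative, and this does not address the Windows build):

```makefile
# Minimal PGXS Makefile for a C extension module -- the module name
# "mytype" is illustrative (builds mytype.so from mytype.c).
MODULES = mytype

# pg_config must come from the PostgreSQL installation you are targeting.
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```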




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] ODBC constructs

2013-05-20 Thread Atri Sharma


Sent from my iPad

On 21-May-2013, at 11:24, John R Pierce  wrote:
 
> libpq lets you talk directly to postgres in its native tongue.
> 
> ODBC implements a sort of abstraction. IMHO, it's like trying to type with 
> mittens on.
> 
Also, I see no point in using ODBC with Postgres if you want to work in C 
when libpq is available. IMHO, it is like adding an extra layer without any 
purpose, though I may be wrong here.

Also, is it only me, or are we all a bit wary of ODBC? I think JDBC is becoming 
the standard now (of course, it's Java, but as I said, you should use libpq if 
you want to write native code for Postgres).

Regards,

Atri



[GENERAL] Cross compile custom data types for Linux and windows

2013-05-20 Thread Daniel Migowski
Hello,

I intend to create some native data types for PostgreSQL, but since our servers 
run on Windows and Linux, I would have to compile the libraries for both Windows 
and Linux (they will need nothing other than the standard C libs). Our build 
server is a Debian 6 Linux system. Is there a way to also create the Windows 
libraries on it, or do I need a Windows system with Visual C++ for that?

Regards,
Daniel Migowski



Re: [GENERAL] ODBC constructs

2013-05-20 Thread John R Pierce

On 5/20/2013 10:11 PM, Dev Kumkar wrote:
On Mon, May 20, 2013 at 9:12 PM, Atri Sharma wrote:



If you wish to work in C, then I would suggest libpq. I would wait
for more replies on this, as I have little knowledge
about psqlODBC.


Thanks for the comments. Yes, the objective is to work in C; I found libpq 
useful, but I am not sure about psqlODBC.
It would be really great to get the community's comments here comparing the 
two, including from a performance perspective. 
Based on my reading so far, libpq is good from a performance perspective.


libpq lets you talk directly to postgres in its native tongue.

ODBC implements a sort of abstraction. IMHO, it's like trying to type 
with mittens on.






--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] ODBC constructs

2013-05-20 Thread Dev Kumkar
On Mon, May 20, 2013 at 9:12 PM, Atri Sharma  wrote:

>
> If you wish to work in C, then I would suggest libpq. I would wait for more
> replies on this, as I have little knowledge
> about psqlODBC.
>

Thanks for the comments. Yes, the objective is to work in C; I found libpq
useful, but I am not sure about psqlODBC.
It would be really great to get the community's comments here comparing the
two, including from a performance perspective. Based on my reading so far,
libpq is good from a performance perspective.

Regards...


Re: [GENERAL] ODBC constructs

2013-05-20 Thread Atri Sharma


Sent from my iPad

On 21-May-2013, at 6:15, Dev Kumkar  wrote:

> Hello,
> 
> My objective is to work with Postgres from my Linux box using C. I am not 
> sure whether psqlODBC or libpq should be used.
> I was playing with libpq and used it successfully, but I am not sure which 
> route to go and what the difference is.
> 
> Also, can you please point me to any document that maps ODBC 
> constructs between Sybase and Postgres?
> 
> Also, a list of ODBC constructs to be used with Postgres.
> 
> Thanks in advance!
> 
> Regards...
> 
> p.s. I am not sure if the earlier email was sent; sorry for spam in case it was.
> 
> 
> On Mon, May 20, 2013 at 5:37 PM, Dev Kumkar  wrote:
>> Hello,
>> 
>> My objective is to work with Postgres from my Linux box using C. I am not 
>> sure whether psqlODBC or libpq should be used.
>> I was playing with libpq and used it successfully, but I am not sure which 
>> route to go and what the difference is.
>> 
>> Also, can you please point me to any document that maps ODBC 
>> constructs between Sybase and Postgres?
>> 
>> Also, a list of ODBC constructs to be used with Postgres.
>> 
>> 

If you are looking for a client to work with your Postgres database, then 
you can use psql (for the command line) or pgAdmin (http://www.pgadmin.org/).

If you are looking to connect data from your Sybase database to a Postgres 
database, you can have a look at the various
foreign data wrappers Postgres has.

If you wish to work in C, then I would suggest libpq. I would wait for more 
replies on this, as I have little knowledge
about psqlODBC.

Regards,

Atri




Re: [GENERAL] Deploying PostgreSQL on CentOS with SSD and Hardware RAID

2013-05-20 Thread Toby Corkindale

On 21/05/13 00:16, Merlin Moncure wrote:

On Sun, May 19, 2013 at 8:07 PM, Toby Corkindale
 wrote:

On 11/05/13 02:25, Merlin Moncure wrote:


On Fri, May 10, 2013 at 11:11 AM, Evan D. Hoffman
 wrote:


Not sure of your space requirements, but I'd think a RAID 10 of 8x or more
Samsung 840 Pro 256/512 GB would be the best value.  Using a simple mirror
won't get you the reliability that you want since heavy writing will burn
the drives out over time, and if you're writing the exact same content to
both drives, they could likely fail at the same time.  Regardless of the
underlying hardware you should still follow best practices for provisioning
disks, and RAID 10 is the way to go.  I don't know what your budget is
though.  Anyway, mirrored SSD will probably work fine, but I'd avoid using
just two drives for the reasons above.  I'd suggest at least testing RAID 5
or something else to spread the load around.  Personally, I think the ideal
configuration would be a RAID 10 of at least 8 disks plus 1 hot spare.  The
Samsung 840 Pro 256 GB are frequently $200 on sale at Newegg.  YMMV but
they are amazing drives.



Samsung 840 has no power loss protection and is therefore useless for
database use IMO unless you don't care about data safety and/or are
implementing redundancy via some other method (say, by synchronous
replication).




I believe the original poster was referring to the "840 Pro" model; that
model does include a "supercap" for power loss protection.


got a source for that?  I couldn't verify that after some googling.



I'm sorry, I really thought they had made it onto my list of candidates 
that included supercaps. Now that I'm checking again, I can't find any 
evidence to support that claim either; I must have confused them with 
another drive. Sorry about that, and thanks for checking.


-Toby




Re: [GENERAL] Deploying PostgreSQL on CentOS with SSD and Hardware RAID

2013-05-20 Thread Toby Corkindale

On 20/05/13 15:12, David Boreham wrote:

On 5/19/2013 7:19 PM, Toby Corkindale wrote:

On 13/05/13 11:23, David Boreham wrote:

btw we deploy on CentOS6. The only things we change from the default
are:

1. add "relatime,discard" options to the mount (check whether the most
recent CentOS6 does this itself -- it didn't back when we first deployed
on 6.0).



While it is important to let the SSD know about space that can be
reclaimed, I gather the discard operation does not perform well.
I *think* the current advice is to leave 'discard' out of the mount
options and instead run a nightly cron job that calls 'fstrim' on the
mount point. (In really high-write situations, you'd be looking at
calling that every hour instead, I suppose.)

I have to admit I just went with the advice rather than benchmarking
it thoroughly.



The guy who blogged about this a couple of years ago was using a
Sandforce controller drive.
I'm not sure there is a similar issue with other drives. Certainly we've
never noticed a problematic delay in file deletes.
That said, our applications don't delete files too often (log file
purging is probably the only place it happens regularly).

Personally, in the absence of a clear and present issue, I'd prefer to
go the "kernel guys and drive firmware guys will take care of this"
route, and just enable discard on the mount.


This guy posted about a number of SSD drives, and enabling discard 
affected most of them quite negatively:

http://people.redhat.com/lczerner/discard/ext4_discard.html
http://people.redhat.com/lczerner/discard/files/Performance_evaluation_of_Linux_DIscard_support_Dev_Con2011_Brno.pdf

That is from 2011, though, so you're right that things may have improved 
by now. Has anyone seen benchmarks supporting that, though?
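For the record, the nightly-fstrim alternative discussed above amounts to a one-line cron entry; the mount point and schedule here are illustrative:

```shell
# /etc/crontab entry: batched TRIM once a night instead of the 'discard'
# mount option (mount point and time of day are illustrative)
30 3 * * *  root  /sbin/fstrim /var/lib/pgsql
```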


Toby




Re: [GENERAL] ODBC constructs

2013-05-20 Thread Dev Kumkar
Also adding the pgsql-admin alias.

Regards...


Re: [GENERAL] ODBC constructs

2013-05-20 Thread Dev Kumkar
Hello,

My objective is to work with Postgres from my Linux box using C. I am not
sure whether psqlODBC or libpq should be
used.
I was playing with libpq and used it successfully, but I am not sure which
route to go and what the difference is.

Also, can you please point me to any document that maps ODBC
constructs between Sybase and Postgres?

Also, a list of ODBC constructs to be used with Postgres.

Thanks in advance!

Regards...

p.s. I am not sure if the earlier email was sent; sorry for spam in case it
was.


On Mon, May 20, 2013 at 5:37 PM, Dev Kumkar  wrote:

> Hello,
>
> My objective is to work with Postgres from my Linux box using C. I am not
> sure whether psqlODBC or libpq should be used.
> I was playing with libpq and used it successfully, but I am not sure which
> route to go and what the difference is.
>
> Also, can you please point me to any document that maps ODBC
> constructs between Sybase and Postgres?
>
> Also, a list of ODBC constructs to be used with Postgres.
>
> Thanks in advance!
>
> Regards...
>
> p.s. I am not sure if the earlier email was sent; sorry for spam in case it
> was.
>
>


Re: [GENERAL] cal I pass arguments directly to final function in aggregates

2013-05-20 Thread Nicklas Avén

Thank you Tom

On 05/19/2013 01:26 AM, Tom Lane wrote:

Nicklas Avén writes:

Perhaps you could construct your usage like this:

post_process_function(aggregate_function(...), fixed_argument)

where the aggregate_function just collects the varying values
and then the post_process_function does what you were thinking
of as the final function.




Maybe that is the way I have to go, but I would like to avoid it because 
I think the interface gets a bit less clean for the users.

I also suspect that it causes some more memcopying to get out of the 
aggregate function and into a new function. (Am I right about that?)

As I understand it, I have two options:

1) Do as you suggest and divide the process into one aggregate function 
and one post-processing function.
2) Construct a structure for the state value that can hold those 
values. In this case those arguments are just one smallint and one char(3). 
I will just have to handle them for the first row to store them in my 
structure; then I can just ignore them. Am I right that it will be a 
very small overhead even if those values are sent to the function for 
each row?


My question is if I can get further advice about what bottlenecks and 
traps I should consider.


What I am aggregating is geometries (PostGIS). It can be one to millions 
of rows, and the geometries can range from points of a few bytes to complex 
geometry collections of many megabytes.
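For what it's worth, the two-step pattern in option 1 might look roughly like this in SQL. Every name below is hypothetical (these are not existing PostGIS functions), and the support functions would still have to be written separately:

```sql
-- Option 1 sketch: the aggregate only collects the varying geometries;
-- the fixed arguments go to a separate post-processing function.
-- (collect_geom, geom_accum, post_process and my_table are all
-- illustrative names, not existing PostGIS objects.)
CREATE AGGREGATE collect_geom (geometry) (
    SFUNC = geom_accum,      -- appends each row's geometry to the state
    STYPE = geometry[]
);

SELECT post_process(collect_geom(geom), 3::smallint, 'XYZ')
FROM my_table;
```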


Regards

/Nicklas




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-20 Thread Amit Langote
On Tue, May 21, 2013 at 12:43 AM, Jeff Amiel  wrote:
> Thanks much!
> (sorry for top-posting, yahoo email sucks)
>

I wonder if you could arrive at some conclusions with the statistics
(pg_stats) you have and the join selectivity formulas described in the
referenced documentation link. I would like to know if you still get the
same row estimates (after EXPLAIN) and also, if possible, the value
that is computed from that formula. Do they resemble each other?


--
Amit Langote




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-20 Thread Jeff Amiel
Thanks much!
(sorry for top-posting, yahoo email sucks)




- Original Message -
From: Amit Langote 
To: Jeff Amiel 
Cc: "dep...@depesz.com" ; "pgsql-general@postgresql.org" 

Sent: Monday, May 20, 2013 9:51 AM
Subject: Re: [GENERAL] Why does row estimation on nested loop make no sense to 
me

I also found one other discussion which has similar issues addressed:

http://postgresql.1045698.n5.nabble.com/Bogus-nestloop-rows-estimate-in-8-4-7-td5710254.html

--
Amit Langote





Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-20 Thread Amit Langote
I also found one other discussion which has similar issues addressed:

http://postgresql.1045698.n5.nabble.com/Bogus-nestloop-rows-estimate-in-8-4-7-td5710254.html

--
Amit Langote




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-20 Thread Amit Langote
On Mon, May 20, 2013 at 11:01 PM, Jeff Amiel  wrote:
> Ok - I agree -
>
> Can somebody help me understand where the row estimates come from on a 
> nested-loop operation in postgres then?
>

In case you haven't noticed already in the documentation, there are
following lines:

"... It might appear from inspection of the EXPLAIN output that the
estimate of join rows comes from 50 * 1, that is, the number of outer
rows times the estimated number of rows obtained by each inner index
scan on tenk2. But this is not the case: *the join relation size is
estimated before any particular join plan has been considered*. If
everything is working well then the two ways of estimating the join
size will produce about the same answer, but due to roundoff error and
other factors they sometimes diverge significantly."

Read more at: 
http://www.postgresql.org/docs/9.2/static/row-estimation-examples.html

It also refers where in source code these table size estimations are done.

Hope this helps.


--
Amit Langote




Re: [GENERAL] Deploying PostgreSQL on CentOS with SSD and Hardware RAID

2013-05-20 Thread Merlin Moncure
On Sun, May 19, 2013 at 8:07 PM, Toby Corkindale
 wrote:
> On 11/05/13 02:25, Merlin Moncure wrote:
>>
>> On Fri, May 10, 2013 at 11:11 AM, Evan D. Hoffman
>>  wrote:
>>>
>>> Not sure of your space requirements, but I'd think a RAID 10 of 8x or
>>> more Samsung 840 Pro 256/512 GB would be the best value.  Using a simple
>>> mirror won't get you the reliability that you want since heavy writing
>>> will burn the drives out over time, and if you're writing the exact same
>>> content to both drives, they could likely fail at the same time.
>>> Regardless of the underlying hardware you should still follow best
>>> practices for provisioning disks, and RAID 10 is the way to go.  I don't
>>> know what your budget is though.  Anyway, mirrored SSD will probably
>>> work fine, but I'd avoid using just two drives for the reasons above.
>>> I'd suggest at least testing RAID 5 or something else to spread the load
>>> around.  Personally, I think the ideal configuration would be a RAID 10
>>> of at least 8 disks plus 1 hot spare.  The Samsung 840 Pro 256 GB are
>>> frequently $200 on sale at Newegg.  YMMV but they are amazing drives.
>>
>>
>> Samsung 840 has no power loss protection and is therefore useless for
>> database use IMO unless you don't care about data safety and/or are
>> implementing redundancy via some other method (say, by synchronous
>> replication).
>
>
>
> I believe the original poster was referring to the "840 Pro" model; that
> model does include a "supercap" for power loss protection.

got a source for that?  I couldn't verify that after some googling.

merlin




Re: [GENERAL] Why does row estimation on nested loop make no sense to me

2013-05-20 Thread Jeff Amiel
OK, I agree.

Can somebody help me understand where the row estimates come from on a 
nested-loop operation in Postgres, then?



- Original Message -
From: hubert depesz lubaczewski 
To: Jeff Amiel 
Cc: "pgsql-general@postgresql.org" 
Sent: Saturday, May 18, 2013 3:39 AM
Subject: Re: [GENERAL] Why does row estimation on nested loop make no sense
 to me

Not sure if it helps, but it's apparently not a very rare thing.
A quick analysis of data from explain.depesz.com showed that > 12% of
plans with a nested loop have such an estimate.

Couple of examples:

http://explain.depesz.com/s/Qm4
http://explain.depesz.com/s/qmW
http://explain.depesz.com/s/qnG
http://explain.depesz.com/s/QO
http://explain.depesz.com/s/qov

...


