Re: [BUGS] BUG #6238: ECPG converts "long long" to long on Windows

2011-10-16 Thread Tom Lane
Magnus Hagander  writes:
> Does ecpg have a different requirement from everything else, or are we just
> doing it differently in different places for no special reason?

I believe that in ecpg, the requirement is "support the data types
defined by the C compiler", that is we should support long long with
whatever width it actually has on the platform.  Just about everyplace
else in PG, we want an integer of a known number of bits, and whether
that's long or long long is incidental.  So it's not surprising to me
that this case got overlooked in the win64 patches.
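
A rough illustration of the two concerns (sketch only, nothing taken from the
actual sources):

/* Sketch only, not PostgreSQL source.  ecpg's concern is the type itself:
 * a host program may declare "long long" and ecpg must handle it at
 * whatever width the compiler gives it.  The backend's concern is simply
 * having some 64-bit integer; whether that is spelled "long" or
 * "long long" is incidental. */
#include <stdio.h>

int
main(void)
{
    long long host_var = 1234567890123LL;   /* ecpg-style host variable */

    printf("long is %zu bits, long long is %zu bits\n",
           sizeof(long) * 8, sizeof(long long) * 8);
    printf("host_var = %lld\n", host_var);

    /* On most 64-bit Unix systems both print 64; on Win64 "long" is
     * 32 bits, which is exactly why the distinction matters there. */
    return 0;
}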

regards, tom lane



Re: [BUGS] BUG #6238: ECPG converts "long long" to long on Windows

2011-10-16 Thread Magnus Hagander
On Wednesday, October 5, 2011, Bruce Momjian wrote:

> Alvaro Herrera wrote:
> >
> > Excerpts from Robert Haas's message of Tue Oct 04 10:39:27 -0300 2011:
> > > On Tue, Oct 4, 2011 at 8:15 AM, Michael Meskes wrote:
> >
> > > >> 2010 on Windows, which accepts "long long" to mean the same thing as
> > > >> __int64, but ECPG doesn't recognize the latter.
> > > >> May be related to BUG #5464: ecpg on 64bit system converts "long long"
> > > >> to "long"
> > > >
> > > > Well, this bug has (at least as far as I know) been fixed for more
> > > > than a year.
> > > > Maybe the configure test doesn't work on Windows? I don't know.
> > >
> > > On at least some Windows builds, configure isn't used at all... so
> > > whatever value is being used would come from the MSVC build system.
> >
> > In fact, pg_config.h.win32 does not have the HAVE_LONG_LONG_INT symbol
> > at all -- only HAVE_LONG_LONG_INT_64 is in there.
>
> Is this wrong?
>
Not sure, but it seems ecpg is the only place in our entire source that
this is used - that's probably why it was missed.

Does ecpg have a different requirement from everything else, or are we just
doing it differently in different places for no special reason?

It was applied as commit 29259531, which apparently changed it from our own
implementation to the autoconf standard, and changed a whole bunch of places
from HAVE_LONG_LONG to HAVE_LONG_LONG_INT. Perhaps that commit just missed
updating the win32 file?
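
If that's the case, presumably the fix would be something along these lines
in pg_config.h.win32 (untested sketch, mirroring what the autoconf test
defines on Unix builds; exact placement not checked):

/* hypothetical one-line addition to pg_config.h.win32 */
/* Define to 1 if the system has the type `long long int'. */
#define HAVE_LONG_LONG_INT 1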

Just to be completely certain, what behaviour is it we're looking for that
is not the same as HAVE_LONG_LONG_INT_64?
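
As far as I understand it, the distinction would be something like this
(rough sketch, not actual source; the typedef names are just for
illustration):

/*
 * HAVE_LONG_LONG_INT     -- the compiler accepts the type "long long" at
 *                           all (what the standard autoconf test defines).
 * HAVE_LONG_LONG_INT_64  -- "long long" is the type picked to provide a
 *                           64-bit integer.
 *
 * On an LP64 platform plain "long" is already 64 bits, so the second
 * symbol may not be defined even though the first one is -- yet ecpg
 * still has to recognize "long long" host variables there.
 */
#ifdef HAVE_LONG_LONG_INT
typedef long long ecpg_host_long_long;      /* the type exists */
#endif

#ifdef HAVE_LONG_LONG_INT_64
typedef long long int demo_int64;           /* and it is the 64-bit choice */
#endif

int main(void) { return 0; }                /* keeps the sketch compilable */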



-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [BUGS] BUG #6256: loading small file eats all memory and crashes system

2011-10-16 Thread Rachael Nelson
COPY FROM worked great with CSV formatting.  Thank you very much for your
help!  I recently started with Postgres after working heavily with MySQL in
corporate environments.  The reason for the script was to test the
performance of a SQL command that was taking forever in MySQL.

SELECT id, employer_id, employee_id
FROM giantTable
WHERE id IN (SELECT MAX(id)
             FROM giantTable
             GROUP BY employer_id, employee_id)
  AND to_number(status, '9') <= 3;

This simple query with a subselect took 8 seconds to pull from that table
with 2,000 rows in MySQL.  In Postgres it took 3ms!  The same query with
100,000 rows is still running in MySQL (8 minutes and counting...) while in
Postgres it loaded instantly (132ms).  Obviously this can be optimized with
a join instead of a subselect, but it shows how pitiful MySQL is and how well
Postgres performs.  This is with out-of-the-box parameters that could
probably be tweaked for better performance, but even then, Postgres is
beating the pants off MySQL.  I'm very impressed - y'all have a great
product!!

On Sat, Oct 15, 2011 at 6:33 PM, Alex Hunsaker  wrote:

> On Sat, Oct 15, 2011 at 14:16, Rachael Nelson  wrote:
> > Attached is a tiny bit of top output (top.txt), the file being imported
> > (postgres.sql.gz - about 32MB), and the perl module script that generates
>
> Thanks. So the problem here seems to be that Postgres does not handle a
> multi-hundred-megabyte single statement very well. In fact you don't even
> need a table; I was able to run it out of memory using a 306MB single
> insert with the multi-row VALUES syntax.
>
> I would suggest using COPY FROM instead; it should be much faster. If
> you can't do that, then try breaking up that big insert into multiple
> smaller ones.
>
>
> For the curious, find the memory dump below. I'm not exactly sure where
> all the memory is going; I count ~1.5G below. This box has 8G total
> with ~7G free before I ran the query.
>
> TopMemoryContext: 61360 total in 7 blocks; 3920 free (13 chunks); 57440
> used
>  TopTransactionContext: 8192 total in 1 blocks; 7520 free (0 chunks); 672
> used
>  MessageContext: 1487668504 total in 87 blocks; 2256 free (1 chunks);
> 1487666248 used
>  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
> used
>  smgr relation table: 24576 total in 2 blocks; 13920 free (4 chunks);
> 10656 used
>  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0
> chunks); 32 used
>  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
>  PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
>  Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
>  CacheMemoryContext: 817840 total in 20 blocks; 111704 free (0
> chunks); 706136 used
>.
>  MdSmgr: 8192 total in 1 blocks; 7584 free (0 chunks); 608 used
>  LOCALLOCK hash: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
>  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
>  ErrorContext: 8192 total in 1 blocks; 8160 free (3 chunks); 32 used
>