Re: [GENERAL] a LEFT JOIN problem

2008-10-25 Thread Tony Wasson
On Sat, Oct 25, 2008 at 2:11 PM, Thomas <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have the following tables:
>
> Product(id, title, price)
> Item(id, product_id, order_id, quantity)
> Order(id, user_id, amount, paid)
> User(id, name)
>
> What I want to achieve is a query on a specific Product based on its
> title. If the product has at least 1 order in which it appears, then
> return the Product and Order details; if the product has no paid order
> associated, then only return the Product fields.
>
> I have tried the following query:
> --
> SELECT products.*, paid FROM "products"
> LEFT OUTER JOIN items ON products.id = items.product_id
> LEFT OUTER JOIN orders ON items.order_id = orders.id
> LEFT OUTER JOIN users ON orders.user_id = users.id
> WHERE (title = E'some-product' AND paid = 1 AND name = 'thomas')
> --
>
> The problem with my query is that if there are no associated paid
> orders, then the WHERE clause drops every returned row that has paid =
> 0, so I don't get anything at all, but I would like to have at
> least the product fields.

Put the filtering in your ON clauses. The WHERE clause is processed
after the JOINs are done, and it is eliminating your results. Without
knowing your schema, this is just a guess:

SELECT products.*, paid FROM "products"
LEFT OUTER JOIN items ON (products.id = items.product_id AND title =
'some-product')
LEFT OUTER JOIN orders ON (items.order_id = orders.id AND paid = 1)
LEFT OUTER JOIN users ON (orders.user_id = users.id AND name = 'thomas')
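For what it's worth, the ON-vs-WHERE behavior is easy to demonstrate. This is a minimal sketch using SQLite through Python's sqlite3 module (the LEFT JOIN semantics involved are standard SQL; the tables are cut down from the question and the data is invented):

```python
import sqlite3

# Why a WHERE filter on the outer side of a LEFT JOIN discards the
# unmatched rows, while the same filter in the ON clause keeps them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER, title TEXT);
    CREATE TABLE orders   (product_id INTEGER, paid INTEGER);
    INSERT INTO products VALUES (1, 'widget');  -- has no paid order
    INSERT INTO orders   VALUES (1, 0);
""")

# Filter in WHERE: the joined row has paid = 0, so WHERE drops it
# and the product disappears entirely.
where_rows = conn.execute("""
    SELECT p.title, o.paid FROM products p
    LEFT JOIN orders o ON p.id = o.product_id
    WHERE o.paid = 1
""").fetchall()

# Filter in ON: no order matches, but the LEFT JOIN still emits the
# product row with NULL order columns.
on_rows = conn.execute("""
    SELECT p.title, o.paid FROM products p
    LEFT JOIN orders o ON p.id = o.product_id AND o.paid = 1
""").fetchall()

print(where_rows)  # []
print(on_rows)     # [('widget', None)]
```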

> Moreover, the "name" argument in the WHERE comes from the logged-in
> user's data. So if the user is not logged in, no fields are returned.

I suspect your app will need to run the query and compare the results
against the user(s) currently logged in.

Hope this helps!
Tony Wasson

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] a LEFT JOIN problem

2008-10-25 Thread Thomas
Hi,

I have the following tables:

Product(id, title, price)
Item(id, product_id, order_id, quantity)
Order(id, user_id, amount, paid)
User(id, name)

What I want to achieve is a query on a specific Product based on its
title. If the product has at least 1 order in which it appears, then
return the Product and Order details; if the product has no paid order
associated, then only return the Product fields.

I have tried the following query:
--
SELECT products.*, paid FROM "products"
LEFT OUTER JOIN items ON products.id = items.product_id
LEFT OUTER JOIN orders ON items.order_id = orders.id
LEFT OUTER JOIN users ON orders.user_id = users.id
WHERE (title = E'some-product' AND paid = 1 AND name = 'thomas')
--

The problem with my query is that if there are no associated paid
orders, then the WHERE clause drops every returned row that has paid =
0, so I don't get anything at all, but I would like to have at
least the product fields.

Moreover, the "name" argument in the WHERE comes from the logged-in
user's data. So if the user is not logged in, no fields are returned.

Is it possible to achieve this in only one query? What is the correct way
of doing it?



Re: [GENERAL] JDBC - Call stored function that returns user defined type

2008-10-25 Thread cyw
While I still don't know how to handle returns that mix scalars and user 
defined types, I tested with a function that returns only a SETOF a user 
defined type, and the behavior is the same as a normal select return. You can 
use ResultSet to traverse the rows and get each column.

CYW
  - Original Message - 
  From: [EMAIL PROTECTED] 
  To: General PostgreSQL List 
  Sent: Friday, October 24, 2008 2:23 PM
  Subject: [GENERAL] JDBC - Call stored function that returns user defined type


  I am looking for info on how to call a stored function that returns a user 
defined type.

  Assume I have a type defined as:
  CREATE TYPE XYType AS (x int4, y int4);

  and I use XYType in a function such as this:
  CREATE FUNCTION test(IN z int4, OUT xy XYType, OUT status character) 
RETURNS RECORD AS $BODY$ ...

  I found info on how to formulate a CallableStatement, but no info on how to 
process the results from the call. Or does the PostgreSQL JDBC driver support 
this type of call?

  Any help is greatly appreciated.

  Regards,
  CYW


Re: [GENERAL] support for embedded db and a clustered index?

2008-10-25 Thread Scott Marlowe
On Sat, Oct 25, 2008 at 12:02 PM, Julian Bui <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> Does pgsql provide support for a clustered index?  By that I mean can I
> specify that the database insert records in the order of a particular
> attribute.  Many of my queries are time-range based and my row inserts are
> done chronologically, so it would benefit to have them sorted by when they
> were inserted.

Sort of.  It supports static clustering.  Updates aren't clustered.

> Also, can pgsql be embedded in an application?  By that I mean it is easily
> deployable with an application.

Not really the same thing.  PostgreSQL is not embeddable in the
classic sense; if you need an embedded database, look at SQLite.

>  The end user is unaware that a database
> even exists and the end user does not need to install pgsql.

Yes, that's called a silent install.

> I do NOT mean
> a db that runs on an embedded device (although I am hoping pgsql has a small
> memory footprint) and I also do NOT mean to ask if pgsql supports embedded
> sql.

PostgreSQL is not optimized for a small memory or disk footprint; it is
optimized for using more memory and being fast and reliable.  I think
you might want SQLite for this job.



Re: [GENERAL] support for embedded db and a clustered index?

2008-10-25 Thread Julian Bui
Hi all,

Does pgsql provide support for a clustered index?  By that I mean can I
specify that the database insert records in the order of a particular
attribute.  Many of my queries are time-range based and my row inserts are
done chronologically, so it would benefit to have them sorted by when they
were inserted.

Also, can pgsql be embedded in an application?  By that I mean it is easily
deployable with an application.  The end user is unaware that a database
even exists and the end user does not need to install pgsql.  I do NOT mean
a db that runs on an embedded device (although I am hoping pgsql has a small
memory footprint) and I also do NOT mean to ask if pgsql supports embedded
sql.

Any help would be greatly appreciated.

Thanks,
Julian


Re: [GENERAL] syncing with a MySQL DB

2008-10-25 Thread Ernesto QuiƱones
I use dbi-link and it works fine, but I have problems when I query the
"linked" MySQL tables and these tables are big, maybe a million records:
the answer is really slow, and I need to wait 5 or more minutes to get a
result even from a single query like "select * from table limit 10". I am
thinking maybe dbi-link downloads all the data to pgsql before giving me
the answer.

Does anybody know how to improve this?

Thanks

2008/10/21 Brandon Metcalf <[EMAIL PROTECTED]>:
> m == [EMAIL PROTECTED] writes:
>
>  m> On Tue, Oct 21, 2008 at 9:07 AM, Brandon Metcalf <[EMAIL PROTECTED]> 
> wrote:
>  m> > I have a need to keep a PostgreSQL and MySQL table synchronized.  My
>  m> > thoughts are to use triggers on the pgsql side to manipulate the MySQL
>  m> > table when data in the pgsql table is changed.  I also plan on using
>  m> > PL/Perl to write the functions.
>  m> >
>  m> > Are there better ways to achieve this?
>
>  m> have you looked at dbi-link?
>
>
> No, but just did and it looks very cool.  Have you used it before?  If
> so, would you mind if I ask you some questions?  We could take it off
> the list if that's more appropriate.
>
> --
> Brandon
>
>



-- 
Inscribete en las listas de APESOL
http://www.apesol.org/listas.php

Visita
http://www.eqsoft.net
Manuales, noticias, foros, etc.



Re: [GENERAL] Question about VACUUM

2008-10-25 Thread Scott Marlowe
On Sat, Oct 25, 2008 at 9:21 AM, Joey K. <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I inherited an 8.1.x database of size 200GB.  This database hasn't been
> maintained much (autovacuum is enabled, though).
>
> I've been trying to VACUUM this db for the past few days for a couple of
> hours/day. The server runs 24x7, so continuous maintenance is not possible
> (this is the last resort).

I'm not sure what you mean here.  With the settings you have available
for vacuum costing, you can run vacuums almost continuously with no
serious impact on server performance.

>
> My questions are,
>
>> Is it possible to estimate how long VACUUM on a table might take?

Only through trial and error, and a bit of looking at the size of the
files, how much dead space there is, how many indexes you have, etc.

>> The table size is growing as "VACUUM" is being performed. I assume I need
>> to reindex after VACUUM is complete.

Yes, vacuum reclaims no space in its own right.  It marks pages as
available.  Until it's done, the pages aren't available, so the table
must grow while the first vacuum to free up space is being run.

>> I run VACUUM from psql and I Ctrl-C it to stop it; is this acceptable?

Yes. If I remember correctly, it will leave bloated indexes
behind if it's a VACUUM FULL.

> The server is an Intel Xeon 3.0GHz with 4GB RAM and RAID-5 (Yes I know).

That will limit your ability to vacuum, as RAID-5 costs a lot to write to.

I'd look at using a backup server to migrate to 8.3 and improve
maintenance procedures.  Look at Slony for this.  It's an amazing live
cluster migration tool.



Re: [GENERAL] again...

2008-10-25 Thread Ati Rosselet
I wish... no such luck.  No virus scanner on the back end (nothing gets on
the server except the database :))

It seems to be a race condition where the old file is closed before the new
file is opened, and a logging attempt fails right at that time. The new
file is created just fine, and from then until the next 10MB rollover,
all is fine.  The reason I'm guessing this is that it doesn't occur at every
file rollover, just sometimes, indicating (to me at least) that it's a
timing problem.  I tried to get the code and figure out what it's doing,
but... well... um... I wasn't really able to figure it out :(

>
> Maybe you've got a virus scanner that's locking your log files on you?
>


Re: [GENERAL] again...

2008-10-25 Thread Scott Marlowe
On Sat, Oct 25, 2008 at 9:12 AM, Ati Rosselet <[EMAIL PROTECTED]> wrote:
> I'm still getting a lot of these entries in my eventlog whenever I have a
> reasonably large amount of logging:
>
> Event Type:Error
> Event Source:PostgreSQL
> Event Category:None
> Event ID:0
> Date:10/22/2008
> Time:9:36:28 AM
> User:N/A
> Computer:--
> Description:
> could not write to log file: Bad file descriptor
>
> production server 8.3.3 (tried with 8.3.4 - same thing), win 2003 server.
> I'd send this to psql-bugs, but I can't figure out how to reproduce it, and
> apparently no one else has seen this?   Or is no one else running Postgres
> on Win 2003??? (survey says??)
>
> The timing of each appears to be when the log is rolled over (size limited
> to 10MB) and Postgres attempts to write a log entry at the same time:
>
> Any ideas? help? How to proceed???

Maybe you've got a virus scanner that's locking your log files on you?



Re: [GENERAL] Question about VACUUM

2008-10-25 Thread Hannes Dorbath
Joey K. wrote:
>> Is it possible to estimate how long VACUUM on a table might take?
> 
>> The table size is growing as "VACUUM" is being performed. I assume I need
> to reindex after VACUUM is complete.
> 
>> I run VACUUM from psql and I Ctrl-C it to stop it; is this acceptable?
> 
>> maintenance_work_mem is at 64MB and shared_buffers at 2GB. Should I
> dedicate more memory to maintenance_work_mem to speed up VACUUM?
> 
> The server is an Intel Xeon 3.0GHz with 4GB RAM and RAID-5 (Yes I know).
> 
> Thanks in advance,
> Steve

Hello Joey,

an alternative to VACUUM might be to use CLUSTER on the big tables. The
documentation on that feature is here:
http://www.postgresql.org/docs/8.3/static/sql-cluster.html

To my knowledge VACUUM FULL might add to index bloat; I'm not sure about
plain VACUUM. I'd increase maintenance_work_mem to at least 256 MB and
lower shared_buffers a bit.

It should be OK to Ctrl-C a vacuum task in psql.


-- 
Best regards,
Hannes Dorbath



[GENERAL] Question about VACUUM

2008-10-25 Thread Joey K.
Hello all,

I inherited an 8.1.x database of size 200GB.  This database hasn't been
maintained much (autovacuum is enabled, though).

I've been trying to VACUUM this db for the past few days for a couple of
hours/day. The server runs 24x7, so continuous maintenance is not possible
(this is the last resort).

My questions are:

> Is it possible to estimate how long VACUUM on a table might take?

> The table size is growing as "VACUUM" is being performed. I assume I need
to reindex after VACUUM is complete.

> I run VACUUM from psql and I Ctrl-C it to stop it; is this acceptable?

> maintenance_work_mem is at 64MB and shared_buffers at 2GB. Should I
dedicate more memory to maintenance_work_mem to speed up VACUUM?

The server is an Intel Xeon 3.0GHz with 4GB RAM and RAID-5 (Yes I know).

Thanks in advance,
Steve


[GENERAL] again...

2008-10-25 Thread Ati Rosselet
I'm still getting a lot of these entries in my eventlog whenever I have a
reasonably large amount of logging:

Event Type:Error
Event Source:PostgreSQL
Event Category:None
Event ID:0
Date:10/22/2008
Time:9:36:28 AM
User:N/A
Computer:--
Description:
could not write to log file: Bad file descriptor

production server 8.3.3 (tried with 8.3.4 - same thing), win 2003 server.
I'd send this to psql-bugs, but I can't figure out how to reproduce it, and
apparently no one else has seen this?   Or is no one else running Postgres on
Win 2003??? (survey says??)

The timing of each appears to be when the log is rolled over (size limited to
10MB) and Postgres attempts to write a log entry at the same time:

Any ideas? help? How to proceed???
Cheers
Ati

My log settings from postgresql.conf:

#--
# ERROR REPORTING AND LOGGING
#--
# - Where to Log -
log_destination = 'stderr'        # Valid values are combinations of
                                  # This is used when logging to stderr:
logging_collector = on            # Enable capturing of stderr and csvlog

# These are only used if logging_collector is on:
#log_directory = 'pg_log'         # directory where log files are written,
                                  # can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'  # log file name pattern,
                                  # can include strftime() escapes
#log_truncate_on_rotation = off   # If on, an existing log file of the
                                  # same name as the new log file will be
                                  # truncated rather than appended to.
                                  # But such truncation only occurs on
                                  # time-driven rotation, not on restarts
                                  # or size-driven rotation.  Default is
                                  # off, meaning append to existing files
                                  # in all cases.
#log_rotation_age = 1d            # Automatic rotation of logfiles will
                                  # happen after that time.  0 to disable.
log_rotation_size = 10MB          # Automatic rotation of logfiles will
                                  # happen after that much log output.
                                  # 0 to disable.

# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'


# - When to Log -

#client_min_messages = notice     # values in order of decreasing detail:

#log_min_messages = notice        # values in order of decreasing detail:

#log_error_verbosity = default    # terse, default, or verbose messages

#log_min_error_statement = error  # values in order of decreasing detail:
#log_min_duration_statement = -1  # -1 is disabled, 0 logs all statements
                                  # and their durations, > 0 logs only
                                  # statements running at least this time.

#silent_mode = off                # DO NOT USE without syslog or
                                  # logging_collector
                                  # (change requires restart)

# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = off
#log_checkpoints = off
log_connections = on
log_disconnections = on
#log_duration = off
#log_hostname = off
#log_line_prefix = '%t '          # special values:

log_line_prefix = '%t [%h:[EMAIL PROTECTED] '
#log_lock_waits = off             # log lock waits >= deadlock_timeout
log_statement = 'all'             # none, ddl, mod, all
#log_temp_files = -1              # log temporary files equal or larger
                                  # than specified size;
                                  # -1 disables, 0 logs all temp files
#log_timezone = unknown           # actually, defaults to TZ environment


Re: [GENERAL] Storing questionnaire data

2008-10-25 Thread Aaron
You may want to look at this article for some ideas:
http://www.varlena.com/GeneralBits/110.php
The article talks about doing dynamic schema design for online
surveys... it's pretty interesting.

Aaron Thul
http://www.chasingnuts.com



On Wed, Oct 22, 2008 at 10:59 AM, Thom Brown <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is there any optimal and generally agreed way to store questionnaire
> data in a database?
>
> The questionnaire would have to support both of the following:
>
> - different question types (e.g. What is your name? (free form text)
> Are you a smoker? (yes/no checkbox)  Are you male or female? (radio
> buttons)  Select which country you are from (drop-down box).)
>
> - multiple paths (e.g. if a user were asked what their primary mode of
> transport is and they answered "a motorbike" they would be asked if
> they carry pillion passengers and how many ccs the engine is, whereas
> if they said something like walking they would be asked how far they
> walk to work and how long it takes)
>
> I have previously had a questionnaire which had 5 tables, questions
> and answers and question types, questionnaire and results.
>
> questions
> ===
> id (serial) [PK]
> question (text)
> question_type (int)
>
> question_types
> ===
> id (serial) [PK]
> description (text)
>
> answers
> ==
> id (serial) [PK]
> answer (text)
> next_question_id (int) [FK to questions.id]
>
> questionnaire
> ==
> id (serial) [PK]
> questionnaire_date (timestamp)
>
> results
> =
> id (serial) [PK]
> questionnaire_id [FK to questionnaire.id]
> question_id (int) [FK to questions.id]
> answer_id (int)
> answer_text (text)
>
> If the question was for free form text, the answer_id would be 0,
> which seems a bit kludgey to me.  Plus because an answer ID can't be
> required due to free form text answers, I can't enforce a foreign key.
>
> Is there a nice elegant solution anyone knows of?
>
> Thanks
>
> Thom
>
>



Re: [GENERAL] statement_timeout by host?

2008-10-25 Thread Dave Fry
That's definitely an option, and that's what I had slated to use as my
backup plan.  But I wanted to make sure there wasn't a way to do it by
host before I went down that path, as it would be a somewhat cleaner fit in
our setup.

Thanks!
Dave

On Tue, Oct 21, 2008 at 1:57 PM, Jeff Davis <[EMAIL PROTECTED]> wrote:

> On Mon, 2008-10-20 at 17:22 -0700, Dave Fry wrote:
> > Hello.   Is it possible to set a statement_timeout by host, such that
> > requests from only that host will be limited?
> >
>
> Can you set it by user instead, and have different hosts connect using
> different users?
>
> ALTER USER foo SET statement_timeout='2s';
>
> Regards,
> Jeff Davis
>
>


Re: [GENERAL] Annoying Reply-To

2008-10-25 Thread Alvaro Herrera
Bruce Momjian wrote:

> I think this is the crux of the problem --- if I subscribed to multiple
> email lists, and some have "reply" going to the list and some have
> "reply" going to the author, I would have to think about the right reply
> option every time I send email.

That's not really the case.  I always use "reply to all" in mutt (the
"g" key) and it always works; in all the lists I subscribe to (including
those which set Reply-To) and in personal email too.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] partitioning a table containing millions of records

2008-10-25 Thread Nikolas Everett
There is no fast way to split an existing table into partitions.  Create a
new parent table, create partitions, create the insert trigger, and then
INSERT INTO newparent SELECT * FROM unpartitioned.  You may want to split
that into groups if you have many millions of rows.
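The batched copy step can be sketched as follows. It is illustrated with SQLite through Python's sqlite3 module just so it runs anywhere; in PostgreSQL the target would be the new parent table, whose insert trigger routes each row to the right child partition. All table names here are invented for the demo:

```python
import sqlite3

# Migrate rows from an unpartitioned table into a new table in batches
# keyed on id ranges, instead of one giant INSERT ... SELECT.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE unpartitioned (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE newparent     (id INTEGER PRIMARY KEY, payload TEXT);
""")
conn.executemany("INSERT INTO unpartitioned VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 1001)])

BATCH = 250
max_id = conn.execute("SELECT max(id) FROM unpartitioned").fetchone()[0]
for lo in range(1, max_id + 1, BATCH):
    # Each batch commits as its own transaction, keeping locks short.
    with conn:
        conn.execute(
            "INSERT INTO newparent SELECT * FROM unpartitioned "
            "WHERE id >= ? AND id < ?", (lo, lo + BATCH))

copied = conn.execute("SELECT count(*) FROM newparent").fetchone()[0]
print(copied)  # 1000
```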


On Sat, Oct 25, 2008 at 6:47 AM, Andreas Jochem <[EMAIL PROTECTED]> wrote:

> Hi,
> I have a question concerning the topic of partitioning.
>
> I have a table with millions of records and I would like to partition it. I
> have already read the documentation but it could not answer my question.
>
> My question is the following:
> Do I have to create the master and child tables and so on before inserting
> the data, or is it also possible to partition a table which already contains
> millions of records? And how can I do this? Has anybody done this
> before?
>
> Thanks.
>
> Andi
>
>
>
>


[GENERAL] partitioning a table containing millions of records

2008-10-25 Thread Andreas Jochem

Hi,
I have a question concerning the topic of partitioning.

I have a table with millions of records and I would like to partition 
it. I have already read the documentation, but it could not answer my 
question.

My question is the following:
Do I have to create the master and child tables and so on before 
inserting the data, or is it also possible to partition a table which 
already contains millions of records? And how can I do this? Has 
anybody done this before?


Thanks.

Andi





Re: [GENERAL] Order by with spaces and other characters

2008-10-25 Thread Tomasz Myrta

mike stanton wrote on 24.10.2008 21:03:

Hello everyone.
We have a simple problem: we have keys that include blanks and 
various other commonly used characters like ",", ";" and "-". For some 
reason, the select we have, nothing complicated, ignores these "special" 
characters and happily sorts by the A-z range. How do we sort by, 
say, the ASCII 32-125 range?

This must be a known issue; perhaps we need to configure a parameter.


Sorting behavior depends on the database locale. You can override it and 
make Postgres sort output data bytewise with "ORDER BY xxx USING ~<~".

This is the default when a database is initialized with the "C" locale.
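As a quick illustration of what byte-order ("C" locale) sorting does with such keys, here is a small sketch in Python, whose default string comparison is by code point and therefore matches the C-locale order for ASCII (the key data is invented):

```python
# Keys containing a blank, "-", and ";" after the same leading letter.
# Code-point order ranks the second character: ' ' (32) < '-' (45)
# < ';' (59) < 'b' (98), so punctuation-bearing keys sort first --
# unlike a language-aware locale sort, which typically gives spaces
# and punctuation little weight and groups these keys near "ab".
keys = ["ab", "a-b", "a b", "a;b"]
bytewise = sorted(keys)
print(bytewise)  # ['a b', 'a-b', 'a;b', 'ab']
```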

--
Regards,
Tomasz Myrta



Re: [GENERAL] How with to emulate function TRANSFORM from Access in Postgress?

2008-10-25 Thread Tomasz Myrta

[EMAIL PROTECTED] wrote on 24.10.2008 12:17:

Hi,

How can I emulate the TRANSFORM function from Access in Postgres?

Transform in Access: http://msdn.microsoft.com/en-us/library/bb208956.aspx


You can find some useful crosstab functions here:
http://www.postgresql.org/docs/8.3/interactive/tablefunc.html
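Besides the tablefunc crosstab() functions, a TRANSFORM-style pivot can also be emulated in plain SQL with conditional aggregation, which works in any database. A minimal sketch, run here against SQLite through Python's sqlite3 module (all table and column names are invented for the demo):

```python
import sqlite3

# Pivot a toy sales table so each quarter becomes its own column,
# the way Access's TRANSFORM ... PIVOT would.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, quarter TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('north', 'Q1', 10), ('north', 'Q2', 20),
        ('south', 'Q1', 5),  ('south', 'Q2', 7);
""")

rows = conn.execute("""
    SELECT region,
           SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()

print(rows)  # [('north', 10, 20), ('south', 5, 7)]
```

The downside compared to TRANSFORM is that the output columns must be listed explicitly, one CASE per pivoted value.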

--
Regards,
Tomasz Myrta



Re: [GENERAL] Query m:n-Combination

2008-10-25 Thread Tomasz Myrta

Ludwig Kniprath wrote on 24.10.2008 11:45:



Join-table
mn_2_r_id   mn_2_c_id
1   1
1   2
1   3
1   4
2   1
3   2
3   5
4   3
...

(in the real database this relation is a GIS relation with thousands of 
rivers and communities, related by a spatial join, but the problem is the 
same...)


I want to know: which river runs through communities 1, 2, 3 *and* 4?
You can see the solution by just looking at the data above (only 
river 1 runs through all these communities), but how do you query this 
in SQL?


select mn_2_r_id from join_table
where mn_2_c_id in (1,2,3,4)
group by mn_2_r_id having count(*)=4

(4 = how many communities we should find)
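For anyone who wants to check this, here is the query run against the sample data from the question (SQLite through Python's sqlite3 module; the SQL itself is the same in PostgreSQL). One caveat: count(*) assumes the join table has no duplicate (river, community) pairs; since a spatial join might produce duplicates, count(DISTINCT mn_2_c_id) is the safer form used below:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE join_table (mn_2_r_id INTEGER, mn_2_c_id INTEGER);
    INSERT INTO join_table VALUES
        (1,1),(1,2),(1,3),(1,4),(2,1),(3,2),(3,5),(4,3);
""")

# Relational division: keep only rivers matched by all four communities.
rivers = conn.execute("""
    SELECT mn_2_r_id FROM join_table
    WHERE mn_2_c_id IN (1, 2, 3, 4)
    GROUP BY mn_2_r_id
    HAVING count(DISTINCT mn_2_c_id) = 4
""").fetchall()

print(rivers)  # [(1,)] -- only river 1 runs through all four
```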

--
Regards,
Tomasz Myrta
