Re: [GENERAL] problem with recreating database with export

2012-09-27 Thread Dennis Gearon
note to future
To anyone reading this in the future: if you have problems importing a plain 
text database export, it usually fails when run as:
psql -d some_dbase -f the_backup.sql. I don't know why. What works is doing 'cd 
./the_files_directory', going INTO the psql command line, then issuing '\i 
the_backup.sql', and it's really fast: 8 seconds for a 128 MB file.

PS: do this as user 'postgres' on the system.
/note to future
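A minimal transcript of the working approach described in the note (the directory and file names are placeholders):

```sql
-- inside psql, connected as the postgres user to the target database:
\cd the_files_directory
\i the_backup.sql
```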

Probably you were right, it was pg_dump. It says 'database dump' in the file at 
the top.

As far as which version it came from, that was 2 years ago, I couldn't tell 
you. And it doesn't say in the file.

I successfully imported it into 8.4.11.

The errors were always 'illegal command', tens of thousands of them. As far as 
what I did 2 years ago: I can't remember 2 days ago, sorry about that ;-) Yes, 
it is a plain text dump. I don't have huge databases yet, so to make it easier 
to go between versions, I use a text backup.

Hope that answers your questions.

 Dennis Gearon


Never, ever approach a computer saying or even thinking I will just do this 
quickly.





From: Adrian Klaver adrian.kla...@gmail.com
To: Dennis Gearon gear...@sbcglobal.net
Cc: pgsql-general@postgresql.org
Sent: Thu, September 27, 2012 6:34:31 AM
Subject: Re: [GENERAL] problem with recreating database with export

On 09/26/2012 08:44 PM, Dennis Gearon wrote:

 It turns out that I had made the export from psql, a text based export.

I just do not see how you can be exporting schema definitions from within psql. 
I still say pg_dump is involved somewhere.

 
 So I read that it was actually needing to be imported using '\i
 filename_in_local_directory'

What about the other questions I posed?:

What versions of Postgres are you using?

Are you going from one version to another?

Are you in fact talking about doing a plain text pg_dump?

New questions:

To be clear what are the actual commands used to create the export?

Also what are the actual errors you get when trying to import the data?


 Dennis Gearon



-- Adrian Klaver
adrian.kla...@gmail.com


Re: [GENERAL] problem with recreating database with export

2012-09-27 Thread Dennis Gearon
Not needed if you are logged in as user postgres in your OS when you enter 
psql. At least, with the pg_hba.conf file the way it is stock.

But a good point, though. I wonder if I would then have to know the DATABASE 
password for postgres if I wasn't user postgres in the OS?

 Dennis Gearon







From: Alban Hertroys haram...@gmail.com
To: Dennis Gearon gear...@sbcglobal.net
Cc: Adrian Klaver adrian.kla...@gmail.com; pgsql-general@postgresql.org
Sent: Thu, September 27, 2012 8:13:17 AM
Subject: Re: [GENERAL] problem with recreating database with export

On 27 September 2012 16:55, Dennis Gearon gear...@sbcglobal.net wrote:
 note to future
 To anyone reading this in the future, if you have problems importing a plain
 text database export, it is usually impossible to do:
 psql -d some_dbase -f the_backup.sql. I don't know why.

Looks like you forgot -U postgres

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


Re: [GENERAL] problem with recreating database with export

2012-09-27 Thread Dennis Gearon
The only thing that I can offer is that it works from inside psql as user 
postgres but NOT from invoking psql with -f filename from the command line as 
user postgres. I have to admit that I did not use -U as OS user postgres, but 
that shouldn't be necessary with a stock pg_hba.conf file.

The errors are 99% 'illegal command', associated with '\N' values or '\.' 
ending a COPY command.

Sent from Yahoo! Mail on Android



[GENERAL] problem with recreating database with export

2012-09-26 Thread Dennis Gearon
I have a database export done via COPY. It's all the tables, constraints, blah 
blah.

I know (a couple of years ago) that I just used the defaults. REALLY fast btw, 
I remember.

All the table create commands work fine.

I cannot get the whole file to import using (as postgres) psql -d database_name 
-f the_file.sql

I get errors on \N and various other problems. I've tried dividing the file up 
into:

table creation
one 'copy' of a table at a time.

One, a 35 MB file, seems to have errors on semicolons.

 Dennis Gearon




Re: [GENERAL] problem with recreating database with export

2012-09-26 Thread Dennis Gearon
It turns out that I had made the export from psql, a text based export.

So I read that it actually needed to be imported using '\i 
filename_in_local_directory'

 Dennis Gearon




Re: [GENERAL] UUID column as pimrary key?

2011-01-06 Thread Dennis Gearon
Hey guys and gals,
 As the originator of this topic, I've received a lot of good answers, 
opinions, and advice. Thank  you.

I'm not sure that more conversation on this will go anywhere but down. It 
seems that UUID vs. Integer is one of those 'values' subjects, like:

  Sexual practices and preferences.
  The right way to raise children.
  How to handle money in a family.
  What are the definitions of a woman and a man?
  What religion is best and correct?
  Which political party really has the best interests of the country as a goal, and is the better party?
  What's an appropriate lifestyle?

Uh . . . my mother never taught me this (deprived childhood), but I learned the 
hard way: NEVER discuss these things in public (especially drunk in a bar in a 
part of town at a different socioeconomic level than yours), but learn what any 
possible mate thinks of them ;-)

And UUID vs. Integer? Seems to be something not to discuss too long in a 
newsgroup. Seriously, there's some emotions here on this subject that as the 
originator of this topic, I'd like to see calm down and get back to productive 
work. Is it a deal, guys and gals?

   This newsgroup I always brag about to my other geek friends: 'Those people 
on the postgres newsgroup are the steadiest, most helpful, most logical people 
I know on the Internet. They don't jump on fancy new trends nor hide technology 
in market speak. Their work revolves around the best tested, theory-based 
practices. They just make the tech world a better place with some of the most 
reliable, best supported OSS out there.' And you guys and gals ARE that 
description.

 Dennis Gearon


Signature Warning

It is always a good idea to learn from your own mistakes. It is usually a 
better idea to learn from others' mistakes, so you do not have to make them 
yourself. from 'http://blogs.techrepublic.com.com/security/?p=4501tag=nl.e036'


EARTH has a Right To Life,
otherwise we all die.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UUID column as pimrary key?

2011-01-05 Thread Dennis Gearon
Yes, the ID is generated before the database is contacted, in my case anyway.

The type of UUID that I want is a hybrid, so I would have to write a stored 
procedure and then an insert trigger to get the UUID. Not rocket science, I 
just have more on my plate than I can handle. Since PHP is my 'main thing', I 
quickly wrote it in that. Plus, it makes it more database agnostic.

I don't get next/currval behavior, but I don't think I'll need it.

BTW, switching from Postgres? Not likely anytime soon. I'd have to be up in the 
500M+ rows and be in the data warehousing/map-reducing arena before I'd 
consider THAT. And there's 'flavors' of Postgres that will do that, anyway.



Dennis Gearon





- Original Message 
From: Craig Ringer cr...@postnewspapers.com.au
To: Radosław Smogura rsmog...@softperience.eu
Cc: Dennis Gearon gear...@sbcglobal.net; pgsql-general@postgresql.org
Sent: Wed, January 5, 2011 2:50:11 AM
Subject: Re: [GENERAL] UUID column as pimrary key?

On 01/05/2011 07:31 PM, Radosław Smogura wrote:

 * you have your id, before executing query, (in contrast to all this
 autoincrement) so you may put it in dependant rows

Do you mean that with a UUID, you don't need to talk to the database at all, 
you can generate an ID with no interaction with / involvement with the database 
at all? Because other than that, there's not much difference in how you 
normally work with them.


With a sequence, you might:

CREATE SEQUENCE x_id_seq;
CREATE TABLE x (
    id integer PRIMARY KEY DEFAULT nextval('x_id_seq'),
    y integer
);
INSERT INTO x(y) VALUES (1);


With a uuid, you'd:

CREATE TABLE x (
    id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    y integer
);
INSERT INTO x(y) VALUES (1);


In either case, you can explicitly call the generator function for seq/uuid - 
nextval(seqname) or uuid_generate_v4() respectively - or you can omit the PK 
column in your inserts and let the database generate it.

 Personally I prefer pooled incremental id's. Fast, unique, you have Id
 before query - but you need to write code by self.

Many libraries / ORMs / etc that interact with Pg will happily take care of 
this for you. In fact, I had to fight to convince Hibernate that I *didn't* 
want it to increment all my counters in steps of 50.

--
Craig Ringer



[GENERAL] UUID column as pimrary key?

2011-01-04 Thread Dennis Gearon
I haven't been able to find anywhere, easily, in the documentation using 
google, a list of allowed data types for primary keys.

So, UUIDs can be primary keys?
Any issues with them on sorting or paging of index tables, etc.?

Also, the documentation says that UUIDs are 128-bit values, but never 
explicitly says that's how they're stored. Nor does it use one of the nice, 
blue-headered tables for UUID (or ENUM) showing storage and other attributes as 
it does for numeric, character, boolean, date/time, binary, monetary, 
geometric, or network types.
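For reference, a uuid column can indeed be a primary key; it is an ordinary indexable type. A minimal sketch, assuming the contrib module that provides uuid_generate_v4() is installed (table and column names are illustrative):

```sql
-- uuid works as a PK like any other indexable type;
-- the uuid-ossp contrib module supplies the generator functions
CREATE TABLE items (
    id   uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    name text NOT NULL
);
```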



 Dennis Gearon






Re: [GENERAL] uuid, COMB uuid, distributed farms

2011-01-02 Thread Dennis Gearon
One of the issues with using UUIDs for primary keys, or even in a secondary key 
using a unique index, would be paging of the index because of the randomness of 
the values. This is what I have read; is that true for Postgres?

On the tests in this article, doing large amounts of consecutive inserts, 
generating new UUIDs, and having the indexes get updated caused a 30 (THIRTY) 
times increase in time to do inserts. So he came up with a function that caused 
the lowest characters of the UUIDs to change slowly according to time in 
microseconds. This brought the time cost of using UUIDs for PKs, for inserts 
and joins BOTH, to within only 10% of SEQUENCEs and their integer primary keys.

The theory being that most of the distribution in the index pages was due to 
the lowest characters in the string.

I've made a function in PHP that makes this same 'COMB UUID', as he coined the 
term. The output of a sequence of them, made as fast as my box could generate 
and ECHO them, is below. When NOT echoing them, the lowest characters change 
VERY slowly. 1 MILLION of them can be generated in 3 milliseconds on my home 
box (which is not doing much else); to make 1 MILLION of them and store them in 
PHP arrays takes 3 seconds.

My question, I guess repeated, is: does Postgres do index distribution the same 
way as the article suggests? So would keeping the lower 12 characters 
sequential and slowly changing keep index pages in memory longer?


sample values

58cf3d06-3d94-401d-9b34-498e87a3e469
981ab472-21fe-4b59-a224-498e87a3e473
df2d2298-b549-4b8e-86d5-498e87a3e478
5cb92685-f9d3-473a-a5e8-498e87a3e47c
6c4b1712-dfe8-447b-8e94-498e87a3e480
e9ab677a-812c-484e-ae71-498e87a3e484
2812c003-6e0b-4974-94ef-498e87a3e489
851dc2d8-9125-4410-a952-498e87a3e48d
a8c695c9-6c43-4847-9bda-498e87a3e491
c2d0d9f4-37e3-417a-9611-498e87a3e495
3ddfd416-6568-4c2c-8ffe-498e87a3e49a
471bf59a-f926-40b5-9990-498e87a3e49e
9d6a9876-3ba7-4133-a60f-498e87a3e4a2
01859d5f-3cde-4b83-8a1b-498e87a3e4a6
85321a94-5ae1-4b92-93f2-498e87a3e4aa
e74e75ae-d30f-4ba3-b8b0-498e87a3e4ae
62ac5bc8-1498-4d2a-b8ef-498e87a3e4b3
fa597e9c-ae35-461e-b15f-498e87a3e4b7
f3c46abf-a6b0-4c9c-b22d-498e87a3e4bb
fc868307-d1b1-4253-91d7-498e87a3e4bf
b79679a9-4359-42a1-bf46-498e87a3e4c3
d91bf8cb-e3be-4446-be73-498e87a3e4c7
bec9e0a1-cd85-4f0c-b35b-498e87a3e4cc
0e0ea724-e145-4932-b0df-498e87a3e4d0
30ab3e05-26e3-44af-a82f-498e87a3e4d4
 
/sample values
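For illustration, a rough SQL sketch of the same COMB idea. The function name and the 48-bit microsecond tail are my assumptions, and unlike a proper generator it does not set the UUID version/variant bits:

```sql
-- COMB-style UUID sketch: 20 random hex digits followed by 12 hex digits
-- of a microsecond timestamp, so consecutive values share a slowly
-- changing tail. Version/variant bits are NOT set correctly here.
CREATE OR REPLACE FUNCTION comb_uuid() RETURNS uuid AS $$
    SELECT (substr(md5(random()::text), 1, 20) ||
            lpad(to_hex((extract(epoch FROM clock_timestamp())
                         * 1000000)::bigint % 281474976710656),  -- 2^48
                 12, '0')
           )::uuid;
$$ LANGUAGE sql VOLATILE;
```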

Dennis Gearon






Re: [GENERAL] uuid, COMB uuid, distributed farms

2011-01-02 Thread Dennis Gearon
Good catch! The article is:

http://www.informit.com/articles/printerfriendly.aspx?p=25862

 Dennis Gearon





- Original Message 
From: bricklen brick...@gmail.com
To: Dennis Gearon gear...@sbcglobal.net
Sent: Sun, January 2, 2011 8:49:13 PM
Subject: Re: [GENERAL] uuid, COMB uuid, distributed farms

On Sun, Jan 2, 2011 at 8:26 PM, Dennis Gearon gear...@sbcglobal.net wrote:
 On the tests in this article, doing large amounts of consecutive inserts,
 generating new UUIDs, and having the indexes getting updated caused a 30
 (THIRTY) times increase in time to do inserts. So he came up with a function
 that caused the lowest characters of the UUIDs to change slowly according to
 time in microseconds. This reduced the time to using UUIDs for PKs for inserts
 and joins BOTH by only 10% relative to SEQUENCES and their integer primary 
keys.

I think you forgot to link to the article?




Re: [GENERAL] The first dedicated PostgreSQL forum

2010-11-13 Thread Dennis Gearon
One thing that makes me agree with this is the antiquated format of the digest 
emails.

I hate google mail, but I love the google groups application. If postgres had a 
combination online forum/google groups thing (if there is such a thing), 
communication would be much improved.

Now, I KNOW that database types are conservative by nature, almost as much as 
accountants. But a little change is good now and then ;-)


 Dennis Gearon
From: Guillaume Lelarge guilla...@lelarge.info

To: r...@iol.ie
Cc: Elliot Chance elliotcha...@gmail.com, pgsql-general@postgresql.org
Subject: Re: The first dedicated PostgreSQL forum
Message-ID: 4cded73f.90...@lelarge.info

Le 13/11/2010 17:43, Raymond O'Donnell a écrit :
 On 13/11/2010 03:43, Elliot Chance wrote:
 Hi everyone,

 This is my first post on the mailing list :)

 As of today I have started the first dedicated postgres forum at:
 http://forums.postgresql.com.au
 
 Hi there,
 
 Without in any way running down the effort you've put into the above,
 I'm just curious as to what it's meant to achieve that the mailing list
 and wiki don't.
 

Actually, you could be quite surprised on the number of people not
willing to use a mailing list, and ready to use web forums.

Just to take an example, pgsql-fr-generale (the french general mailing
list) is really quiet. Usually not more than 10 threads per month. But
the french web forums (http://forums.postgresql.fr) are much much more used.


-- 
Guillaume
http://www.postgresql.fr
http://dalibo.com



[GENERAL] help with design of the 'COUNT(*) in separate table schema

2010-10-20 Thread Dennis Gearon
Regarding the previously discussed triggers on tables to keep track of count:
http://www.varlena.com/GeneralBits/120.php
http://www.varlena.com/GeneralBits/49.php
from article
CREATE OR REPLACE FUNCTION count_rows()
RETURNS TRIGGER AS
'
   BEGIN
      IF TG_OP = ''INSERT'' THEN
         UPDATE rowcount
            SET total_rows = total_rows + 1
            WHERE table_name = TG_RELNAME;
      ELSIF TG_OP = ''DELETE'' THEN
         UPDATE rowcount
            SET total_rows = total_rows - 1
            WHERE table_name = TG_RELNAME;
      END IF;
      RETURN NULL;
   END;
' LANGUAGE plpgsql;
/from article

Wouldn't something like this need row-locking (SELECT for UPDATE) in order to 
serialize the execution of all triggers?
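For context, a sketch of the trigger wiring that would go with that function (trigger and table names are my assumptions). Note that an UPDATE already takes a row-level lock on the matching rowcount row, which is what serializes concurrent triggers without an explicit SELECT ... FOR UPDATE:

```sql
-- one trigger per counted table; concurrent inserts/deletes serialize on
-- the row lock the UPDATE inside count_rows() takes on the rowcount row
CREATE TRIGGER count_rows_trigger
    AFTER INSERT OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE count_rows();
```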

Dennis Gearon




[GENERAL] select count(*) status these days

2010-10-18 Thread Dennis Gearon
I'm about to launch an in-house 'Alpha' stage project. One of the marketing 
driven outputs on the page is filled by a:

SELECT 
  COUNT(*)
WHERE
  date_field IS NOT NULL AND
  date_field < :todays_date;

Last I heard, this kind of query is kind of slow on Postgres in particular 
(love Postgres otherwise). Is this still true?

Is there any architectural way to speed it up? I'd actually like to run it 
every 2-60 seconds to update a counter on a page for the marketing guy (which, 
for once, is me :-)
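One architectural option, assuming the predicate is reasonably selective (the table and index names below are placeholders), is a partial index that covers exactly the rows the counter touches, so the planner can avoid a full sequential scan:

```sql
-- partial index over just the rows the counter query qualifies
CREATE INDEX counter_date_idx ON some_table (date_field)
    WHERE date_field IS NOT NULL;

-- the counter query can then use an index scan when selective enough
SELECT count(*)
FROM some_table
WHERE date_field IS NOT NULL AND date_field < '2010-10-18';
```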
  
Dennis Gearon




[GENERAL] Wrong starting value of sequences after doing pg_dump

2010-10-08 Thread Dennis Gearon
I did a pg_dump of a database.

All the sequences ended up with starting values of 1. Since this is not an 
empty database, this is all wrong.

Is there anything I need to do at command time to change that or is it a bug?

\set in psql gives:

PostgreSQL 8.4.5 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 
4.4.3-4ubuntu5) 4.4.3, 64-bit
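If a restored sequence really is out of sync with its table, it can be resynced by hand; the sequence, table, and column names below are placeholders:

```sql
-- set the sequence to the current maximum id in its table
SELECT setval('mytable_id_seq',
              (SELECT COALESCE(MAX(id), 1) FROM mytable));
```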

Dennis Gearon




[GENERAL] Basic SQL question and kudos

2010-10-05 Thread Dennis Gearon
I need to get a join across about 8 tables to only return rows for the MAX 
date in one of the tables, for all otherwise identical return results. I am 
trying to get the most recent edit of normalized records that are composed of 
elements from all the tables. There could be 1 to 100+ successive edits to the 
normalized/multi-table record. There will be approximately 65,000 total 
records for now, 100,000,000s later.

If I make one of the values in the select statement be MAX(the date), does that 
automatically do grouping on 'the date' or will I only get ONE row total 
returned?

KUDOS, I almost never need to write the postgres group . . . because it just 
works. It's only using postgres that I ever even read the digest that I 
receive. 

Soon I will have to write more to get performance tips. You guys are great on 
this list and seem to always have answers, Thx. 
Dennis Gearon

Signature Warning

EARTH has a Right To Life,
  otherwise we all die.

Read 'Hot, Flat, and Crowded'
Laugh at http://www.yert.com/film.php



Re: [GENERAL] Basic SQL question and kudos

2010-10-05 Thread Dennis Gearon
Thanks for the start. I'll start on it.


Dennis Gearon



--- On Tue, 10/5/10, Craig Ringer cr...@postnewspapers.com.au wrote:

 From: Craig Ringer cr...@postnewspapers.com.au
 Subject: Re: [GENERAL] Basic SQL question and kudos
 To: Dennis Gearon gear...@sbcglobal.net
 Cc: pgsql-general@postgresql.org
 Date: Tuesday, October 5, 2010, 9:13 PM
 On 10/06/2010 12:31 AM, Dennis Gearon
 wrote:
  I need to get a join across about 8 tables to only
 return rows for the MAX date in one of the tables for all
 otherwise identical return results. I am trying to get the
 most recent edit of normalized records that are composed of
 elements from all the tables. There could be 1 to 100+
 successive edits to the normalized/multi table record. The
 will be approximately 65,000 total records for now,
 100,000,000s later.
  
  If I make one of the values in the select statement be
 MAX(the date), does that automatically do grouping on 'the
 date' or will I only get ONE row total returned?
 
 
 If I understand what you're proposing correctly, I don't
 think it'll work.
 
 What you need to do is use a subquery to obtain the max of
 the field of interest, and filter in your WHERE clause so
 you only return results when the selected value is the
 maximum for that field.
 
 It's hard to be more specific with a fairly general
 question like this, but if I'm understanding you right this
 is how I'd start.
 
 --
 Craig Ringer
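A sketch of the subquery approach Craig describes (table and column names are illustrative, not from the original schema):

```sql
-- keep only the newest edit of each logical record by filtering against
-- a correlated subquery that finds the per-record maximum date
SELECT r.*
FROM record_edits r
WHERE r.edited_on = (SELECT MAX(e.edited_on)
                     FROM record_edits e
                     WHERE e.record_id = r.record_id);
```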
 



Re: [GENERAL] JSON output

2010-09-13 Thread Dennis Gearon
No, don't need all the set stuff, just the result of either:
  1/ A SELECT * from a view that is a query,
  2/ Or the result of the same query.

Seems like it'd take a lot of horsepower to:
  A/ Do the query
  B/ Convert it to XML
  C/ Convert it to JSON
  D/ Then insert it into ElasticSearch.

I'm either going to use C++/Perl/PHP/Java doing a simple query to a result 
array, then convert it and send it to ElasticSearch (via an HTTP socket).

I'll start with PHP.

Probably going to have to use a cursor (one more thing to learn) as eventually 
I'll have 'billions and billions' of records, Mwah Ha Ha evil grin 
accompaniment /


Dennis Gearon



--- On Sun, 9/12/10, Craig Ringer cr...@postnewspapers.com.au wrote:

 From: Craig Ringer cr...@postnewspapers.com.au
 Subject: Re: [GENERAL] JSON output
 To: Dennis Gearon gear...@sbcglobal.net, PG-General Mailing List 
 pgsql-general@postgresql.org
 Date: Sunday, September 12, 2010, 8:57 PM
 On 13/09/10 10:36, Dennis Gearon
 wrote:
  I'm trying to import from a postgres database (which
 will work in parallel) to a ElasticSearch databse (JSON
 input).
  
  Is there anyway to get JSON output from postgres?
 
 Not out of the box.
 
 The closest you'll get, AFAIK, is XML output from the
 SQLXML functions.
 
 http://www.postgresql.org/docs/current/static/datatype-xml.html
 http://www.postgresql.org/docs/current/static/functions-xml.html
 
 In particular, table_to_xml, query_to_xml or cursor_to_xml
 will probably
 be helpful. Once you have the XML, you can use any existing
 tool for an
 XML-to-JSON transform, possibly in a plpython/plperl stored
 procedure
 within PostgreSQL its self.
 
 
 
 classads=> select * FROM table_to_xml('ad_status'::regclass, true, false, '');
 
                 table_to_xml
 ---------------------------------------------------------------
  <ad_status xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 
  <row>
    <id>0</id>
    <name>OK</name>
  </row>
 
  <row>
    <id>1</id>
    <name>HELD</name>
  </row>
 
  <row>
    <id>2</id>
    <name>CANCELLED</name>
  </row>
 
  </ad_status>
 
 
 
 
 
 
 If you need something more complex than table contents or
 the end result
 of a query, you'll have to roll your own. There isn't
 anything to do
 ORM-like extraction of join results into nested sets if you
 need that.
 
 -- 
 Craig Ringer
 
 Tech-related writing: http://soapyfrogs.blogspot.com/




[GENERAL] JSON output

2010-09-12 Thread Dennis Gearon
I'm trying to import from a postgres database (which will work in parallel) to 
an ElasticSearch database (JSON input).

Is there any way to get JSON output from postgres?

(googled, found only 'planner' output does this)
Dennis Gearon




[GENERAL] line CASE statemelnt in query a la Oracle

2010-09-08 Thread Dennis Gearon
I would like to have a query replace a NULL in a field with a truncated value 
from one field, based on a size passed in.

Basically, I'd like to be able to generate a 'short title' for a record based 
on either the user's input for a short title, or a truncated 'normal title' at 
query time.

Would this be very expensive processor-time-wise?

I might just feed the whole long title in and have the display properties of 
the browser truncate it.
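A sketch of one way to express the fallback, using COALESCE with a truncating function (the table, column names, and the length 40 are illustrative):

```sql
-- fall back to a truncated long title when no short title was entered;
-- substr() is cheap, so the per-row cost at query time is small
SELECT COALESCE(short_title, substr(title, 1, 40)) AS display_title
FROM listings;
```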


Dennis Gearon




[GENERAL] moderninzing/upgrading mail list format

2010-07-05 Thread Dennis Gearon
I belong to MANY email listservers, probably like all of us.

All of them, I am on digest.

The CONTENT from all of you contributors is superior, more mature, and more 
directly helpful than all the other lists. I think it has something to do with 
the conservative, structured mindset of us Dbase programmers. (Or at least 
when we discuss database issues ;-)

However, I think that the mailing list world has moved beyond what we use, at 
least for those of us who receive digest mode. Google groups groups all the 
same topics in one block, and uses intra-document html links to get to those 
blocks.

By using the 'subjects contents' table at the top of the digest email, and the 
back button, it is VERY easy to investigate only the subjects one is 
interested in, without having to scan through the whole digest. There are 
other, better-than-pgsql-mail-program convenience attributes of the google 
groups email system.

I would like to open a conversation about either changing our email to be more 
like google groups, or a move to google groups.

PS, we (the Postgresql database) don't have any mention on lots of 
SQL/Database tutorials. Everything is mysql this, mysql that. I just offered to 
write examples for Postgresql for the w3schools SQL section. I'd like to 
encourage others to look for opportunities to do that, especially for the 
programming language frameworks. I am always promoting P-sql to the coders of 
Symfony and referring little issues with postgres to them.



Dennis Gearon





[GENERAL] different language indexes

2010-07-01 Thread Dennis Gearon
Since a DATABASE in Pgsql is the lowest level of objects that can be assigned a 
collation, I'm wondering if there is a way around it.

Can indexes be made which have their own individual collations?

I'm thinking to make a table in a database that has either us_en-utf8 or 'C' 
collation, and then put any language that I want in it. Then make different 
indexes using different collations. So when I get a user's locality and 
language, I search for the closest index, and specify it when doing text-based 
searches.

Is this possible?
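For what it's worth, later PostgreSQL releases (9.1 and up) do allow a per-column or per-index collation via the COLLATE clause, which is essentially what is being asked for. The table and the collation names below are illustrative (available collation names vary by platform):

```sql
-- one index per target collation over the same text column
CREATE INDEX names_de_idx ON people (name COLLATE "de_DE.utf8");
CREATE INDEX names_sv_idx ON people (name COLLATE "sv_SE.utf8");

-- a query must request the matching collation for that index to be used
SELECT * FROM people ORDER BY name COLLATE "de_DE.utf8";
```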


Dennis Gearon




Re: [GENERAL] world meaningful date time input

2010-06-17 Thread Dennis Gearon
Thanks Scott

That's basically what I'm planning on doing, and hopefully described. The 
server will only be in 1-10 locations around the world, and I can't use the 
timezone of the servers anyway, nor the user's input device/browser/phone. The 
offset/timezone has to be the one for the geographical location of the datum.

But the process you described went one step further than I knew: the output in 
the local tz. Thanks for that.
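Sketched out, the process Scott describes (the table, column, and zone names are illustrative; seen_at is assumed to be timestamptz):

```sql
-- interpret the datum's local wall-clock time in its own zone and store
-- it as an absolute instant (timestamptz)
INSERT INTO sightings (seen_at)
VALUES (TIMESTAMP '2010-06-16 09:30' AT TIME ZONE 'America/Los_Angeles');

-- render the stored instant in any viewer's local zone on the way out
SELECT seen_at AT TIME ZONE 'Europe/Paris' FROM sightings;
```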


 Dennis Gearon gear...@sbcglobal.net
 wrote:
 
  I've got an application brewing that gathers the
 following data:
   location (lat/lon)
   time (no time zone)
   date. (no time zone)
 
  The eventual goal is to be able to search
 chronologically using timestamps for the data anywhere in
 the world, from any location, using local time as a
 reference for any future date and time.
 
  From the (lat/lon) it's possible to get:
   named time zone
   standard time zone offset (non dst)
   by some special web services, get dates and amounts
 of
 day light savings time
 
  From there, it could possible to combine all the
 datums and create a timestamp with timezone (i.e. it's
 stored in absolute time (in seconds) relative to GMT)
 
  Any easier way to do this?
 
 Why not set the tz to the one the date / time came from,
 insert into
 timestamptz, then use timestamptz at timezone to retrieve
 it?
 



[GENERAL] world meaningful date time input

2010-06-16 Thread Dennis Gearon

I've got an application brewing that gathers the following data:
  location (lat/lon)
  time (no time zone)
  date. (no time zone)

The eventual goal is to be able to search chronologically using timestamps for 
the data anywhere in the world, from any location, using local time as a 
reference for any future date and time.

From the (lat/lon) it's possible to get:
  named time zone
  standard time zone offset (non dst)
  by some special web services, get dates and amounts of 
day light savings time

From there, it could be possible to combine all the data and create a timestamp 
with time zone (i.e., it's stored in absolute time (in seconds) relative to GMT).

Any easier way to do this?


Dennis Gearon




[GENERAL] child/parent creation

2010-05-29 Thread Dennis Gearon
Is it possible to create a complex schema object in one transaction, using 
prepared statements to protect (somewhat) against SQL injection?

Example: A 'family tree object' (having obvious relationships) consisting of:

Table grandparent
Table parent
table childA
table childB

Assume I have all the information for each 'sub-object' in the 'family tree 
object' except, of course, the primary, integer, sequence keys.

So, using a scripting language, the procedure I'm following now is: create one 
node, get its id with another query (because of a (reported and actual) bug in 
the Doctrine ORM), and then create the next level down.

My application has nothing to do with family trees, actually; that's just an example.


Each of the (relevant) primary keys is a BIGSERIAL: a sequence-backed BIGINT.

Can the whole three-level, 4-member hierarchy be created in one 
prepared-statement transaction (to avoid SQL injection), without shuttling any 
information back and forth between the database and the calling script?
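In recent PostgreSQL releases, the whole hierarchy can be built server-side in one statement with data-modifying CTEs (WITH ... RETURNING feeding each child INSERT). Short of that, the single-transaction, fully parameterized shape can be sketched with Python's stdlib sqlite3 so it is self-contained; all table and column names below are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grandparent (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE parent      (id INTEGER PRIMARY KEY, gp_id INTEGER REFERENCES grandparent(id), name TEXT);
CREATE TABLE child       (id INTEGER PRIMARY KEY, p_id  INTEGER REFERENCES parent(id), name TEXT);
""")

def create_tree(conn, gp_name, p_name, child_names):
    # One transaction: either the whole hierarchy commits or none of it does.
    # Parameterized statements throughout, so no SQL injection surface.
    with conn:
        gp_id = conn.execute("INSERT INTO grandparent (name) VALUES (?)",
                             (gp_name,)).lastrowid
        p_id = conn.execute("INSERT INTO parent (gp_id, name) VALUES (?, ?)",
                            (gp_id, p_name)).lastrowid
        conn.executemany("INSERT INTO child (p_id, name) VALUES (?, ?)",
                         [(p_id, c) for c in child_names])
    return gp_id

create_tree(conn, "gp1", "p1", ["childA", "childB"])
```

One round trip per level is still needed to learn each generated key, but nothing else shuttles back and forth, and a failure anywhere rolls back every level.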

Thanks in advance.

Dennis Gearon





[GENERAL] vulnerability of COPY command

2010-05-29 Thread Dennis Gearon
I'm trying to build a way to bulk load from a script into a Postgres database.

Using single, parameterized statements is a pretty good defense against SQL 
injection, so I use Symfony as the main user input.

But for this bulk loading, it's too slow.

If I build a text-format COPY file for bulk loading, to be input via the 
command line, is Postgres vulnerable to SQL injection through it?
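Mostly no: the data section of a COPY is never parsed as SQL. The risks are the format's own metacharacters: an unescaped tab or newline shifts column and row boundaries, and in a psql script a data line consisting of just \. ends the COPY data, so anything after it would be read as SQL. Escaping closes that off. A sketch of the escaping the text format expects (the helper names are invented):

```python
def copy_escape(value):
    """Escape one value for PostgreSQL COPY ... FROM in text format."""
    if value is None:
        return "\\N"  # COPY's spelling of SQL NULL
    return (str(value)
            .replace("\\", "\\\\")   # backslash first, so escapes aren't doubled
            .replace("\t", "\\t")
            .replace("\n", "\\n")
            .replace("\r", "\\r"))

def copy_line(row):
    # One tab-delimited data line; quote characters need no escaping here.
    return "\t".join(copy_escape(v) for v in row)

line = copy_line(["O'Brien", "two\nlines", None])
```

Note that a value consisting of the two characters backslash-dot comes out with its backslash doubled, so it can no longer terminate the data section.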


Dennis Gearon




[GENERAL] timestamp configuration

2010-05-24 Thread Dennis Gearon
Is there any way, with a running instance of Postgres, to find out if it was 
compiled with:

   BIGINT
vs
   DOUBLE PRECISION

timestamps?


Dennis Gearon




Re: [GENERAL] timestamp configuration

2010-05-24 Thread Dennis Gearon
For those walking up to a new server who want to know the same information:

shell script invoked by:

script_filename database user

#!/bin/sh
psql -c "SHOW integer_datetimes;" -d "$1" -U "$2" -W



Dennis Gearon



--- On Mon, 5/24/10, Osvaldo Kussama osvaldo.kuss...@gmail.com wrote:

 From: Osvaldo Kussama osvaldo.kuss...@gmail.com
 Subject: Re: [GENERAL] timestamp configuration
 To: Dennis Gearon gear...@sbcglobal.net
 Cc: pgsql-general@postgresql.org
 Date: Monday, May 24, 2010, 9:02 PM
 2010/5/25 Dennis Gearon gear...@sbcglobal.net:
  is there anyway with a running instance of postgres to
 find out if it was compiled with:
 
BIGINT
  vs
DOUBLE PRECISION
 
  timestamps?
 
 
 
 SHOW INTEGER_DATETIMES;
 
 Osvaldo
 



[GENERAL] use of IN() with literals

2010-05-18 Thread Dennis Gearon
I'm trying to use the following script: (to give command line ability to change 
grant on all tables in public in a database)

psql -t -c “SELECT ‘GRANT $1 ON public.’ || t.relname || ‘ TO $2;’ from 
pg_class t, pg_namespace s WHERE t.relkind IN (‘r’, ‘v’, ‘S’) AND 
t.relnamespace=s.oid AND s.nspname=’public’;” $3 | psql $3

and it always fails at the IN(‘r’, ‘v’, ‘S’) part. psql won't accept the 
literals in the IN clause. Is this normal? What could fix this?

I've tried just doing:
(
after logging in to psql connected to a specific database)

select * from pg_class where relkind IN IN (‘r’, ‘v’, ‘S’);

and that doesn't work either.

Dennis Gearon




Re: [GENERAL] use of IN() with literals

2010-05-18 Thread Dennis Gearon
Yep, bad scraping from one site to another. Probably encoding.

Thanks for telling me what should have been obvious about the two INs. The 
gobbledygook was bad encoding between the two web pages.
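For anyone copying this later, here is the same catalog query assembled with plain ASCII quotes; the privilege and role values below stand in for the shell script's $1 and $2:

```python
# Illustrative values; in the original shell script these arrive as $1 and $2.
privilege, role = "SELECT", "webuser"

# The catalog query emits one GRANT statement per table, view, or sequence
# in schema public; note the straight single quotes around 'r', 'v', 'S'.
sql = (
    f"SELECT 'GRANT {privilege} ON public.' || t.relname || ' TO {role};' "
    "FROM pg_class t, pg_namespace s "
    "WHERE t.relkind IN ('r', 'v', 'S') "
    "AND t.relnamespace = s.oid AND s.nspname = 'public';"
)
print(sql)
```

Piping that through psql -t and back into psql, as in the original command, then applies the generated GRANTs.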

select 
Dennis Gearon





[GENERAL] Timing Race

2010-02-21 Thread Dennis Gearon
Version 8.4 on Ubuntu Linux (is this still supported?)

I'm getting an error when importing a COPY version of a database dump. It's 
very simple, 4 tables:

TableA 2 records (basically a lookup table)
TableB 15 records
TableC 46 records ( many side of 'one to many')
TableD 55 records.( one side of 'one to many')

Each table has a sequence, one has an index.

TableC has a Foreign key to TableD (child/parent respectively)
TableD has a Foreign Key to TableA (child/parent respectively)

The problem is that inserting the copy into a blank database gives me errors 
when it gets to TableD, because the 2nd record in TableA seems not to be 
available to reference for the foreign key in TableD. It doesn't even matter if 
I rearrange the database dump contents to insert all the parents first; it 
always fails there.

However, if I split the file up and do one table at a time, it works just 
fine. Is this a known issue, a feature, or am I doing something wrong? I thought 
that doing a database recovery using a COPY version of a dump would be a 
no-brainer; doesn't it all happen in one transaction?


Dennis Gearon




Re: [GENERAL] using column as 'mutex'

2009-12-06 Thread Dennis Gearon
RE all below: (hand assembled reply)

I looked through all the documentation, and I did not find 'WITH UPDATE' as 
part of any transaction type (the actual syntax is SELECT ... FOR UPDATE), so I 
didn't end up using that information.

For the record, (in the mail lists), I am using symfony/Doctrine ORM. I used 
something like this:

    try {
      Doctrine query to update a row's timestamp for a 5-10 minute window
        where value = value in a unique column

      Find the same record and hydrate an object (throws an exception if the
        object does not exist) and assign it to a class variable/attribute

    } catch (Exception $e) {
      return FALSE;
    }
    return TRUE;

The calling code will know if the record was saved from the garbage collection 
routine. However, I may end up writing a custom SQL query (i.e., not using 
the ORM) when I put this into action against a running garbage collection 
daemon.
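Whichever SQL ends up doing it, the race itself can be settled with a single conditional UPDATE: both sides try to flip the status, and the loser sees zero rows affected. A self-contained sketch using stdlib sqlite3 (the schema and status values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uploads (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO uploads (id, status) VALUES (1, 'in-process')")
conn.commit()

def claim(conn, upload_id, new_status):
    # Atomic compare-and-set: succeeds only while the row is still
    # 'in-process', so exactly one of the web page and the cron job wins.
    with conn:
        cur = conn.execute(
            "UPDATE uploads SET status = ? WHERE id = ? AND status = 'in-process'",
            (new_status, upload_id))
    return cur.rowcount == 1

print(claim(conn, 1, "complete"))  # True: the first claimant wins
print(claim(conn, 1, "reaped"))    # False: the row is no longer 'in-process'
```

In Postgres, the same UPDATE decides cases 3-6 identically: the row lock makes the second writer wait, and its WHERE clause then matches zero rows.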



On Mon, Nov 30, 2009 at 9:02 PM, Dennis Gearon gear...@sbcglobal.net 

Dennis Gearon

Merlin Moncure mmonc...@gmail.com wrote:

Just make sure that when you inspect the record, do so: a. in a
transaction (preferably a brief one) and b. WITH UPDATE until your
operation completes.  That way if two operations collide one will wait
for the other to complete.  If you need to lock an external structure
(like a file) for a period shorter or longer than a single
transaction, use an advisory lock.

merlin

 I want to use a dbase record in Postgresql as a mutex in a garbage 
 collection/file upload system.

 If I tranfer an uploaded file to a directory, write all its meta information 
 and file name in a record, the user should have 5 minutes to finish inputting 
 the required info for the record. (I want to store in file system instead of 
 database table).

 So there will be two processes racing against each other in the database, the 
 web page changing the status of the record to 'complete', and the cron job 
 erasing files that have status 'in-process' and are older than ten minutes.

 So there will probably be 6 possible scenarios:

 1/ Web site changes status before the cron job gets to the record and deltes 
 the file, no problem.
 2/ Cron job changes the record and deletes the file before the website gets 
 to the record, no problem.

 3/ Both access the the record relatively at the same time, the web site 
 having the earlier, lower transaction number and also finishes first.
 4/ Both access the the record relatively at the same time, the cron job 
 having the earlier, lower transaction number and also finishes first.

 5/ Both access the the record relatively at the same time, the web site 
 having the later, HIGHER transaction number BUT finishes first.
 6/ Both access the the record relatively at the same time, the cron job 
 having the later, HIGHER transaction number BUT finishes first.
Dennis Gearon






[GENERAL] using column as 'mutex'

2009-11-30 Thread Dennis Gearon
I want to use a database record in PostgreSQL as a mutex in a garbage 
collection/file upload system.

If I transfer an uploaded file to a directory and write all its meta information 
and file name in a record, the user should have 5 minutes to finish inputting 
the required info for the record. (I want to store the files in the file system 
instead of a database table.)

So there will be two processes racing against each other in the database, the 
web page changing the status of the record to 'complete', and the cron job 
erasing files that have status 'in-process' and are older than ten minutes.

So there will probably be 6 possible scenarios:

1/ Web site changes status before the cron job gets to the record and deletes 
the file, no problem.
2/ Cron job changes the record and deletes the file before the website gets to 
the record, no problem.

3/ Both access the record at relatively the same time, the web site having 
the earlier, lower transaction number, and also finishing first.
4/ Both access the record at relatively the same time, the cron job having 
the earlier, lower transaction number, and also finishing first.

5/ Both access the record at relatively the same time, the web site having 
the later, HIGHER transaction number, BUT finishing first.
6/ Both access the record at relatively the same time, the cron job having 
the later, HIGHER transaction number, BUT finishing first.

Do I have the possibilities correctly described? What happens with cases 3-6?

Thanks in advance. I RTFMed and Googled, but did not see the answer.



Dennis Gearon




[GENERAL] pgadmin3 hangs during dump

2009-10-16 Thread Dennis Gearon
Is this the right place to post this?

I set up the same dump on the console, and it runs fine. (COPY commands will 
import back, right? That's what it output.)

On the console, it was:
pg_dump -vaF p -f dbase.sql -U user-name dbase-name

More details:
 about 11 tables, practically empty.
 no other fancy stuff like functions/views/installed languages/blah,blah
 running on an Ubuntu laptop

I don't know if I have log files set up or where they'd be. 
 

Dennis Gearon

Signature Warning

EARTH has a Right To Life

I agree with Bolivian President Evo Morales

# The right to life: The right for no ecosystem to be eliminated by the 
irresponsible acts of human beings.

# The right of biosystems to regenerate themselves: Development cannot be 
infinite. There's a limit on everything.

# The right to a clean life: The right for Mother Earth to live without 
contamination, pollution. Fish and animals and trees have rights.

# The right to harmony and balance between everyone and everything: We are all 
interdependent.


See the movie - 'Inconvenient Truth'
See the movie - 'Syriana'



[GENERAL] Use of Postgres in REST/JSON with BLOBS

2009-08-02 Thread Dennis Gearon

I decided on a 'Hybrid Server' approach. I will be piping the BLOBs directly 
out of the filesystem over HTTP from an 'nginx' server. I *MAY* store the 'real 
filename' in the database, but I definitely will be storing permissions in the 
database.

I've done EASILY 2 months worth of research and experiments in order to come to 
the solution that I am using. In order to share that with the Open Source 
world, I am posting my blog address to the related newsgroups that I'm on.

http://php-rest-i18n.blogspot.com/

Comment, save yourself some research, etc.

Dennis Gearon




[GENERAL] Content-Type in form variables

2009-07-25 Thread Dennis Gearon

If I receive a form via POST or PUT with multiple variables (files, 
application/json, others), is there anywhere in the environment to test the 
MIME type of each variable?

POST /en/html/dummy.php HTTP/1.1
Host: www.explainth.at
User-Agent: Mozilla/5.0 (Windows;en-GB; rv:1.8.0.11) Gecko/20070312 
Firefox/1.5.0.11
Accept: text/xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://www.explainth.at/en/misc/httpreq.shtml?txt=
Content-Type: multipart/form-data; 
boundary=---103832778631715
Content-Length: 355

-103832778631715
Content-Disposition: form-data; name="JSON_OBJS"
Content-Type: application/json

{
  "variable_name": 8675309,
  "blob_value": null
}
-103832778631715
Content-Disposition: form-data; name="blob_value"; filename="previous.png"

Content-Type: image/png



‰PNG


-103832778631715
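For what it's worth, each MIME part carries its own Content-Type header, and a body like the one above can be parsed with Python's stdlib email package; the boundary and field names below mirror the sample request:

```python
from email import message_from_bytes
from email.policy import default

# A miniature multipart/form-data request body (boundary shortened to XYZ).
raw = (
    b"Content-Type: multipart/form-data; boundary=XYZ\r\n\r\n"
    b"--XYZ\r\n"
    b'Content-Disposition: form-data; name="JSON_OBJS"\r\n'
    b"Content-Type: application/json\r\n\r\n"
    b'{"variable_name": 8675309}\r\n'
    b"--XYZ\r\n"
    b'Content-Disposition: form-data; name="blob_value"; filename="previous.png"\r\n'
    b"Content-Type: image/png\r\n\r\n"
    b"\x89PNG...\r\n"
    b"--XYZ--\r\n"
)

msg = message_from_bytes(raw, policy=default)
# Map each form field name to the declared MIME type of that part.
types = {part.get_param("name", header="content-disposition"): part.get_content_type()
         for part in msg.iter_parts()}
print(types)  # {'JSON_OBJS': 'application/json', 'blob_value': 'image/png'}
```

So the per-variable MIME type is recoverable, but only by parsing the raw body; it is not exposed as an environment variable.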

Dennis Gearon




Re: [GENERAL] Content-Type in form variables

2009-07-25 Thread Dennis Gearon

Sorry,

pgsql-general? php-general? Who can tell the difference while your eyes are 
still refocusing after long hours at the computer?

Sorry guys.

PS: no wonder it didn't show up or get answered by the php guys ;-)


Sam Mason s...@samason.me.ukwrote
Subject: Re: Content-Type in form variables
Message-ID: 20090726012447.gq5...@samason.me.uk

On Sat, Jul 25, 2009 at 11:12:15AM -0700, Dennis Gearon wrote:
 If I receive a form via POST or PUT with with mulitple variables,
 files, application/json, others, is there anywhere in the environment
 to test he mime type of each variable?

?? this doesn't seem particularly related to PG!  Anyway...

mime-types are conventionally associated with files, not variables.
if you want to do input validation maybe you could write (or find) a
parser? a set of regexs are normally easy and formal enough for simple
purposes.

Most languages contain code for parsing mime encoded documents; python
makes this sort of thing reasonably easy if you want to stay reasonably
low level or there are lots of frameworks around to simplify things.

-- 
  Sam  http://samason.me.uk/




Re: [GENERAL] commercial adaptation of postgres

2009-07-21 Thread Dennis Gearon

Thanks Bricklen

Dennis Gearon
--- On Tue, 7/21/09, bricklen brick...@gmail.com wrote:

 From: bricklen brick...@gmail.com
 Subject: Re: [GENERAL] commercial adaptation of postgres
 To: Dennis Gearon gear...@sbcglobal.net
 Cc: pgsql-general@postgresql.org
 Date: Tuesday, July 21, 2009, 7:33 AM
 Greenplum uses a modified version of
 PostgreSQL for their MPP product.
  http://www.greenplum.com/
 
 On Mon, Jul 20, 2009 at 6:56 PM, Dennis Gearongear...@sbcglobal.net
 wrote:
 
  I once talked to a company that made a custome version
 of Postgres. It split tables up on columns and also by rows,
 had some other custome features. It was enormously faster
 from what I gathered.
 
  I could of sworn it began with the letter 'T', but
 maybe not. I don't see anything like that on the commercial
 page of the posgres site.
 
  Does anyone know what it is out there in enterprise
 commercially modified postgres servers? (on 64 bit machines,
 preferably)
 
 
  Dennis Gearon
 
  Signature Warning
  
  EARTH has a Right To Life
 
  I agree with Bolivian President Evo Morales
 
  # The right to life: The right for no ecosystem to be
 eliminated by the irresponsible acts of human beings.
 
  # The right of biosystems to regenerate themselves:
 Development cannot be infinite. There's a limit on
 everything.
 
  # The right to a clean life: The right for Mother
 Earth to live without contamination, pollution. Fish and
 animals and trees have rights.
 
  # The right to harmony and balance between everyone
 and everything: We are all interdependent.
 
 
  See the movie - 'Inconvenient Truth'
  See the movie - 'Syriana'
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general
 
 



Re: [GENERAL] timestamp with time zone tutorial

2009-07-20 Thread Dennis Gearon

Hey Tom,
 I was trying to use 'US/Pacific-New' as my long, unabbreviated timezone, 
and it wasn't working. I thought Postgres wasn't accepting the unabbreviated, 
geopolitical, daylight-savings time zones. It turns out the server I was on 
(not my own box) didn't have that entry in the '/usr/share/zoneinfo/US' 
directory. My bad.

 I just have to read more on how to get it out relative to a different time 
zone than the one it went in with. I'll find it.


Dennis Gearon



--- On Sun, 7/19/09, Tom Lane t...@sss.pgh.pa.us wrote:

 From: Tom Lane t...@sss.pgh.pa.us
 Subject: Re: [GENERAL] timestamp with time zone tutorial
 To: Dennis Gearon gear...@sbcglobal.net
 Cc: pgsql-general@postgresql.org
 Date: Sunday, July 19, 2009, 8:00 PM
 Dennis Gearon gear...@sbcglobal.net
 writes:
       What I want is to be able to
 insert into my project's database, times given by anybody
 anywhere on the planet (the SUBMITTER), add the appropriate
 timezone in the insert statement so that it in 'GMT/UMT'
 neutral'. I believe this is the way Postgres does it,
 storing times in GMT time.
 
 So just do it.  What do you need to_timestamp
 for?  The basic
 timestamptz input function is far more likely to get it
 right
 for input that's not in a precise predetermined format.
 
            
 regards, tom lane




[GENERAL] commercial adaptation of postgres

2009-07-20 Thread Dennis Gearon

I once talked to a company that made a custom version of Postgres. It split 
tables up by columns and also by rows, and had some other custom features. It 
was enormously faster, from what I gathered.

I could have sworn it began with the letter 'T', but maybe not. I don't see 
anything like that on the commercial page of the Postgres site.

Does anyone know what is out there in enterprise, commercially modified 
Postgres servers? (On 64-bit machines, preferably.)


Dennis Gearon




[GENERAL] timestamp with time zone tutorial

2009-07-19 Thread Dennis Gearon

INSERTing timestampz, 'to_timestamp', output formatting, input formatting, 
SERVER TIME, USER_LOCAL_TIME, multi timezone applications.

Anyone wonder how those all work? **I** sure do!!!

Is there a tutorial anywhere on how to do all those? Can anyone who is 
knowledgeable about this:

 (1) Post a good description.
 (2) Post a link to a good description.
 (3) Give input to me so that I can write a good tutorial to post on
   the postgres site?

TIA, 
Dennis Gearon




Re: [GENERAL] timestamp with time zone tutorial

2009-07-19 Thread Dennis Gearon

I read it better, and it makes more sense now.

But,
I'd like it to show how to insert:
'strings' - which it does
timestampz value --using to_timestampz(...)
integers::timestampz




Re: [GENERAL] timestamp with time zone tutorial

2009-07-19 Thread Dennis Gearon

None of the examples of converting a string with to_timestamp() show a time 
zone being used as an input. Does it allow full-length time zones (honoring 
daylight savings time at the timestamp's instant), or just an abbreviation for 
a fixed offset?



--- On Sun, 7/19/09, Adrian Klaver akla...@comcast.net wrote:

 From: Adrian Klaver akla...@comcast.net
 Subject: Re: [GENERAL] timestamp with time zone tutorial
 To: Dennis Gearon gear...@sbcglobal.net
 Cc: pgsql-general@postgresql.org
 Date: Sunday, July 19, 2009, 5:15 PM
 On Sunday 19 July 2009 4:56:09 pm
 Dennis Gearon wrote:
  I read it better, and it makes more sense now.
 
  But,
  I'd like it to show how to insert:
      'strings' - which it does
      timestampz value --using
 to_timestampz(...)
 
 For above:
 http://www.postgresql.org/docs/8.4/interactive/functions-formatting.html
 
      integers::timestampz
 
 See above or:
 http://www.postgresql.org/docs/8.4/interactive/functions-datetime.html
 In particular see
 9.9.1. EXTRACT, date_part
 
 epoch
 
     For date and timestamp values, the number of
 seconds since 1970-01-01 
 00:00:00 UTC (can be negative); for interval values, the
 total number of 
 seconds in the interval
 
     SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME
 ZONE '2001-02-16 
 20:38:40-08');
     Result: 982384720
 
     SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3
 hours');
     Result: 442800
 
     Here is how you can convert an epoch value
 back to a time stamp:
 
     SELECT TIMESTAMP WITH TIME ZONE 'epoch' +
 982384720 * INTERVAL '1 second';
 
 
 
 -- 
 Adrian Klaver
 akla...@comcast.net




Re: [GENERAL] timestamp with time zone tutorial

2009-07-19 Thread Dennis Gearon

Good Idea Adrian!
 What I want is to be able to insert into my project's database times 
given by anybody anywhere on the planet (the SUBMITTER), adding the appropriate 
timezone in the INSERT statement so that the value is stored 'GMT/UTC neutral'. 
I believe this is the way Postgres does it, storing times in GMT.

 When I pull it out for anyone on the planet (the QUERIER), by their 
geographic location or address, I give the SELECT statement the timezone value 
of the QUERIER and adjust the time to the location of the QUERIER, also supplying 
them the time in the geographical location of the SUBMITTER.
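The submitter/querier flow described above, normalize to UTC on the way in and render in the reader's zone on the way out, can be sketched in Python (zone names and times are my own example values):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

# Submitter enters a local time with their zone attached...
submitted = datetime(2009, 7, 19, 17, 15,
                     tzinfo=ZoneInfo("America/Los_Angeles"))

# ...which is normalized to UTC, as a Postgres timestamptz effectively stores it.
stored_utc = submitted.astimezone(ZoneInfo("UTC"))

# On SELECT, render the same instant in the querier's zone.
querier_view = stored_utc.astimezone(ZoneInfo("Europe/Berlin"))
print(querier_view.isoformat())
```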



 From: Adrian Klaver akla...@comcast.net
snip
 Subject: Re: [GENERAL] timestamp with time zone tutorial
Dennis Gearon wrote:
  None of the examples of converting a string
 to_timestamp() show using a
  time zone as an input. Does it allow full length
 timezones for
  daylight savings time at the timestamp instant in
 time, or just an
  abbreviation for a fixed offset?
 
 
 
 This might be easier if you could give an example of what
 you are trying to 
 accomplish :)
 
 
 -- 
 Adrian Klaver
 akla...@comcast.net
 



Re: [GENERAL] indexes on float8 vs integer

2009-07-12 Thread Dennis Gearon

Well, Brent,
 I'm just getting started on this design. I'm doing it at a hosting site, 
initially, so I have to find out if they have or will load this module.
 At first, I was just going to interpolate the distance as a bounding box 
based on the distance between latitude lines and longitude lines at that 
latitude. Then serve the data based on the integers for lat/long between two 
values. All the geographic calculations would have taken place in the server 
app, then postgres would only be working with integers.
 So, what is the base type for the point column?
 I had planned on using google maps as the geographic server, I was going 
to query them using their API and a data set of center location and labeled 
points within a certain range. 
Lots to learn here, that's for sure. I will file your reply and look at 
it in a week or so when I store the first data.
Dennis Gearon



--- On Sun, 7/12/09, Brent Wood b.w...@niwa.co.nz wrote:

 From: Brent Wood b.w...@niwa.co.nz
 Subject: Re: [GENERAL] indexes on float8 vs integer
 To: gear...@sbcglobal.net
 Cc: pgsql-general@postgresql.org
 Date: Sunday, July 12, 2009, 1:52 PM
 Hi Dennis,
 
 Is there any reason you are not using PostGIS to store the
 values as point geometries & use a spatial (GIST) index
 on them? I have tables with hundreds of millions of point
 features which work well. On-disk data volume is not really
 worth optimising for with such systems; I suggest
 flexibility, ease of implementation & overall
 performance should be more valuable.
 
 If you need to store & query coordinates, then a map
 based tool seems relevant, and there are plenty of tools to
 do this sort of thing with PostGIS data, such as Mapserver,
 GeoServer at the back end & OpenLayers in the front
 end.
 
 
 Cheers,
 
   Brent Wood
 
 
 Brent Wood
 DBA/GIS consultant
 NIWA, Wellington
 New Zealand
  Scott Marlowe scott.marl...@gmail.com
 07/12/09 10:31 PM 
 On Sat, Jul 11, 2009 at 10:15 PM, Dennis Gearon gear...@sbcglobal.net
 wrote:
 
  Anyone got any insight or experience in the speed and
 size of indexes on Integer(4 byte) vs float (8byte). For a
 project that I'm on, I'm contemplating using an integer
 for:
 
      Latitude
      Longitude
 
  In a huge, publicly searchable table.
 
  In the INSERTS, the representation would be equal to:
 
      IntegerLatOrLong = to_integer(
 float8LatOrLong * to_float(1000000) );
 
  This would keep it in a smaller (4 bytes vs 8 byte)
 representation with simple numeric comparison for indexing
 values while still providing 6 decimals of precision, i.e.
 4.25 inches of resolution, which is what google maps provides.
 
  I am expecting this table to be very huge. Hey, I want
 to be the next 'portal' :-)
  Dennis Gearon
 
 Well, floats can be bad if you need exact math or matching
 anyway, and
 math on them is generally slower than int math.  OTOH,
 you could look
 into numeric to see if it does what you want.  Used to
 be way slower
 than int, but in recent versions of pgsql it's gotten much
 faster.
 Numeric is exact, where float is approximate, so if having
 exact
 values be stored is important, then either using int and
 treating it
 like fixed point, or using numeric is usually better.
 
 
 NIWA is the trading name of the National Institute of Water
 & Atmospheric Research Ltd.




Re: [GENERAL] indexes on float8 vs integer

2009-07-12 Thread Dennis Gearon

I wish that I didn't have to say this, but that is over my head at this point. 
I see this HUGE, steep mountain ahead of me and a little sign in front of it 
saying, Learning Curve, start here.

:-)

Dennis Gearon

Signature Warning

EARTH has a Right To Life

I agree with Bolivian President Evo Morales

# The right to life: The right for no ecosystem to be eliminated by the 
irresponsible acts of human beings.

# The right of biosystems to regenerate themselves: Development cannot be 
infinite. There's a limit on everything.

# The right to a clean life: The right for Mother Earth to live without 
contamination, pollution. Fish and animals and trees have rights.

# The right to harmony and balance between everyone and everything: We are all 
interdependent.


See the movie - 'Inconvenient Truth'
See the movie - 'Syriana'


--- On Sun, 7/12/09, Brent Wood b.w...@niwa.co.nz wrote:

 From: Brent Wood b.w...@niwa.co.nz
 Subject: Re: [GENERAL] indexes on float8 vs integer
 To: gear...@sbcglobal.net
 Date: Sunday, July 12, 2009, 9:10 PM
 You might look at UMN mapserver or
 Geoserver to provide PostGIS data via WMS/WFS and OpenLayers
 to plot these layers on top of Google Maps. These tools
 facilitate this sort of online map production pretty easily,
 although hosting can be an issue as the requirements become
 more specific.
 
 Cheers,
 
   Brent
 
 
 Brent Wood
 DBA/GIS consultant
 NIWA, Wellington
 New Zealand

[GENERAL] Date Time Arithmetic Speed

2009-07-11 Thread Dennis Gearon

I have an application that I am working on that may do some regular, 
cron-generated date/time conversions.

It would do:
 DATE + TIME = TIMESTAMP.

It would do a LOT of these, but spread out over the day to keep the load down 
and allow the server to do its main job: dishing out the results.

What are the tradeoffs in storage space, indexed query speed of the DATE and 
TIME formats, and speed of the math for the various combinations of DATE and 
TIME that timestamps could be derived from?
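The DATE + TIME = TIMESTAMP arithmetic, mirrored in Python as an illustration (in Postgres itself, SELECT DATE '2009-07-11' + TIME '14:30' does the same thing):

```python
from datetime import date, time, datetime

# Combine a DATE and a TIME into a TIMESTAMP, as the query above would.
d = date(2009, 7, 11)
t = time(14, 30)
ts = datetime.combine(d, t)
print(ts.isoformat())   # 2009-07-11T14:30:00
```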


Dennis Gearon




[GENERAL]

2009-07-11 Thread Dennis Gearon


Dennis Gearon




[GENERAL] indexes on float8 vs integer

2009-07-11 Thread Dennis Gearon

Anyone got any insight or experience in the speed and size of indexes on 
Integer(4 byte) vs float (8byte). For a project that I'm on, I'm contemplating 
using an integer for:

 Latitude
 Longitude

In a huge, publicly searchable table. 

In the INSERTS, the representation would be equal to:

 IntegerLatOrLong = to_integer( float8LatOrLong * to_float(1000000) );

This would keep it in a smaller (4 bytes vs 8 byte) representation with simple 
numeric comparison for indexing values while still providing 6 decimals of 
precision, i.e. 4.25 inches of resolution, which is what google maps provides.

I am expecting this table to be very huge. Hey, I want to be the next 'portal' 
:-)
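The fixed-point scheme above can be sketched like this (helper names are mine; Python used only to illustrate the arithmetic, which would live in the server app):

```python
# Scale degrees by 10^6 and round, so a lat/long keeps 6 decimal places
# while fitting a 4-byte INT4 (max longitude 180e6 is well inside the range).
SCALE = 1_000_000

def to_int4(deg: float) -> int:
    scaled = round(deg * SCALE)
    assert -2_147_483_648 <= scaled <= 2_147_483_647  # must fit INT4
    return scaled

def to_degrees(i: int) -> float:
    return i / SCALE

lat = 37.774929
print(to_int4(lat))          # 37774929
print(to_degrees(37774929))
```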
Dennis Gearon




[GENERAL] singletons per row in table AND locking response

2009-07-07 Thread Dennis Gearon

When locking is involved, does a transaction wait for access to a row or table, 
or does it just fail back to the calling code? Would it be up to my PHP code to 
keep hammering for access to a row/table, or could a user defined function do 
that?

I'd like to have a certain object in my PHP application have essentially 
individual SERIAL rows per object created site wide. So only one script 
instance at a time in apache can have access to a row to read and increment a 
value in a row.

Example, (totally random idea, example only), any user on site can create a 
group. Each group assigns group_user_ids per site member in his group, starting 
at zero for each new user joining a group, no matter their site_id.

My choices so far seem to be:
 IN PHP, Use a system file for locking only one instance of the class gets 
access to the table.
 IN PHP, Use the transaction failure to hammer the database for one 
instance of the class.
 IN PHP, Use the transaction failure to hammer the database for each ROW's 
instance of a class.
 IN POSTGRESQL, use the transaction failure to hammer the database for each 
ROW's instance of a class.

But maybe there's more to the locking than failed transactions for UPDATE, some 
kind of sequential queueing of access to tables or rows for transactions?

I'm trying to minimize the interfaces, cpu time, etc involved in getting access 
to the table.


extremely basic SQL for this idea.

CREATE TABLE groups (  -- "group" is a reserved word, so renamed
group_id SERIAL  NOT NULL,
CONSTRAINT PK_group PRIMARY KEY (group_id)
);

CREATE TABLE singletons_for_last_grp_mbr_id_issued (
group_id INTEGER  NOT NULL,
last_grp_mbr_id_issued INTEGER DEFAULT 0  NOT NULL,
CONSTRAINT PK_singletons PRIMARY KEY (group_id)
);

CREATE UNIQUE INDEX IDX_One_Group_Row_Only ON 
singletons_for_last_grp_mbr_id_issued (group_id);

ALTER TABLE singletons_for_last_grp_mbr_id_issued 
   ADD CONSTRAINT group_singletons_for_last_grp_mbr_id_issued
   FOREIGN KEY (group_id) REFERENCES groups (group_id);
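On the locking question: in PostgreSQL, a conflicting UPDATE (or SELECT ... FOR UPDATE) on the counter's row simply waits for the other transaction to commit or roll back; it does not fail back to PHP at the default READ COMMITTED level, so no retry hammering is needed. A toy in-process Python sketch of the per-group counter (names are mine; a thread lock stands in for the row lock):

```python
import threading

last_issued = {}   # group_id -> last_grp_mbr_id_issued
row_locks = {}     # one lock per group "row"
table_lock = threading.Lock()

def next_member_id(group_id: int) -> int:
    with table_lock:
        lock = row_locks.setdefault(group_id, threading.Lock())
    with lock:  # blocks until free, like a Postgres row lock
        last_issued[group_id] = last_issued.get(group_id, 0) + 1
        return last_issued[group_id]

# 50 concurrent "script instances" each grab the next id for group 1.
ids = []
threads = [threading.Thread(target=lambda: ids.append(next_member_id(1)))
           for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(ids) == list(range(1, 51)))  # no gaps, no duplicates
```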

Dennis Gearon




Re: [GENERAL] singletons per row in table AND locking response

2009-07-07 Thread Dennis Gearon

I could have just as easily described it as a table of SERIALS, one per row, 
instead of per column.

:0)

Dennis Gearon




[GENERAL] feature suggestion and return to pgsql-general :-)

2009-04-23 Thread Dennis Gearon

Hi guys, I am back after a long time of doing other kinds of work. Don't know 
if anybody remembers me - not a biggee.

I am again working on a design of a web application for myself. Security seems 
MUCH more of an issue, EVERY day.

So I had these two suggestions/feature requests.

To prevent SQL injection, have the following either per-role, DB wide, or DB 
instance configurable, with defaults and override configuration:

A flag/permission to allow ONLY prepared queries
A flag/permission to allow ONLY UDFs/procedures.

These may be mutually exclusive?

But a way to set a database to be strongly SQL-injection proof.

Comments? 
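For what it's worth, the property such a flag would enforce can be seen even in a toy database; a sqlite3 sketch (illustration only, not Postgres) of why parameterized-only access blocks injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usr (name TEXT)")
conn.execute("INSERT INTO usr VALUES ('alice')")

evil = "x' OR '1'='1"

# String concatenation: the injected OR clause reaches the parser as SQL
# and matches every row.
concat = conn.execute(
    "SELECT count(*) FROM usr WHERE name = '" + evil + "'").fetchone()[0]

# Parameterized: the whole attacker string is compared as one literal value.
param = conn.execute(
    "SELECT count(*) FROM usr WHERE name = ?", (evil,)).fetchone()[0]

print(concat, param)   # 1 0
```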

Dennis Gearon




Re: [GENERAL] table size/record limit

2004-10-22 Thread Dennis Gearon
Great Idea! When I get that far, I will try it.
Gaetano Mendola wrote:
snip
By partitioning in some way I don't mean only splitting it into more tables. You
can use some available tools in postgres and continue to see this table
as one but implemented behind the scenes with more tables.
One useful and impressive way is to use inheritance in order to obtain
a horizontal partition:
0) Decide a partition policy ( based on a time stamp, for example )
1) Create an empty base table with the name that you want to see as public
2) Create the partitions using the empty table as the base table
3) Create a rule on the base table so an insert or update on it is
   performed as an insert or update on the right partition ( using the
   partition policy from step 0 )

In this way you are able to vacuum each partition, reindex each partition, and
so on in a more feasible way. I do not imagine a vacuum full or reindex on a
3,600,000,000 record table...

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] correct example of a functional index usage?

2004-10-22 Thread Dennis Gearon
But it is possible to use multiple indexes on dates, and that is why the one at the 
bottom works, right?
Would a single index get used for
SELECT appointment
FROM the_table
WHERE 0 != (date_mask & date_range);
Tom Lane wrote:

No, an index can be used for one or the other.  Since we don't yet have
bitmap combining of indexes, you don't get to apply two indexes in a
single query.  Even if you did, this would be relatively inefficient
since each index would return a whole lotta rows.
Why don't you just do the straightforward thing and look for
WHERE appointment_date >= 'some_date'
  AND appointment_date <= 'some-other-date'
AFAICS that solves the stated problem.  Maybe you were not being clear
about what you want?
regards, tom lane



Re: [GENERAL] correct example of a functional index usage?

2004-10-22 Thread Dennis Gearon
The site seems to be down Tom.
Tom Lane wrote:
snip
I don't see any indexable operator there at all.  You might care to read
http://developer.postgresql.org/docs/postgres/xindex.html
which describes the behaviors Postgres indexes have.
regards, tom lane
 




Re: [GENERAL] table size/record limit

2004-10-21 Thread Dennis Gearon
Actually, now that I think about it, they use a special table type in which the INDEX 
is also the DATUM. It is possible to recover the data out of the index listing. So go 
down the index, then decode the indexing value - voila, a whole step saved. I have no 
idea what engine these table types are in, however.
Joshua D. Drake wrote:
Dennis Gearon wrote:
Google probably is much bigger, and on mainframes, and probably Oracle 
or DB2.

Google uses a Linux cluster and their database is HUGE. I do not know 
which database
they use. I bet they built their own specifically for what they do.

Sincerely,
Joshua D. Drake

But the table I am worried about is the one sized >= 3.6 GIGA records.
Tino Wildenhain wrote:
Hi,
Am Do, den 21.10.2004 schrieb Dennis Gearon um 1:30:
I am designing something that may be the size of yahoo, google, 
ebay, etc.

Just ONE many to many table could possibly have the following 
characteristics:

   3,600,000,000 records
   each record is 9 fields of INT4/DATE
Other tables will have about 5 million records of about the same size.
There are lots of scenarios here to lessen this.
BUT, is postgres on linux, maybe necessarily a 64 bit system, 
capable of this? And there'd be 4-5 indexes on that table.


Sure. Why not? 3...5mio records is not really a problem.
We had bigger tables with historic commercial transactions
(even on an old dual PIII/1000) with fine performance.
I bet however, yahoo, google at least are much bigger :-)
Regards
Tino







[GENERAL] correct example of a functional index usage?

2004-10-21 Thread Dennis Gearon
Is the following example a good, and correct one, for using a functional 
index? (the 'flip_bits' function would have to be written, and for the 
correct size of bit(N) ):

   I think I came up with a better way to search through some dates. It 
would use a functional index. I am trying to see if any of some 
appointments fall between two dates.

To explain this, think of a 16 day long year, represented by the binary 
bits in two bytes of a bit string

   the_table rows: 
   year, date_mask, appointment_title
int, bit(16), text
   2004, B'1'::bit(16), 'appointment_title'::text

Day 1 is on the left, day 16 is on the right. Say I wanted to find 
appointments in the year 2004 that happened between day 2 and day 9. One 
way that would find them is to search for all records like above that 
produced a non-zero result when the day mask was ANDed against 
B'0111111110000000' and year = 2004:

   SELECT appointment
   FROM the_table
   WHERE 0 != (date_mask & B'0111111110000000')
   AND
 year = 2004;
This compares 9 dates in one operation, VERY fast. Except that it will 
not use an index - comparing dates alone would, and even though there would 
be 16 times more rows to search, it'd be faster.

However, if I make two indexes, one on the standard field value,

CREATE INDEX norm_date_mask ON the_table(date_mask);
and the other on the bit-reversed value,
CREATE INDEX flipped_date_mask ON the_table(flip_bits(date_mask));
now I can eliminate all those outside of the date ranges in another, 
indexed way like so.

date_range = B'0111111110000000';
flipped_date_range = flip_bits( B'0111111110000000' );
the query looks like this:
SELECT appointment
FROM the_table
WHERE date_mask <= date_range
   AND
 flip_bits(date_mask) <= flipped_date_range;
Now, I believe indexes are used for BOTH comparisons, and it will be a 
indexed, fast query, not a total search of the whole table. Plus, with a 
365/6 day sized date mask, 365/6 dates are searched in two operations.

If I needed SPECIFIC dates only, I could do the above search, and then 
add another AND condition that did the original AND against a smaller set.

I will experiment with this, but I'm pretty sure that I'm right.
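The mask arithmetic above can be sketched in Python using the 16-day example (helper names are mine; this only illustrates the bit logic, not the index behavior):

```python
BITS = 16  # the toy 16-day "year"

def day_window(first: int, last: int) -> int:
    # Mask with day 1 as the leftmost bit and days first..last set.
    mask = 0
    for day in range(first, last + 1):
        mask |= 1 << (BITS - day)
    return mask

def flip_bits(mask: int) -> int:
    # Reverse the bit order, as the flipped functional index would store it.
    return int(format(mask, f"0{BITS}b")[::-1], 2)

appt = day_window(3, 5)    # an appointment spanning days 3-5
window = day_window(2, 9)  # searching days 2-9

print((appt & window) != 0)  # True: the appointment overlaps the window
```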


[GENERAL] max length bit(x) bit varying(x)

2004-10-20 Thread Dennis Gearon
cc me please:

I can't find in the HTML documentation the max length of a bit string. 
Anyone know where it is?



[GENERAL] table size/record limit

2004-10-20 Thread Dennis Gearon
I am designing something that may be the size of yahoo, google, ebay, etc.
Just ONE many to many table could possibly have the following 
characteristics:

   3,600,000,000 records
   each record is 9 fields of INT4/DATE
Other tables will have about 5 million records of about the same size.
There are lots of scenarios here to lessen this.
BUT, is postgres on linux, maybe necessarily a 64 bit system, capable of 
this? And there'd be 4-5 indexes on that table.



[GENERAL] generic sql question

2004-10-20 Thread Dennis Gearon
My question: is it possible to speed up a query by doing preselects? What I'm working 
on could end up being a very large dataset. I hope to have 100-1000 queries per second 
(or more?), and if very large tables are joined with very large tables, I imagine that 
memory would get very full, overfull?
In the schema below, usrs own articles, articles are of article types, issues have 
dates and names, and issues_published has an issue id and sets of articles; an 
article can be in many issues.
So if I wanted to find articles of a certain article type, within a certain date range, 
that had actually been published, I believe that this query could find them, joining 
three tables and then doing the qualifications for date and type: (assume 
values in {} are escaped and proper syntax)
-
SELECT article_id
FROM issues_published, issues, articles
WHERE issues_published.article_id = articles.article_id
  AND
 issues_published.issue_id = issues.issue_id
  AND
 articles.article_type_id = {article_type_id desired}
  AND
 articles.date_written < {highest date}
  AND 
 articles.date_written > {lowest date};
 
But would the following reduce the size of the join in memory?

SELECT article_id
FROM (select *
 from articles
 where date_written < {highest date}
  AND 
 date_written > {lowest date} ) as articles_in_range, issues, issues_published

WHERE issues_published.article_id = articles_in_range.article_id
  AND
 issues_published.issue_id = issues.issue_id
  AND
 articles_in_range.article_type_id = {article_type_id desired}
-
CREATE TABLE usr (
   usr_id SERIAL NOT NULL,
   PRIMARY KEY (usr_id)
);
CREATE TABLE article_types (
   article_type_id SERIAL NOT NULL,
   article_type VARCHAR(40) NOT NULL,
   PRIMARY KEY (article_type_id)
);
CREATE TABLE articles (
   article_id SERIAL NOT NULL,
   article_type_id INT4 NOT NULL,
   author INT4 NOT NULL,
   body TEXT NOT NULL,
   date_written DATE NOT NULL,
   PRIMARY KEY (article_id)
);
CREATE TABLE issues (
   issue_id SERIAL NOT NULL,
   issue_title VARCHAR(40) NOT NULL,
   issue_date DATE NOT NULL,
   PRIMARY KEY (issue_id)
);
CREATE TABLE issues_published (
   issue_id INT4 NOT NULL,
   article_id INT4 NOT NULL,
   PRIMARY KEY (issue_id, article_id)
);
/*==*/
/*  Foreign Keys*/
/*==*/
ALTER TABLE articles
   ADD FOREIGN KEY (author) REFERENCES usr (usr_id);
ALTER TABLE articles
   ADD FOREIGN KEY (article_type_id) REFERENCES article_types (article_type_id);
ALTER TABLE issues_published
   ADD FOREIGN KEY (issue_id) REFERENCES issues (issue_id);
ALTER TABLE issues_published
   ADD FOREIGN KEY (article_id) REFERENCES articles (article_id);


Re: [GENERAL] table size/record limit

2004-10-20 Thread Dennis Gearon
Google probably is much bigger, and on mainframes, and probably Oracle or DB2.
But the table I am worried about is the one sized >= 3.6 GIGA records.
Tino Wildenhain wrote:
Hi,
Am Do, den 21.10.2004 schrieb Dennis Gearon um 1:30:
I am designing something that may be the size of yahoo, google, ebay, etc.
Just ONE many to many table could possibly have the following 
characteristics:

   3,600,000,000 records
   each record is 9 fields of INT4/DATE
Other tables will have about 5 million records of about the same size.
There are lots of scenarios here to lessen this.
BUT, is postgres on linux, maybe necessarily a 64 bit system, capable of 
this? And there'd be 4-5 indexes on that table.

Sure. Why not? 3...5mio records is not really a problem.
We had bigger tables with historic commercial transactions
(even on an old dual PIII/1000) with fine performance.
I bet however, yahoo, google at least are much bigger :-)
Regards
Tino




[GENERAL] connection or query affected

2004-10-12 Thread Dennis Gearon
please cc me
If I am using some server side language to access Postgres - php, 
python, perl, asp - and I make a connection, do the following actions 
affect the connection, or the individual query that contains them:

   turn off autocommit
   start transaction
   commit transaction
   SET schema


Re: [GENERAL] connection or query affected

2004-10-12 Thread Dennis Gearon
Steven Klassen wrote:
* Dennis Gearon [EMAIL PROTECTED] [2004-10-12 08:13:07 -0700]:

  turn off autocommit

Per connection.

  start transaction
  commit transaction

They're statements themselves that change the state of the
connection. You start a transaction, run your queries, and then
commit/rollback.

  SET schema

It depends if you're setting your search path for subsequent queries
which would be tracked per connection or you're actually prepending
the schema where the table exists in the query.
E.g.
$dbh->query("SET search_path='my_schema'");
- or -
$dbh->query('SELECT foo FROM my_schema.bar WHERE active');
Thanks, I meant the first of the two schema related queries above.


[GENERAL] flattened tables with normalized tables

2004-10-07 Thread Dennis Gearon
If I want to set up a dbase with normalized tables for inserts, and a 
flattened table for selects, am I going in the right direction for 
speeding up a busy site?

Also, if some of you are also doing this, how and how often do you do 
the SELECT from the normalized tables to the flattened table?
And, do you have to write a post trigger to get all the references to 
match up in the flattened table?
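One common shape for this is a trigger that maintains the flat copy on every insert. A minimal sketch with hypothetical table and column names (the dollar-quoted function body needs Postgres 8.0 or later; single-quoted bodies work on the versions discussed here, and modern Postgres, 9.3 and later, also offers CREATE MATERIALIZED VIEW / REFRESH MATERIALIZED VIEW for the same purpose):

```sql
-- normalized source tables (hypothetical)
CREATE TABLE customers (id serial PRIMARY KEY, name text NOT NULL);
CREATE TABLE orders (
    id serial PRIMARY KEY,
    customer_id int NOT NULL REFERENCES customers(id),
    total numeric NOT NULL
);

-- flattened table for fast selects
CREATE TABLE orders_flat (
    order_id int PRIMARY KEY,
    customer_name text NOT NULL,
    total numeric NOT NULL
);

-- trigger keeps the flat copy in sync on every insert
CREATE FUNCTION flatten_order() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_flat (order_id, customer_name, total)
    SELECT NEW.id, c.name, NEW.total
    FROM customers c WHERE c.id = NEW.customer_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_flatten AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE flatten_order();
```

UPDATE and DELETE on the source tables would need matching triggers; that bookkeeping is the usual cost of a flattened table.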

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] flattened tables with normalized tables

2004-10-07 Thread Dennis Gearon
Michael Fuhr wrote:
On Thu, Oct 07, 2004 at 10:07:47AM -0700, Dennis Gearon wrote:
If I want to set up a dbase with normalized tables for inserts, and a 
flattened table for selects, am I going in the right direction for 
speeding up a busy site?

Are you familiar with views?  If so, is there a reason not to use
them?  What kinds of queries are you making?  Are you experiencing
performance problems with queries on the normalized tables?  Have
you investigated whether those queries can be sped up?

Also, if some of you are also doing this, how and how often do you do 
the SELECT from the normalized tables to the flattened table?
And, do you have to write a post trigger to get all the references to 
match up in the flattened table?

General Bits had an article on materialized views a while back:
http://www.varlena.com/varlena/GeneralBits/64.php
http://www.varlena.com/varlena/GeneralBits/Tidbits/matviews.html
Before deciding on a solution, be sure you fully understand the
problem you're trying to solve.
I am just planning ahead. I hope to have the site I'm building mushroom into a high 
traffic site, and I want to be prepared. In the order that is necessary, with help 
from the list and probably the manual/books, I will throw:
tuning,
hardware selection,
dedicated hardware,
materialized views,
and whatever else is appropriate at the right time,
at the problem.
I will also do all the things that you suggest as well, before I try materialized 
views. I found the same article that you quoted at a different location, so I'm up to 
speed on that.
About regular views, how does that speed things up, other than the initial SQL 
interpretation of the view not needing to be done?
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] flattened tables with normalized tables

2004-10-07 Thread Dennis Gearon
Michael Fuhr wrote:
On Thu, Oct 07, 2004 at 09:08:18PM -0700, Dennis Gearon wrote:
About regular views, how does that speed things up, other than the initial 
SQL interpretation of the view not needing to be done?

I didn't mean to imply that views would speed things up -- I was
merely suggesting them as an alternative to your flattened table
if part of its purpose would be to simplify queries.  You might
want to perform some experiments to see if the performance gains
from a materialized view are worth the extra complexity.
I'll do the experiments! It'll probably be about 6 months out. I plan ahead :-)
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] flattened tables with normalized tables

2004-10-07 Thread Dennis Gearon
Bruno Wolff III wrote:
On Thu, Oct 07, 2004 at 22:35:50 -0600,
  Michael Fuhr [EMAIL PROTECTED] wrote:
On Thu, Oct 07, 2004 at 09:08:18PM -0700, Dennis Gearon wrote:
About regular views, how does that speed things up, other than the initial 
SQL interpretation of the view not needing to be done?
I didn't mean to imply that views would speed things up -- I was
merely suggesting them as an alternative to your flattened table
if part of its purpose would be to simplify queries.  You might
want to perform some experiments to see if the performance gains
from a materialized view are worth the extra complexity.

It may even turn out there aren't any performance gains from having a
materialized view. That will depend on the mix of operations in production.
Well, one particular query / view will probably draw from 11-15 tables. Several of 
those tables should have millions and millions of rows. However, as normalized as all 
the data is, and having used surrogate integer primary keys, most of the tables 
shouldn't be that big.
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] database constraints

2004-10-06 Thread Dennis Gearon
Use a post trigger function, ON UPDATE, INSERT which essentially has 
this in it:

if ( ISNULL(new.a) AND ISNULL(new.b) ) { RAISE NOTICE 'blah blah'; }
I work with PHP a lot, just a little plpgsql, so, the grammar may be 
wrong above.
Ben [EMAIL PROTECTED] wrote:
quote ---
If I have have the table:

create table foo
(
   a int references bar(id),
   b int references baz(id)
)
... how do I make sure one and only one of the columns a and b are 
non-null? Is it even reasonable?
/quote --
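For this particular requirement (exactly one of the two columns non-null), a CHECK constraint is simpler than a trigger, and unlike a RAISE NOTICE it actually rejects bad rows. A sketch using the table from the question:

```sql
CREATE TABLE foo
(
    a int REFERENCES bar(id),
    b int REFERENCES baz(id),
    -- exactly one of a, b must be non-null:
    -- the inequality is true only when one side IS NULL and the other is not
    CHECK ((a IS NULL) <> (b IS NULL))
);
```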

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] storage representation of DATE and TIME types

2004-10-05 Thread Dennis Gearon
please cc me as I am on digest:

What is the storage format of 'date'? Is it like a timestamp?
I want to know in order to choose representations in a table that will 
receive LOTS of reads with a WHERE clause that chooses dates, and TIMES, 
past a supplied reference.
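For what it's worth: in Postgres, date is stored as a 4-byte day count, timestamp as 8 bytes, and time with time zone as 12 bytes. You can check this on your own build with pg_column_size, which appeared in 8.1, so treat this as a sketch for versions newer than the ones discussed here:

```sql
SELECT pg_column_size(current_date)      AS date_bytes,       -- 4
       pg_column_size(current_timestamp) AS timestamp_bytes,  -- 8
       pg_column_size(current_time)      AS timetz_bytes;     -- 12 (time with time zone)
```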

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] compiler for functions

2004-09-28 Thread Dennis Gearon
The postgres engine saves the function bodies, at least, without removing the carriage 
returns.
Could it be changed to report the LINE within the function body that has the problem, 
based on the carriage return?
I don't have an editor that counts characters within a document, but even if I did, 
Postgres doesn't consistently use that to say where the error is. It sometimes uses 
characters within a line, sometimes characters within the document.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] difference 7.3.xx vs 7.4.xx

2004-09-21 Thread Dennis Gearon
I just read the release notes for 7.4 where it said (in plpgsql):
   allows declaration of record type without %ROWTYPE
So, before that, there was no way to have a record returned of arbitrary 
fields from a User Defined Function?
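A minimal sketch of what 7.4 allows: a generic record variable not tied to any table's %ROWTYPE, whose shape is set by whatever SELECT fills it (the single-quoted function body is the pre-8.0 style used elsewhere in this thread):

```sql
CREATE OR REPLACE FUNCTION first_row() RETURNS text AS '
DECLARE
    r record;   -- arbitrary shape, determined by the SELECT below
BEGIN
    SELECT INTO r 1 AS n, ''hello'' AS msg;
    RETURN r.msg;
END;
' LANGUAGE 'plpgsql';
```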

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] basics on User Defined Functions

2004-09-20 Thread Dennis Gearon
Is this the correct way to return values in a record from a UDF in 7.4.+ ?
Is it ok to modify the arguments as in 'arg_int'?
-- return type should be RECORD
--
CREATE OR REPLACE FUNCTION test_function( int )
RETURNS RECORD AS '
DECLARE

   arg_int ALIAS FOR $1;
   var_record_out  record;
   var_message_out text;
BEGIN
   var_message_out := ''This was successful'';
   -- yes, plpgsql lets you reassign an aliased argument
   arg_int := arg_int * 3;
   SELECT INTO var_record_out
   arg_int AS intx3,
   var_message_out AS message;
   RETURN var_record_out;
END;
' LANGUAGE 'plpgsql';
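One detail the question doesn't cover: a function declared RETURNS RECORD must be called with a column definition list at the call site. A sketch, assuming the function multiplies its argument by three and returns a message as intended above:

```sql
SELECT * FROM test_function(7) AS t(intx3 int, message text);
-- expected: intx3 = 21, message = 'This was successful'
```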

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Release of 8.0.0

2004-09-10 Thread Dennis Gearon
WOW, that is the most comprehensive, 'nearing to commercial capability' update of 
Postgres (and any OTHER OSS project) that I've seen in all my readings about and 
dealings with Postgres. I would be VERY surprised if Postgres doesn't win some major 
awards (and accounts) now that this has happened.
If there were some way I could send all of the dev group to some remote, computer free 
island to relax, I would.
FANTASTIC job you guys.
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] test

2004-09-08 Thread Dennis Gearon
I don't seem to see my messages in the digest. Is this a mail setting? I thought that 
was only blockable in the single message mode.
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] functions/stored procedures

2004-08-31 Thread Dennis Gearon
please CC me.
Are the UDF/functions/procedures written by users available in all schemas in a DB?
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] connection parameters

2004-07-03 Thread Dennis Gearon
Please CC me.
When I connect to a database from PHP script called from the web via Apache, running on the same 
machine as the DB, that's server name localhost, right?
I can't seem to get a php script to connect with the same parameters that I can use 
using telnet and 'psql'.

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] username length character limits?

2004-07-03 Thread Dennis Gearon
Please CC me:
If I create the user 'web_user'
with password 'password'
I can connect using 'psql' just fine.
If I create the user 'D1Khb2g5m7FGk_web_user'
with password 'password'
I CANNOT connect using 'psql', I get authentication error.
-
ANYONE have any ideas why?
-
PG_VERSION says it's 7.3. I don't know how to find out any more accurately the 
subversion of that.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] username length character limits?

2004-07-03 Thread Dennis Gearon
I now have tried an equal length name, and it logs in fine:
't012345678901_web_user' with pword 'password' works fine.
The problem below is consistent. I can drop and recreate the user, and it always cannot 
be logged into using that name. I can successfully alter the user, so the system is 
finding it; I just can't authenticate into it.
Dennis Gearon wrote:
Please CC me:
If I create the user 'web_user'
with password 'password'
I can connect using 'psql' just fine.
If I create the user 'D1Khb2g5m7FGk_web_user'
with password 'password'
I CANNOT connect using 'psql', I get authentication error.
-
ANYONE have any ideas why?
-
PG_VERSION says it's 7.3. I don't know how to find out any more 
accurately the subversion of that.



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] username length character limits?

2004-07-03 Thread Dennis Gearon
another name that does not work is 'H1q2W3e4R5_web_user'.
It seems to not like many alternations between numbers and letters.
Dennis Gearon wrote:
Please CC me:
If I create the user 'web_user'
with password 'password'
I can connect using 'psql' just fine.
If I create the user 'D1Khb2g5m7FGk_web_user'
with password 'password'
I CANNOT connect using 'psql', I get authentication error.
-
ANYONE have any ideas why?
-
PG_VERSION says it's 7.3. I don't know how to find out any more 
accurately the subversion of that.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] username length character limits?

2004-07-03 Thread Dennis Gearon
tried it with two underscores and a short name:
   H1q2W_web_user
with only one underscore and a long name:
   H1q2W3e4R5t6Y7u8Iweb_user
with only one underscore and a short name:
   H1q2Wweb_user
with no underscore and a long name:
   H1q2W3e4R5t6Y7u8Iwebuser
with no underscore and a short name:
   H1q2W3webuser

NONE OF THESE NAMES WORK AS USER NAMES WITH THE PASSWORD 'password'.

I have no idea why.
Where does it say what is the legal:
   length
   characters
   character set (?)
for user names?
Dennis Gearon wrote:
Please CC me:
If I create the user 'web_user'
with password 'password'
I can connect using 'psql' just fine.
If I create the user 'D1Khb2g5m7FGk_web_user'
with password 'password'
I CANNOT connect using 'psql', I get authentication error.
-
ANYONE have any ideas why?
-
PG_VERSION says it's 7.3. I don't know how to find out any more accurately the 
subversion of that.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] username length character limits?

2004-07-03 Thread Dennis Gearon
Stephan Szabo wrote:
On Sat, 3 Jul 2004, Dennis Gearon wrote:

Stephan Szabo wrote:
On Sat, 3 Jul 2004, Dennis Gearon wrote:

Please CC me:
If I create the user 'web_user'
with password 'password'
I can connect using 'psql' just fine.
If I create the user 'D1Khb2g5m7FGk_web_user'
with password 'password'
I CANNOT connect using 'psql', I get authentication error.

Are you sure that you're not just running into a case problem
with case folded names (non-quoted names from sql statements) versus
non-case folded names (programs which treat their arguments as quoted like
psql).
I am putting the exact same string into both situations:
	A/ creating the username

How did you create the user.  Did you use the shell script or a create
user statement from inside a session in the database?

B/ trying to log in.
If the database is not going to respect case in one, it shouldn't
respect case in the other, right?

IIRC, psql (and the createuser shell script and such) treat it as if you
had double quoted its argument because of the way shells handle quotes
which would necessitate something like 'FOO' to use a quoted uppercase
name. So, if you say psql FOO -U BAR, you're saying log into database
FOO as user BAR.

I created the user from inside of a psql session. 

And I cannot either connect from invoking a different psql session from the shell,
nor from PHP using it's compiled in c library for postgres.
It seems it's another gotcha with this database. I wouldn't have thought it would be so 
difficult to insert upper(whatever was entered) into the database, and validate it the 
same way.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] username length character limits?

2004-07-03 Thread Dennis Gearon
Stephan Szabo wrote:
On Sat, 3 Jul 2004, Dennis Gearon wrote:

Stephan Szabo wrote:

IIRC, psql (and the createuser shell script and such) treat it as if you
had double quoted its argument because of the way shells handle quotes
which would necessitate something like 'FOO' to use a quoted uppercase
name. So, if you say psql FOO -U BAR, you're saying log into database
FOO as user BAR.

I created the user from inside of a psql session.
And I cannot either connect from invoking a different psql session from the shell,
nor from PHP using it's compiled in c library for postgres.
It seems it's another gotcha with this database. I wouldn't have thought
it would be so difficult to insert upper(whatever was entered) into the
database, and validate it the same way.

Create user Foo and Create user "Foo" are creating users with different
names. Since we want to allow both names to exist at the same time and for
you to refer to either, there needs to be a mechanism for doing so.  I
don't know about PHP, but as I said above, psql does not treat its
arguments as normal identifiers (thus lowercasing them) because it was
decided to be simpler for people to refer to either user.  For the former,
it is foo and for the latter, Foo.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
   http://archives.postgresql.org

So, if I WANT a user name with case:
((A))
Creating user in psql, it will respect case automatically?
To OPEN psql using that user, I have to put it in double quotes?
(to those new to postgres, the '#' means superuser is logged in)
---
database_name# create user CasedNameForUser with password 'password';
database_name#\q
user/system_prompt psql -U CasedNameForUser -d dbase -p port
password:
Congratulations statements 
dbase_name_as_prompt
###OR###
((B))
Creating user in psql, I will have to use double quotes?
To OPEN psql using that user, , it will respect case automatically?
(to those new to postgres, the '#' means superuser is logged in)
---
database_name# create user CasedNameForUser with password 'password';
database_name#\q
user/system_prompt psql -U CasedNameForUser -d dbase -p port
password:
Congratulations statements 
dbase_name_as_prompt
and the converse, if I WANT a user name withOUT case:
What then?
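For the archive, the behaviour is closest to (B): in SQL statements an unquoted name is folded to lower case, while psql's -U argument is taken as if it were already quoted. A hedged sketch:

```sql
-- inside psql as superuser:
CREATE USER CasedNameForUser WITH PASSWORD 'password';    -- unquoted: folded to casednameforuser
CREATE USER "CasedNameForUser" WITH PASSWORD 'password';  -- quoted: case preserved exactly
```

From the shell, psql -U casednameforuser then connects as the first user and psql -U CasedNameForUser as the second. For a name without case headaches, create it unquoted and connect with the all-lowercase spelling.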
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] ~Strange Operators~

2004-06-30 Thread Dennis Gearon
Tom Lane wrote:
Chris Gamache [EMAIL PROTECTED] writes:
~=~
~=~
~~
~~
~=~
~~

They aren't familiar to me, but I can tell that they are text comparison
operators. The details of what and how they compare is a mystery to me! 

Those are the non-locale-aware operators that Peter added to support
LIKE.   I'm surprised to hear they aren't documented.
regards, tom lane
What are they, just binary?
BTW, isn't that what locale C is, binary?
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Internationalization

2004-06-30 Thread Dennis Gearon
Tom Lane wrote:
Dennis Gearon [EMAIL PROTECTED] writes:
Tom Lane wrote:
The indexes on the shared system tables (eg, pg_database) are the only
issue here.  One possible solution is to require that no locale-aware
datatypes ever be used in these indexes.  I think right now this is true
because name doesn't use locale-aware sorting; but we'd have to be
careful not to break the restriction in future.
Tom what about table names? Isn't it part of the SQL spec to be able
to set table names to other langauges other than English?

[shrug...]  So which language/encoding would you like to force everyone
to use?
The issue is not really whether you can create a database name that
looks like however you want.  The issues are (a) what it will look like
to someone else using a different encoding; and (b) how it will sort if
you ask for select * from pg_database order by datname, relative to
someone else's database name that he thinks is in a different locale and
encoding than you think yours is.
AFAICT the Postgres user community is not ready to accept a thou shalt
use Unicode decree, so I don't think that mandating a one-size-fits-all
answer is going to fly.
regards, tom lane
So for now, my database is set up as:
show all shows
--
server encoding SQL_ASCII 

I didn't see anything that said what the LC_COLLATE and LC_TYPE settings were when 
initdb was done.
How can I find that out?
in postgresql.conf
--
LC_MESSAGES = 'C'
LC_MONETARY = 'C'
LC_NUMERIC = 'C'
LC_TIME = 'C'
So I have what:
8 bit encoding with standard ASCII?
I can put what languages in it?
It will sort in standard ASCII order, and all non-English characters will sort last?

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] backups

2004-06-30 Thread Dennis Gearon
Bruno Wolff III wrote:
On Wed, Jun 30, 2004 at 18:23:08 -0500,
  [EMAIL PROTECTED] wrote:
What do other sites with mondo databases do?

There have been comments from people using storage systems that they
can freeze the storage system and get a consistent snapshot of the
file system. This can be used to do a restore. It will look just like
postgres crashed when coming back up.
If you find one of the posts about this in the archives the poster may
have more details on their storage systems.
Just shutting it down doesn't work?
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] unicode and sorting(at least)

2004-06-24 Thread Dennis Gearon
All of the ISO 8xxx encodings and LATINX encodings can handle two languages, English and at least one other. Sometimes they can handle several languages besides English, and are actually designed to handle a family of languages. 

The ONLY encodings that can handle a significant number of multiple languages and 
character sets are the ISO/UTF/UCS series (UCS is giving way to UTF). In fact they 
can handle every human language ever used, plus some esoteric ones postulated, and 
there is room for future languages.
So, for a column to handle multiple languages/character sets, the languages/character 
sets have to be in the family that the database's encoding was defined for (in Postgres 
currently; choosing encoding down to the column level is available on several 
databases and is in the SQL spec), OR the encoding for the database has to be UTF8 
(since we don't have UTF16 or UTF32 available).
Right now, the sorting algorithm and functionality is fixed for the database cluster, which contains databases of any kind of encoding. It really does not do much good to have a different locale than the encoding, except for UTF8, which as an encoding is language/character-set neutral, or SQL_ASCII with an ISO8xxx or LatinX encoding. Since a running instance of Postgres can only be connected to one cluster, a database engine has FIXED sorting, no matter what language/character-set encoding is chosen for the database. 

It so happens that most non UTF encodings are designed to sort well in an extended 
ascii/8 bit environment, which is what the ISO8 and LatinX encodings actually are. 
I'm not sure that it's perfect though. So, if SQL_ASCII is chosen for the LOCALE, and 
the encoding is ISO8xxx or LATINx, it will probably sort OK.
UTF8/16/32 is built the same way. However, this only applies per character, and only 
works painlessly on UTF32, which has fixed width characters. UTF8/16 OTOH, have 
variable length characters (in multiples of 8 bits). Since SQL_ASCII sorts in a binary 
fashion, UTF8/16 won't sort correctly under SQL_ASCII locale, I believe.
Tatsuo Ishii wrote:
On Wed, 23 Jun 2004, Dennis Gearon wrote:

This is what has to be eventually done:(as sybase, and probably others do it)
	http://www.ianywhere.com/whitepapers/unicode.html
Actually, what probably has to be eventually done is what's in the SQL
spec.
Which is AFAICS basically:
Allow multiple encodings
Allow multiple character sets (within an encoding)

Could Please explain more details for above. In my understanding a
character set can have multiple encodings but...
--
Tatsuo Ishii

Allow one or more collations per character set
Allow columns to specify character set and collation
Allow literals in multiple character sets
Allow translations and encoding conversions (as supported)
Allow explicit COLLATE clauses to control ordering and comparisons.
Handle identifiers in multiple character sets
plus some misc things like allowing sets that control the default
character set for literals for this session and such.
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] langauges, locales, regex, LIKE

2004-06-24 Thread Dennis Gearon
Richard Huxton wrote:
Dennis Gearon wrote:
If I've read everything right, in order to get:
multiple languages on a site
with the functionality of ALL of:
REGEX
LIKE
Correctly sorted text
A site would have to:
create a cluster for every language needed
run a separate database instance for every language
and have the database instances each have their own port
and use 8 bit encoding for that specific language

You'd need a separate database, not a separate cluster. Each database 
can then have their own encoding and locale.
If I wanted all the languages to be running concurently, I can't switch clusters 
that the database is connected to on the fly, right? The database stays in the cluster 
it was started in, right? So, if that's true, then I need separate database instances 
if I want truly accurate sorting.

because:
Sorting is fixed at cluster/directory creation per single
database instance

To clarify, a cluster is a group of databases that share user logins and 
can all be accessed via the same server.

And LIKE only works on C Locale with an eight bit encoding
and sorting (MAYBE?) works only on 8 bit encoding
when using C Locale.

You can sort, and I believe use LIKE on UTF etc. However, index use is a 
different matter.
Yup, there is no facility to declare character sets for indexes.

If anyone can correct me on this, I'd love to hear it.
Boy, the old LOCALE system has really got to go someday.

The issue isn't so much the difficulty of supporting multiple locales 
(AFAIK). I believe it's more to do with interactions. If you have a 
table containing multiple languages in the same column, what does it 
mean to sort that table? Do you sort by language-name then by languages? 
  If you don't, what rules do you follow?

What happens if we compare different languages?
Does fr/fr:a == en/gb:a?
Does en/gb:hello == en/us:hello?
Messy, isn't it?
Without language-specific characters, they will sort exactly the same.
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] langauges, locales, regex, LIKE

2004-06-24 Thread Dennis Gearon
John Sidney-Woollett wrote:
For what it's worth, we have a unicode 7.4.1 database which gives us the 
sorting and searching behaviour that we expect (with the exception of 
the upper and lower functions). We access the data via jdbc so we don't 
have to deal with encoding issues per se as the driver does any 
translation for us.

Currently we don't use any LIKE statements, but if we did, and wanted 
them optimized then we'd use the appropriate OP Class when defining the 
index. We also don't use any REGEX expressions. And we'll shortly be 
experimenting with tsearch2...

List of databases
Name  |  Owner   | Encoding
---+--+--
test  | postgres | UNICODE
Setting the psql client encoding to Latin1 and inserting the following 
data...

# select * from johntest;
id | value
+---
 1 | test
 2 | tést
 3 | tèst
 4 | taste
 5 | TEST
 6 | TÉST
 7 | TÈST
 8 | TASTE
(8 rows)
and then extracting the data in sorted order works as we would expect
# select * from johntest order by value (no index on the value field)
id | value
+---
 8 | TASTE
 5 | TEST
 7 | TÈST
 6 | TÉST
 4 | taste
 1 | test
 3 | tèst
 2 | tést
(8 rows)
however, applying the UPPER function to the data does not work as 
expected, problem with ids 6,7,3,2 - should be ordered (3,7 or 7,3) , 
(6,2 or 2,6)

# select * from johntest order by upper(value);
id | value
+---
 4 | taste
 8 | TASTE
 1 | test
 5 | TEST
 7 | TÈST
 6 | TÉST
 3 | tèst
 2 | tést
(8 rows)
using a LIKE operation also works as expected (again no index on value 
field)

# select * from johntest where value like 't%';
id | value
+---
 1 | test
 2 | tést
 3 | tèst
 4 | taste
(4 rows)
LIKE works, but it can't use an index, and so would have horrible performance vs. the 
situation where it CAN use an index. I believe this is how Postgres is working now.
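The usual fix for this, mentioned earlier in the thread, is a second index built with a pattern operator class so that anchored LIKE queries can use an index even in a non-C locale. A hedged sketch against the johntest table above (the pattern operator classes do exist in 7.4-era Postgres):

```sql
-- ordinary index: serves =, range scans, and ORDER BY under the database locale
CREATE INDEX johntest_value_idx ON johntest (value);

-- pattern index: serves LIKE 'prefix%' regardless of locale
-- (use varchar_pattern_ops instead if the column is varchar)
CREATE INDEX johntest_value_like_idx ON johntest (value text_pattern_ops);

SELECT * FROM johntest WHERE value LIKE 't%';  -- can now use the pattern index
```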
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] hackers dudes

2004-06-24 Thread Dennis Gearon
So, do you guys ever get together physically with a white board?
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] JDBC driver inserting into a table with Bytea type gets out of

2004-06-21 Thread Dennis Gearon
AFAIK, every database uses some sort of 'large object interface'. Surely 
you could still move between DBs even using BLOB fields?

Sailer, Denis (YBUSA-CDR) [EMAIL PROTECTED] wrote:
quote 
---
There was a posting in the mailing list archives that I can't find 
anymore.  The web site right now is presenting a list of items from a 
search in a reasonable amount of time, but takes 5-10 minutes to 
retrieve the detail for each one as they are clicked.  Rather frustrating. 

This person was getting out of memory conditions inserting rows when the 
data in the bytea field was larger than approx.  1400 Kb.  This person 
had posted a fix to the group that seemed to work for him.  The inserts 
were coming from a java application using the JDBC driver.

I just tried the latest JDBC driver and the problem still exists.  We 
have no plans to convert to using the large object interface.  This ties 
our code to PostgreSQL and would go against our need to keep the code 
generic enough to work against many different databases.

Is this issue acknowledged as a bug?
If yes, when is this issue going to be implemented in the code base?
If it is already fixed, what are the steps to implement the changes to 
my 7.4.2 installation?  Build my own JDBC driver?

/quote 
 

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] C_LOCALE vs. UTF8

2004-06-14 Thread Dennis Gearon
Hmmm...
There was a post running about C_LOCALE being the only way to use 
LIKE, and I put something similar to the below in a comment to it. It 
never showed up on the list. Oh well.

If I wanted a multinational database capable of many languages, and 
wanted the most functionality of sorts, order bys, indexes, LIKE, etc., 
what is the best encoding?
I would have THOUGHT it would be UTF8, but maybe C_LOCALE?

Is there a chart somewhere showing the main differences between using 
C_LOCALE and any other encoding?  A nice simple table in the manual 
would be really nice. If you give me the particulars, I'll write the table.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] generic insert into table

2004-06-07 Thread Dennis Gearon
please CC me, I am on digest
-
I have the following code from an application that is 'mysql_centric'. I 
want to make it generic across all databases, if it's possible, 
especially postgres :-)

mysql version:
   INSERT INTO calendar_setting SET setting='colorEvent',value='#C2DCD5';
There is no data in this table at this time.
Isn't this the same as:
   INSERT INTO calendar_setting( 'colorEvent' ) VALUES ( '#C2DCD5');
would this work on all db's?
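A hedged note for the archive: the portable form is close to the author's guess, but the column list must name the columns (unquoted) and include both of them:

```sql
-- portable across Postgres, MySQL, and other SQL databases
INSERT INTO calendar_setting (setting, value)
VALUES ('colorEvent', '#C2DCD5');
```

One caveat: value is a reserved word in the SQL standard, so some databases may require it to be quoted as an identifier.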
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] GRANT question

2004-06-03 Thread Dennis Gearon
How about using phpPgAdmin? Or something like that?
Also, is there a SQL statement that will return all the possible 
privileges? Use that in a subselect and then grant WHERE?
Chris Ochs [EMAIL PROTECTED] wrote:
quote 
--

Is there a shortcut to grant all privileges on a schema and it's objects to
a user  without having to issue GRANT statements for each individual object
in the schema?  Given all the possible types of objects I doubt it but
thought I would ask just in case there was a way to do it.
 

No. People have asked about similar ways to do lots of grants at once,
but I haven't seen anyone claim to be working on this.
/quote ---
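For readers on modern Postgres (9.0 and later; not available in the versions discussed in this thread), there is now a shortcut, sketched as:

```sql
GRANT USAGE ON SCHEMA my_schema TO some_user;
GRANT ALL PRIVILEGES ON ALL TABLES    IN SCHEMA my_schema TO some_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA my_schema TO some_user;
GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA my_schema TO some_user;

-- and for objects created later:
ALTER DEFAULT PRIVILEGES IN SCHEMA my_schema
    GRANT ALL PRIVILEGES ON TABLES TO some_user;
```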
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Interesting Unique Index Needed - Functional?

2004-05-16 Thread Dennis Gearon
Given:
CREATE TABLE Usrs(
usr_id SERIAL NOT NULL PRIMARY KEY,
usr  VARCHAR(64) NOT NULL UNIQUE
);
CREATE TABLE Emails(
email_id SERIAL NOT NULL PRIMARY KEY,
email  VARCHAR(128) NOT NULL UNIQUE
);
CREATE TABLE EmailTypes(
email_type_id SERIAL NOT NULL PRIMARY KEY,
email_type  VARCHAR(64) NOT NULL UNIQUE,
pri BOOL NOT NULL,
multiples_allowed BOOL NOT NULL
);
CREATE TABLE UsrEmails(
usr_id INT4 NOT NULL,
email_id INT4 NOT NULL,
email_type_id INT4 NOT NULL,
validated BOOL NOT NULL DEFAULT 'F'::BOOL,
validation_hash VARCHAR(64) NOT NULL
);
ALTER TABLE UsrEmails
   ADD CONSTRAINT Usrs_11_1M_UsrEmails FOREIGN KEY (usr_id) REFERENCES 
Usrs(usr_id);
ALTER TABLE UsrEmails
   ADD CONSTRAINT Emails_11_0M_UsrEmails FOREIGN KEY (email_id) 
REFERENCES Emails(email_id);
ALTER TABLE UsrEmails
   ADD CONSTRAINT EmailTypes_11_0M_UsrEmails FOREIGN KEY 
(email_type_id) REFERENCES EmailTypes(email_type_id);

INSERT INTO Usrs( usr ) VALUES( 'John' );
INSERT INTO Emails( email ) VALUES( 'some email one' );
INSERT INTO Emails( email ) VALUES( 'some email two' );
INSERT INTO Emails( email ) VALUES( 'some email three' );
INSERT INTO Emails( email ) VALUES( 'some email four' );
INSERT INTO Emails( email ) VALUES( 'some email five' );
INSERT INTO EmailTypes( email_type, pri, multiples_allowed ) VALUES( 
'home', 'T'::BOOL, 'N'::BOOL );
INSERT INTO EmailTypes( email_type, pri, multiples_allowed) VALUES( 
'work', 'F'::BOOL, 'Y'::BOOL );
INSERT INTO EmailTypes( email_type, pri, multiples_allowed ) VALUES( 
'extra_contact', 'T'::BOOL, 'Y'::BOOL );

Now, for a little business logic:
   1/ emails are entered by client, choosing which type, and having to 
supply at least the primary type.
   2/ At first UsrEmails( validated ) = 'FALSE' and the validation_hash 
is some 160-bit number using the newer hash type.
   3/ The usual, 'send a reply to this email or if you are computer 
illiterate, click on this link' validation message gets sent out for 
each email entered. The hash is embedded in the subject or GET 
parameters as usual.
   4/ User can have MULTIPLE emails PENDING (validated = 'FALSE') for 
any of the EmailTypes, but only ONE email of any type which has 
EmailTypes( multiples_allowed ) = 'FALSE' AND UsrEmails( validated ) = 'TRUE'

How can I enforce number four, i.e.
   How can I have a unique index on UsrEmails( usr_id, email_type_id ) 
where EmailTypes( multiples_allowed ) = 'FALSE' for that type AND 
UsrEmails( validated ) = 'TRUE'?
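Note to future readers: a partial unique index covers the validated-rows part of this, a sketch assuming the UsrEmails table above. The caveat is that a partial index predicate can only reference columns of the indexed table, so the multiples_allowed flag from EmailTypes would have to be denormalized onto UsrEmails or enforced with a trigger.

```sql
-- Unique only among validated rows; pending (validated = FALSE) rows
-- fall outside the index, so multiple pending emails stay allowed.
CREATE UNIQUE INDEX usr_email_one_validated
    ON UsrEmails (usr_id, email_type_id)
    WHERE validated;
```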
  



Re: [GENERAL] one to many

2004-05-16 Thread Dennis Gearon
Peter Eisentraut wrote:
Dennis Gearon wrote:
 

How do I set up a one to many relationship in Postgres, (any DB for
that matter.)
   

Read about foreign keys:
http://www.postgresql.org/docs/7.4/static/ddl-constraints.html#DDL-CONSTRAINTS-FK
 

That only takes care of the case where the PARENT is deleted/updated, if I read it 
right. I'm trying to handle the case where the CHILD is deleted or 
updated, i.e. preserving the 1-to-many part of the relationship.



Re: [GENERAL] one to many

2004-05-16 Thread Dennis Gearon
Stephan Szabo wrote:
On Sun, 16 May 2004, Dennis Gearon wrote:
 

CC me please.
How do I set up a one to many relationship in Postgres, (any DB for that
matter.)
I.E., if a delete or update of a child table causes a row in the parent
table to no longer refer to any rows in the child table, to either cause
Postgres to error out or delete the parent? I can see it does it for
when a parent is updated or deleted.
   

I don't think there's any built-in direct way to do this right now, but
you could probably build triggers that would do it for you (the current
foreign key triggers might give a starting point).  You'd probably also
want to use CREATE CONSTRAINT TRIGGER to be able to defer the trigger
(similarly to how the deferred foreign key checks work).
 

I didn't know you could set up triggers to be deferred! AWESOME! That 
makes for some additional flexibility that I could use.

Thank you very much Stephan. I was beginning to consider TRIGGERS as 
the solution; now I know that they will work.
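Note to future readers: a deferred constraint trigger along these lines might look like the following sketch. The parent/child table and column names are hypothetical, and the check only fires at commit time, so a delete-then-reinsert within one transaction does not trip it.

```sql
-- Raise an error if removing or re-pointing a child row would leave
-- its old parent with no children at all.
CREATE FUNCTION check_parent_has_child() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM child WHERE parent_id = OLD.parent_id) THEN
        RAISE EXCEPTION 'parent % would be left with no children', OLD.parent_id;
    END IF;
    RETURN NULL;  -- return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER child_keeps_parent_nonempty
    AFTER DELETE OR UPDATE ON child
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_parent_has_child();
```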



[GENERAL] bytea

2004-05-11 Thread Dennis Gearon
when bytea, text, and varchar(no limit entered) columns are used, do 
they ALWAYS use an extra table/file? Or do they only do it after a 
certain size of input?

Also, if I wanted to put a *.pdf file in a bytea column, what functions 
do I use to escape any characters in it?



Re: [GENERAL] bytea

2004-05-11 Thread Dennis Gearon
I forgot, please CC me, I am on digest.
Dennis Gearon wrote:
when bytea, text, and varchar(no limit entered) columns are used, do 
they ALWAYS use an extra table/file? Or do they only do it after a 
certain size of input?

Also, if I wanted to put a *.pdf file in a bytea column, what 
functions do I use to escape any characters in it?





Re: [GENERAL] bytea

2004-05-11 Thread Dennis Gearon
Thanks for all the answers everybody, but I need to know also an answer 
to the other question:

Does the bytea make its own files automatically for large objects?

Also, how about backups of tables having bytea columns?

Jonathan Bartlett wrote:

Also, if I wanted to put a *.pdf file in a bytea column, what functions
do I use to escape any characters in it?
   

What programming language are you using?

In Perl, you do something like:

$sth->bind_param(1, $file_data, DBI::SQL_BINARY); # DBI::SQL_BINARY is
deprecated, but it works

In PHP you do:

$file_data = pg_escape_bytea($file_data);

$db->query("insert into blah(blahh) values ('${file_data}'::bytea)");

To retrieve the info in Perl, it's just a regular fetchrow()

my ($file_data) = $sth->fetchrow();

In php, you have to run stripcslashes() on the data.

list($file_data) = $query->fetchrow();
$file_data = stripcslashes($file_data);
Jon
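Note to future readers: when moving binary data through a text representation like this, PostgreSQL's built-in encode()/decode() functions are another option on the server side, e.g.:

```sql
-- decode() turns base64 text into bytea; encode() renders bytea back
-- as text. This round-trip yields the string 'Hello'.
SELECT encode(decode('SGVsbG8=', 'base64'), 'escape');
```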

 



[GENERAL] graphical ERD tool

2004-05-07 Thread Dennis Gearon
OK,
   I've given up on the tool I'm using. Anyone recommend a good, 
graphical ERD CASE tool for postgres? I'm on windblowsXP.



Re: [GENERAL] graphical ERD tool

2004-05-07 Thread Dennis Gearon
I also like the looks of Mogwai. I am using Dezign, and it's OK. It used 
to be better in the last version. Now it can't print anything more than 
10 entities on my machine, and that's with 1 GB of RAM at 1.8 GHz. It does 
a JPEG in 0.1 seconds, so what its printing problem is, I don't know.

Anyway, I am mostly interested in original design, but importation will 
be some time in the future, I'm sure. I will look at Mogwai and hope 
they get importation working soon.

ERWin looks really good, as does CASE2. However, CASE2 seems mostly for 
maintenance and not original design.

Dann Corbit wrote:

-Original Message-
From: Dennis Gearon [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 07, 2004 3:01 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] graphical ERD tool

OK,
   I've given up on the tool I'm using. Anyone recommend a good, 
graphical ERD CASE tool for postgres? I'm on windblowsXP.
   

What are you using?

I like ERWin/ERX, but it is not specifically tailored to PostgreSQL
(e.g. does not understand sequences, etc.).
So there are lots of manual steps after your bare SQL generation.
I could not get DBManager to work even with SQLite, but I think some
others like that one.
I downloaded Case Studio to try that, and also was unable even to
import.
I tried Visual Case 2, but that did not work for me, even though
PostgreSQL is listed as supported.
This looked real promising:
http://mogwai.sourceforge.net/
But when I try to reverse engineer PostgreSQL tables, the list of tables
is empty, even though the connection seems fine.
This is nowhere close to ready:
http://sourceforge.net/projects/pydbdesigner/
Don't bother:
http://sourceforge.net/projects/yadbd/
 




