[GENERAL] When is committed data available

2011-05-26 Thread Fredric Fredricson

Hi,
I wonder when committed data becomes available to other connections, or 
more specifically whether there can be a delay between COMMIT returning 
successfully and the data appearing in SELECTs made by other connections.


A more detailed description of my problem:

I use PostgreSQL as the backend to a REST service. The REST service is 
written in PHP and runs on an Apache server. For all reads I use a 
persistent connection (PHP function pg_pconnect()) and for all write 
operations I create a new connection for each call (PHP function 
pg_connect()).
What I think I see now and then is that a client makes a call to update 
the database, and after the call has returned the client immediately 
(20-150 ms later) accesses the database again, only to find that it looks 
like the update was never made. There are a lot of triggers involved, and 
some have (intentional) side effects such as inserting into other tables 
and stuff. Later investigation reveals that the update was indeed made.


Am I totally barking up the wrong tree here, or could this happen? And if 
it can, is there a reliable workaround that does not involve waiting X 
ms and hoping for the best?


These are the config variables that I assume may have something to do 
with it:

#fsync = on
#synchronous_commit = on
#wal_sync_method = fsync
#wal_writer_delay = 200ms
#commit_delay = 0
(all default values)
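
For reference, a quick way to confirm the live values from any session 
(just a sanity check; these are the same settings listed above):

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('fsync', 'synchronous_commit', 'wal_sync_method',
                'wal_writer_delay', 'commit_delay');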

Regards,
Fredric

PS. I realize this could be an HTTP caching problem, but I have spent 
some time investigating that and I am pretty sure it is not.




Re: [GENERAL] Auto-Increment in Postgres

2011-02-11 Thread Fredric Fredricson

On 02/11/2011 12:13 PM, Adarsh Sharma wrote:

Dear all,


I have an integer column in a Postgres database table. Let's say the 
column has the values below:



1
2
3
4
5
6
7
8
9
10


Now if I delete some rows, say where id = 3, 5 and 8, the data looks 
like this:

1
2
4
6
7
9
10

I want the ids to be:
1
2
3
4
5
6
7
and the next row inserted should get id 8.

I follow these steps:

alter table meta_test drop column metadataid;
alter table meta_test add metadataid serial;

But this adds the column at the end, and I want it as the primary key.

Please advise how to achieve this.


Thanks & best Regards
Adarsh Sharma
1. The SERIAL data type will not give you what you want. SERIAL will 
increment by one for each INSERT regardless of the content of your table 
(and, specifically, the value of your "metadataid" field). SERIAL does 
not look for unused ids; if you need that you should probably write a 
trigger procedure to handle it.

2. SERIAL does not make the field unique. To do that, use "SERIAL UNIQUE".

3. ALTER TABLE tablename ADD PRIMARY KEY (metadataid) ;

4. Once you have added a serial column with ALTER TABLE, a sequence is 
created that starts counting at 1, so the next insert will add a row with 
metadataid = 1. If you have a UNIQUE or PRIMARY KEY constraint this will 
fail if there already is a row with metadataid set to 1. SERIAL has no 
magic that handles this.
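
(If you do hit point 4, one way out - a sketch, assuming the meta_test 
table from your mail and that it is non-empty - is to bump the sequence 
past the current maximum:

SELECT setval(pg_get_serial_sequence('meta_test', 'metadataid'),
              (SELECT max(metadataid) FROM meta_test));

After that, the next insert gets max + 1.)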


Useful resources:
http://www.postgresql.org/docs/8.4/interactive/datatype-numeric.html#DATATYPE-SERIAL
http://www.postgresql.org/docs/8.4/interactive/functions-sequence.html
http://www.postgresql.org/docs/8.4/interactive/sql-altertable.html

/Fredric



Re: [GENERAL] (Hopefully stupid) select question.

2011-01-24 Thread Fredric Fredricson

On 01/24/2011 05:02 PM, A.M. wrote:

On Jan 24, 2011, at 10:50 AM, Fredric Fredricson wrote:


I have been fighting with a select and can find no satisfactory solution.

Simplified version of the problem:

A table that, in reality, logs state changes to an object (represented as a row 
in another table):

CREATE TABLE t (
id SERIAL UNIQUE,
ref INTEGER, -- Reference to a row in another table
someData TEXT,
inserted DATE DEFAULT CURRENT_TIMESTAMP
) ;
Then we insert multiple rows for each "ref" with different "someData".


Now I want the latest "someData" for each "ref" like:

ref | someData (only latest inserted)
----+---------------------------------
 1  | 'data1'
 2  | 'data2'
etc...

The best solution I could find depends on the fact that the serial is higher for 
higher dates. I do not like that, because even if it is true it is an indirect way 
to get the data and could possibly, in the future, yield the wrong result if 
unrelated changes were made or ids were reused.

Here is my solution (which depends on the SERIAL):
SELECT x.ref,x.someData
  FROM t as x
  NATURAL JOIN (SELECT ref,max(id) AS id FROM t GROUP BY ref ORDER BY ref) AS y 
;

Can somebody come up with a better solution (without resorting to stored 
procedures and other performance killers)?

I would argue that relying on the id is safer than relying on the current timestamp 
because CURRENT_TIMESTAMP refers to the time that the transaction is started, not when 
the transaction was committed (or the row was "actually" inserted). In 
addition, it is technically possible for two transactions to get the same 
CURRENT_TIMESTAMP. SERIAL values are never reused. You could also create a security view 
which exposes the historical data but without the primary key in the actual table.
Well, in my case the transaction time is not really an issue. The 
database is a backend to a REST web service and all transactions are 
short (as dictated by the web server).

But I see your point.

I recommend http://pgfoundry.org/projects/tablelog which uses "performance 
killers" like stored procedures to handle things properly - at least take a look to 
see how things are handled.
I looked at this page and it is not what I need for this particular 
problem, since I only log specific changes in state, and these changes 
are represented as rows in this state-log table (the row in the 
referenced table is not changed).


But I do log changes in about 80% of my tables, and I use a technique 
similar to the one described in tablelog. I have a script that 
parses my SQL code and auto-generates the SQL statements that create a 
"shadow" table and the required triggers. I also have a mandatory 
"header" on all my logged tables and store an entry in a change-log 
table with information about user name (external user, not SQL ROLE) and 
timestamp. This way all changes can be traced in time and I can, in 
theory, get a snapshot of my entire data at an arbitrary point in time. 
I say "in theory" because I have not implemented the full snapshot, and 
with a lot of unions and such I expect the performance to suck. I will 
however use it for parts of the data, which is why I implemented it.
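
(A minimal sketch of the shadow-table idea, with made-up table and column 
names - my generated code is more involved:

CREATE TABLE person (
    id   SERIAL PRIMARY KEY,
    name TEXT
);

-- the shadow table: one row per historical version
CREATE TABLE person_log (
    id        INTEGER,
    name      TEXT,
    operation TEXT,
    logged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE FUNCTION person_log_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO person_log(id, name, operation)
            VALUES (OLD.id, OLD.name, TG_OP);
    ELSE
        INSERT INTO person_log(id, name, operation)
            VALUES (NEW.id, NEW.name, TG_OP);
    END IF;
    RETURN NULL;  -- AFTER trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_log_t
    AFTER INSERT OR UPDATE OR DELETE ON person
    FOR EACH ROW EXECUTE PROCEDURE person_log_trig();)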


And about performance: in my application insert performance is not an 
issue, and I suspect it rarely is in systems driven by human hands. Read 
performance, on the other hand, can definitely be an issue, since reads 
are much more frequent and return more data. I use views a lot, and at 
one point I had nested views that used stored procedures and started to 
get select times in the region of 700-800 ms for simple selects with a 
couple of hundred rows in the result set. Not funny. I removed the 
stored procedures (it was painful!) and the nested views and got select 
times down to 20-40 ms. Not entirely satisfactory, maybe, but much better, 
and with some decent hardware I guess it would be even better.

Hence my remark about stored procedures as "performance killers".

Thanks,
Fredric

Cheers,
M




Re: [GENERAL] (Hopefully stupid) select question.

2011-01-24 Thread Fredric Fredricson

On 01/24/2011 04:56 PM, Tom Lane wrote:

Fredric Fredricson  writes:

... Now I want the latest "someData" for each "ref" like:
The best solution I could find depended on the fact that serial is
higher for higher dates. I do not like that because if that is true, it
is an indirect way to get the data and could possibly, in the future,
yield the wrong result if unrelated changes were made or ids reused.
Here is my solution (which depends on the SERIAL):
SELECT x.ref,x.someData
FROM t as x
NATURAL JOIN (SELECT ref,max(id) AS id FROM t GROUP BY ref ORDER BY
ref) AS y ;

Well, you could just substitute max(inserted) for max(id).  But you
I tried this, but that did not get me "someData" because I need "id" for 
that.

should also consider using DISTINCT ON --- look at the "weather reports"
example in the SELECT reference page.

DISTINCT ON did the trick. Thank you!
My select is now much simpler:
SELECT DISTINCT ON (ref) ref, someData FROM t ORDER BY ref, inserted DESC;

It is also 20-30% faster in my setup.
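
(For the record, an index that should serve this query well - an 
assumption on my part, not something I have benchmarked:

CREATE INDEX t_ref_inserted_idx ON t (ref, inserted DESC);

With it, the planner can read the newest row per ref straight off the 
index instead of sorting.)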

I tried DISTINCT but I wasn't aware of the "DISTINCT ON" functionality. 
You live - you learn.

BTW, "inserted DATE DEFAULT CURRENT_TIMESTAMP" looks pretty fishy.
You sure the column type shouldn't be timestamp or timestamptz, to
support multiple updates per day?

Sorry, my typo, it is really a TIMESTAMP(0), of course.

/Fredric

regards, tom lane





[GENERAL] (Hopefully stupid) select question.

2011-01-24 Thread Fredric Fredricson

I have been fighting with a select and can find no satisfactory solution.

Simplified version of the problem:

A table that, in reality, logs state changes to an object (represented as 
a row in another table):


CREATE TABLE t (
id SERIAL UNIQUE,
ref INTEGER, -- Reference to a row in another table
someData TEXT,
inserted DATE DEFAULT CURRENT_TIMESTAMP
) ;
Then we insert multiple rows for each "ref" with different "someData".


Now I want the latest "someData" for each "ref" like:

ref | someData (only latest inserted)
----+---------------------------------
 1  | 'data1'
 2  | 'data2'
etc...

The best solution I could find depends on the fact that the serial is 
higher for higher dates. I do not like that, because even if it is true 
it is an indirect way to get the data and could possibly, in the future, 
yield the wrong result if unrelated changes were made or ids were reused.


Here is my solution (which depends on the SERIAL):
SELECT x.ref,x.someData
  FROM t as x
  NATURAL JOIN (SELECT ref,max(id) AS id FROM t GROUP BY ref ORDER BY 
ref) AS y ;


Can somebody come up with a better solution (without resorting to 
stored procedures and other performance killers)?


/Fredric


Re: [GENERAL] Postgresql as a dictionary coder backend?

2011-01-23 Thread Fredric Fredricson

On 01/23/2011 12:29 PM, Attila Nagy wrote:

 Hello,

I'm looking for a database backend for a dictionary coder project. It 
would have three major tasks:
- take a text corpus, get its words and substitute each word with a 64 
bit integer (the word:integer mapping is always constant) and store the 
result (encoding)
- take the previous result and substitute the integers with words 
(decoding)
- the words should be reference counted, so if a word can no longer be 
found in any of the encoded messages, it is deleted (and optionally its 
integer ID freed, but 64 bits are believed to be enough for a long time, 
although smaller IDs result in smaller encoded files). This could 
be achieved by informing the database of the words of a deleted 
message, so it could decrement those refcounts and delete the records 
if needed.
Why do you need 64 bits? A language contains somewhere around 1,000,000 
words, so 32 bits should be enough for more than 2000 languages. I read 
somewhere that the Oxford Dictionary contains 150,000 words with a total 
of 600,000 word forms.

Anyway, 64 bits seem to be overkill, or am I missing something?


I can easily do this with any RDBMS, with a table of three columns: 
an auto-incremented ID, the word, and a refcount, with a unique index on word.

The challenge could be:
- that it should scale to several TBs of size and several (hundred) 
billion of records. One scenario would be to store about 40 TBs of 
words and the average word length would be about 50-60 bytes (that's 
about 800*10^9 records). It should work well both for inserting and 
searching (encoding and decoding) words.
50-60 bytes! Oh, so we are not talking about natural language here. 
Sorry, I just assumed that.
Still, I think performance will be a big issue here. I have never tried 
PostgreSQL on anything faster than a fast PC, but in my humble experience 
an insert takes at least one ms. At that speed 800*10^9 records 
would take about 25 years to insert (800*10^9 inserts * 1 ms = 8*10^8 
seconds, or roughly 25 years).
I think you have to think bigger than a single server (OK, that was 
stating the obvious).
- I need atomicity and durability, but having these at the word (record) 
level takes too many IOPS and has no use, so it would be good to have 
an interface for inserting about 1000-50 words in one call, assigning 
a unique ID to each unique word and storing them (if the word already 
has an ID, increment its refcount), and giving back the IDs for each 
word. This transaction could be committed as one, so the transactions 
could be big, sparing IOPS.
- I need concurrency, so when the above happens from two sources at 
the same time, the same word in the two transactions must get the same ID
Unless the ID is some kind of hash, you will have to serialize inserts of 
new words.
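
(A minimal sketch of the three-column table and a refcounting upsert. 
Assumption: a modern PostgreSQL with INSERT ... ON CONFLICT, which did 
not exist when this thread was written; the names are made up:

CREATE TABLE words (
    id       bigserial PRIMARY KEY,
    word     text NOT NULL UNIQUE,
    refcount bigint NOT NULL DEFAULT 0
);

-- encode one word: insert it, or bump its refcount, and return its ID
INSERT INTO words (word, refcount)
VALUES ('example', 1)
ON CONFLICT (word)
DO UPDATE SET refcount = words.refcount + 1
RETURNING id;

Two concurrent sessions running this for the same new word serialize on 
the unique index, so both get the same ID back.)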


Is postgresql a good choice for doing this and if yes, what would be 
the optimal (for both time and space efficiency at encoding and 
decoding) use case?
I might be wrong, but this kind of project should probably not rely on a 
"standard" RDBMS. The data structure itself does not seem too complex, and 
the performance requirements are quite demanding.


But then, I'm no expert so don't take my word for it.
/Fredric


Thanks,






Re: [GENERAL] libpq: how to retrieve query result?

2011-01-17 Thread Fredric Fredricson

On 01/17/2011 09:00 PM,  ? wrote:
I retrieve asynchronous query results using PQgetResult(), which 
returns NULL not in case of some error but when the results end - so 
is there no way to know whether the database responded successfully or 
some error happened?

For example, an error can happen when column names are misspelled.
I'm no authority on this, but my understanding is that PQsendQuery 
returning "OK" only means the query was dispatched successfully; to see 
whether it actually succeeded you check PQresultStatus() on each PGresult 
that PQgetResult() returns (an error shows up as PGRES_FATAL_ERROR, with 
details from PQresultErrorMessage()).


/Fredric


Re: [GENERAL] Trigger Performance

2011-01-16 Thread Fredric Fredricson

On 01/15/2011 11:52 PM, Randall Smith wrote:

Hi,

I've created a trigger that checks the uniqueness of two columns in a
table.  Traditionally, one would use a unique constraint, but in my
case, the size of the unique index would be too large and some
performance loss is acceptable.  However, the trigger performance seems
to be far below what's indicated by an explain analyze of the query used
in the trigger.

The unique fields consist of an indexed int8 (volume_id) and a text
field (name).  The average ratio of volume_id to name is 1 to 10,000.
The query I'm using to check uniqueness in the trigger is:
A possible workaround could be to store an extra field that contains 
the md5 hash of the text field and use that field in the index instead 
of the text itself. The md5 sum could be calculated by your trigger.
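
(A minimal sketch, assuming the t1 table from your post. In fact a unique 
functional index avoids both the extra column and the trigger:

CREATE UNIQUE INDEX t1_volume_name_md5_idx
    ON t1 (volume_id, md5(name));

The index stays small because md5(name) is a fixed 32 characters 
regardless of how long name gets.)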


/Fredric


 ...
 IF (SELECT EXISTS (SELECT 1 FROM t1 WHERE
 volume_id = NEW.volume_id AND name = NEW.name)) THEN
 RAISE EXCEPTION '% already exists on volume', NEW.name;
 END IF;
 ...

This trigger is called only BEFORE INSERTS.

In testing, without the trigger, inserts are at 10,000 every 2 seconds.
With the trigger, they are 10,000 every 40 seconds.  The output of
explain analyze suggests that this shouldn't be taking so long.

 EXPLAIN ANALYZE SELECT EXISTS (
 SELECT 1 FROM t1 WHERE volume_id = 300 AND name = 'whodat');

  Result  (cost=8.35..8.36 rows=1 width=0) (actual time=0.019..0.020
rows=1 loops=1)
InitPlan 1 (returns $0)
  ->   Index Scan using volume_id_idx on t1  (cost=0.00..8.35 rows=1
width=0) (actual time=0.016..0.016 rows=0 loops=1)
Index Cond: (volume_id = 300)
Filter: (name = 'whodat'::text)
  Total runtime: 0.053 ms
(6 rows)

0.053 ms/record * 10,000 records = 530 ms = 0.53 seconds

According to that stat, this lookup should add about 0.5 seconds
per 10,000 records, far from the actual 38 seconds it adds.  I've
tried to change up the query in the trigger to see if I could get
different results with not much luck.  Any idea what might be taking up
the extra time or what I can do to troubleshoot?

Thanks.

-Randall










Re: [GENERAL] Need advise for database structure for non linear data.

2011-01-03 Thread Fredric Fredricson


On 01/03/2011 12:11 PM, Andre Lopes wrote:

[snip]
The problem with this task is that the information is not uniform; if I 
try to design tables with fields for all possible data I will end up 
with many row fields with NULL values. Is there any problem with 
this (ending up with many NULL fields)? Or should I use some 
other kind of structure, for example storing the data in one field 
containing an associative array?
As far as I understand, NULL values take almost no storage (just a bit in 
the row's null bitmap), so a column with many NULLs is not a problem as 
such, but if it is part of an index the index might not be very useful.


At least that's my understanding of how SQL databases work. If I got 
this wrong I hope someone will correct me.
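
(A hypothetical sketch - table and column names made up - of a partial 
index that simply leaves the many NULL rows out:

CREATE INDEX items_color_idx ON items (color) WHERE color IS NOT NULL;)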

[snip]

/Fredric


Re: [GENERAL] alter table add column - specify where the column will go?

2010-11-24 Thread Fredric Fredricson

On 11/24/2010 12:31 PM, Florian Weimer wrote:

* Grzegorz Jaśkiewicz:


2010/11/24 Florian Weimer:

* Grzegorz Jaśkiewicz:


just never use SELECT *, but always refer to columns by name. You'll
avoid depending on the order of columns, which is never guaranteed;
even if the table on disk has one order, the returned columns
could be in another.

This can't be true because several SQL features rely on deterministic
column order.  Here's an example:

SELECT 1 AS a, 2 AS b UNION SELECT 3 AS b, 4 AS a;

  a | b
---+---
  1 | 2
  3 | 4
(2 rows)

Yes, most DBs do a good job of keeping it consistent, but they don't have
to. So unless you specify column names explicitly (like you did in the
example above), there are no guarantees.

If the database looked at the column names, the result would be
(1, 2), (4, 3), not (1, 2), (3, 4).

It seems that UNION does not do what you think it does.
Consider your original example, slightly altered:
# SELECT c1,c2 FROM (SELECT 1 AS c1, 2 AS c2 UNION SELECT 2 AS c2, 1 AS 
c1) AS x;

 c1 | c2
----+----
  1 |  2
  2 |  1


If you change a column name in the first SELECT in the UNION:
# SELECT c1,c2 FROM (SELECT 1 AS c3, 2 AS c2 UNION SELECT 2 AS c2, 1 
AS c1) AS x;

ERROR:  column "c1" does not exist
LINE 1: SELECT c1,c2 FROM (SELECT 1 AS c3, 2 AS c2 UNION SELECT 2 AS...
               ^

But if you change the column names in the second SELECT in the UNION, 
this is ignored:
# SELECT c1,c2 FROM (SELECT 1 AS c1, 2 AS c2 UNION SELECT 2 AS c3, 1 
AS c4) AS x;

 c1 | c2
----+----
  1 |  2
  2 |  1

Apparently, in a UNION the column names are derived from the first 
statement only.


Postgresql 8.4.5

/Fredric


[GENERAL] Best practice to get performance

2010-11-18 Thread Fredric Fredricson

Hi,
I have designed a handful of databases but am absolutely no SQL expert. 
Nor have I had any formal database training or ever worked with someone 
who had. What I know about SQL I have read in the documentation, found 
with Google, and learned from my numerous mistakes.


This question I have is somewhat related to the "unlogged tables" 
proposal that is discussed in another thread.


The background is that I am designing a data storage that, unlike all 
other data storages, has some performance requirements (yes, that was a 
joke ;-).
What I have done to handle this is to create "lookup" tables that cache 
preprocessed information. The simplest cache a row count; others cache 
results from selects with joins and GROUP BY clauses. These tables are 
updated either on demand (first call), by triggers, or periodically.
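
(A minimal sketch of the trigger-updated variant, with made-up table 
names:

CREATE TABLE item_count ( n bigint NOT NULL );
INSERT INTO item_count SELECT count(*) FROM items;

CREATE FUNCTION item_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE item_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE item_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_count_t
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE item_count_trig();)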


I assumed this was fairly standard practice, and when I read about 
unlogged tables these lookup tables were the first use that came to my 
mind. Since the lookup tables exist for performance and contain redundant 
data, losing the data at a restart is no real problem.


What puzzles me, though, is that this use is never mentioned in the 
discussions, at least as far as I can see. Am I doing something 
"strange"? Is this something you should not have to do if you have a 
"proper" database design?


Regards
/Fredric


Re: [GENERAL] The first dedicated PostgreSQL forum

2010-11-17 Thread Fredric Fredricson

On 11/17/2010 03:09 PM, Tony Caduto wrote:

On 11/15/2010 5:53 PM, Lee Hachadoorian wrote:

If anyone's interested, I've started accessing the postgres list through
gmane.org (along with several other mailing lists I subscribe to). It
gives you the choice of reading the list as a threaded archive, a blog,
or through an NNTP newsreader or an RSS feed. Everyone chooses their
preferred interface; the community is not fractured by interface 
preference.




Honestly, those options suck.
The mailing lists etc. are fine, but they are kind of old school; 
people coming from other databases expect a web-based forum, plain and 
simple. To attract more users the forums are a GREAT idea.

Stop this ridiculous complaining about interface fracture etc.

I can tell you for SURE that many people who are not OLD SCHOOL hate 
mailing lists. A web-based forum gives everyday users more of a chance 
of interacting with the community.

It would be a good idea to stop the bickering and just implement the 
forums.

Sorry if people don't like my honest answer :-)
That may be your honest opinion, but some of us actually prefer mailing 
lists. I use both, for different purposes, and have noticed that there 
are quite a lot of really bad forum implementations.


When I use mailing lists I filter the mail into different folders (as I 
suspect 99.9% of all mailing list users do). I always choose a mail client 
where filtering is convenient. If you are not free to choose your mail 
client and are stuck with a bad one, I can understand if you don't 
appreciate mailing lists, but for me they are far more convenient than a 
forum.


But you have a point that mailing lists are not for everyday users. On 
the other hand, I doubt that "everyday users" who find configuring an 
email client difficult would have any reason to visit this mailing list.


Still, the best option would of course be to have both; that would give 
people a choice. But I share the concern about anonymous posts spamming 
the mailing lists with questions that could be answered with a simple 
search in the (excellent) documentation.


My $.02
/Fredric




Re: [GENERAL] Why facebook used mysql ?

2010-11-15 Thread Fredric Fredricson

On 11/09/2010 06:01 PM, Andy wrote:

MySQL is GPL'd, just like Linux is.

Well, it is and it isn't. A couple of years ago, when I was involved in 
choosing a DB for a (proprietary) application, we could not figure out 
MySQL's license. It was GPL'd, but at the same time you had to pay if you 
wanted to use it commercially. As far as we could tell, you should not 
need the LGPL just to make calls to a database.


We chose PostgreSQL based on features (MySQL did not have stored 
procedures back then), so we never resolved the license "dilemma", but it 
sure looked strange.


/Fredric

PS. The license cost as such would not have been prohibitive, but in our 
case we did not want the administration that comes with licenses. The 
application was an industrial machine; there were several thousand 
already out there that would be upgraded, and a couple of hundred 
produced per year.


Re: [GENERAL] What is "return code" for WAL send command

2010-10-25 Thread Fredric Fredricson

 On 10/25/2010 01:19 PM, David W Noon wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, 25 Oct 2010 09:49:02 +0200, Fredric Fredricson wrote about Re:
[GENERAL] What is "return code" for WAL send command:


  On 10/23/2010 02:01 PM, zhong ming wu wrote:

[snip]

For some reason it failed recently with "return code 32512" and I
couldn't find that number as a valid rsync exit code.

While the error was going on I tried the rsync command manually
and it didn't error.


I guess you have fallen into the common pitfall that cron does not
read bash init files and does not set the $PATH variable. If you want
your scripts to be run by cron you must make them independent of $PATH
and other variables set by .bashrc etc.

One can also specify environment variables at the top of the crontab.
This overcomes the issue of cron jobs not reading /etc/profile and
other start-up scripts that are processed by a normal login shell.  For
example, here is my personal crontab:

Ah, yes, of course. It has been a while since I played with cron.
"man 5 crontab" is also a good command to know - or "info crontab",
if you prefer those modern inventions.

/Fredric



Re: [GENERAL] What is "return code" for WAL send command

2010-10-25 Thread Fredric Fredricson

 On 10/23/2010 02:01 PM, zhong ming wu wrote:


Hello

I have a master PG 8.1.11 sending WAL files to a cold-standby using rsync

For some reason it failed recently with "return code 32512" and I 
couldn't find that number as a valid rsync exit code.


While the error was going on I tried the rsync command manually and it 
didn't error.


I guess you have fallen into the common pitfall that cron does not read 
bash init files and does not set the $PATH variable. If you want your 
scripts to be run by cron you must make them independent of $PATH and 
other variables set by .bashrc etc.

(The number fits, by the way: 32512 is 127 * 256, i.e. a shell exit 
status of 127 - "command not found" - as encoded in the value that 
system() returns.)

My $.02.
/Fredric


Can someone tell me what kind of error 32512 is?

Thanks

Mr. Wu





Re: [GENERAL] How to allow PostgreSQL to accept remote connection?

2010-04-24 Thread Fredric Fredricson

Andre Lopes wrote:

Thanks for the replies,

How can I configure pg_hba.conf to accept connections from all IPs?

What I have in this file is:

# IPv4
host all all 127.0.0.1/32 md5
# IPv6
host all all ::1/128 md5

What do I need to change?
According to 
http://www.postgresql.org/docs/8.2/interactive/auth-pg-hba-conf.html you 
should add a line:

host all all 0.0.0.0/0 md5

/Fredric
PS. When everything else fails, read the manual.
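
(One assumption worth stating: pg_hba.conf only controls who may 
authenticate. The server must also listen on the right interfaces, so 
postgresql.conf needs something like

listen_addresses = '*'

followed by a server restart - see the listen_addresses discussion 
further down this thread.)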



Best Regards,


On Sat, Apr 24, 2010 at 4:15 PM, Fredric Fredricson 
<fredric.fredric...@bonetmail.com> wrote:


Raymond O'Donnell wrote:

On 24/04/2010 15:58, Andre Lopes wrote:
 


Hi,

I have a virtual machine with VMWARE, and I need to
connect from outside
the virtual machine to PostgreSQL.

How can I configure PostgreSQL to accept outside
connections?
   



It should be just the same as for a "real" machine: put the IP
address of the VM's network interface in "listen_addresses" in
postgresql.conf.
 


You will probably also have to edit the pg_hba.conf file (chapter 20.1
in the manual).
/Fredric

Ray.


 








Re: [GENERAL] Order by and strings

2010-02-09 Thread Fredric Fredricson

Scott Marlowe wrote:

On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson wrote:

I use locale en_US.UTF-8, but why this should cause leading characters
in strings to be ignored is beyond me.



PS. This page may shed some light on the subject:

http://en.wikipedia.org/wiki/Collation
OK, thanks. It did shed some light on the subject. I only wonder what 
would happen if these sort algorithms were used on things like article 
numbers in industry. That would confuse the hell out of the people 
there.
Alas, not my problem. I have a workaround that works for my little part 
of the universe. At least for now.
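
(The workaround, plus - assuming a newer PostgreSQL, 9.1 or later, with 
per-expression collations - a cleaner variant that sorts by plain byte 
values:

-- my workaround: order by the first character's code, then the string
SELECT * FROM tmp ORDER BY ascii(x), x;

-- 9.1+: the C collation compares byte by byte
SELECT * FROM tmp ORDER BY x COLLATE "C";)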


You live, you learn.
/Fredric


Re: [GENERAL] Order by and strings

2010-02-09 Thread Fredric Fredricson

Justin Graf wrote:

On 2/8/2010 7:09 PM, Fredric Fredricson wrote:

Hi!
New to the list with a question that I cannot find the answer to in 
the manual or on the internet, but I suspect it is trivial. If somebody 
could point me in the correct direction I would be grateful.


This is what I do (condensed, of course):
# create table tmp ( x text ) ;
CREATE TABLE
# insert into tmp(x) values ('a'),('c'),('-b') ;
INSERT 0 3
# select * from tmp order by x ;
 x
----
 a
 -b
 c
(3 rows)


It has to do with the collation you are using.
I use locale en_US.UTF-8, but why this should cause leading 
characters in strings to be ignored is beyond me.


Another, in my mind, counter-intuitive example:
# create table tmp ( x text ) ;
# insert into tmp(x) (values 
('a'),('-b'),('c'),('aa'),('---a-b'),('ac'),('1'),('-2'),('+3'),('4'),('-'),('+')) 
;

# select * from tmp order by x ;
   x
--------
 -
 +
 1
 -2
 +3
 4
 a
 aa
 ---a-b
 ac
 -b
 c
(12 rows)

In what universe would you expect this sort order? And how do you make it 
'sane'?
I found a workaround, "order by ascii(x),x", but this continues to 
baffle me.


It seems to me that if there are any alphanumeric characters in the 
string, the rest are ignored in the sort.

Where did this rule come from?
I really would appreciate an explanation for this behavior.

/Fredric

PS. I was wrong about the server version, it is 8.3.8.


[GENERAL] Order by and strings

2010-02-08 Thread Fredric Fredricson

Hi!
New to the list with a question that I cannot find the answer to in the 
manual or on the internet, but I suspect it is trivial. If somebody could 
point me in the correct direction I would be grateful.


This is what I do (condensed, of course):
# create table tmp ( x text ) ;
CREATE TABLE
# insert into tmp(x) values ('a'),('c'),('-b') ;
INSERT 0 3
# select * from tmp order by x ;
 x
----
 a
 -b
 c
(3 rows)

I would expect a string that starts with a hyphen to be sorted before or 
after 'a' and 'c', not between them. I have tried a few other 
characters (space, opening parenthesis, etc.) but the result is the same.


What I want is the strings sorted by their ASCII (or UTF-8) byte values, 
without any "smart" heuristics. How do I accomplish this?


I cannot find this described in the manual (it should be there!) or on 
the net.


/Fredric

PS. 8.2, Fedora Linux
