Re: [GENERAL] repmgr

2014-11-11 Thread Robin Ranjit Singh Chauhan
I set up repmgr and it's working.  I confirmed that using:

select * from pg_stat_replication;
select pg_is_in_recovery();

However, on both master and slave there is only one entry
in repmgr_cane.repl_nodes: the master.

Is that expected?
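
For anyone else seeing this: a standby that was cloned but never
registered with repmgr will not show up in repl_nodes. A minimal sketch
of registering it, assuming repmgr's standard CLI (the configuration
file path here is hypothetical):

# on the standby, after cloning:
repmgr -f /etc/repmgr/repmgr.conf standby register

-- then, on either node, the standby should appear:
select * from repmgr_cane.repl_nodes;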



On Mon, Nov 10, 2014 at 4:29 PM, John R Pierce pie...@hogranch.com wrote:

 On 11/10/2014 4:10 PM, Robin Ranjit Singh Chauhan wrote:

 I haven't been able to find much about repmgr on Postgres 9.3.

 Is repmgr still a significant value-add given the newer replication
 features built in?


 repmgr is a management tool for setting up and controlling the built in
 replication features.



 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Robin R Chauhan
CEO, Pathway Intelligence Inc
ro...@pathwayi.com
Office: 778-588-6217 Ext. 201
Cell: 604-865-0517
http://pathwayi.com/


[GENERAL] repmgr

2014-11-10 Thread Robin Ranjit Singh Chauhan
I haven't been able to find much about repmgr on Postgres 9.3.

Is repmgr still a significant value-add given the newer replication
features built in?

Thanks.
=R
-- 
Robin R Chauhan
CEO, Pathway Intelligence Inc
ro...@pathwayi.com
Office: 778-588-6217 Ext. 201
Cell: 604-865-0517
http://pathwayi.com/


Re: [GENERAL] Employee modeling question

2014-09-04 Thread Robin


Robin St.Clair
On 04/09/2014 20:44, Nelson Green wrote:
On Thu, Sep 4, 2014 at 9:48 AM, François Beausoleil 
franc...@teksol.info wrote:


Hello Nelson,

On 2014-09-04 at 10:39, Nelson Green nelsongree...@gmail.com wrote:

 Good morning,

 Hopefully this is the correct place to ask this type of question.

 I am in the early stages of designing a system to track employee
 information, including some aspects of their payroll, one of
which is
 the source of the salary funds within the business. I need to make
 this generic enough to accommodate instances where an employee's
 salary is sourced from a department's budget, as well as those
 salaries of employees that are employed by companies too small to be
 departmentalized. So when an employee is employed by a
department, the
 department is the source of the employee's salary, and the
business is
 the department's parent entity. But when an employee is employed
by a
 business that has no departments, the business entity is the
source of
 the employee's salary, and the parent entity. I am struggling
with the
 correct logical implementation of this scenario.

 So basically I have a situation where an employee will most
likely be
 associated with a department within a business, but this can not be
 guaranteed, and I'm not 100% sure how to handle this. I am going to
 face the same problem with contracts where a department can
out-source
 a function, or a business can. I think there may even be instances
 where a business with departments may out-source functionality
that is
 not charged to a department.

 I could store the department and business attributes with the
 employee, but without proper constraints the referenced department
 could conceivably not correspond to the referenced business. Or I
 could ensure that all businesses have at least one department,
 defaulting to the business when the business has no department, but
 then I'd be storing duplicate data. The other alternative I've
come up
 with is an exclusive constraint where the employee instance can only
 reference a department or a business, but not both.

 None of these solutions seems ideal, although the exclusivity
solution
 seems like it would work the best, and I have had to create
 exclusivity constraints before. So, am I missing a more obvious
 solution, or am I even on track here?

I’ve found this « Universal Person and Organization Data Model »
very useful to understand complex questions like that:
http://www.tdan.com/view-articles/5014
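
For concreteness, a minimal sketch of the exclusivity option described
above, with hypothetical table and column names; the CHECK enforces
that exactly one of the two parent references is set:

CREATE TABLE employee (
    employee_id   serial PRIMARY KEY,
    name          text NOT NULL,
    department_id integer REFERENCES department,
    business_id   integer REFERENCES business,
    -- exactly one of department_id / business_id must be non-null
    CHECK ((department_id IS NULL) <> (business_id IS NULL))
);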

Hope this helps,
François Beausoleil

Thanks François. This is pretty generic stuff, but my first reading 
has got me thinking that I should at least pick up some ideas from it. 
I will give it a go. And I've got a new web site that I hadn't known 
about before, so thanks for that as well.


Regards,
Nelson


I strongly suggest you read the writings of Joe Celko; he has been 
addressing this kind of problem for decades. Ultimately, it is very 
important not to confuse behaviour with entities (yes, employment is 
behavioural).


Cheers



Re: [GENERAL] Database block lifecycle

2014-08-13 Thread Robin


On 13/08/2014 17:23, Jeff Janes wrote:


On Wed, Aug 13, 2014 at 1:07 AM, pinker pin...@onet.eu 
mailto:pin...@onet.eu wrote:



 btw, is 512MB a reasonable value if we assume up to 600 connections?


Reasonable value for what?

 For normal server load.


512MB is being questioned as a reasonable value for what? 
 shared_buffers?  work_mem?  maintenance_work_mem?


Cheers,

Jeff


Generally speaking, folk imagine that DBMS performance is all about disk 
access - in reality, chucking as much memory as possible at the server(s) 
is an optimal investment. Analyse your queries and store time-critical 
stuff in memory.
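
One way to do the store-it-in-memory part with stock PostgreSQL,
assuming 9.4+ where the contrib module pg_prewarm is available (table
name hypothetical):

CREATE EXTENSION pg_prewarm;
-- load the whole relation into shared_buffers:
SELECT pg_prewarm('hot_table');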


R+C



Re: [GENERAL] Natural key woe

2014-05-13 Thread Robin

Oliver

I've read your email with interest. I haven't had to deal with this 
sort of problem in PostgreSQL, but I have frequently dealt with it in a 
Sybase environment, first encountered about 25 years ago.


I am most curious to know why you didn't use the same sequence for both 
tables; I must be missing something. If there is a gotcha, I'd like to 
know about it, as I can see this being an issue in a load-sharing 
environment.
Many years ago, before auto sequencing was common, we set up explicit 
functions to generate sequence numbers. Whilst this had some performance 
costs in multi-user systems, it did have benefits in terms of making it 
easier to restrict the need for row locking to the underlying data table.


Robin St.Clair
On 13/05/2014 10:44, Oliver Kohll - Mailing Lists wrote:

I'm sure no one else on this list has done anything like this, but here's a 
cautionary tale.

I wanted to synchronise data in two tables (issue lists) - i.e. whenever a 
record is added into one, add a similar record into the other. The two tables 
are similar in format but not exactly the same so only a subset of fields are 
copied. Both tables have synthetic primary keys, these can't be used to match 
data as they are auto-incrementing sequences that might interfere. What I could 
have done perhaps is get both tables to use the same sequence, but what I 
actually did is:

* join both tables based on a natural key
* use that to copy any missing items from table1 to table2
* truncate table1 and copy all of table2's rows to table1
* run this routine once an hour

The natural key was based on the creation timestamp (stored on insert) and 
one of the text fields, called 'subject'.

The problem came when someone entered a record with no subject, but left it 
null. When this was copied over and present in both tables, the *next* time the 
join was done, a duplicate was created because the join didn't see them as 
matching (null != null).

So after one hour there were two records. After two hours there were four, 
after three, eight, and so on.

When I logged in after 25 hrs and noticed table access was a little slow, there 
were 2^25 = 33 million records.

That's a learning experience for me at least. It's lucky I did check it at the 
end of that day rather than leaving it overnight, otherwise I think our server 
would have ground to a halt.
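
For reference, a null-safe variant of such a join uses IS NOT DISTINCT
FROM, under which NULL compares equal to NULL (table and column names
hypothetical):

SELECT t1.id
FROM table1 t1
JOIN table2 t2
  ON  t1.created = t2.created
  AND t1.subject IS NOT DISTINCT FROM t2.subject;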

One other wrinkle to note. After clearing out these rows, running 'VACUUM 
table2', 'ANALYZE table2' and 'REINDEX table table2', some queries with simple 
sequence scans were taking a few seconds to run even though there are only a 
thousand rows in the table. I finally found that running CLUSTER on the table 
sorted that out, even though we're on an SSD so I would have thought seeking 
all over the place for a seq. scan wouldn't have made that much difference. It 
obviously does still make some.
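
For reference, a sketch of that maintenance sequence, assuming a
suitable index named table2_pkey to cluster on:

VACUUM ANALYZE table2;
-- CLUSTER rewrites the whole table in index order, which also removes
-- the dead-row bloat left behind by the runaway duplication:
CLUSTER table2 USING table2_pkey;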

Oliver Kohll
www.agilebase.co.uk










--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Could use some advice on search architecture

2014-04-19 Thread Robin



On 18/04/2014 21:24, Andy Colson wrote:

On 4/18/2014 8:59 AM, Ron Pasch wrote:

Hello,

I'm contemplating what architecture I should use to make searching as
fast as possible given the information available and the search
requirements. Let me give some background first;

- The database contains products and can potentially have a lot of them
(up to about 3 to 5 million)
- Each product has about 30 different properties defined about them.
Things like what color they are etc. All these properties are enumerated
choices, so for instance for color there is a list of available static
never changing options of which one can be chosen for that product. This
is the same for all those 30 properties. Currently they are stored as
enumerated types (CREATE TYPE propertyvalue AS ENUM ('option1',
'option2', etc..)
- It should be possible to search for products and provide properties
that the product SHOULD have, not must have. For instance, for color,
the search could specify that it should return products that are either
red, blue or green.
- The products that match with the most properties should be in the top
of the search results
- If different products match with the same amount of properties, the
ordering should then be on the product that is most popular. There is
information in the database (and if need be also in the same table)
about how many times a product is sold.
- The results will be paginated per 15 products

The requirement is that these searches should be as fast as possible,
with a maximum of about 200 ms time taken for a search query.

What would be the best approach to this if I were to do this in the
database only? Should/can this be done with postgresql only or should I
look into other types of technology? (Lucene? Sphinx? others?)

Any advice on this would be greatly appreciated.

Thx in advance!

Ron






As we are PG users, on a PG list, we are gonna recommend PG, 
obviously. :-)


Actually though, I recommend PG.


 - The products that match with the most properties should be in the top
 of the search results


That kinda query is going to be really difficult, I think, regardless 
of what you use.  To find that you'll have to measure every product 
(all 5 million) and then sort the results.



 enumerated types (CREATE TYPE propertyvalue AS ENUM ('option1',
 'option2', etc..)

So, you have something like:

CREATE TYPE colortype AS ENUM ('red', 'green', 'blue');
CREATE TYPE sizetype AS ENUM ('small', 'medium', 'large');

create table product
(
  id serial,
  name text,
  color colortype,
  size  sizetype,
   ...
);


I assume the problem is you don't want to index all 30 properties?  
That makes sense.


 - It should be possible to search for products and provide properties
 that the product SHOULD have, not must have.

I don't understand this.  Say you have a sprocket in red and green.  
Do you want to search for:


select * from product where name = 'sprocket' and (color = 'red' or 
color = 'green')


Or do you want something else?  Does the user say they'd prefer 
blue, but will take whatever else you have?


Do you search for some properties exactly and some preferred?

Perhaps you could describe a little more how you want to query the 
database?  Or, maybe, what your user's are searching for?


-Andy


Well, given that there are known limited attributes, this is the type of 
application that really, really suits a column-oriented database, such as 
Sybase IQ (now sold by SAP). It's a neat product that scales. Great 
performance with drag'n'drop analytics.


Unless you can charm IQ out of SAP (it has been known to happen), you 
might have to look at some other techniques


So consider some binary data representation:
Red - 1 (0000 0001)
Orange - 2 (0000 0010)
Yellow - 4 (0000 0100)
Green - 8 (0000 1000)
Blue - 16 (0001 0000)
Indigo - 32 (0010 0000)
Violet - 64 (0100 0000)

This way, you can encode several colours in one value:
Red or Green or Indigo = 1 + 8 + 32 = 41 = 0010 1001
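
A sketch of how such a filter might look in PostgreSQL, assuming the
colours are stored in an integer column (names hypothetical):

-- products whose colour is red, green or indigo (mask 41 = 0010 1001):
SELECT *
FROM product
WHERE color_mask & 41 <> 0;  -- integer bitwise AND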


Robin


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Could use some advice on search architecture

2014-04-19 Thread Robin

bottom post
On 19/04/2014 12:46, R. Pasch wrote:

On 19-4-2014 9:38, Robin wrote:


Well, given that there are known limited attributes, this is the type 
of application that really, really suits a column-oriented database, 
such as Sybase IQ (now sold by SAP). It's a neat product that scales. 
Great performance with drag'n'drop analytics.


Unless you can charm IQ out of SAP (it has been known to happen), you 
might have to look at some other techniques


So consider some binary data representation:
Red - 1 (0000 0001)
Orange - 2 (0000 0010)
Yellow - 4 (0000 0100)
Green - 8 (0000 1000)
Blue - 16 (0001 0000)
Indigo - 32 (0010 0000)
Violet - 64 (0100 0000)

This way, you can encode several colours in one value:
Red or Green or Indigo = 1 + 8 + 32 = 41 = 0010 1001


Robin


I stopped reading when I heard the words sold by SAP ;-) This project 
is solely built with open-source and freely available software.


I've been thinking about using a binary data representation but didn't 
come to a solution to this specific problem quite yet. Per property of 
a product, only one bit would be 1 and the rest would be 0. What would 
a query look like to match all products that have a bit in the correct 
position?


Say for instance these are a couple of records (and yes, property values 
can be null as well):

title, property1, property2, property3
product1, 0000 0001, 0000 0010, NULL
product2, 0000 0100, 0100 0000, 0010 0000
product3, 0010 0000, 0010 0000, 0100 0000

Say that I would like to retrieve the products that either have 
property1 as 0010 0000, 0000 1000 or 0000 0001. Combined that would be 
0010 1001 and would have to match product1 and product3, as they both 
have their individual bit matching one of the bits being asked for. 
What would a where statement look like using this type of binary 
representation?


If that would be fairly simple to do and fast (the most important factor), 
then I could do an OR construction on all property columns and have 
something count the number of properties that actually matched. Is 
that something you can do with a binary operator of some sort as well? 
Count the number of overlapping bits?


Say for instance I have a binary value of 0110 0101 and another binary 
value of 1100 0100; how could I find out how many bits matched? (In 
this case the number of matching bits would be 2.)




I understand the reluctance to pay SAP-style rates; as a longtime DB 
user, I have learned some 'charm' techniques.


However, I poked around a bit for alternatives, as I do like the 
column-oriented approach, and found something called MonetDB 
(http://www.monetdb.com/Home/Features) - it apparently has a column-store 
db kernel, and is open source. I suggest you have a look; if it does 
what it says on the label, then it looks like a find.


There is a discussion of bitmask-trickiness here also dealing with 
colours 
http://postgresql.1045698.n5.nabble.com/Bitmask-trickiness-td1865258.html


Robin



Re: [GENERAL] Could use some advice on search architecture

2014-04-19 Thread Robin

Because this is a topic that interests me I have done some digging.

MonetDB (http://www.monetdb.com/Home) is a proper column-store DBMS. It 
is used on some decent-sized projects, several of which are EU-funded.


Digging a bit deeper I have discovered that a PostgreSQL Foreign Data 
Wrapper for MonetDB has been created - monetdb_fdw.


The fdw is written up here 
http://pgsnaga.blogspot.co.uk/2013/07/monetdbfdw-postgresql-meets-column.html


There is a video that shows what happens when a query is executed in 
PostgreSQL and MonetDB


The video shows an analytical query being processed

1. Using PostgreSQL - 177 seconds
2. Using MonetDB - 8 seconds
3. Using a remote MonetDB server through MonetDB FDW - 1 second


I think it's worth investigating further.

Robin St.Clair




Re: [GENERAL] Full Text Index Scanning

2014-02-07 Thread Robin Coe
I was looking for an answer to the same problem posted a while back (sorry,
not sure how to join that thread):

Thanks. pg_trgm looks interesting, but after installing the pg_trgm.sql, I
get error messages when following the documentation.

sggeeorg= create index test_idx on test using gist(columnname
gist_trgm_ops);
ERROR:  operator class "gist_trgm_ops" does not exist for access method
"gist"
STATEMENT:  create index test_idx on test using
gist(columnname gist_trgm_ops);
ERROR:  operator class "gist_trgm_ops" does not exist for access method
"gist"



On Sun, Jan 30, 2011 at 10:36 AM, Tom Lane
tgl(at)sss(dot)pgh(dot)pa(dot)us wrote:

 Matt Warner matt(at)warnertechnology(dot)com writes:
  If I understand this, it looks like this approach allows me to match the
  beginnings and endings of words, but not the middle sections.

 Yeah, probably.  You might consider using contrib/pg_trgm instead if
 you need arbitrary substrings.

regards, tom lane


I just ran into the same problem using pgsql 9.1 on both Windows and
Linux.  It happened when I tried
to create an index on a new table in a new schema of a database that
already had the pg_trgm extension
loaded.  The only way I could get the index to build was to first drop
the extension and then re-add it.
Once I re-added the extension, I was then able to create the index.
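
For reference, a minimal sketch of the straightforward path on 9.1+,
where pg_trgm installs as an extension (index and table names taken from
the example above):

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX test_idx ON test USING gist (columnname gist_trgm_ops);

If the operator class still cannot be found, it is worth checking that
the schema holding the extension's objects is on the search_path.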


Re: [GENERAL] compliance with 21 cfr part 11

2013-12-04 Thread Robin
I have built big nationwide health databases, and other secure systems 
requiring traceback and auditability.


The more you are able to make the required processes transparent to the 
end user, the happier they are and the fewer problems you have from a 
user acceptance perspective.


Audit does belong in the database; go and look at how the likes of 
Sybase ASE and DB2 do it. If you do it up front, in the application, you 
are continually maintaining it.


Few systems control appropriate access to data; that is a harder and 
proprietary trick.
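
A minimal sketch of in-database auditing with a trigger; names are
hypothetical, and real 21 CFR Part 11 work also needs user identity,
reason for change, and so on:

CREATE TABLE record_audit (
    audited_at timestamptz NOT NULL DEFAULT now(),
    operation  text        NOT NULL,
    row_data   text        NOT NULL
);

CREATE OR REPLACE FUNCTION audit_record() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO record_audit (operation, row_data) VALUES (TG_OP, OLD::text);
        RETURN OLD;
    ELSE
        INSERT INTO record_audit (operation, row_data) VALUES (TG_OP, NEW::text);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER record_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON record
    FOR EACH ROW EXECUTE PROCEDURE audit_record();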


Robin St.Clair

On 05/12/2013 00:07, Rob Sargent wrote:

On 12/04/2013 04:41 PM, John R Pierce wrote:

On 12/4/2013 3:34 PM, e-letter wrote:

Can postgresql be deployed with audit trail and data validation
capability, to meet compliance with 21 cfr part 11?


IMHO, that sort of audit trail should be higher level in your 
business logic, not in the database.   But that's just my opinion, 
worth exactly what you paid for it.




I've been through 21 cfr part 11 (successfully) and it has nothing to 
do with the database vendor and everything to do with how you track 
changes to the data.




Re: [GENERAL] Any advantage of using SSL with a certificate of authority?

2013-11-26 Thread Robin

There is a downside to self-signed certificates.

1. A self-signed certificate can be issued by anybody; there is no way
   of authenticating the issuer.
2. Distributing self-signed certificates becomes a pain - if signed by
   a CA, it's easy to lodge your public key where everybody can find it,
   and knows where to look for it.
3. Maintenance becomes a problem.


I only use self-signed certs for testing.
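
For completeness, a test-only self-signed server certificate for
PostgreSQL can be generated along these lines (the CN is hypothetical):

openssl req -new -x509 -days 365 -nodes -text \
    -out server.crt -keyout server.key \
    -subj "/CN=db.example.com"
chmod og-rwx server.key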

Robin St.Clair

On 26/11/2013 19:34, Andrew Sullivan wrote:

On Tue, Nov 26, 2013 at 02:18:58PM -0500, Vick Khera wrote:

Using self-signed certs you can give them longevity of 10+ years, so never
have to worry about them again :)

Unless of course you turn out to have a weak algorithm and, say, No
Such Agency decides to take up residence on your network.  (It's not
clear that CAs are any protection against that either, though, of
course.)  In general, 10+ years is probably too short a time to be
using a cert unless you are completely certain to whom it could be
exposed.  (Some would argue that if you had that certainty, you might
not need TLS/SSL anyway.  I guess I'd respond that you could use TLS
anyway because it would help in case of a network compromise.)

Best,

A





Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Robin St . Clair

Hi

For decades, this type of problem has been the meat and vegetables of 
discussions about SQL programming and design.


One writer on this subject has stood out, thanks to his mental clarity 
and ability to set out complicated concepts in a readily comprehensible 
manner.


His name is Joe Celko (http://en.wikipedia.org/wiki/Joe_Celko). He has 
published several books, including SQL For Smarties 
(http://en.wikipedia.org/wiki/Special:BookSources/9780120887972) and SQL 
Puzzles & Answers 
(http://en.wikipedia.org/wiki/Special:BookSources/9780123735966); you 
may even find them in PDF format online.


Read some of what Joe has written and you will find answers to this sort 
of problem.


Incidentally, I can't remember a problem like this since we stored data 
on open-reel tape systems.  We used to set multiway masterfile update 
problems for first-year Information Systems students, before they had 
widespread access to ISAM and database-hosted exercises. What you have 
is a 'batch' system; you might find it worthwhile rolling (virtually) A 
& B together and then grouping C. The trick will be in the way you 
handle grouping; Joe Celko has an entire book on this subject - Thinking 
in Sets: Auxiliary, Temporal & Virtual Tables in SQL 
(http://en.wikipedia.org/wiki/Special:BookSources/9780123741370).


This blog http://joecelkothesqlapprentice.blogspot.co.uk/ (based on his 
solutions) is worth a look.


Robin St.Clair




On 18/11/2013 02:16, Hengky Liwandouw wrote:

Dear Friends,

Please help with the select command; I have tried many times and can 
never display the result the way I want.

I have looked for a solution on Google but still cannot find the right 
answer to solve the problem.

I have 3 tables :

Table A
ProductID
ProductName
SupplierID

Table B
ProductID
InitialStock

Table C
ProductID
Date
In
Out

1. I want to select all productID from Table A where supplierID='XXX'.

2. Based on list from Step.1 : sum the initialstock from  Table B

3. Based on list from Step 1 : Sum (in-out) from Table C where date < 'BEGINNING 
DATE'

4. Based on list from Step 1 : Sum (in) and sum(out) from Table C where date 
between 'BEGINNING DATE' and 'ENDING DATE'

So the result will look like this :

ProductID  ProductName  SumOfInitialStock  Sum(In-Out) before BEGINNING DATE  SumOfIn  SumOfOut
xxxx       xxxxxxx      9999               9999                               9999     9999
xxxx       xxxxxxx      9999               9999                               9999     9999
xxxx       xxxxxxx      9999               9999                               9999     9999
xxxx       xxxxxxx      9999               9999                               9999     9999


What command will get a result like this? I have tried the crosstab function but 
without success :(

Thanks in advance
  








Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Robin St . Clair
In general, when I have to handle Ledger type data (which this problem 
is), I tend to hold data in 3 tables


1. Master Ledger ( Product ID, Name, etc)
2. Master Ledger Balances(Product ID, Fiscal_Year, Opening Balance,
   Net_Transaction_P1, Net_Transaction_P2, ...  etc)
3. Master Ledger Transactions(Product_ID, (Fiscal_Year), Date,
   Amount..)

I use Triggers and Stored Procedures to maintain consistency. This 
allows you to quickly navigate across the population of your data and 
drill down to the detailed transaction when required.
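
A minimal sketch of the trigger-maintained balances idea, with
hypothetical names and a single net-amount column for brevity:

CREATE OR REPLACE FUNCTION maintain_balance() RETURNS trigger AS $$
BEGIN
    -- roll each new transaction into the per-product, per-year balance
    UPDATE ledger_balance
       SET net_amount = net_amount + NEW.amount
     WHERE product_id  = NEW.product_id
       AND fiscal_year = NEW.fiscal_year;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER maintain_balance_trg
    AFTER INSERT ON ledger_transaction
    FOR EACH ROW EXECUTE PROCEDURE maintain_balance();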


Careful manipulation of the Master Ledger Balances table lets you 
retrieve multiple different kinds of information at a single pass, ie 
This Year To Date Actual, Last Year To Date Actual and Budget This Year 
To Date. I usually create functions/SPs to do this even more rapidly.


If you have many bulk updates, it can be better to drop all indices 
prior to updating and then rebuilding them.


Robin St.Clair



On 18/11/2013 10:04, Hengky Liwandouw wrote:

Thanks a lot Ken,

I will try it soon.

But when the table becomes huge (how big is 'huge' in Postgres?), how 
do I optimize such a command?


I have indexes on all the important fields like date, productid, supplierid, 
customerid and so on.


Optimization is really an important thing, as I plan to keep all 
transaction data as long as possible.



On Nov 18, 2013, at 5:37 PM, Ken Tanzer wrote:

If the tables aren't huge, you're not concerned about optimization, 
and you just want to get your numbers, I think something like this 
would do the trick.  I haven't actually tried it 'cause I didn't have 
easy access to your tables:


SELECT
  a.product_id,
  a.product_name,
  b.initial_stock_sum,
  c.in_out_sum,
  c.in_sum,
  c.out_sum
FROM
  a
LEFT JOIN
  (SELECT
    product_id,
    SUM(initial_stock) AS initial_stock_sum
  FROM b
  GROUP BY product_id
  ) b USING (product_id)
LEFT JOIN
  (SELECT
    product_id,
    sum(CASE WHEN date < 'BEGINNING DATE' THEN in - out ELSE 0 END) AS in_out_sum,
    sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN in ELSE 0 END) AS in_sum,
    sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN out ELSE 0 END) AS out_sum
  FROM c
  GROUP BY product_id
  ) c USING (product_id)
WHERE a.supplier_id='XXX';

Cheers,
Ken





Re: [GENERAL] Open Source CRM - Options?

2008-05-28 Thread Robin Helgelin
On Tue, May 27, 2008 at 4:18 AM, Mark Neely [EMAIL PROTECTED] wrote:
 I've already shortlisted potential CMS systems (including several open-
 source options, such as Drupal and Joomla).

[snip...]

 I am looking for examples of open-source CRM (or similar platforms)
 used for this kind of profiling/personalisation, and would appreciate
 any pointers readers of this newsgroup might be able to offer.

I think the important question is whether the op means CMS or CRM
(quite different).

-- 
 regards,
 Robin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] After Installing a Program I get this error: psql:sql/Pg-database.sql:825: ERROR: language plpgsql does not exist

2008-01-10 Thread Robin-Vossen

CREATE INDEX
psql:sql/Pg-database.sql:825: ERROR: language "plpgsql" does not exist
HINT: You need to use createlang to load the language into the database.
psql:sql/Pg-database.sql:828: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:836: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:839: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:841: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:843: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:845: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:855: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:858: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:868: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:871: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:929: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:932: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:935: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:938: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:960: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:963: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:989: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:992: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:994: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:996: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:998: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:1008: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:1010: ERROR: current transaction is aborted,
commands ignored until end of transaction block
psql:sql/Pg-database.sql:1012: ERROR: current transaction is aborted,
commands ignored until end of transaction block
So I thought let's add the language I'm missing,
doing:
createlang -d ledgersmb -U ledgersmb plpgsql
I keep getting:
createlang: language installation failed: ERROR: permission denied for
language "c"


So, I wonder what is the best and quickest way to fix this flaw?
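
For the record, the "permission denied for language c" message usually
means the connecting role is not a superuser; procedural languages have
to be installed by one. A sketch, assuming a superuser role named
postgres:

createlang -d ledgersmb -U postgres plpgsql

or, equivalently, from psql connected as a superuser:

CREATE LANGUAGE plpgsql;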

Thanks,
Robin
-- 
View this message in context: 
http://www.nabble.com/After-Installing-a-Program-I-get-this-error%3A-psql%3Asql-Pg-database.sql%3A825%3A-ERROR%3A-language-%22plpgsql%22-does-not-exist-tp14709592p14709592.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] history table

2007-08-21 Thread Robin Helgelin
Hi,

I want to save history for a few tables using triggers on update and
creation. What's the best approach to do this in a webapp environment
where I want to save which webapp user is making the change, not
the postgresql user?

-- 
regards,
Robin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] history table

2007-08-21 Thread Robin Helgelin
On 8/21/07, Guy Rouillier [EMAIL PROTECTED] wrote:
 Well, you haven't told us much about your webapp.  Are you using
 connection pooling?  If so, then you'll need to provide the webapp
 userid as an additional parameter to your database updates.  If you are
 not using connection pooling, such that your webapp userids are
 connecting as themselves, then the problem becomes much easier; you've
 got the correct userid to log by just looking at the connection details.

Yes, this is where I'm too new to postgresql: how do I tell the
database which user is logged in to the webapp? A session parameter?
There will be connection pooling, but if I know how to solve the
previous question I don't think it's hard to get it working with the
pool.

-- 
regards,
Robin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] history table

2007-08-21 Thread Robin Helgelin
On 8/21/07, Guy Rouillier [EMAIL PROTECTED] wrote:
 Well, I can't find a way to set a variable associated with a connection,
 so probably the easiest thing to do is to add an updated_by column to
 your regular table (i.e., the non-history version.)  Then just include
 the userid from your webapp as the value for that column.  Your history
 table can then be updated by just copying the entire row from the base
 table whenever an insert or update occurs.

Yes, I think this will be the easiest way, thanks!
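
A minimal sketch of that approach - an updated_by column filled in by
the webapp, plus a trigger copying each new row version into a history
table (all names hypothetical):

CREATE TABLE account (
    id         serial PRIMARY KEY,
    payload    text,
    updated_by text NOT NULL   -- the webapp user, passed in by the app
);
CREATE TABLE account_history (LIKE account);

CREATE OR REPLACE FUNCTION account_audit() RETURNS trigger AS $$
BEGIN
    INSERT INTO account_history SELECT NEW.*;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_audit_trg
    AFTER INSERT OR UPDATE ON account
    FOR EACH ROW EXECUTE PROCEDURE account_audit();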

-- 
regards,
Robin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] entry log

2007-08-20 Thread Robin Helgelin
On 8/19/07, Michael Glaesemann [EMAIL PROTECTED] wrote:
 As you mention, you could use a trigger instead of explicitly setting
 updated_at to DEFAULT, which might be more convenient because you
 don't need remember to set the updated_at column explicitly on update.

 Whether or not this information is *interesting* is really up to the
 specifics of your application, rather than answerable in a general
 sense.

I'm thinking it's probably going to make more sense to have a
logging/history table. What's the use of seeing when an entry was
updated when you don't know what was updated anyway :).

I guess that could be solved with triggers, each table have a trigger
that fires on update and runs a stored procedure.

-- 
regards,
Robin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] entry log

2007-08-19 Thread Robin Helgelin
Hi,

When I started with MySQL I exploited their bug with timestamp
fields and always had an entered and an updated field on my tables.

My question: is this information interesting enough to save on the
table itself? If so, I guess this could easily be solved with a
trigger; however, should one instead create a log table and log
changes, etc.?

Hints and tips are appreciated :)

-- 
regards,
Robin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Problem compiling on CentOS

2007-06-20 Thread Robin Ericsson

On 6/20/07, Robin Ericsson [EMAIL PROTECTED] wrote:

`-mcpu=' is deprecated. Use `-mtune=' or '-march=' instead.


Ok solved. This is what the configure-script barfs on. Hacking the
spec-file to change this flag removes this problem.

--
   regards,
   Robin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Problem compiling on CentOS

2007-06-19 Thread Robin Ericsson

Hi,

I'm trying to recompile the SRPM with ldap support (maybe this should
be added as an option to the spec?) but it fails to check for ldap_r
due to problems finding my pthread flag in configure.

Snip from configure:
checking whether pthreads work with -pthread... no
...
checking for ldap_simple_bind in -lldap_r... no
configure: error: library 'ldap_r' is required for LDAP

checking the log for why pthread failed:
configure:16390: checking whether pthreads work with -pthread
configure:16478: gcc -o conftest -O2 -g -march=i386 -mcpu=i686
-I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
-pthread -D_GNU_SOURCE  -I/usr/include   -L/usr/lib conftest.c   -lpam
-lssl -lcrypto -lkrb5 -lz -lreadline -ltermcap -lcrypt -ldl -lm  >&5
`-mcpu=' is deprecated. Use `-mtune=' or '-march=' instead.
conftest.c: In function 'main':
conftest.c:139: warning: null argument where non-null required (argument 1)
conftest.c:140: warning: null argument where non-null required (argument 1)
conftest.c:140: warning: null argument where non-null required (argument 3)
conftest.c:138: warning: 'th' is used uninitialized in this function
configure:16484: $? = 0
configure:16488: test -z "$ac_c_werror_flag"
|| test ! -s conftest.err
configure:16491: $? = 0
configure:16494: test -s conftest
configure:16497: $? = 0
configure:16535: result: no

Any ideas? This is a CentOS5 installation, only special I can think of
is that it runs as Xen domU, and I have the same problem on the dom0,
no non Xen available at the moment.

--
   regards,
   Robin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] changing the /tmp/ lock file?

2007-06-14 Thread Robin Ericsson

On 6/13/07, Ben [EMAIL PROTECTED] wrote:

Why would that be a problem if each is configured to listen on different
addresses?

But maybe a better question to ask would be how people are doing failover
in the case where you have two servers, each handling a seperate set of
data and acting as backup for each other. I fully expect things to go
slower during failover periods, but in my case, that's better than
doubling my hardware.


Why not run them on different ports and proxy the incoming ports
via iptables or something like that, based on the current situation on
your backends?

--
   regards,
   Robin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] simple coordinate system

2007-03-16 Thread Robin Ericsson

On 3/15/07, Webb Sprague [EMAIL PROTECTED] wrote:

http://www.postgresql.org/docs/8.2/static/datatype-geometric.html

Have you looked at these yet?  If not, you asked your question
prematurely and should have read the docs.  If so, in what respect do
they not work for you?


Yes, I've looked at those. I was thinking that point looked like a
good type, but it's only 2d, so maybe I need a hint on how to use this
in a 3d environment.

--
   regards,
   Robin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] simple coordinate system

2007-03-16 Thread Robin Ericsson

On 3/16/07, Tom Lane [EMAIL PROTECTED] wrote:

Robin Ericsson [EMAIL PROTECTED] writes:
 Yes, I've looked at those, I was thinking that point looked like a
 good type, but it's only 2d, so maybe I need a hint on how to use this
 in a 3d environment.

Yeah, the built-in geometric types are all 2D.  If you need 3D, perhaps
PostGIS can help --- otherwise you're on your own :-(.  But adding a new
datatype to PG isn't hard, if you can hack C at all.


My hope was that there was something between standard PostgreSQL and
PostGIS, as I didn't want to bring the whole of PostGIS into my
application. But it's probably worth it anyway.

--
   regards,
   Robin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] simple coordinate system

2007-03-15 Thread Robin Ericsson

Hi,

I'm planning a simple coordinate system, where objects are described
as x, y and z. Are there any contribs or extensions available that can
help me with datatypes, calculation of length between two points, etc?
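
One middle ground worth noting: the contrib module cube handles small
fixed-dimension points and distances. A sketch, assuming a version
where it installs as an extension (9.1+):

CREATE EXTENSION cube;
-- Euclidean distance between (0,0,0) and (1,2,2); returns 3:
SELECT cube_distance('(0,0,0)'::cube, '(1,2,2)'::cube);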

--
   regards,
   Robin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] simple coordinate system

2007-03-15 Thread Robin Ericsson

On 3/15/07, Webb Sprague [EMAIL PROTECTED] wrote:

... planning a simple coordinate system, where objects are described
 as x, y and z. Are there any contribs or extensions available that can
 help me with datatypes, calculation of length between two points, etc?

google postgis.  It is for geographic stuff, so maybe overkill, but
maybe not.  There are also geometry types native to Postgres that
you can find in the docs


I was thinking about PostGIS, but it seemed overkill for my purpose.
Therefore I asked in the first place :)

--
   regards,
   Robin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL and Munin

2006-10-19 Thread Robin Ericsson

On 10/19/06, DANTE Alexandra [EMAIL PROTECTED] wrote:


My problem is that I have not found any RPM for perl-rrdtool and
rrdtool for IA64 and RHEL4-AS (for the others, it's OK).
Does someone know where I can find them or tell me where I can find the
associated tar.gz archive ?


Download the source rpms and build them yourself?

--
   regards,
   Robin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL and Munin

2006-10-19 Thread Robin Ericsson

On 10/19/06, DANTE Alexandra [EMAIL PROTECTED] wrote:

Hello Robin, hello List,

Can I use it on RHEL4-AS ? I also tried to find the rpm sources on the
Red Hat web site, without success...
/index.php3/stat/3/srodzaj/2/search/rrdtool-1.2.13-1.fc3.rf.src.rpm


Download the source and try: rpmbuild --rebuild *.src.rpm. If it
doesn't work, at least you have the .spec file you can use to tweak it to
work.

--
   regards,
   Robin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] What is the Best Postgresql Load Balancing Solution available ?

2006-09-18 Thread Robin Ericsson

On 9/18/06, Najib Abi Fadel [EMAIL PROTECTED] wrote:

Hi,

I was searching for a load-balancing solution for
postgres. I found some ready-to-use software like
PGCluster, Slony, pgpool and others.

It would really be nice if someone knows which one is
the best, taking into consideration that I have an
already-running application that I need to load
balance.


There isn't one tool that is the best; all three work very well based
on where they are used and what they are used for.

--
   regards,
   Robin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Messages to pgsql-general list not being posted

2006-07-19 Thread Robin Ericsson

On 7/19/06, Roman Neuhauser [EMAIL PROTECTED] wrote:

Both are in my pgsql-general maildir, message ids:
[EMAIL PROTECTED] [EMAIL PROTECTED]


I find those as well.


regards,
   Robin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Problem in creating a table

2005-05-17 Thread Robin Ericsson
Wen Guangcheng wrote:
 Hi Gurus,

Hi!

 CREATE TABLE TD_ACCESSCOUNT(
 AC_YEAR     NUMBER(4,0)    NOT NULL,
 AC_MONTH    NUMBER(2,0)    NOT NULL,
 AC_DAYS     NUMBER(2,0)    NOT NULL,
 OFFICE_ID   VARCHAR2(7)    NOT NULL,
 AC_COUNT    NUMBER(6,0)    DEFAULT 0,
 PRIMARY KEY("AC_YEAR","AC_MONTH","AC_DAYS","OFFICE_ID")
 )
 TABLESPACE RAPIS;
  ^^

 psql:/opt/rapisa/sql/TD_ACCESSCOUNT.sql:14: ERROR:  syntax error at or near
 "(" at character 52

 I really don't know what is wrong with it and do appreciate it if anyone would
 point it out. The version of PostgreSQL is 7.4.7.
 Thanks in advance.

Tablespace wasn't supported until 8.0


regards,
Robin

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly

Re: [GENERAL] Problem in creating a table

2005-05-17 Thread Robin Ericsson
Wen Guangcheng wrote:

Hi,

 TABLESPACE RAPIS;
 psql:/opt/rapisa/sql/TD_ACCESSCOUNT.sql:14: ERROR:  syntax error at or near
 "(" at character 52

 I really don't know what is wrong with it and do appreciate it if anyone would
 point it out. The version of PostgreSQL is 7.4.7.
 Thanks in advance.

Tablespace wasn't supported until 8.0


regards,
    Robin

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

[GENERAL] Deep integration of PostgreSQL with Apache

2005-05-03 Thread Robin Boerdijk
Hi,

Apologies if this has been discussed before, but I was wondering if
there have been any efforts in the past to provide a deep integration
of PostgreSQL with Apache. What I mean by deep integration is that the
PostgreSQL server logic runs inside the Apache server processes, rather
than separate processes. In particular, the postmaster server logic
would run inside the Apache master process and the postgres server
logic would run inside Apache child processes.

The main advantage of this approach would be that it avoids the
Apache/PostgreSQL context switch when executing SQL requests from the
web server. It looks like the Apache server and PostgreSQL server
architectures are quite similar to make this feasible. Any thoughts?

Best regards,

Robin Boerdijk

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Deep integration of PostgreSQL with Apache

2005-05-03 Thread Robin Boerdijk
--- Thomas Hallgren [EMAIL PROTECTED] wrote:
 Robin Boerdijk wrote:
  Hi,
  
  Apologies if this has been discussed before, but I was wondering if
  there have been any efforts in the past to provide a deep
 integration
  of PostgreSQL with Apache. What I mean by deep integration is that
 the
  PostgreSQL server logic runs inside the Apache server processes,
 rather
  than separate processes. In particular, the postmaster server logic
  would run inside the Apache master process and the postgres server
  logic would run inside Apache child processes.
  
  The main advantage of this approach would be that it avoids the
  Apache/PostgreSQL context switch when executing SQL requests from
 the
  web server. It looks like the Apache server and PostgreSQL server
  architectures are quite similar to make this feasible. Any
 thoughts?
  
 The PostgreSQL backend is inherently single-threaded and a new
 process 
 is forked each time you establish a new connection (session) so the 
 integration you ask for is not in anyway possible unless you are
 content 
 with one single database connection.

I agree that it is not trivial, but is it feasible? Specifically, I'm
thinking about the following approach:

1. Strip all networking logic and the logic that manages the postgres
child servers from the postmaster server. The logic that remains is
code that manages the auxiliary processes such as the bgwriter and
statistics collector. Integrate this remaining logic in the the Apache
master server.

2. Strip all networking logic from the postgres server. The logic that
remains is logic for executing queries against the database. Integrate
this remaining logic in the Apache child server.

The result of this is an integrated web/database server where all
networking is handled by Apache instead of postmaster/postgres. Other
than that, I see no difference with the way PostgreSQL works out of the
box. The Apache master server functions as the postmaster (i.e.
managing the child server processes) and the Apache child servers
function as the postgres servers (i.e. access the database). Why would
this web/database server be limited to using only one connection?

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Deep integration of PostgreSQL with Apache

2005-05-03 Thread Robin Boerdijk
  Apologies if this has been discussed before, but I was wondering
 if
  there have been any efforts in the past to provide a deep
 integration
  of PostgreSQL with Apache. What I mean by deep integration is that
 the
  PostgreSQL server logic runs inside the Apache server processes,
 rather
  than separate processes. In particular, the postmaster server
 logic
  would run inside the Apache master process and the postgres server
  logic would run inside Apache child processes.
 
 I think the closest you are going to get is Apache::DBI; however, since
 PostgreSQL is process-based you are going to get a new connection
 for every connection to Apache.

Yes, I didn't think about that one. Those Apache child processes serve
only a single connection at a time. I guess a single postgres server
process must be quite heavyweight then compared to an Apache child
process.


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-05 Thread Robin Ericsson
Martijn van Oosterhout wrote:
I suppose the choice comes down to either PHP splitting the DB access
(like other languages) or PostgreSQL splitting out pl/PHP.
Most major distributions (Fedora Core, Debian, Red Hat) split core php 
and database access into different packages. It might be that sqlite is core, 
so that newer php that has that change also bundles libsqlite.

php
php-mysql
php-pgsql
...

regards,
Robin
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-05 Thread Robin Ericsson
Martijn van Oosterhout wrote:
On Tue, Apr 05, 2005 at 11:17:48AM +0200, Robin Ericsson wrote:
Martijn van Oosterhout wrote:
I suppose the choice comes down to either PHP splitting the DB access
(like other languages) or PostgreSQL splitting out pl/PHP.
Most major distributions (Fedora Core, Debian, Redhat) splits core php 
and database-access in different packages. Might be that sqlite is core, 
that newer php that have that change also bundles libsqlite.

Ah yes, I meant to check this but packages.debian.org is down. From my
Sources file, php3-pgsql is generated from the main php3 package. But
php4-pgsql has its own source bundle. Maybe the problem is solved?
Ah, you mean it that way. I can't say for debian as the site is still 
down :) But at least Fedora Core uses one main package to generate all 
binary packages, so I guess the problem is still there.

regards,
Robin
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] 8.0.2beta1 RPMs

2005-04-02 Thread Robin Ericsson
Devrim GUNDUZ wrote:
PostgreSQL RPM Building Project[1] has built RPMs for 8.0.2beta1. RPMs 
for Red Hat Linux 9, Red Hat Enterprise Linux Enterprise Server 3.0, and 
Fedora Core 1, 2, 3 are now available, with more to come later.
Are the Fedora Core packages compatible with packages from Fedora Core 
itself? I.e., will files be in the same places if I upgrade from FC packages?

regards,
Robin
--
Robin Ericsson
http://robin.vill.ha.kuddkrig.nu/
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] truncate/create slowness

2005-03-31 Thread Robin M.
unsubscribe pgsql-general
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] LWM 2004 Readers' Choice Nomination

2005-03-28 Thread Robin Ericsson
Christopher Browne wrote:
The poll isn't about OSS; it's a popularity contest for proprietary
software that runs on Linux.
It's interesting to see that MySQL is only third at the moment.
regards,
Robin
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] prelimiary performance comparison pgsql vs mysql

2005-03-20 Thread Robin Ericsson
Mark Rae wrote:
I would say that doing the concurrency tests is probably the most
important factor in comparing other databases against MySQL, as
MySQL will almost always win in single-user tests.
E.g. here are some performance figures from tests I have done in the past.
This is with a 6GB databse on a 4CPU Itanium system running a mixture of
read-only queries, but it is fairly typical of the behaviour I have seen. 
The Oracle figures also scaled in a similar way to postgres.

Clients   1 2 3 4 6 812163264   128
---
mysql-4.1.11.00  1.41  1.34  1.16  0.93  1.03  1.01  1.00  0.94  0.86  0.80
pg-7.4.1   0.65  1.27  1.90  2.48  2.45  2.50  2.48  2.51  2.49  2.39  2.38
It would be interesting to know about the tuning of the MySQL instance. I 
guess that the buffers for indexing and sorting are well set up, but what 
about thread caching? Knowing that once in a while you will have a 
connection burst, you can tell mysql to cache threads so that it can save 
time the next time it needs them.

--
Robin Ericsson
http://robin.vill.ha.kuddkrig.nu/
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[GENERAL] unsubscribe

2005-01-06 Thread Robin M.
unsubscribe


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] unsubscribe

2005-01-06 Thread Robin M.
unsubscribe

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need

2004-11-24 Thread Robin Ericsson
On Wed, 2004-11-24 at 08:30 -0800, Joshua D. Drake wrote:
 Peter Eisentraut wrote:
 
 Joshua D. Drake wrote:
   
 
 Well you can't just upgrade 7.2.1 to 7.4.6. You have to dump and
 restore.
 
 
 
 The Debian package does that automatically.  On some days...
   
 
 Really? WOW! I wonder if Gentoo does that. That is pretty
 remarkable.

Gentoo tells you that you need to dump and remove the cluster before it
even tries to upgrade; at least it did for me when going from 7.3 to 7.4.



regards,
Robin


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] not using index through procedure

2004-10-14 Thread Robin Ericsson
After some discussion on the performance list, I guess this is back to a
general question :)

This is a very simplified version of my real query, but it should show
the shape of the problem.

CREATE OR REPLACE FUNCTION ago(interval) RETURNS timestamp AS
'SELECT (now() - $1)::timestamp without time zone'
  LANGUAGE 'sql' IMMUTABLE STRICT;

This query uses the index without problem.
SELECT entered
FROM data
WHERE ago('60 seconds') < data.entered;

However using this function
CREATE OR REPLACE FUNCTION get_machine_status(interval) RETURNS
timestamp AS
'SELECT entered
FROM data
WHERE ago($1) < data.entered
' LANGUAGE 'sql' VOLATILE;

and call it like this:
select * from get_machine_status('60 seconds');
makes the query not use the index. I guess it's some basic problem I'm having;
maybe I should make this into a view instead?
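
One workaround others have used: keep the function in plain SQL, mark it
non-volatile, and return SETOF so the planner has a chance to inline the
body and see the index condition. A sketch only; whether inlining
happens depends on the version and the usual preconditions:

CREATE OR REPLACE FUNCTION get_machine_status(interval)
RETURNS SETOF timestamp AS
'SELECT entered
FROM data
WHERE ago($1) < data.entered
' LANGUAGE 'sql' STABLE;

SELECT * FROM get_machine_status('60 seconds');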


Regards,
Robin


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] not using index through procedure

2004-10-14 Thread Robin Ericsson
Martijn van Oosterhout [EMAIL PROTECTED] wrote:

 One's marked VOLATILE, the other is marked IMMUTABLE. This affects
 whether it's considered a constant, the planner estimates and hence
 whether it uses the index.

Is there even a way to solve it this way via a procedure?

I've tried different approaches, moving the ago() outside the procedure,
having the procedure take a timestamp as argument instead of an interval,
but still no luck.

However, removing the $1 argument altogether and replacing it with
ago('60 seconds') makes the procedure fly.


Regards,
Robin
_
This mail sent using V-webmail - http://www.v-webmail.org


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] [PERFORM] query problem

2004-10-13 Thread Robin Ericsson
On Wed, 2004-10-13 at 18:01 +0200, Robin Ericsson wrote:
 Using an exact timestamp makes the query go back to its proper speed (see
 explain below). However I still have the problem using a stored
 procedure or even using the ago example from above.

Well, after changing ago() to use timestamp without time zone, it works
fine in the query. This query now takes ~2ms.

SELECT
data.entered,
data.machine_id,
datatemplate_intervals.template_id,
data_values.value
FROM
data, data_values, datatemplate_intervals
WHERE
datatemplate_intervals.id = data_values.template_id AND
data_values.data_id = data.id AND
data.machine_id IN (SELECT machine_id FROM machine_group_xref
WHERE group_id = 1) AND
ago('60 seconds') < data.entered

Using it in this procedure:
select * from get_current_machine_status('60 seconds', 1);
takes ~100s. Maybe there's something obviously wrong I'm doing?

CREATE TYPE public.mstatus_holder AS
   (entered timestamp,
machine_id int4,
template_id int4,
value varchar);
CREATE OR REPLACE FUNCTION public.get_current_machine_status(interval,
int4)
  RETURNS SETOF mstatus_holder AS
'
SELECT
data.entered,
data.machine_id,
datatemplate_intervals.template_id,
data_values.value
FROM
data, data_values, datatemplate_intervals
WHERE
datatemplate_intervals.id = data_values.template_id AND
data_values.data_id = data.id AND
data.machine_id IN (SELECT machine_id FROM machine_group_xref WHERE
group_id = $2) AND
ago($1) < data.entered
'
  LANGUAGE 'sql' VOLATILE;


Regards,
Robin



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] [PERFORM] query problem

2004-10-13 Thread Robin Ericsson
Sorry, this should have been going to performance.



Regards,
Robin


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] query gone haywire :)

2004-10-12 Thread Robin Ericsson
On Fri, 2004-10-08 at 10:11 -0400, Tom Lane wrote:
 Robin Ericsson [EMAIL PROTECTED] writes:
   ->  Index Scan using idx_d_entered on data  (cost=0.00..18024.04 rows=50360 width=16) (actual time=0.210..0.247 rows=1 loops=1)
         Index Cond: (((('now'::text)::timestamp(6) with time zone)::timestamp without time zone - '00:01:00'::interval) < entered)
 
 You're running into the well-known problem that the planner can't make
 good estimates for index conditions that involve non-constant terms
 (such as CURRENT_TIMESTAMP).  Lacking a decent estimate, it guesses that
 this scan will produce many more rows than it really will, and so it
 tends to favor plans that would be good in that scenario, but are not
 optimal for retrieving just a couple of rows.
 
 One workaround is to do the date arithmetic on the client side; another
 is to cheat by hiding the arithmetic in a function like ago(interval)
 that you lyingly claim is IMMUTABLE.  See the pgsql-performance
 archives.
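
For what it's worth, the first workaround would look like this sketch,
with the application computing the cutoff and sending a constant literal
the planner can estimate (the timestamp below is just an example value):

SELECT data.entered
FROM data
WHERE '2004-10-08 10:10:00'::timestamp < data.entered;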

I did run a new explain analyze on the query and found the attached
result.

status=# EXPLAIN ANALYZE
status-# SELECT
status-# data.entered,
status-# data.machine_id,
status-# datatemplate_intervals.template_id,
status-# data_values.value
status-# FROM
status-# data, data_values, datatemplate_intervals
status-# WHERE
status-# datatemplate_intervals.id = data_values.template_id AND
status-# data_values.data_id = data.id AND
status-# data.machine_id IN (2,3) AND
status-# current_timestamp::timestamp - interval '60 seconds' < data.entered;

It seems very strange that it does a full index scan on idx_dv_data_id.



Regards,
Robin

 Hash Join  (cost=28646.01..274260.15 rows=555706 width=24) (actual time=102323.087..102323.196 rows=5 loops=1)
   Hash Cond: (outer.template_id = inner.id)
   ->  Merge Join  (cost=28644.09..265922.62 rows=555706 width=24) (actual time=102322.632..102322.709 rows=5 loops=1)
         Merge Cond: (outer.data_id = inner.id)
         ->  Index Scan using idx_dv_data_id on data_values  (cost=0.00..205034.19 rows=9580032 width=16) (actual time=17.503..86263.130 rows=9596747 loops=1)
         ->  Sort  (cost=28644.09..28870.83 rows=90697 width=16) (actual time=0.829..0.835 rows=1 loops=1)
               Sort Key: data.id
               ->  Index Scan using idx_d_entered on data  (cost=0.00..20202.81 rows=90697 width=16) (actual time=0.146..0.185 rows=1 loops=1)
                     Index Cond: (((('now'::text)::timestamp(6) with time zone)::timestamp without time zone - '00:01:00'::interval) < entered)
                     Filter: ((machine_id = 2) OR (machine_id = 3))
   ->  Hash  (cost=1.74..1.74 rows=74 width=8) (actual time=0.382..0.382 rows=0 loops=1)
         ->  Seq Scan on datatemplate_intervals  (cost=0.00..1.74 rows=74 width=8) (actual time=0.024..0.250 rows=74 loops=1)
 Total runtime: 102323.491 ms
(13 rows)


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] query gone haywire :)

2004-10-08 Thread Robin Ericsson
The query had been running OK for some time, but this morning I decided
to run vacuum analyze (vacuumdb -a -z) on the cluster, and suddenly the
query isn't running well at all. This query has only one value in the IN;
if I add another id, the query becomes really, really slow.

Query:
SELECT
data.entered,
data.machine_id,
datatemplate_intervals.template_id,
data_values.value
FROM
data, data_values, datatemplate_intervals
WHERE
datatemplate_intervals.id = data_values.template_id AND
data_values.data_id = data.id AND
data.machine_id IN (2) AND
current_timestamp::timestamp - interval '60 seconds' < data.entered

Indexes exist on data_values.template_id, data.entered,
data.machine_id, datatemplate_intervals.machine_id,
datatemplate_intervals.template_id.

Data contains almost 1.5 million entries, and data_values around 9.1
million. As I write this letter I'm checking the tables in pgAdmin, and it
tells me this for table data:
Rows (estimated) 1
Rows (counted)   1491401
even though I ran vacuum analyze on the table itself from pgAdmin.
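
As a quick sanity check (a sketch), the estimate pgAdmin shows comes from
pg_class.reltuples, which a fresh ANALYZE should bring close to the real
count:

VACUUM ANALYZE data;
SELECT relname, reltuples FROM pg_class WHERE relname = 'data';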

Explain analyze result attached as explain-analyze.txt

Explain without analyze when using IN(2,3) attached as explain.txt


Regards,
Robin
 Hash Join  (cost=1.93..175359.44 rows=152524 width=24) (actual time=0.972..1.085 rows=5 loops=1)
   Hash Cond: (outer.template_id = inner.id)
   ->  Nested Loop  (cost=0.00..173069.64 rows=152524 width=24) (actual time=0.274..0.358 rows=5 loops=1)
         ->  Index Scan using idx_d_entered on data  (cost=0.00..18024.04 rows=50360 width=16) (actual time=0.210..0.247 rows=1 loops=1)
               Index Cond: (((('now'::text)::timestamp(6) with time zone)::timestamp without time zone - '00:01:00'::interval) < entered)
               Filter: (machine_id = 2)
         ->  Index Scan using idx_dv_data_id on data_values  (cost=0.00..3.04 rows=3 width=16) (actual time=0.049..0.074 rows=5 loops=1)
               Index Cond: (data_values.data_id = outer.id)
   ->  Hash  (cost=1.74..1.74 rows=74 width=8) (actual time=0.438..0.438 rows=0 loops=1)
         ->  Seq Scan on datatemplate_intervals  (cost=0.00..1.74 rows=74 width=8) (actual time=0.028..0.255 rows=74 loops=1)
 Total runtime: 1.575 ms
(11 rows)

 Hash Join  (cost=27010.45..254388.49 rows=253820 width=24)
   Hash Cond: (outer.template_id = inner.id)
   ->  Merge Join  (cost=27008.52..250579.25 rows=253820 width=24)
         Merge Cond: (outer.data_id = inner.id)
         ->  Index Scan using idx_dv_data_id on data_values  (cost=0.00..197001.44 rows=9127404 width=16)
         ->  Sort  (cost=27008.52..27218.04 rows=83806 width=16)
               Sort Key: data.id
               ->  Index Scan using idx_d_entered on data  (cost=0.00..19266.46 rows=83806 width=16)
                     Index Cond: (((('now'::text)::timestamp(6) with time zone)::timestamp without time zone - '00:01:00'::interval) < entered)
                     Filter: ((machine_id = 2) OR (machine_id = 3))
   ->  Hash  (cost=1.74..1.74 rows=74 width=8)
         ->  Seq Scan on datatemplate_intervals  (cost=0.00..1.74 rows=74 width=8)
(12 rows)


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Join a varchar array

2004-06-17 Thread Robin Ericsson
Hi,

I have a field in which I save the following:
{{8,0.58},{9,972420},{10,239544},{6,0.49},{7,0.63}}

The first field is a reference to an id in another table and the second
field is a value.

Can these values be used somehow in a select query to join the other
table?

I don't mind getting them on different rows but I would prefer to 
get them back as an array.


regards
Robin


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Join a varchar array

2004-06-17 Thread Robin Ericsson
On Thu, 2004-06-17 at 11:38, Richard Huxton wrote:
 Robin Ericsson wrote:
  Hi,
  
  I have a field in which I save the following:
  {{8,0.58},{9,972420},{10,239544},{6,0.49},{7,0.63}}
  
  The first field is a reference to an id in another table and the second
  field is a value.
  
  Can these values be used somehow in a select query to join the other
  table?
 
 You appear to be using the array as a replacement for a table. Don't do 
 that. From the little information you've provided, it looks like you 
 want a separate table (a_ref, b_ref, float_val) where a_ref references 
 the current table, b_ref the other table.

That's correct. I chose this approach because there will be a lot of
entries and it will grow pretty quickly, but maybe Postgres will be able
to handle this?

If I went for a separate table, it would contain over 3 million entries
within a couple of weeks, and it would keep growing after that, though
not at the same speed as in the first weeks.
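
For reference, a sketch of the separate-table layout Richard describes
(all names here are made up); with an index on the lookup column, a few
million rows is well within what Postgres handles comfortably:

CREATE TABLE field_values (
    parent_id int4 NOT NULL,  -- references the current table's id
    ref_id    int4 NOT NULL,  -- the first array field (id in the other table)
    value     numeric         -- the second array field
);
CREATE INDEX idx_fv_parent ON field_values (parent_id);

-- the join then becomes straightforward:
SELECT fv.parent_id, o.*, fv.value
FROM field_values fv, other_table o
WHERE o.id = fv.ref_id AND fv.parent_id = 42;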



regards
Robin


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] dynamic parameters in procedure

2004-05-21 Thread Robin Ericsson
Hi,

I want to create a stored procedure that can take a dynamic number of
in-parameters and build its internal query from those parameters.

My idea was to use text[] as the input parameter, like this.

CREATE OR REPLACE FUNCTION get_table(text[])
  RETURNS SETOF mytable AS
'
 DECLARE
params ALIAS FOR $1;
query VARCHAR;
entry RECORD;
 BEGIN
query := \'SELECT * FROM mytable\';

FOR entry IN EXECUTE query LOOP
RETURN NEXT entry;
END LOOP;
RETURN;
 END;
'
  LANGUAGE 'plpgsql';

My question is: how do I loop over the input parameters?

I plan to call it like this: 

SELECT * FROM get_table('{field1=1, field2=3}');

Do I make any sense? :)
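
In case it helps, here is a minimal sketch of looping over the array
argument (assuming array_lower()/array_upper() and split_part() are
available, as in 7.4; show_params is a made-up name):

CREATE OR REPLACE FUNCTION show_params(text[]) RETURNS SETOF text AS
'
 DECLARE
    params ALIAS FOR $1;
    i INTEGER;
 BEGIN
    FOR i IN array_lower(params, 1) .. array_upper(params, 1) LOOP
        -- params[i] looks like \'field1=1\'; split it into name and value
        RETURN NEXT split_part(params[i], \'=\', 1)
                    || \' => \'
                    || split_part(params[i], \'=\', 2);
    END LOOP;
    RETURN;
 END;
'
  LANGUAGE 'plpgsql';

-- SELECT * FROM show_params('{field1=1,field2=3}');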


regards
Robin


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Newbie timestamp question

2004-03-17 Thread Robin 'Sparky' Kopetzky
Good Morning!!

I'm repairing a series of PHP scripts that use MySQL's 'datetime' type
and converting them to Postgres. The question is this: the datetime format
used in the scripts is 'YYYYMMDDHHMMSS' as a text string. Do I have to
convert this to the format shown in the Postgres manual, '1999-01-08
04:05:06', for Postgres to accept the value, or can I just pass an integer
such as 19990108040506 for the timestamp?

Thanks in advance.

Robin Kopetzky
Black Mesa Computers/Internet Services
Grants, NM



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Newbie timestamp question

2004-03-17 Thread Robin 'Sparky' Kopetzky
Thank you for all your help.

I built two simple functions (extract_timestamp, build_timestamp) to tear
apart a timestamp and put it back together to make the job easier.
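
In case it's useful to others, such helpers could be as thin as this
sketch (the bodies here are my guess at the implementation; to_char()
and to_timestamp() do the real work):

CREATE OR REPLACE FUNCTION extract_timestamp(text) RETURNS timestamp AS
'SELECT to_timestamp($1, \'YYYYMMDDHH24MISS\')::timestamp without time zone'
  LANGUAGE 'sql';

CREATE OR REPLACE FUNCTION build_timestamp(timestamp) RETURNS text AS
'SELECT to_char($1, \'YYYYMMDDHH24MISS\')'
  LANGUAGE 'sql';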

Thanks again!

'Sparky'





---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] pg_dump return failed sanity check

2000-05-17 Thread Patrick Robin

Hi,

When I try to use pg_dump, I get this error. Could it have something to
do with a custom type I added? I made sure I added the input/output
functions and comparison functions for sorting and queries. The type
works fine in SQL queries in general.

 pg_dump -s scm
\connect - d23adm
failed sanity check,  type with oid 457690 was not found
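
A first diagnostic sketch: ask the catalogs whether they still know the
oid the dump complains about, e.g.

SELECT oid, typname FROM pg_type WHERE oid = 457690;

If nothing comes back, something still references a type row that no
longer exists.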

Thanks

Patrick

--

Patrick Robin
[EMAIL PROTECTED]
Walt Disney Feature Animation
500 South Buena Vista Street
Burbank,California 91521-4817






Re: [GENERAL] demande d'aide

2000-03-24 Thread Patrick Robin

Postgres is included in RedHat Linux.

Postgres does not compare easily to MYSQL, because it is a true RDBMS
(relational database). MYSQL is a partial emulation.

Patrick Robin

chriqi abdelkarim wrote:

 hello;
 first of all, let me introduce myself: I am a fourth-year Moroccan
 student, and I am preparing my final-year project over the next two
 months. I would like to know, please, whether POSTGRES is included in a
 version of LINUX, and also which is the better of POSTGRES and MYSQL.
 awaiting your reply, please accept, sir, my warmest regards.
 __
 Get Your Private, Free Email at http://www.hotmail.com

--

Patrick Robin
[EMAIL PROTECTED]
Walt Disney Feature Animation
500 South Buena Vista Street
Burbank,California 91521-4817






Re: [GENERAL] COALESCE() or NVL()

1998-06-18 Thread Robin Thomas

At 12:03 PM 6/17/98 +, Jose' Soares Da Silva wrote:
 Jose' Soares Da Silva writes:
   SELECT name,NVL(salary)+100 AS dream FROM emp;
   name   |dream
   ---+-
   Sam| 1300
   Claire | 5100
   Bill   | 4300
   Ginger | 4900
   NULL VALUES|  --- I expected 100 here.
   (5 rows)

SELECT name, NVL(salary, 0) + 100 AS dream FROM emp;

NVL() takes two values: the column/variable, and the value to use if NULL.
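
Where the SQL-standard COALESCE() from the subject line is available, the
equivalent (as a sketch) is:

SELECT name, COALESCE(salary, 0) + 100 AS dream FROM emp;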

--
Robin Thomas
[EMAIL PROTECTED]