Re: [GENERAL] Cognitive dissonance

2010-06-09 Thread Brian Modra
On 09/06/2010, John Gage  wrote:
> 1) On a list that howls with complaints when posts are in html, it is
> surprising that there is resistance to the idea of documentation in
> plain text.
>
> 2) Posters are correctly referred to the documentation as frequently
> as possible.  In fact, very frequently.  The frequency might decrease
> if the documentation were in plain text.  It is easier to search a
> single plain text file than any other source, except perhaps the
> database itself.
>
> 3) Postgres is getting pushed off the map at the low end by MySQL, now
> owned by Oracle.If Postgres ceased to exist, Ellison would be
> thrilled.  I chose A2 Hosting (with whom I am very happy) for my
> website because they support Postgres.  I'm writing cgi scripts in
> perl.  I had to install the postgres driver for dbi.  It was not pre-
> installed.  There are about four buttons for MySQL on the cPanel and
> two farther over on the right for Postgres.
>
> An anecdote.  I discovered the tsvector functionality a while back.  I
> have used it to create indices for my text files and several other
> tasks.  I recently was re-looking at my files and saw
> "tsvector::text".  I had forgotten that the double colon is one way to
> cast a type.  Double colon is not in the html index of the
> documentation.  I found it by searching my plain text version of the
> pdf file.  In my opinion, the html documentation is useful for reading
> it like a novel or referencing it in these lists.
>
>
> On Jun 8, 2010, at 9:56 PM, Josh Kupershmidt wrote:
>
>> Not that I see a whole lot of utility in this endeavor

Personally I like to use html docs, and it would be good if the
documentation were downloadable from the postgresql website in other
formats, for convenience...

But, what I use is this, which works pretty well:

(e.g. to get the 8.1 docs)

mkdir postgresql
cd postgresql
wget -r -nH -l 10 -k -np \
  http://www.postgresql.org/docs/8.1/interactive/index.html

... then after it all downloads:

open the file docs/8.1/interactive/index.html
in your web browser.

e.g.
links docs/8.1/interactive/index.html


HTML is "text", so you can search using grep e.g.
grep -r "ALTER TABLE .* ADD COLUMN" docs/8.1



-- 
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 69 77 082
5 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa
http://www.zwartberg.com/
Fax: +27865510467

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Clustering, parallelised operating system, super-computing

2010-05-13 Thread Brian Modra
On 14/05/2010, Bruce Momjian  wrote:
> Brian Modra wrote:
>> Hi,
>> I've been told that PostgreSQL and other similar databases don't work
>> well on a parallelised operating system because they make good use of
>> shared memory which does not cross the boundary between nodes in a
>> cluster.
>>
>> So I am wondering if any work is being done to make it possible to
>> have a single database schema that spans a number of hosts?
>>
>> For example, a table on one host/node that has a reference to a table
>> on another host/node with deletes cascading back.
>> e.g.
>
> Not currently.  There are some prototypes in development, but those
> usually have the same database on all the machines and they share the
> load.

I'm trying to solve the problem of firstly distributing the volume of
data, and secondarily the load.

So far, I'm putting some bulky data onto different hosts, where there
is no need to ever do a join. I put a "reference" table onto a host
with the data that needs to be joined, then I can select the actual
data from the other host by unique IDs after the join has been
performed locally.

To create a reference with "on delete cascade" across hosts, I create
an AFTER DELETE trigger, and in the plpgsql trigger function I call
dblink to do the remote delete.
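A minimal sketch of such a trigger, assuming the dblink contrib module is installed on the local node (the table, column, and connection-string values here are all hypothetical):

```sql
-- Hypothetical sketch: propagate deletes on person (host A) to
-- race_entry (host B) via dblink.  Connection details are illustrative.
CREATE OR REPLACE FUNCTION cascade_remote_delete() RETURNS trigger AS $$
BEGIN
  PERFORM dblink_exec(
    'host=hostB dbname=mydb user=me password=secret',
    'DELETE FROM race_entry WHERE person = '
      || quote_literal(OLD.username));
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_remote_cascade
  AFTER DELETE ON person
  FOR EACH ROW EXECUTE PROCEDURE cascade_remote_delete();
```

Note that this gives no transactional guarantee across the two hosts: if the remote delete fails or the local transaction later rolls back, the two nodes can diverge.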

Similarly, I can do joins in plpgsql with the help of dblink.
But, doing joins across hosts certainly does defeat the purpose of
"distributing the load".

I think that the schema design must be done carefully when distributing data.
So it really will be difficult to get this "supercomputer database" right.

Maybe the best way to solve this is not to do automatic distribution
of the data, but rather to provide tools for implementing distributed
references and joins.

I'm thinking of working on this as part of "The Karoo Project", an open
source project I'm working on, and would appreciate
comments/support/criticism.
Thanks



[GENERAL] Clustering, parallelised operating system, super-computing

2010-05-13 Thread Brian Modra
Hi,
I've been told that PostgreSQL and other similar databases don't work
well on a parallelised operating system because they make good use of
shared memory which does not cross the boundary between nodes in a
cluster.

So I am wondering if any work is being done to make it possible to
have a single database schema that spans a number of hosts?

For example, a table on one host/node that has a reference to a table
on another host/node with deletes cascading back.
e.g.

on host A:

create table person (
username text not null unique primary key,
...
);


on host B:

create table race_entry (
   person text
   references person(username)
   match full
   on delete cascade
   on update cascade,
   ...
);



Re: [GENERAL] Performance issues when the number of records are around 10 Million

2010-05-11 Thread Brian Modra
On 11/05/2010, Sergey Konoplev  wrote:
> On 11 May 2010 10:18, venu madhav  wrote:
>> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
>> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
>> e.wifi_addr_2, e.view_status, bssid  FROM event e, signature s WHERE
>> s.sig_id = e.signature   AND e.timestamp >= '1270449180' AND
>> e.timestamp < '1273473180'  ORDER BY e.cid DESC,  e.cid DESC limit 21
>> offset 10539780;
>>
>> Can any one suggest me a better solution to improve the performance.
>> Please let me know if you've any further queries.
>
> 1. Avoid using large OFFSETs. Do instead "... ORDER BY e.cid, e.cid LIMIT
> 21;"
> 2. What "EXPLAIN SELECT ..." shows?
> 3. What "\d event" prints?
>
> --
> Sergey Konoplev
>
> Blog: http://gray-hemp.blogspot.com /
> Linkedin: http://ru.linkedin.com/in/grayhemp /
> JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802
>

Have you checked whether the indexes contain lots of dead references?
Try creating new indexes and then dropping the old ones (or just use
REINDEX if this is not an online production database).
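A sketch of the online variant; the index and table names here are made up, and CREATE INDEX CONCURRENTLY is available from 8.2 on:

```sql
-- Rebuild an index without locking out writers (hypothetical names)
CREATE INDEX CONCURRENTLY event_ts_idx_new ON event ("timestamp");
DROP INDEX event_ts_idx;
ALTER INDEX event_ts_idx_new RENAME TO event_ts_idx;

-- Or, if the table can be taken offline briefly:
REINDEX TABLE event;
```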



Re: [GENERAL] how to get the deleted data

2010-05-05 Thread Brian Modra
On 06/05/2010, AI Rumman  wrote:
> Unauthorized user deleted some critical data from the database.
> I don't have any log.
>
> Is it possible to get what was the command used to delete the data?
>

Is your server archiving WALs? If so, you may be able to restore your
last base backup and replay the WAL to a point in time just before the
delete.


Re: [GENERAL] Fuzzy string matching of product names

2010-04-05 Thread Brian Modra
On 05/04/2010, Peter Geoghegan  wrote:
> Hello,
>
> At the moment, users of my application, which runs on 8.4.3, may
> search for products in a way that is implemented roughly like this:
>
> SELECT * FROM products WHERE description ILIKE '%%usr_string%%';
>
> This works reasonably well. However, I thought it would be a nice
> touch to give my users leeway to spell product names incorrectly when
> searching, or to not have to remember if a product is entered as "coca
> cola", "CocaCola" or "Coca-cola". At the moment, they don't have to
> worry about case sensitivity because I use ILIKE - I'd like to
> preserve that. I'd also like to not have it weigh against them heavily
> when they don't search for a specific product, but just a common
> substring. For example, if they search for "coca-cola", there may be a
> number of different coca-cola products: "CocaCola 330ml can",
> "Coca-Cola 2 litre bottle", but no actual plain "cocacola". That ought
> to not matter too much - all cocacola products should be returned.
>
> This isn't important enough for me to be willing to add a big
> dependency to my application. I'd really prefer to limit myself to the
> contrib modules. pg_trgm and fuzzystrmatch look very promising, but
> it's not obvious how I can use either to achieve what I want.
> Postgres's built-in regex support may have a role to play too.
>
> I can live with it not being indexable, because typically there are
> only tens of thousands of products in a production system.
>
> Could someone suggest an approach that is reasonably simple and
> reasonably generic ?

What I do is to create another column that holds a simplified version of
the string. (I created a function to simplify strings; whenever the
source column is inserted or updated, I also update the "simplified"
column.) Then when searching, I use the same function to "simplify" the
search string, and test it with "=" against the "simplified" column.

E.g.
if the table has a column called "name" that you want to search, you
create a name_simplified column, and fill it like so:
update your_table set name_simplified=yourSimplifyFunction(name);

Then to search:
select * from your_table where name_simplified =
yourSimplifyFunction('Coca-Cola');

This is really fast because, with an index on the simplified column, the
match uses an index scan rather than a sequential scan.
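Putting the pieces together, a sketch might look like this (the simplify function shown, which lower-cases and strips non-alphanumerics, is just one possible choice, and the column/index names are hypothetical):

```sql
-- Hypothetical simplify function: lower-case and drop non-alphanumerics
CREATE OR REPLACE FUNCTION simplify(t text) RETURNS text AS $$
  SELECT lower(regexp_replace($1, '[^[:alnum:]]', '', 'g'));
$$ LANGUAGE sql IMMUTABLE;

ALTER TABLE products ADD COLUMN name_simplified text;
UPDATE products SET name_simplified = simplify(description);
CREATE INDEX products_name_simplified_idx ON products (name_simplified);

-- 'CocaCola', 'Coca-Cola' and 'coca cola' all simplify to 'cocacola':
SELECT * FROM products
 WHERE name_simplified = simplify('Coca-Cola');
```

To also match "CocaCola 330ml can" when the user searches for just "coca-cola", one could compare a prefix of the simplified string instead of using strict equality.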

>
> Thanks,
> Peter Geoghegan
>




Re: [GENERAL] Advice on webbased database reporting

2010-04-02 Thread Brian Modra
On 02/04/2010, Leif Biberg Kristensen  wrote:
> On Friday 2. April 2010 14.43.48 Davor J. wrote:
>> I need to make certain views from the database visible online (on our
>> webpage) and I wonder if there is any reasonably quick solution for this
>> that works with Postgres?
>>
>> At best, a query should be specified and the user should be able to select
>>
>> the layout on certain columns (like stepped, or outlined).
>>
>> I don't mind running a whole CMS on our Apache server as long as it allows
>>
>> me to make reports and is free to use.
>>
>> Has anyone any suggestions?
>
> PHP works great with Postgres and Apache.

Take a look at The Karoo Project:
http://www.zwartberg.com/tutorial_1.html

>
> regards,
> --
> Leif Biberg Kristensen
> http://solumslekt.org/
>




Re: [GENERAL] split to table by space

2010-01-02 Thread Brian Modra
2010/1/3 Jamie Kahgee :
> I need a function like regexp_split_to_table where I can split a string to a
> table by a space delimiter.
> so:
> Please Help Me
> would convert to:
> Please
> Help
> Me
> However I'm stuck working w/ version 8.2.9, so I don't have the
> regexp_split_to_table function. Is there any good functions that can handle
> this in my version that I am unaware of?  Or does anyone know how to write
> an easy function to handle this in in plpgsql or something?

I wrote one a while ago... I'll paste it below. It's not exactly
optimised, but you're welcome to it:

CREATE OR REPLACE FUNCTION getWords(inv text)
 RETURNS text[] AS $$
DECLARE
 temp text;
 i integer;
 len integer;
 ch character(1);
 outv text[] := '{}';
 outlen integer := 0;
 i1 integer := 0;
BEGIN
 temp := trim(both ' ' from inv);
 len  := char_length(temp);
 i:= 1;
 while i <= len loop
  while i <= len loop
   ch  := cast(substring(temp from i for 1) as character(1));
   exit when ch = ' ' or ch = ',' or ch = '.' or ch = '-';
   i := i + 1;
  end loop;

  exit when i = i1;

  outv[outlen] := substring(temp from i1 for (i - i1));
  outlen := outlen + 1;

  while i <= len loop
   ch  := cast(substring(temp from i for 1) as character(1));
   exit when ch != ' ' and ch != ',' and ch != '.' and ch != '-';
   i := i + 1;
  end loop;
  i1 := i;
 end loop;
 return outv;
END;
$$ LANGUAGE plpgsql;




Re: [GENERAL] Out of space making backup

2009-12-25 Thread Brian Modra
2009/12/25 Farhan Malik :
>> Reading between the lines, I suspect you are trying to use 'tar' output
>> format, which does have a need to make temp files that can be large.
>> If I guessed right, I'd suggest using 'custom' format instead.  There
>> really is no advantage to tar format, and several disadvantages besides
>> this one.
>>
>>                        regards, tom lane
>
> That sounds right.  The error I get from the software is 2009/12/25
> 10:21:40.812: [1EA8][ThreadBackupRestore] Restore Error: pg_dump:
> [tar archiver] could not write to output file: No space left on device
>
> Is there a way to have postgre put those large temp files on a
> different drive?  I only have 4GB free on my C: drive and once the
> temp files go over that I get an out of space error.  I have tons of
> free space on other drives, including the one where I am asking that
> the final backup.zip file goes.
>
> As for changing the backup to a custom format, I will pass that on to
> the developer of the software.

I do backups semi-manually:

run "select pg_start_backup('some-name')" (in psql, logged in as postgres),
then start a tar of /var/lib/pgsql/data/ to stdout and pipe it via ssh to
tar on another server, then finally run "select pg_stop_backup()".
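Spelled out in psql, the bracketing calls look roughly like this (the label string is arbitrary, and WAL archiving must be configured for the copy to be recoverable):

```sql
-- Run as the postgres superuser, before and after the filesystem copy
SELECT pg_start_backup('some-name');
-- ... take the tar of the data directory while this is in effect ...
SELECT pg_stop_backup();
```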

e.g. my two scripts (backup.sh calls back1.sh):

[r...@www pgsql]# cat back1.sh
#!/bin/bash
cd /var/lib/pgsql
ssh lead touch /var/lib/postgresql/backups/start_backup
tar zcf - data |ssh lead "cat > /var/lib/postgresql/backups/20091223.tgz"
echo "DONE"
[r...@www pgsql]# cat backup.sh
#!/bin/bash
cd /var/lib/pgsql
./back1.sh > backups/backup.log 2>&1 





Re: [GENERAL] Objects / Procedure creation date or modified date

2009-12-16 Thread Brian Modra
2009/12/16 akp geek :
> Dear all -
>                    How can we check when a procedure last modified or last
> DDL change happened to a table  in postgres? Which Pg_* do we need to query
> to get the details
> Regards

Maybe follow this line of thinking:
http://stackoverflow.com/questions/899203/how-do-i-find-the-last-time-that-a-postgresql-database-has-been-updated



Re: [GENERAL] howto escape my string

2009-11-24 Thread Brian Modra
2009/11/24 Jean-Yves F. Barbier <12u...@gmail.com>:
> Hi list,
>
> I'm using Gambas to access a PG DB (only through views and functions)
> and I can't find a way to escape my string to fit into a BYTEA field;
> does anybody have an idea about this?

I can't help you with Gambas, but if you are asking how to escape a
string for bytea, it's documented here:
http://www.postgresql.org/docs/8.4/interactive/datatype-binary.html
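For reference, with the pre-9.0 "escape" bytea format, non-printable bytes are written as octal escapes, and in an escape string literal each backslash is itself doubled. A small sketch (table and column are hypothetical):

```sql
-- Sketch: store three bytes (0, 39, 92) in a bytea column
CREATE TABLE t (b bytea);
INSERT INTO t (b) VALUES (E'\\000\\047\\134');
-- \047 is the single-quote byte, \134 the backslash byte
```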

>
> JY
> --
> Don't drop acid -- take it pass/fail.
>                -- Seen in a Ladies' Room at Harvard
>





Re: [GENERAL] How is the right query for this condition ?

2009-11-22 Thread Brian Modra
2009/11/23 Bino Oetomo :
> Dear All
>
> Suppose I created a database with single table like this :
> --start--
> CREATE DATABASE bino;
> CREATE TABLE myrecords(record text);
> --end
>
> and I fill myrecords with this :
> --start--
> COPY myrecords (record) FROM stdin;
> 1
> 12
> 123
> 1234
> \.
> --end
>
> In my bash script, I have variable called 'vseek', that will be use for
> query parameter.
> How to query the table , for (i.e):
>
> a. If vseek = '127' , I want the result is ==> '12'
> b. if vseek = '123987' , I want the result is ==> '123'
> c. if vseek = '14789' , I want the result is ==> '1'
>
> Kindly please give me any enlightment

You can use a plpgsql function to do that, e.g.:

create or replace function getMatchingRecord(vseek text)
returns text as $$
declare
 str text;
 len integer;
 ret text;
begin
 len := char_length(vseek);
 loop
  exit when len = 0;
  str := substring(vseek from 1 for len);
  select "record" into ret from myrecords where "record" = str;
  if found then
return ret;
  end if;
  len := len - 1;
 end loop;
 return null;  -- no prefix of vseek matched any record
end;
$$ language plpgsql;


Then call it like so:

KarooDB=> select getMatchingRecord('127');
 getmatchingrecord
---
 12
(1 row)

KarooDB=> select getMatchingRecord('123987');
 getmatchingrecord
---
 123
(1 row)


> Sincerely
> -bino-
>
>





Re: [GENERAL] Strange performance degradation

2009-11-20 Thread Brian Modra
2009/11/20 Lorenzo Allegrucci :
>
> Hi all,
>
> I'm experiencing a strange behavior with my postgresql 8.3:
> performance is degrading after 3/4 days of running time but if I
> just restart it performance returns back to it's normal value..
> In normal conditions the postgres process uses about 3% of cpu time
> but when is in "degraded" conditions it can use up to 25% of cpu time.
> The load of my server is composed of many INSERTs on a table, and
> many UPDATEs and SELECT on another table, no DELETEs.
> I tried to run vacuum by the pg_maintenance script (Debian Lenny)
> but it doesn't help. (I have autovacuum off).

I had a similar problem: I did a large delete, and then a select whose
index scan "covered" the previously deleted rows.
It took ages, because the index still contained entries for those rows.
Possibly the same happens with updates.

Try this:
vacuum analyse
reindex database 
(your database name instead of ...)

or, rather do this table by table:
vacuum analyse 
reindex table ...
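As a concrete sketch of the table-by-table form (the table name here is a placeholder):

```sql
-- Per-table maintenance sketch; mytable is a hypothetical name
VACUUM ANALYSE mytable;
REINDEX TABLE mytable;  -- note: locks the table's indexes while running
```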


Autovacuum is a generally good thing.

> So, my main question is.. how can just a plain simple restart of postgres
> restore the original performance (3% cpu time)?

There were probably some long transactions running. Stopping postgres
effectively kills them off.

> I can post my postgresql.conf if needed.
> Thank you for your help,
>
> --
> Lorenzo
>





Re: [GENERAL] xml import/export tools and performance

2009-11-06 Thread Brian Modra
2009/11/6 Ivan Sergio Borgonovo :
> I need to import/export through xml files.
>
> I was looking for tools/examples/HOWTO for postgresql.
>
> Right now I still don't know the details of the requirements.
>
> I know I won't need a GUI.
>
> I know one of the exporting parties will be a MS SQL 2005 server, so
> it would be nice if there was an easy way to import xml generated
> with the FOR XML AUTO, XMLSCHEMA ('forpg').
>
> I'd like to have a tool that can write XSD from queries
> automatically.
>
> I may need to strictly specify one of the xml output format since
> one of the importing parties pretend to support xml with something
> that is more like a tagged csv.
>
> Currently I'm mostly looking around to see where it would be
> convenient to move the boundaries of the problem.
>
> I'd be curious about what kind of performance impact they may have
> compared to COPY.
>
> thanks

I don't know of any such tools, but they may exist. If not:

Tell me the XML format you want in/out of Postgres, and I can write a
tool for you. I had a quick look just now to see what the MS Access
format is, and it's typically difficult to wade through the mountains of
useless information that people have posted on the web about Microsoft
products, so I lost interest. However, if you can point me in the
right direction, so I can see the DTD or examples, or both, then I
can tell you how easy/difficult it would be to write a converter.

> --
> Ivan Sergio Borgonovo
> http://www.webthatworks.it
>
>





Re: [GENERAL] Postgres for mobile website?

2009-11-03 Thread Brian Modra
2009/11/4 Andre Lopes :
> Hi,
>
> I have a website using Postgres database. Now I need to develop a mobile
> website, what should I do?

You need some middleware.
As it happens, my open source project "The Karoo Project" includes
some middleware for developing websites using postgresql as the
database, so you could use that. I had a look around for such
middleware tools and seemed to find only PHP, but I'd be interested to
know if anyone can enlighten me (in effect, to tell me whether my
project is filling a much-needed gap or not).

> Use the Postgres database or copy the information of the database to a
> SQLite database? Postgres isn't heavy to a mobile website?

The website being heavy or light depends on the way you design it.
As long as your output is simple xhtml, or wap, or both (determined by
the mobile phone's html request headers), then it will be "light" from
the phone's perspective.

> The website is update 2 times month. I can easly sincronize the databases
> Postgres to SQLite. What do you think about this solution?

It depends a lot on the type of application and the type of data.

> Best Regards.





Re: [GENERAL] Many instances of postgres.exe

2009-10-15 Thread Brian Modra
2009/10/15 Bob Pawley :
> Hi
>
> While looking at an error message for iexplore.exe, I noticed, on Task
> Manager, that there are 5 or more instances of postgres.exe running. Each
> instance is consuming between 7 to 10 megs of resources, for a total of
> almost 50 megs.
>
> Is this normal behavour?

Yes.

> If so, could someone enlighten me as to the purpose?

It's good that it uses more than one process, because each task is then
separated from the other tasks by the operating system's memory
management, i.e. if one crashes, the others stay up.
This is good design, as opposed to hugely multi-threaded apps where one
little bug can bring everything down.

The memory used is configurable: you can set the amount of caching,
though I am not sure exactly how much memory is used for which purpose.
I set up my own server with a larger cache than standard, but it has a
load of memory. 50 MB is very little memory considering you are talking
about a professional database system.

> Bob





Re: [GENERAL] query is taking longer time after a while

2009-09-29 Thread Brian Modra
2009/9/29 Sam Mason :
> On Tue, Sep 29, 2009 at 02:25:52PM +0200, Brian Modra wrote:
>> 2009/9/29 tomrevam :
>> > My DB is auto-vacuuming all the time. The specific table I'm talking about
>> > gets vacuumed at least every 2 hours (usually a little more frequently than
>> > that).
>> > Deletes are happening on the table at about the same rate as inserts (there
>> > are also some updates).
>>
>> The index quite likely is in a poor state.
>
> Really? Plain vacuum should allow things to reach a steady state after
> a while, doing a large delete will put things out of kilter, but that
> doesn't sound to be the case here.  Vacuum full can also cause things to
> go amiss, but if it's just regular vacuums then things should be OK.

If there are a lot of deletes, then likely the index parameters are
not the best.

ANALYSE yourtable;

Then reindex (or, if you want to keep the index online, create a new
index, drop the old one, and rename the new one).

> What do you get out of vacuum analyse verbose? for this table?
>
>> You could try this:
>>
>> analyse 
>> create index ... (same parameters as existing index)
>> delete the old index.
>> rename the new index to the same name as the old one
>> repeat this for all indexes.
>
> Why not just do:
>
>  REINDEX TABLE yourbigtable;
>
> No need to worry about rebuilding foreign key constraints or anything
> like that then.

That's OK if the table can be taken offline; REINDEX locks the index
while it is in progress.

>
> --
>  Sam  http://samason.me.uk/
>





Re: [GENERAL] variables in ad hoc queries

2009-09-29 Thread Brian Modra
2009/9/29 Iain Barnett :
> If I have a function that returns an integer, how can I use the following
> code outside of a function block?
>
> declare
> _myint int;
>
> begin
> _myint := myfunction( );
> end;
>

select myfunction from myfunction();

Or you can use the function in SQL wherever you'd usually use an integer.
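For example (myfunction stands in for the poster's integer-returning function; mytable and id are hypothetical names):

```sql
-- Call the function directly:
SELECT myfunction();

-- Or use its value inline wherever an integer is expected:
SELECT * FROM mytable WHERE id = myfunction();
```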

> or is it just not possible?
>
> Iain
>





Re: [GENERAL] query is taking longer time after a while

2009-09-29 Thread Brian Modra
2009/9/29 tomrevam :
>
>
>
> Brian Modra-2 wrote:
>>
>> When did you last do an analyse and re-create indexes?
>> Is the table UPDATEd to or DELETEd from, or just INSERTed ?
>> Is your DB auto vacuuming?
>>
>
> My DB is auto-vacuuming all the time. The specific table I'm talking about
> gets vacuumed at least every 2 hours (usually a little more frequently than
> that).
> Deletes are happening on the table at about the same rate as inserts (there
> are also some updates).

The index quite likely is in a poor state. You could try this:

analyse 
create index ... (same parameters as existing index)
delete the old index.
rename the new index to the same name as the old one
repeat this for all indexes.

> --
> View this message in context: 
> http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25661758.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>





Re: [GENERAL] query is taking longer time after a while

2009-09-29 Thread Brian Modra
2009/9/29 tomrevam :
>
> I have a table with 5 million rows. 10 inserts and deletes are performed on
> this table every second.
> The table has indexes on the columns I use to query it, and the query is
> returning about 1000 rows. Initially the query takes a very short time
> (on the order of milliseconds); after a few hours it takes hundreds of
> milliseconds,
> and after a few days it can take more than 10 seconds. When this happens it
> also blocks all other operations on the database and I see very long times
> for all of them.

When did you last run ANALYZE and re-create the indexes?
Is the table UPDATEd or DELETEd from, or only INSERTed into?
Is your DB auto-vacuuming?

I found that if you delete a lot of rows, the index still seems to hold
entries for the deleted rows; scans skip them automatically, but that
skipping takes time.
Also, if the index was created when the table was smaller, or when the
range of values was quite different, then the statistics for the index
will be wrong for the current data, so it will be used inefficiently.
Tables that are frequently UPDATEd become fragmented, and need VACUUM.

I have a table from which I deleted a third of its contents (32 million
rows); afterwards, an index search that would have included those deleted
rows took an age. Hours, in fact! So I had to first run ANALYZE on the
table, create a new index, then drop the old one and rename the new index
in its place.
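The maintenance described in the anecdote can be sketched like this; VACUUM ANALYZE reclaims dead rows and refreshes planner statistics, and the catalog query shows how much dead-row bloat remains (big_table is a hypothetical name, and the n_dead_tup column assumes PostgreSQL 8.3 or later):

```sql
-- Reclaim dead rows and refresh planner statistics in one pass
VACUUM ANALYZE big_table;

-- Inspect remaining dead-row bloat (stats collector, 8.3+)
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'big_table';
```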

> I thought this may be caused by the indexes not remaining in the memory, but
> I increased the shared_buffers to 0.5 GB and this didn't seem to help.
> --
> View this message in context: 
> http://www.nabble.com/query-is-taking-longer-time-after-a-while-tp25661219p25661219.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
>





[GENERAL] ideal server

2008-10-17 Thread Brian Modra
Hi,
I have a production server running PostgreSQL, receiving about 110
inserted rows per second, along with some fairly intense queries involving
long PL/pgSQL functions.
This server has 4 GB of RAM and dual processors; the disks are RAID 5.
I need more power, and am wondering where I really need to put the
emphasis: CPU, RAM, or disk?
I'm thinking of four CPUs, 20 GB of RAM, and one of those large
battery-backed RAM disks that writes its contents to hard disk in the
event of a power failure.

Obviously I need to first get a good sysadmin to configure Postgres
properly, but I am sure I also need a bigger host.
Any advice will be appreciated.
Brian
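For the "configure Postgres properly" step, a rough postgresql.conf starting point for a 20 GB host might look like the fragment below; the values are illustrative rules of thumb, not tuned recommendations, and checkpoint_segments applies to the 8.x series of that era:

```
shared_buffers = 4GB           # often started at ~25% of RAM
effective_cache_size = 12GB    # planner hint: shared buffers + OS cache
checkpoint_segments = 32       # spread checkpoint I/O under heavy inserts
wal_buffers = 16MB             # helps with high insert rates
```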

-- 
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
5 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa


Re: [GENERAL] [ADMIN] how to create cluster (load balancing) of postgresql server

2008-01-29 Thread Brian Modra
http://slony.info/

On 30/01/2008, yogesh m chaudhari <[EMAIL PROTECTED]> wrote:
>
> Hi Friends
> Can anyone help me to create a PostgreSQL cluster?
> Is there any useful link or info related to clustering? Please mail me.
>




-- 
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 183 8059
6 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa