Re: [GENERAL] Generate a list of (days/hours) between two dates

2005-06-27 Thread Gregory Youngblood
Hopefully I'm understanding your question correctly. If so, maybe
this will do what you want.


First, a couple of questions. Do you have this data in a table  
already, and are looking to extract information based on the dates?  
Or, are you basically wanting something like a for loop so you can  
generate the dates between start and stop values?


If the former, and I understand what you are looking to accomplish,  
here's one way to do it:


select timestampfield::date::timestamp as "date", count(*) from table  
where timestampfield between start and stop group by "date" order by  
"date";


should yield:
YYYY-MM-DD 00:00:00    #

For hours, use:

select (substr(timestampfield, 1, 13) || ':00:00')::timestamp as  
"hourly", count(*) from table where timestampfield between start and  
stop group by "hourly" order by "hourly";


should yield:
YYYY-MM-DD HH:00:00    #

Of course, this assumes your database already has this information.
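If the goal is instead the for-loop-style list the original poster describes (every hour or day between two endpoints, whether or not matching rows exist), a set-returning query along these lines should work on 8.x. This is only a sketch: generate_series() (new in 8.0) and the interval arithmetic are assumed, not tested against the poster's schema.

```sql
-- Every hour from 2005-06-01 00:00 through 2005-06-10 00:00
SELECT '2005-06-01'::timestamp + (n || ' hours')::interval AS hourly
FROM generate_series(0, 9 * 24) AS g(n);

-- Every day in the same range
SELECT '2005-06-01'::timestamp + (n || ' days')::interval AS daily
FROM generate_series(0, 9) AS g(n);
```

An outer join from such a series against the data table would then yield zero counts for the hours or days that have no rows.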

I hope this helps.
Greg

On Jun 27, 2005, at 10:30 AM, [EMAIL PROTECTED] wrote:


Hi guys,

I've scoured the date/time functions in the docs as well as
google-grouped as many different combinations as I could think of to
figure this out without asking, but I'm having no luck.

I'd like to make a query that would return a list of every trunc'd
TIMESTAMP between two dates.  For example, I'd want to get a list of
every date_trunc('hour',whatever) between 6-1-2005 and 6-10-2005 and
get a list that looks like:

6-1-2005 00:00:00
6-1-2005 01:00:00
6-1-2005 02:00:00
etc

Conversely, I want to generate a list of every day between two dates,
like:

6-1-2005 00:00:00
6-2-2005 00:00:00
6-3-2005 00:00:00

I know there's got to be some way to do this in a SELECT statement, but
I'm running into a brick wall.  I'm trying to take some of my date
handling logic out of code and use the db engine so I can spend less
time developing/maintaining code when mature date handling already
exists in a resource I've already got loaded.

Any thoughts?






---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Generate a list of (days/hours) between two dates

2005-06-27 Thread ben . hallert
Hi guys,

I've scoured the date/time functions in the docs as well as
google-grouped as many different combinations as I could think of to
figure this out without asking, but I'm having no luck.

I'd like to make a query that would return a list of every trunc'd
TIMESTAMP between two dates.  For example, I'd want to get a list of
every date_trunc('hour',whatever) between 6-1-2005 and 6-10-2005 and
get a list that looks like:

6-1-2005 00:00:00
6-1-2005 01:00:00
6-1-2005 02:00:00
etc

Conversely, I want to generate a list of every day between two dates,
like:

6-1-2005 00:00:00
6-2-2005 00:00:00
6-3-2005 00:00:00

I know there's got to be some way to do this in a SELECT statement, but
I'm running into a brick wall.  I'm trying to take some of my date
handling logic out of code and use the db engine so I can spend less
time developing/maintaining code when mature date handling already
exists in a resource I've already got loaded.

Any thoughts?




Re: [GENERAL] Postmaster Out of Memory

2005-06-27 Thread Jeff Gold

Tom Lane wrote:

TRUNCATE and CLUSTER both rebuild indexes, so they'd also trigger the
leak.


Sorry to bug you again, but I have two quick followup questions: (1) is 
the leak you discovered fixed on the 8.0 branch? and (2) would closing 
the database connection once per day be a reasonable way to work around 
the problem in the absence of the patch you forwarded?


Jeff



[GENERAL] performance for insert / update

2005-06-27 Thread Catalin Constantin
Hello,

I have a pretty big database with about 200,000 rows in the main table,
plus some other tables with FKs to this main table.

I have to calculate some numbers for each entry at certain intervals
and update the DB.

I've noticed the update takes a very long time.

For example, I have a second table which has 3 columns:
1) the FK to the main big table
2) number 1
3) number 2

After the calculation is done, I execute many statements of the form
UPDATE table2 SET number1=%d, number2=%d.

It takes quite a lot of time.

Any idea how I can improve this time?

I did some tests with other engines.

E.g.: MySQL (no FKs) - approx. 10 times faster
SQLite - approx. 5 times faster

I tried setting fsync = false, but no luck either.
It still takes quite a lot of time.

I can share the complete test results if somebody is interested.

Thanks for your help !

P.S.: postgresql-7.4.7 is the version I have!
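Lacking the actual test script, one common remedy on 7.4 is to stop issuing one UPDATE per row and instead stage the computed numbers, then apply them in a single set-based statement inside one transaction. The table, column, and file names below are made up to match the description; this is a sketch, not the poster's code.

```sql
BEGIN;
-- Stage the freshly calculated numbers (COPY is much faster than
-- row-at-a-time INSERTs)
CREATE TEMP TABLE new_vals (fk integer, number1 integer, number2 integer);
COPY new_vals FROM '/tmp/new_vals.dat';

-- One set-based UPDATE replaces thousands of single-row statements
UPDATE table2
   SET number1 = n.number1,
       number2 = n.number2
  FROM new_vals n
 WHERE table2.fk = n.fk;
COMMIT;
```

Even without restructuring, simply wrapping the existing per-row updates in a single BEGIN/COMMIT avoids one disk flush per statement and usually helps noticeably.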

-- 
Catalin Constantin
Bounce Software
http://www.bounce-software.com
http://www.cabanova.ro




Re: [GENERAL] Populating huge tables each day

2005-06-27 Thread Jim C. Nasby
On Mon, Jun 27, 2005 at 01:05:42PM -0700, Dann Corbit wrote:
> 
> > -Original Message-
> > From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
> > Sent: Monday, June 27, 2005 12:58 PM
> > To: Dann Corbit
> > Cc: Ben-Nes Yonatan; pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Populating huge tables each day
> > 
> > On Mon, Jun 27, 2005 at 12:43:57PM -0700, Dann Corbit wrote:
> > > I see a lot of problems with this idea.
> > >
> > > You mention that the database is supposed to be available 24x7.
> > > While you are loading, the database table receiving data will not be
> > > available.  Therefore, you will have to have one server online (with
> > 
> > Why do you think that's the case?
> 
> He's doing a bulk load.  I assume he will have to truncate the table and
> load it with the copy command.

Don't ass-u-me; he said he'd be deleting from the main table, not
truncating.

> Is there an alternative I do not know of that is equally fast?
 
Nope, truncate is undoubtedly faster. But it also means you would have
downtime as you mentioned. If it were me, I'd probably make the
trade-off of using a delete inside a transaction.
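The delete-inside-a-transaction idea Jim mentions can be sketched like this (the table name and file path are placeholders, not from the thread):

```sql
BEGIN;
-- DELETE instead of TRUNCATE: under MVCC, concurrent readers keep
-- seeing the old rows until the COMMIT
DELETE FROM main_table;
COPY main_table FROM '/path/to/new_load.dat';
COMMIT;  -- the reloaded data becomes visible atomically

-- Afterwards, VACUUM ANALYZE reclaims the dead rows the DELETE left
-- behind and refreshes the planner statistics
VACUUM ANALYZE main_table;
```

The trade-off is exactly as described: DELETE is slower than TRUNCATE and leaves dead tuples to vacuum, but the table stays readable throughout the load.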
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"



Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Bruno Wolff III
On Mon, Jun 27, 2005 at 17:09:37 -0400,
  Vivek Khera <[EMAIL PROTECTED]> wrote:
> 
> On Jun 27, 2005, at 4:36 PM, John Browne wrote:
> 
> >I'm interested in doing a project for calculating distances similar to
> >this.  Anyone have suggestions on how/where this type of data can be
> >obtained?  Is it freely available anywhere?
> >
> 
> Google is your friend.  There are places that sell very well kept  
> zipcode databases for under $50.

The US government gives it away for free. Look for "tiger".



Re: [GENERAL] automating backup ?

2005-06-27 Thread Zlatko Matic

Hello.
I created a Windows XP schedule for backup, following your instructions. Now
I have a .bat file with this script:


cd D:\Program Files\PostgreSQL\8.0\bin
pg_dumpall >D:\MYDATABASE_DUMPALL -U postgres
pg_dumpall >D:\MYDATABASE_SHEMA -U postgres -s
pg_dumpall >D:\MYDATABASE_GLOBALS -U postgres -g

Well, it works OK, but it prompts for the password every time. Is there any
way to pass the superuser password (of course, in a safe way) so that it
works automatically without prompting?
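One way to do this, assuming an 8.0 client, is a pgpass.conf file, which libpq-based tools such as pg_dumpall read instead of prompting. The path below is what a default Windows install uses; the password value is obviously a placeholder.

```
# %APPDATA%\postgresql\pgpass.conf
# format: hostname:port:database:username:password
localhost:5432:*:postgres:yoursecretpassword
```

Keep the file readable only by the account the scheduled task runs under. Alternatively, `SET PGPASSWORD=...` at the top of the .bat file also works, but it leaves the password visible to anyone who can read the batch file.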


- Original Message - 
From: "Andreas" <[EMAIL PROTECTED]>

To: 
Cc: "Zlatko Matic" <[EMAIL PROTECTED]>
Sent: Monday, June 27, 2005 12:43 PM
Subject: Re: [GENERAL] automating backup ?



Zlatko Matic wrote:


Thanks, Fuhr.
Can anybody tell me how to do it on Windows XP?
Thanks.



That's again a pure Windows issue, but not commonly needed.
I guess it should be in the Windows PG-FAQ, too.

I'll describe the way you go with Windows 2000.
If you are lucky, WinXP Pro does it the same way; if you have only WinXP
Home, you might be lucky if you have the needed timer service at all.
Please try it and come back to the list to tell us what you did and whether
it solved your problem.



First you write a text file that contains all the commands you want
executed - all those which you would otherwise type yourself on the command
line. Call this file zlatko_backup.bat, just so it has a unique name.

In Windows' start menu you find "Programs". There is a subfolder (I don't
know the English name) which holds, among other things, the calculator and
a system-tools folder.

In this system-tools folder you should find "Scheduled Tasks" as a folder.
Open the "Scheduled Tasks" folder. It is empty.
Right-click in it and select "New" --> "Scheduled Task".
Select it and you will get a settings menu for the task, where you can
define what program should run when.

Enter here the path to zlatko_backup.bat.

Provided Windows' scheduled-tasks service is running, all is set now.







Re: [GENERAL] Indexes not used - what am I missing?

2005-06-27 Thread Relyea, Mike
I knew I would forget to include some obvious info.  I'm running PG
8.0.x on WinXP and have vacuum full analyzed. 

-Original Message-
From: Relyea, Mike 
Sent: Monday, June 27, 2005 5:10 PM
To: 'pgsql-general@postgresql.org'
Subject: Indexes not used - what am I missing?

Given my query:

SELECT "PrintSamples"."MachineID" , "PrintSamples"."PrintCopyID" ,
"tblColors"."Color" , "tblBlockAC"."AreaCoverage" ,
"tblFriendlyParameterNames"."FriendlyParameterName" AS "Measurement" ,
"ParameterValues"."ParameterValue" AS "Value" FROM "AnalysisModules"
INNER JOIN (("tblBlockAC" INNER JOIN "Targets" ON
"tblBlockAC"."TargetID" = "Targets"."TargetID") INNER JOIN (("tblColors"
INNER JOIN ("tblTPNamesAndColors" INNER JOIN "PrintSamples" ON
"tblTPNamesAndColors"."TestPatternName" =
"PrintSamples"."TestPatternName") ON "tblColors"."ColorID" =
"tblTPNamesAndColors"."ColorID") INNER JOIN (("DigitalImages" INNER JOIN
"PrintSampleAnalyses" ON "DigitalImages"."ImageID" =
"PrintSampleAnalyses"."ImageID") INNER JOIN
(("tblFriendlyParameterNames" INNER JOIN "ParameterNames" ON
"tblFriendlyParameterNames"."ParameterID" =
"ParameterNames"."ParameterID") INNER JOIN ("Measurements" INNER JOIN
"ParameterValues" ON "Measurements"."MeasurementID" =
"ParameterValues"."MeasurementID") ON "ParameterNames"."ParameterID" =
"ParameterValues"."ParameterID") ON "PrintSampleAnalyses"."psaID" =
"Measurements"."psaID") ON "PrintSamples"."PrintSampleID" =
"DigitalImages"."PrintSampleID") ON "Targets"."TargetID" =
"Measurements"."TargetID") ON "AnalysisModules"."MetricID" =
"Measurements"."MetricID" WHERE (("PrintSamples"."MachineID" = '3179')
AND (("AnalysisModules"."AnalysisModuleName")='DeltaE') AND
(("PrintSamples"."TestPatternName") Like '%TP8%') AND
(("ParameterNames"."ParameterName") Like 'Mean%')) ORDER BY
"PrintSamples"."MachineID" , "PrintSamples"."PrintCopyID" ,
"tblBlockAC"."AreaCoverage" , "tblColors"."ColorID" ,
"tblFriendlyParameterNames"."FriendlyParameterName" ;


My explain analyze output:

QUERY PLAN
Sort  (cost=429075.71..429075.71 rows=2 width=45) (actual
time=44984.000..44984.000 rows=2352 loops=1)
  Sort Key: "PrintSamples"."MachineID"
  ->  Nested Loop  (cost=415721.80..429075.70 rows=2 width=45) (actual
time=44828.000..44968.000 rows=2352 loops=1)
  ->  Nested Loop  (cost=415721.80..429069.64 rows=2 width=53) (actual
time=44828.000..44937.000 rows=2352 loops=1)
  ->  Hash Join  (cost=415721.80..429054.51 rows=5 width=45) (actual
time=44828.000..44874.000 rows=2352 loops=1)
  Hash Cond: ("outer"."MetricID" = "inner"."MetricID")
  ->  Hash Join  (cost=415720.44..429052.95 rows=31 width=49) (actual
time=44828.000..44874.000 rows=2352 loops=1)
  Hash Cond: ("outer"."PrintSampleID" = "inner"."PrintSampleID")
  ->  Hash Join  (cost=415665.42..428234.44 rows=152635 width=28)
(actual time=35234.000..43656.000 rows=835385 loops=1)
  Hash Cond: ("outer"."psaID" = "inner"."psaID")
  ->  Merge Join  (cost=414212.50..424585.45 rows=146405 width=28)
(actual time=35062.000..41235.000 rows=835385 loops=1)
  Merge Cond: ("outer"."MeasurementID" = "inner"."MeasurementID")
  ->  Sort  (cost=202227.03..202593.04 rows=146405 width=20) (actual
time=27140.000..27451.000 rows=835385 loops=1)
  Sort Key: "ParameterValues"."MeasurementID"
  ->  Hash Join  (cost=3.23..189665.77 rows=146405 width=20) (actual
time=15.000..25139.000 rows=835385 loops=1)
  Hash Cond: ("outer"."ParameterID" = "inner"."ParameterID")
  ->  Seq Scan on "ParameterValues"  (cost=0.00..143853.18 rows=8787918
width=12) (actual time=0.000..13302.000 rows=8787896 loops=1)
  ->  Hash  (cost=3.23..3.23 rows=1 width=20) (actual time=0.000..0.000
rows=0 loops=1)
  ->  Hash Join  (cost=1.62..3.23 rows=1 width=20) (actual
time=0.000..0.000 rows=4 loops=1)
  Hash Cond: ("outer"."ParameterID" = "inner"."ParameterID")
  ->  Seq Scan on "tblFriendlyParameterNames"  (cost=0.00..1.40 rows=40
width=16) (actual time=0.000..0.000 rows=40 loops=1)
  ->  Hash  (cost=1.61..1.61 rows=1 width=4) (actual time=0.000..0.000
rows=0 loops=1)
  ->  Seq Scan on "ParameterNames"  (cost=0.00..1.61 rows=1 width=4)
(actual time=0.000..0.000 rows=4 loops=1)
  Filter: (("ParameterName")::text ~~ 'Mean%'::text)
  ->  Sort  (cost=211985.47..216074.23 rows=1635502 width=16) (actual
time=7922.000..9003.000 rows=2145644 loops=1)
  Sort Key: "Measurements"."MeasurementID"
  ->  Seq Scan on "Measurements"  (cost=0.00..43191.02 rows=1635502
width=16) (actual time=0.000..3598.000 rows=1635476 loops=1)
  ->  Hash  (cost=1403.55..1403.55 rows=19748 width=8) (actual
time=157.000..157.000 rows=0 loops=1)
  ->  Hash Join  (cost=529.85..1403.55 rows=19748 width=8) (actual
time=63.000..142.000 rows=19748 loops=1)
  Hash Cond: ("outer"."ImageID" = "inner"."ImageID")
  ->  Seq Scan on "PrintSampleAnalyses"  (cost=0.00..577.48 rows=19748
width=8) (actual time=0.000..16.000 rows=19748 loops=1)
  ->  Hash  (cost=480.48..480.

[GENERAL] Indexes not used - what am I missing?

2005-06-27 Thread Relyea, Mike
Given my query:

SELECT "PrintSamples"."MachineID" , "PrintSamples"."PrintCopyID" ,
"tblColors"."Color" , "tblBlockAC"."AreaCoverage" ,
"tblFriendlyParameterNames"."FriendlyParameterName" AS "Measurement" ,
"ParameterValues"."ParameterValue" AS "Value" FROM "AnalysisModules"
INNER JOIN (("tblBlockAC" INNER JOIN "Targets" ON
"tblBlockAC"."TargetID" = "Targets"."TargetID") INNER JOIN (("tblColors"
INNER JOIN ("tblTPNamesAndColors" INNER JOIN "PrintSamples" ON
"tblTPNamesAndColors"."TestPatternName" =
"PrintSamples"."TestPatternName") ON "tblColors"."ColorID" =
"tblTPNamesAndColors"."ColorID") INNER JOIN (("DigitalImages" INNER JOIN
"PrintSampleAnalyses" ON "DigitalImages"."ImageID" =
"PrintSampleAnalyses"."ImageID") INNER JOIN
(("tblFriendlyParameterNames" INNER JOIN "ParameterNames" ON
"tblFriendlyParameterNames"."ParameterID" =
"ParameterNames"."ParameterID") INNER JOIN ("Measurements" INNER JOIN
"ParameterValues" ON "Measurements"."MeasurementID" =
"ParameterValues"."MeasurementID") ON "ParameterNames"."ParameterID" =
"ParameterValues"."ParameterID") ON "PrintSampleAnalyses"."psaID" =
"Measurements"."psaID") ON "PrintSamples"."PrintSampleID" =
"DigitalImages"."PrintSampleID") ON "Targets"."TargetID" =
"Measurements"."TargetID") ON "AnalysisModules"."MetricID" =
"Measurements"."MetricID" WHERE (("PrintSamples"."MachineID" = '3179')
AND (("AnalysisModules"."AnalysisModuleName")='DeltaE') AND
(("PrintSamples"."TestPatternName") Like '%TP8%') AND
(("ParameterNames"."ParameterName") Like 'Mean%')) ORDER BY
"PrintSamples"."MachineID" , "PrintSamples"."PrintCopyID" ,
"tblBlockAC"."AreaCoverage" , "tblColors"."ColorID" ,
"tblFriendlyParameterNames"."FriendlyParameterName" ;


My explain analyze output:

QUERY PLAN
Sort  (cost=429075.71..429075.71 rows=2 width=45) (actual
time=44984.000..44984.000 rows=2352 loops=1)
  Sort Key: "PrintSamples"."MachineID"
  ->  Nested Loop  (cost=415721.80..429075.70 rows=2 width=45) (actual
time=44828.000..44968.000 rows=2352 loops=1)
  ->  Nested Loop  (cost=415721.80..429069.64 rows=2 width=53) (actual
time=44828.000..44937.000 rows=2352 loops=1)
  ->  Hash Join  (cost=415721.80..429054.51 rows=5 width=45) (actual
time=44828.000..44874.000 rows=2352 loops=1)
  Hash Cond: ("outer"."MetricID" = "inner"."MetricID")
  ->  Hash Join  (cost=415720.44..429052.95 rows=31 width=49) (actual
time=44828.000..44874.000 rows=2352 loops=1)
  Hash Cond: ("outer"."PrintSampleID" = "inner"."PrintSampleID")
  ->  Hash Join  (cost=415665.42..428234.44 rows=152635 width=28)
(actual time=35234.000..43656.000 rows=835385 loops=1)
  Hash Cond: ("outer"."psaID" = "inner"."psaID")
  ->  Merge Join  (cost=414212.50..424585.45 rows=146405 width=28)
(actual time=35062.000..41235.000 rows=835385 loops=1)
  Merge Cond: ("outer"."MeasurementID" = "inner"."MeasurementID")
  ->  Sort  (cost=202227.03..202593.04 rows=146405 width=20) (actual
time=27140.000..27451.000 rows=835385 loops=1)
  Sort Key: "ParameterValues"."MeasurementID"
  ->  Hash Join  (cost=3.23..189665.77 rows=146405 width=20) (actual
time=15.000..25139.000 rows=835385 loops=1)
  Hash Cond: ("outer"."ParameterID" = "inner"."ParameterID")
  ->  Seq Scan on "ParameterValues"  (cost=0.00..143853.18 rows=8787918
width=12) (actual time=0.000..13302.000 rows=8787896 loops=1)
  ->  Hash  (cost=3.23..3.23 rows=1 width=20) (actual time=0.000..0.000
rows=0 loops=1)
  ->  Hash Join  (cost=1.62..3.23 rows=1 width=20) (actual
time=0.000..0.000 rows=4 loops=1)
  Hash Cond: ("outer"."ParameterID" = "inner"."ParameterID")
  ->  Seq Scan on "tblFriendlyParameterNames"  (cost=0.00..1.40 rows=40
width=16) (actual time=0.000..0.000 rows=40 loops=1)
  ->  Hash  (cost=1.61..1.61 rows=1 width=4) (actual time=0.000..0.000
rows=0 loops=1)
  ->  Seq Scan on "ParameterNames"  (cost=0.00..1.61 rows=1 width=4)
(actual time=0.000..0.000 rows=4 loops=1)
  Filter: (("ParameterName")::text ~~ 'Mean%'::text)
  ->  Sort  (cost=211985.47..216074.23 rows=1635502 width=16) (actual
time=7922.000..9003.000 rows=2145644 loops=1)
  Sort Key: "Measurements"."MeasurementID"
  ->  Seq Scan on "Measurements"  (cost=0.00..43191.02 rows=1635502
width=16) (actual time=0.000..3598.000 rows=1635476 loops=1)
  ->  Hash  (cost=1403.55..1403.55 rows=19748 width=8) (actual
time=157.000..157.000 rows=0 loops=1)
  ->  Hash Join  (cost=529.85..1403.55 rows=19748 width=8) (actual
time=63.000..142.000 rows=19748 loops=1)
  Hash Cond: ("outer"."ImageID" = "inner"."ImageID")
  ->  Seq Scan on "PrintSampleAnalyses"  (cost=0.00..577.48 rows=19748
width=8) (actual time=0.000..16.000 rows=19748 loops=1)
  ->  Hash  (cost=480.48..480.48 rows=19748 width=8) (actual
time=63.000..63.000 rows=0 loops=1)
  ->  Seq Scan on "DigitalImages"  (cost=0.00..480.48 rows=19748
width=8) (actual time=0.000..32.000 rows=19748 loops=1)
  ->  Hash  (cost=55.01..55.01 rows=4 width=29) (actual
time=0.000..0.000 rows=0 loops=1)
  ->
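One thing worth checking, given how far off the row estimates in this plan are (rows=2 estimated vs. 2352 actual at the top; rows=146405 vs. 835385 in the merge join): the selectivity of the LIKE filters is being guessed badly, and raising the statistics targets on those columns sometimes helps. The column choices below are assumptions read off the WHERE clause, not a confirmed fix.

```sql
-- Raise the sample size for the columns the selectivity estimates
-- depend on, then re-gather statistics
ALTER TABLE "PrintSamples" ALTER COLUMN "TestPatternName" SET STATISTICS 100;
ALTER TABLE "ParameterNames" ALTER COLUMN "ParameterName" SET STATISTICS 100;
ANALYZE "PrintSamples";
ANALYZE "ParameterNames";
```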

Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Vivek Khera


On Jun 27, 2005, at 4:36 PM, John Browne wrote:


I'm interested in doing a project for calculating distances similar to
this.  Anyone have suggestions on how/where this type of data can be
obtained?  Is it freely available anywhere?



Google is your friend.  There are places that sell very well kept  
zipcode databases for under $50.


Vivek Khera, Ph.D.
+1-301-869-4449 x806






Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread John Browne
I'm interested in doing a project for calculating distances similar to
this.  Anyone have suggestions on how/where this type of data can be
obtained?  Is it freely available anywhere?


On 6/27/05, Uwe C. Schroeder <[EMAIL PROTECTED]> wrote:
> 
> Actually it does.
> I'm using a bounding box too. I have a stored procedure to get me what I need 
> - here's the relevant part of it.
> Explanation: zc is the record holding the point of origin. I just added the 
> maxdistance definition for this, because in my function it's a parameter.
> 
> 
> 
> 
> SELECT INTO zc z.* FROM v_profile p JOIN zipcodes z ON 
> z.zipcode=p.zipcode WHERE p.uid=uid;
> IF NOT FOUND THEN
> RAISE EXCEPTION \'Cant find member %\',uid;
> END IF;
> maxdistance:=50;
> la_min:=(zc.latn - (maxdistance::float8/70.0));
> la_max:=(zc.latn + (maxdistance::float8/70.0));
> lo_min:=(zc.longw - (maxdistance::float8/70.0));
> lo_max:=(zc.longw + (maxdistance::float8/70.0));
> 
> 
> stmt:=''SELECT  n.username, n.uid, n.areacode, n.zipcode
> geo_distance(point('' || zc.longw ||'',''|| 
> zc.latn ||''),point(z.longw, z.latn))::int as distance,
> n.image_thumbnail,n.city, n.state_code
> FROM v_new_members n JOIN zipcodes z ON 
> z.zipcode=n.zipcode
> AND (z.latn BETWEEN '' || la_min || '' AND '' 
> || la_max || '')
> AND (z.longw BETWEEN '' || lo_min || '' AND 
> '' || lo_max || '') AND
> geo_distance(point(''|| zc.longw 
> ||'',''||zc.latn||''),point(z.longw, z.latn))::int <= ''||maxdistance ;
> 
> 
> 
> 
> hope that helps
> 
> UC
> 
> 
> On Monday 27 June 2005 02:08 am, you wrote:
> > Uwe C. Schroeder wrote:
> > >in the where clause use something like (requires the earthdistance contrib
> > > to be installed):
> > >
> > >geo_distance(point([origin longitude],[origin latitude]),point([target
> > >longitude column],[target latitude column])))::int <= 50
> >
> > I don't suppose geo_distance really returns a number in miles, does it?
> >
> >
> 
> --
> UC
> 
> --
> Open Source Solutions 4U, LLC   2570 Fleetwood Drive
> Phone:  +1 650 872 2425 San Bruno, CA 94066
> Cell:   +1 650 302 2405 United States
> Fax:+1 650 872 2417
> 
>



Re: [GENERAL] Populating huge tables each day

2005-06-27 Thread Dann Corbit

> -Original Message-
> From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
> Sent: Monday, June 27, 2005 12:58 PM
> To: Dann Corbit
> Cc: Ben-Nes Yonatan; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Populating huge tables each day
> 
> On Mon, Jun 27, 2005 at 12:43:57PM -0700, Dann Corbit wrote:
> > I see a lot of problems with this idea.
> >
> > You mention that the database is supposed to be available 24x7.
> > While you are loading, the database table receiving data will not be
> > available.  Therefore, you will have to have one server online (with
> 
> Why do you think that's the case?

He's doing a bulk load.  I assume he will have to truncate the table and
load it with the copy command.

Is there an alternative I do not know of that is equally fast?



Re: [GENERAL] Populating huge tables each day

2005-06-27 Thread Jim C. Nasby
On Mon, Jun 27, 2005 at 12:43:57PM -0700, Dann Corbit wrote:
> I see a lot of problems with this idea.
> 
> You mention that the database is supposed to be available 24x7.
> While you are loading, the database table receiving data will not be
> available.  Therefore, you will have to have one server online (with

Why do you think that's the case?
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"



Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Uwe C. Schroeder

Actually it does.
I'm using a bounding box too. I have a stored procedure to get me what I need - 
here's the relevant part of it.
Explanation: zc is the record holding the point of origin. I just added the 
maxdistance definition for this, because in my function it's a parameter.




SELECT INTO zc z.* FROM v_profile p JOIN zipcodes z ON 
z.zipcode=p.zipcode WHERE p.uid=uid;
IF NOT FOUND THEN
RAISE EXCEPTION \'Cant find member %\',uid;
END IF;
maxdistance:=50;
la_min:=(zc.latn - (maxdistance::float8/70.0));
la_max:=(zc.latn + (maxdistance::float8/70.0));
lo_min:=(zc.longw - (maxdistance::float8/70.0));
lo_max:=(zc.longw + (maxdistance::float8/70.0));


stmt:=''SELECT  n.username, n.uid, n.areacode, n.zipcode
geo_distance(point('' || zc.longw ||'',''|| 
zc.latn ||''),point(z.longw, z.latn))::int as distance,
n.image_thumbnail,n.city, n.state_code
FROM v_new_members n JOIN zipcodes z ON 
z.zipcode=n.zipcode
AND (z.latn BETWEEN '' || la_min || '' AND '' 
|| la_max || '')
AND (z.longw BETWEEN '' || lo_min || '' AND '' 
|| lo_max || '') AND
geo_distance(point(''|| zc.longw 
||'',''||zc.latn||''),point(z.longw, z.latn))::int <= ''||maxdistance ;




hope that helps

UC


On Monday 27 June 2005 02:08 am, you wrote:
> Uwe C. Schroeder wrote:
> >in the where clause use something like (requires the earthdistance contrib
> > to be installed):
> >
> >geo_distance(point([origin longitude],[origin latitude]),point([target
> >longitude column],[target latitude column])))::int <= 50
>
> I don't suppose geo_distance really returns a number in miles, does it?
>
>

-- 
UC

--
Open Source Solutions 4U, LLC   2570 Fleetwood Drive
Phone:  +1 650 872 2425 San Bruno, CA 94066
Cell:   +1 650 302 2405 United States
Fax:+1 650 872 2417
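As a sanity check on the constants in the procedure above: the /70.0 divisor approximates miles per degree (1° of latitude is roughly 69 statute miles; longitude degrees shrink by cos(latitude), which the simple box ignores, making it slightly generous). A standalone sketch with made-up coordinates:

```sql
-- Bounding box for 50 miles around (37.6 N, 122.4 W), using ~69 mi per
-- degree of latitude and correcting longitude by cos(latitude)
SELECT 37.6  - 50/69.0 AS la_min,
       37.6  + 50/69.0 AS la_max,
       122.4 - 50/(69.0 * cos(radians(37.6))) AS lo_min,
       122.4 + 50/(69.0 * cos(radians(37.6))) AS lo_max;
```

Anything falling inside the box still gets the exact geo_distance() check, as in the procedure, so the box only needs to be roughly right.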



Re: [GENERAL] Populating huge tables each day

2005-06-27 Thread Dann Corbit
I see a lot of problems with this idea.

You mention that the database is supposed to be available 24x7.
While you are loading, the database table receiving data will not be
available.  Therefore, you will have to have one server online (with
only the old data), while the other one is loading.  Once the load and
all procedures are complete, you could switch the active server.

You do not mention your time constraints and the total volume of data.
If the new data must become available at some critical time, this is a
very important requirement that must be spelled out clearly.

You will need some kind of guarantee of relational integrity.  This is
always difficult to achieve when bulk loading from foreign sources.

I think it is important to spell things out more clearly.

How many tables are to be replicated?
What is the total number of expected rows for each table?
How fast are the tables expected to grow?
When must the new data become available online?
Are all of the tables in the database populated from a foreign source or
just some of them?
Do you also have access to the source data in its database format, or
only as text dumps?

Is the goal to serve as a reporting server?  Is the goal to make the
same data as the original server online for end-users and in so doing to
reduce the load on the original server?  What is the real purpose of the
task to be accomplished?

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Ben-Nes Yonatan
> Sent: Monday, June 27, 2005 1:13 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Populating huge tables each day
> 
> Hi All,
> 
> First, I apologize for the length of this email; I'm just afraid that my
> problem is quite complicated to explain, and also about my English, which
> is not my native language. Sorry :)
> 
> I'm currently building a site for a client which will need to update his
> database on a daily routine with about 3.5 million rows from data files
> (the files vary in size and can reach up to 1 million rows per file).
> That data will be retrieved from several files (text or XML, by my
> choice); part of the rows will need to be INSERTed because they don't
> have any previous record, and some will need to UPDATE previous records
> or just INSERT and DELETE the previous records.
> Beside that, the new data also brings information that tells how to
> populate another table (2 fields, and the result is a lot fewer rows) and
> connects the two tables by a foreign key which is written in each row of
> the main table (the one with the 3.5 million rows).
> Now the site must work 24/7, and it will probably have 2 servers which
> will run PostgreSQL (working as 1 database); the scripting language I'll
> use is PHP, if that changes anything.
> 
> I thought about how to accomplish this and I would like to receive
> comments and ideas.
> I'm mostly afraid of the stress it will put on the server during the
> process, and of keeping the site displaying working information the
> whole time.
> 
> A brief explanation of the definitions I'll use ahead:
> 1. New data - The retrieved data from the files (text or XML).
> 2. Main table - The final table, which needs to hold about 3.5 million
> rows. It can be empty before the process, or it can already hold
> information, some of which needs to be updated using the "New data";
> the rest of the "New data" needs to be inserted into it, while the
> previous data which didn't get updated needs to be deleted.
> 3. Second table - The table holding information that the data in the
> "Main table" needs to be connected to using foreign keys.
> 4. Temp table - A temporary table which will hold the "New data" until
> it is ready to be INSERTed/UPDATEd into the "Main table" (it has
> exactly the same columns as the "Main table").
> 5. Temp table2 - A temporary table created by CREATE TEMPORARY
> TABLE AS (formerly SELECT INTO).
> 
> My plan:
> 1. With a COPY FROM, I'll insert the data into the "Temp table" (all of
> the files in text format).
> 2. Run a PL/pgSQL function: {
>
> A. Start transaction.
> B. DELETE the content of the currently existing "Second table".
> C. INSERT data into the "Second table" using the "Temp table" - each row
> will be checked for its values and compared to see whether they already
> exist (SELECT) in the "Second table"; if not, an INSERT will create them
> in the "Second table" - I'm afraid this process will be extremely heavy
> on the server.
> D. DELETE the "Main table" rows.
> E. With CREATE TEMPORARY TABLE AS, I'll create "Temp table2", which will
> contain all of the information of the "Temp table" + a subquery will
> retrieve the ID of the "Second table" for the foreign key - quite a
> heavy process, I suspect.
> F. INSERT the data from "Temp table2" into the "Main table".
> G. End transaction + quit from PL/pgSQL. }
> 
> 3. Delete all the files.
> 
> Thanks a lot in advance, and again I'm sorry for the length of the
> mail :)
> 
> Ben-Nes 

[GENERAL] Vacuum analyze question on estimated rows

2005-06-27 Thread Sailer, Denis (YBUSA-CDR)

In the following output, the vacuum knows there are 99,612 pages and
1,303,891 rows.  However, the last line of output during the analyze only
thinks there are 213,627 rows.  Is this so far off because the table is
bloated?  The version of PostgreSQL is "PostgreSQL 7.4.3 on
i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2".

INFO:  vacuuming "ods.production_fact"
INFO:  index "production_fact_pkey" now contains 1303891 row versions in 30843 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 3.21s/0.24u sec elapsed 58.51 sec.
INFO:  "production_fact": found 0 removable, 1303891 nonremovable row versions in 99612 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 12241393 unused item pointers.
0 pages are entirely empty.
CPU 7.33s/0.78u sec elapsed 64.37 sec.
INFO:  vacuuming "pg_toast.pg_toast_104431207"
INFO:  index "pg_toast_104431207_index" now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.18 sec.
INFO:  "pg_toast_104431207": found 0 removable, 0 nonremovable row versions in 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.37 sec.
INFO:  analyzing "ods.production_fact"
INFO:  "production_fact": 99612 pages, 3000 rows sampled, 213627 estimated total rows
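
The estimated total row count that ANALYZE prints is what ends up in
pg_class for the planner to use. One way to compare the stored estimate
against the true count (the count(*) will be slow on a table of this
size; the schema and table names are taken from the output above):

```sql
-- planner's stored estimate vs. the actual number of rows
SELECT relpages, reltuples FROM pg_class WHERE relname = 'production_fact';
SELECT count(*) FROM ods.production_fact;
```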

 








[GENERAL] Populating huge tables each day

2005-06-27 Thread Ben-Nes Yonatan

Hi All,

First, I apologize for the length of this email; I'm just afraid that my
problem is quite complicated to explain. Also, sorry about my English,
which is not my native language :)


I'm currently building a site for a client which will need to update its
database on a daily routine with about 3.5 million rows from data files
(the files vary in size and can reach up to 1 million rows per file).
That data will be retrieved from several files (Text or XML, my choice);
part of the rows will need to be INSERTed because they don't have any
previous record, and some will need to UPDATE previous records or just
INSERT and DELETE the previous records.
Besides that, the new data also brings information that tells how to
populate another table (2 fields, and the result is a lot fewer rows)
and connects the two tables by a foreign key which is written in each
row of the main table (the one with the 3.5 million rows).
Now, the site must work 24/7 and it will probably have 2 servers which
will run PostgreSQL (working as 1 database); the scripting language I'll
use is PHP, if that changes anything.


I thought about how to accomplish this and I would like to receive
comments and ideas.
I'm mostly afraid of the stress it will put on the server during the
process, and the site must display working information all the time.


A brief explanation of the definitions I'll use ahead:
1. New data - The data retrieved from the files (Text or XML).
2. Main table - The final table which needs to hold about 3.5 million
rows; it can be empty before the process or can already hold information,
some of which needs to get updated using the "New data", while the rest
of the "New data" needs to get inserted into it and the previous data
which didn't get updated needs to get deleted.
3. Second table - The table which holds information that the data at the
"Main table" needs to be connected to using foreign keys.
4. Temp table - A temporary table which will hold the "New data" until it
is ready to INSERT/UPDATE the data at the "Main table" (it has the exact
same columns as the "Main table").
5. Temp table2 - A temporary table which is created by CREATE TEMPORARY
TABLE AS (formerly SELECT INTO).


My plan:
1. With a COPY FROM I'll insert the data into the "Temp table" (all of
the files in Text format).

2. Run a PL/pgSQL function: {

A. Start transaction.
B. DELETE the content of the currently existing "Second table".
C. INSERT data into the "Second table" using the "Temp table" - each row
will be checked (SELECT) to see if its values already exist at the
"Second table", and if not an INSERT will create it at the "Second
table" - I'm afraid that this process will be extremely heavy on the
server.

D. DELETE the "Main table" rows.
E. With CREATE TEMPORARY TABLE AS I'll create "Temp table2", which will
contain all of the information of the "Temp table" + a subquery will
retrieve the ID of the "Second table" for the foreign key - quite a
heavy process, I suspect.

F. INSERT the data from "Temp table2" into the "Main table".
G. End transaction + quit from PL/pgSQL. }

3. Delete all the files.
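
Step C above can usually be expressed as one set-based statement instead
of a row-by-row loop, which tends to be far lighter on the server. A
sketch, with hypothetical table and column names:

```sql
-- insert only those rows from the temp table whose values are not yet
-- present in the second table (all names here are hypothetical)
INSERT INTO second_table (key_col)
SELECT DISTINCT t.key_col
FROM temp_table t
WHERE NOT EXISTS (
    SELECT 1 FROM second_table s WHERE s.key_col = t.key_col
);
```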

Thanks a lot in advance, and again I'm sorry for the length of the mail :)

Ben-Nes Yonatan
Canaan Surfing ltd.
http://www.canaan.net.il


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] DANGER Windows version might hurt you

2005-06-27 Thread Andreas

Magnus Hagander wrote:


Did you check the eventlog? It's supposed to write it there..
 


Well, you are right.   =8-}
There is "FATAL:  Syntaxfehler in Datei 
»[...]/PostgreSQL-8.0.3/postgresql.conf«, Zeile 322, bei »#«"
(i.e., a syntax error in the file, line 322, at "#").

That's the error message I got on the console.
I'm by far no Windows guru and the eventlog ... you know ... ;)
But still, it doesn't say anything about a premature EOF or anything
similar. Instead it raises a syntax error on a comment line.



Hmm. It isn't. And it probably should, yes - or the parser should be
fixed to deal with it.
 

pg_hba.conf isn't affected by this issue, so it wasn't the first 
thought that came to my mind, and for a newbie this would be a complete 
show-stopper, I'm afraid. I was just lucky to figure it out since 
obviously I had no backup postgresql.conf.  ;)



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] problems with slow insert/delete queries/lot of disk write i/o in postgresql 7.2.4

2005-06-27 Thread Alexander Korobov
Hi,
We are having a strange problem on a production system with very slow
insert/delete commands and huge CPU and disk write activity spikes in
PostgreSQL 7.2.4. This behavior is very reproducible and happens in the
following sequence:
1. full db vacuum
2. several hundred insert/delete commands are performed as fast as
possible (the tables used by the insert/delete commands are indexed to
speed up queries)


At (2) postgres starts taking 90-100% CPU and performing a lot of
writing, which I guess is flushing dirty buffers to disk, and queries
slow down to nearly 1-2 per second. During this time postgresql logs
'recycled transaction log ...' messages.
Postgres configuration is mostly default:
max_connections=45
shared_buffers=2048
deadlock_timeout = 6
sort_mem = 1024
log_timestamp = true
log_pid = true
debug_level = 2

What causes postgres to behave like this? Is there anything that can be
done to the postgres [configuration] to handle bulk queries more
efficiently and reduce disk I/O and CPU consumption?

Thanks a lot, and sorry for the long post,

alex.

Here's top and sar outputs during this time:
7:52am  up 42 days,  6:25,  1 user,  load average: 2.62, 1.90, 1.22
187 processes: 182 sleeping, 4 running, 1 zombie, 0 stopped
CPU states:  0.0% user, 80.7% system, 17.7% nice,  1.5% idle
Mem:  1025852K av, 1012012K used,   13840K free,   0K shrd,   20796K buff
Swap: 1026036K av,   23292K used, 1002744K free  743360K cached

 PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
23411 postgres  17  10 19304  18M 18696 R N  84.1  1.8   7:16 postmaster
24177 admin 16   5  1044 1044   760 R N   6.6  0.1  17:39 top
11633 admin 17  10  5888 5888  1736 R N   6.1  0.5   0:00 ctl
321 admin  9   0 00 0 SW0.6  0.0  19:30 kjournald


sar :
07:02:00  tps  rtps  wtps   bread/s   bwrtn/s
07:52:0041.38  0.34 41.04 16.84701.34
08:02:0042.27  0.02 42.24  0.23696.80
08:12:0039.42  0.05 39.37  0.76730.63

And sample of postmaster.log :

2005-06-24 07:53:24 [23411]  DEBUG:  StartTransactionCommand
2005-06-24 07:53:24 [23411]  DEBUG:  query: insert into MONITORING_DATA_LOC_DAIL
Y (MONITORING_DATA_LOC_DAILY_ID,GEN_ID,START_TIME,END_TIME,MONITORED_OBJECT_ID,L
OCATION_ID,TIME_SPAN,GAP_MILLIS,CE_COUNT,DATA) values (1,330861,111939840,11
1948480,19011,19011,4,0,1,'6#|0#0#0#|311864#311864#0#0#0#0#0#0#0#0#0#0#0#0#0
#')
2005-06-24 07:53:24 [23411]  DEBUG:  ProcessQuery
2005-06-24 07:53:24 [23411]  DEBUG:  CommitTransactionCommand
2005-06-24 07:53:24 [23411]  DEBUG:  StartTransactionCommand
2005-06-24 07:53:24 [23411]  DEBUG:  query: delete from MONITORING_DATA_LOC_HOUR
LY where MONITORED_OBJECT_ID = cast(19011 as bigint) AND END_TIME <= cast(111948
480 as bigint)
2005-06-24 07:53:24 [23411]  DEBUG:  ProcessQuery
2005-06-24 07:53:25 [23411]  DEBUG:  CommitTransactionCommand
2005-06-24 07:53:25 [23411]  DEBUG:  StartTransactionCommand
2005-06-24 07:53:25 [23411]  DEBUG:  query: insert into MONITORING_DATA_LOC_DAIL
Y (MONITORING_DATA_LOC_DAILY_ID,GEN_ID,START_TIME,END_TIME,MONITORED_OBJECT_ID,L
OCATION_ID,TIME_SPAN,GAP_MILLIS,CE_COUNT,DATA) values (1,330861,111939840,11
1948480,15877,15877,4,0,1,'7#|0#0#0#|121435#121435#0#0#0#0#0#0#0#0#0#0#0#0#0
#')
2005-06-24 07:53:25 [23411]  DEBUG:  ProcessQuery
2005-06-24 07:53:25 [23411]  DEBUG:  CommitTransactionCommand
2005-06-24 07:53:25 [23411]  DEBUG:  StartTransactionCommand
2005-06-24 07:53:25 [23411]  DEBUG:  query: delete from MONITORING_DATA_LOC_HOUR
LY where MONITORED_OBJECT_ID = cast(15877 as bigint) AND END_TIME <= cast(111948
480 as bigint)
2005-06-24 07:53:25 [23411]  DEBUG:  ProcessQuery
2005-06-24 07:53:26 [23411]  DEBUG:  CommitTransactionCommand
2005-06-24 07:53:26 [23411]  DEBUG:  StartTransactionCommand
2005-06-24 07:53:26 [23411]  DEBUG:  query: insert into MONITORING_DATA_LOC_DAIL
Y (MONITORING_DATA_LOC_DAILY_ID,GEN_ID,START_TIME,END_TIME,MONITORED_OBJECT_ID,L
OCATION_ID,TIME_SPAN,GAP_MILLIS,CE_COUNT,DATA) values (1,330861,111939840,11
1948480,16095,16095,4,0,1,'6#|0#0#0#|78179338#78179338#0#0#0#0#0#0#0#0#0#0#0
#0#0#')
2005-06-24 07:53:26 [23411]  DEBUG:  ProcessQuery
2005-06-24 07:53:26 [23411]  DEBUG:  CommitTransactionCommand
2005-06-24 07:53:26 [11664]  DEBUG:  recycled transaction log file 00090
0C5
2005-06-24 07:53:26 [11664]  DEBUG:  proc_exit(0)

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] DANGER Windows version might hurt you

2005-06-27 Thread Magnus Hagander
> I want to warn you all that you might hurt yourself by 
> banging your head against the wall or damage your karma by 
> using foul language if you find your postmaster rejecting to 
> work after a minor change in the postgresql.conf. (I changed 
> the listenig address to an IP instead of *.) Windows can't 
> start the service anymore but says the program (Postgres) 
> didn't issue an error message.
> Postgres doesn't write a logfile either.

Did you check the eventlog? It's supposed to write it there..

If it didn't, what version are you on?


> Line 322 was the last one and was a comment.
> So I suspected the lines before it which looked reasonably OK, too.
> 
> After trying the sensible solutions I went via the improbaple 
> to the ridiculous and there it was.
> My editor strips empty  lines at the end of a textfile and 
> sets EOF right after the last visible character so it dumps 
> the CRLF of the last line and even though it was a comment 
> Postmaster complaines about a syntax error and goes on strike.
> 
> This should be mentioned in the FAQ if it isn't already. 

Hmm. It isn't. And it probably should, yes - or the parser should be
fixed to deal with it.

Does this affect win32 only, or unix as well? Anybody know?

//Magnus

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] DANGER Windows version might hurt you

2005-06-27 Thread Andreas
I want to warn you all that you might hurt yourself by banging your head 
against the wall, or damage your karma by using foul language, if you 
find your postmaster refusing to work after a minor change in 
postgresql.conf. (I changed the listening address to an IP instead of *.)
Windows can't start the service anymore but says the program (Postgres) 
didn't issue an error message.

Postgres doesn't write a logfile either.

After a long and thorough head-scratching while staring at the config, I 
logged in as "postgres" and started postmaster directly on the console.
Now it said "syntax error in file ...postgres.conf in line 322" or 
something similar.


Line 322 was the last one and was a comment.
So I suspected the lines before it which looked reasonably OK, too.

After trying the sensible solutions I went via the improbable to the 
ridiculous, and there it was.
My editor strips empty lines at the end of a text file and sets EOF 
right after the last visible character, so it dumps the CRLF of the last 
line; and even though it was a comment, postmaster complains about a 
syntax error and goes on strike.


This should be mentioned in the FAQ if it isn't already. 





---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Web application development on Win32, deployment on

2005-06-27 Thread Scott Marlowe
On Mon, 2005-06-27 at 10:50, Alvaro Herrera wrote:

> 
> Regarding replication, the Slony-I guys state that their project is not
> yet ported to Windows, but giving that there is a pthread library for
> Windows (which is about the only thing it lacks, apparently) then it
> should be possible to port it rather easily.  I'm sure they'd welcome it
> if somebody were to do the legwork.  I know of several people who would
> be very glad to have Win32 Slony-I.

Actually, the port is well underway, and JUST missed the cutoff for
1.1.  Might well show up for 1.1.1 or so.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Status of pg_dump

2005-06-27 Thread Vivek Khera


On Jun 27, 2005, at 8:29 AM, K.Deepa wrote:


Dear All,

Is there any way to check the sanity of the backup taken using
pg_dump?



Send it to a psychotherapist?

The only way to verify a backup is to restore from it and check it  
however you can (compare with original if it is unaltered, or check  
record integrity somehow).  Same advice as you get for verifying your  
system tape backups...


Vivek Khera, Ph.D.
+1-301-869-4449 x806






Re: [GENERAL] Web application development on Win32, deployment on Linux

2005-06-27 Thread Magnus Hagander
> > > I currently maintain a website which is deployed on LAMP, 
> but it is 
> > > developed on Windows (WAMP?).  I'm considering moving both to use 
> > > PostgreSQL.  I'm curious as to whether others are following a 
> > > similar development/deployment model and if so, what 
> > > pitfalls/gotchas they have found.  For example, I refresh 
> production 
> > > tables on a weekly basis from the development database.  
> Linux MySQL 
> > > is case-sensitive about table names but Windows isn't, so 
> I have to 
> > > modify table names like 'whatsnew' on Windows to 
> 'WhatsNew' before I 
> > > can run the reload script on Linux.
> > 
> > [snip replication - can't answer that]
> 
> Regarding replication, the Slony-I guys state that their 
> project is not yet ported to Windows, but giving that there 
> is a pthread library for Windows (which is about the only 
> thing it lacks, apparently) then it should be possible to 
> port it rather easily.  I'm sure they'd welcome it if 
> somebody were to do the legwork.  I know of several people 
> who would be very glad to have Win32 Slony-I.

Dave, Andreas, Hiroshi and I are working on this. It's not at top speed
right now due to the main project's feature freeze, but work is in
progress, and it's definitely doable.

(Hey, I knew that. Should've answered from the beginning :P)


//Magnus

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Web application development on Win32, deployment on Linux

2005-06-27 Thread Alvaro Herrera
On Mon, Jun 27, 2005 at 01:59:42PM +0200, Magnus Hagander wrote:
> > I currently maintain a website which is deployed on LAMP, but 
> > it is developed on Windows (WAMP?).  I'm considering moving 
> > both to use PostgreSQL.  I'm curious as to whether others are 
> > following a similar development/deployment model and if so, 
> > what pitfalls/gotchas they have found.  For example, I 
> > refresh production tables on a weekly basis from the 
> > development database.  Linux MySQL is case-sensitive about 
> > table names but Windows isn't, so I have to modify table 
> > names like 'whatsnew' on Windows to 'WhatsNew' before I can 
> > run the reload script on Linux. 
> 
> [snip replication - can't answer that]

Regarding replication, the Slony-I guys state that their project is not
yet ported to Windows, but given that there is a pthread library for
Windows (which is about the only thing it lacks, apparently), it should
be possible to port it rather easily.  I'm sure they'd welcome it if
somebody were to do the legwork.  I know of several people who would
be very glad to have Win32 Slony-I.

-- 
Alvaro Herrera ()
"La Primavera ha venido. Nadie sabe como ha sido" (A. Machado)

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Bruno Wolff III
On Sun, Jun 26, 2005 at 16:40:03 -0700,
  CSN <[EMAIL PROTECTED]> wrote:
> If I have a table of items with latitude and longitude
> coordinates, is it possible to find all other items
> that are within, say, 50 miles of an item, using the
> geometric functions
> (http://www.postgresql.org/docs/8.0/interactive/functions-geometry.html)?
> If so, how?

You could take a look at using the parts of the earthdistance contrib
module based on the cube data type. Gist indexes are supposed to make
this kind of query fast.
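
A sketch of what the cube-based earthdistance approach can look like
(the "items" table and its lat/lon columns are hypothetical, and both
contrib/cube and contrib/earthdistance must be installed first):

```sql
-- all items within 50 miles (converted to metres) of item 123;
-- a GiST index on ll_to_earth(lat, lon) is what makes this fast
SELECT i2.id
FROM items i1, items i2
WHERE i1.id = 123
  AND earth_box(ll_to_earth(i1.lat, i1.lon), 50 * 1609.344)
      @ ll_to_earth(i2.lat, i2.lon);
```

Note that earth_box gives a bounding cube, so a final exact check with
earth_distance may still be wanted to trim the corners.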

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Infix Function?

2005-06-27 Thread Bruno Wolff III
On Sun, Jun 26, 2005 at 15:42:25 -0500,
  Peter Fein <[EMAIL PROTECTED]> wrote:
> 
> Is there anyway to create an infix version of this?  I'd really like be
> able to write (where a..d are some boolean conditions):

Use <>:
area=> select true <> false <> true;
 ?column?
--
 f
(1 row)

area=> select false <> true <> false;
 ?column?
--
 t
(1 row)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Status of pg_dump

2005-06-27 Thread K.Deepa

Dear All,

Is there any way to check the sanity of a backup taken using pg_dump?

--
regards,
Deepa K

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Vivek Khera


On Jun 26, 2005, at 7:40 PM, CSN wrote:


If I have a table of items with latitude and longitude
coordinates, is it possible to find all other items
that are within, say, 50 miles of an item, using the
geometric functions
(http://www.postgresql.org/docs/8.0/interactive/functions-geometry.html)?

If so, how?


We optimize this query by first finding the bounding square, then
comparing the lat/lon of the other objects (in our case zip codes)
against the radius.  This has the advantage of eliminating a *lot* of
possible values before passing them to the heavy math formulas.


so ours boils down to something along these lines ($zip_radius is the  
miles we're looking for)


the distance computation:

(acos((sin($input_lat/57.2958) * sin(zip_latitude/57.2958)) +
      (cos($input_lat/57.2958) * cos(zip_latitude/57.2958) *
       cos(zip_longitude/57.2958 - $input_long/57.2958))) * 3963) <= $zip_radius


and the bounding box is done like this:

$lat_range  = $zip_radius / ((6076. / 5280.) * 60);
$long_range = $zip_radius / (((cos($input_lat * 3.141592653589 / 180) * 6076.) / 5280.) * 60);


so just do a +/- of the center point lat/lon with the above values  
and you have your square bounding box inside which you run your  
distance computation.


Putting it together is left as an exercise for the reader (hint: just  
AND your pieces together...)
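
Put together as a single SQL predicate, the approach above can be
sketched like this (the table and column names follow the snippet above
but are otherwise hypothetical; :lat, :lon, :radius and the precomputed
ranges stand for the input values):

```sql
-- cheap bounding-box filter first, exact great-circle distance second
SELECT zip
FROM zips
WHERE zip_latitude  BETWEEN :lat - :lat_range  AND :lat + :lat_range
  AND zip_longitude BETWEEN :lon - :long_range AND :lon + :long_range
  AND (acos((sin(:lat/57.2958) * sin(zip_latitude/57.2958)) +
            (cos(:lat/57.2958) * cos(zip_latitude/57.2958) *
             cos(zip_longitude/57.2958 - :lon/57.2958))) * 3963) <= :radius;
```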



Vivek Khera, Ph.D.
+1-301-869-4449 x806






Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Vivek Khera


On Jun 27, 2005, at 3:47 AM, Janning Vygen wrote:

I had some problems with the calculation inside acos() sometimes being
greater than 1, which should not occur. Please use a
   CASE WHEN sin(...) > 1 THEN 1 ELSE sin(...) END
if you have the same problem.



We've seen this as well with the distance radius calculation.  It
doesn't happen in 8.x but did happen in 7.4, and was easily worked
around by reducing the precision of the arguments.  I.e., we would use
73.13 rather than 73.1343593421 as pulled from the database for the
lat/lon values of the center point.


In any case, I urge you to derive the formulas yourself from basic  
research so you *know* you're getting what you think you're getting.
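
The clamp quoted above can also be written with LEAST/GREATEST (added in
more recent PostgreSQL releases) instead of a CASE; a sketch, where the
inner expression is abbreviated as a hypothetical value x:

```sql
-- guard acos() against arguments that drift just outside [-1, 1]
-- due to floating-point rounding
SELECT acos(LEAST(1.0, GREATEST(-1.0, x))) * 3963 AS miles
FROM (SELECT 1.0000000000000002::float8 AS x) AS t;
```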


Vivek Khera, Ph.D.
+1-301-869-4449 x806






Re: [GENERAL] number of records returned by cursors

2005-06-27 Thread Tom Lane
Zac <[EMAIL PROTECTED]> writes:
> does anyone know if there is a way (in plpgsql) to obtain the number of 
> records returned by a cursor without fetching them all?

No.  The system itself does not know that until you've fully executed
the query ...

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Tablespaces across same raid...

2005-06-27 Thread Douglas McNaught
Himanshu Baweja <[EMAIL PROTECTED]> writes:

> i just have one raid controller with 6 disks
> attached... i see all of it as single drive with three
> partitions create by me...
>
> will creating tablespaces help me also is there a
> way i can say this table to this disk???

In a RAID, all data goes across all disks, so in your case the answer
is no.  I doubt tablespaces would help, because again all data goes
across all disks.

What you might consider, if you can break up the RAID, is to use two
disks for a mirrored pair (RAID-1).  Put the WAL logs (the pg_xlog
directory) on that volume.  Then make a RAID-5 with the other four
disks for the rest of the database.  The pg_xlog directory is
constantly written to so having it on independent disks is very
helpful with performance.

-Doug

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Postfix/Maildrop and too many connections issues

2005-06-27 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-06-23 17:25:03 -0400:
> On Thu, 2005-06-23 at 03:39 +, Karl O. Pinc wrote:
> > On 06/22/2005 08:23:43 AM, Sven Willenberger wrote:
> > > On Wed, 2005-06-22 at 01:30 +0200, Roman Neuhauser wrote:
> > > > # [EMAIL PROTECTED] / 2005-06-21 15:00:12 -0400:
> > > > > We have a system set up whereby postfix and maildrop gather
> > > > > user info from a pg database (7.4.2 on FreeBSD 5.2.1) to do
> > > > > local mail acceptance and delivery. I have configured max
> > > > > connections at 512 but I find that this is not enough and I
> > > > > get "connection limit exceeded for non-superusers" errors. I
> > > > > see upon ps ax that there are hundreds of idle connections
> > > > > (state I).

[...]

> I just realized however that once postifx looks up the user in
> transport it hands the mail off to maildrop which then has to do its
> own db lookup for the home directory, etc and I cannot see how I would
> configure this to use proxymap.

*That* is where something like
http://pgpool.projects.postgresql.org/ would be useful.

> Well at least I have consolidated some of the lookups ...

True.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] automating backup ?

2005-06-27 Thread Zlatko Matic

thank you Andreas!

- Original Message - 
From: "Andreas" <[EMAIL PROTECTED]>

To: 
Cc: "Zlatko Matic" <[EMAIL PROTECTED]>
Sent: Monday, June 27, 2005 12:43 PM
Subject: Re: [GENERAL] automating backup ?



Zlatko Matic wrote:


Thanks Fuhr.
Anybody can tell me how to do it on Windows XP ?
Thanks.



That's again a pure Windows issue, but not commonly needed.
I guess it should be in the Windows PG-FAQ, too.

I'll describe the way you go with Windows 2000.
If you are lucky, WinXP Pro does it the same way, and if you have only
WinXP Home you might be lucky to have the needed timer service at all.
Please try it, then come back to the list and tell us what you did and
whether it solved your problem.



First you write a text file that contains all the commands you want to
get executed - all those which you would otherwise type yourself on the
command line.
Call this file   zlatko_backup.bat   just so it has a unique name.

In Windows' start menu you find "Programs".
There is a subfolder whose English name I don't know, but it holds,
among other things, the calculator and a system-programs folder.

In this system-programs folder you should find "Scheduled Tasks".
Open the "Scheduled Tasks" folder. It is empty.
Right-click in it and select "New" --> "Scheduled Task".
Select it and you will get a settings menu for the task where you can
define what program should run, and when.

Enter here the path to zlatko_backup.bat.

Provided Windows' scheduled-tasks service runs, all is set now.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings 



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] MacOSX, fink, missing readline/readline.h

2005-06-27 Thread Kevin Murphy

Teunis Peters wrote:


This has to do with building postgresql-8.0.3

I'm installing on a MacOSX system (10.3) and configure's failing to 
find the readline installation.



Installing the vanilla readline library from source now works great on 10.3.

If you first install this using the standard ./configure && make && sudo 
make install :


ftp://ftp.gnu.org/gnu/readline/readline-5.0.tar.gz

then your postgresql compile should work fine.

-Kevin Murphy


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Windows XP installation problem

2005-06-27 Thread Magnus Hagander
> Hello!
> I'm trying to install postgresql on WindowsXP box using 
> postgresql-8.0.msi.
> The installation goes fine until running the service. I get a 
> message saying that postgre-8.0 failed to run as a service. I 
> checked system log and came across an error saying
> that: PostgreSQL Database Server 8.0 is not a valid Win32 application.
> My postgres account has log on locally and log on as a 
> service rights granted so I don't think that this is a problem.

This sounds like you either have a corrupt file somehow (which is weird,
because checksumming is done when you unzip the file), or something is
badly broken on your system.

Can you manually run "pg_ctl" in the postgresql directory? (No matter
how - just see if you can run it at all.) If you can, then the binary is
OK, and something else is wrong on your system. If you can't, then
you've somehow got broken files in there - try re-downloading and
re-installing.


//Magnus

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Web application development on Win32, deployment on Linux

2005-06-27 Thread Magnus Hagander
> I currently maintain a website which is deployed on LAMP, but 
> it is developed on Windows (WAMP?).  I'm considering moving 
> both to use PostgreSQL.  I'm curious as to whether others are 
> following a similar development/deployment model and if so, 
> what pitfalls/gotchas they have found.  For example, I 
> refresh production tables on a weekly basis from the 
> development database.  Linux MySQL is case-sensitive about 
> table names but Windows isn't, so I have to modify table 
> names like 'whatsnew' on Windows to 'WhatsNew' before I can 
> run the reload script on Linux. 

There should be no such differences on PostgreSQL. The one major pitfall
there is that UNICODE encoding is not fully supported on Win32. You can
use it, but things like sorting and upper/lower do not work.
[snip replication - can't answer that]

> Another concern about the move to Postgres is that the 
> website's current Search capability is based on MySQL's 
> full-text search functionality [SELECT ... WHERE MATCH (...) 
> AGAINST (... IN BOOLEAN MODE)] .  PostgreSQL doesn't appear 
> to have something resembling that.  I'm curious as to 
> third-party solutions or development plans.

Take a look at tsearch2; it's in /contrib/. It can fulfill most
full-text search needs - it certainly solved all of mine.
You'll have a different syntax, of course, but you can get the system
working.
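
A sketch of what a tsearch2-based query can look like (the table,
columns and search terms are hypothetical; tsearch2 must be installed
from /contrib and the tsvector column kept up to date, typically with a
trigger):

```sql
-- boolean-style full-text match on a precomputed tsvector column "fti"
SELECT title
FROM articles
WHERE fti @@ to_tsquery('postgres & replication');
```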

//Magnus

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Windows XP installation problem

2005-06-27 Thread Marek Aleksander Dąbek

Hello!
I'm trying to install postgresql on a WindowsXP box using
postgresql-8.0.msi.
The installation goes fine until running the service. I get a message
saying that postgresql-8.0 failed to run as a service. I checked the
system log and came across an error saying that: PostgreSQL Database
Server 8.0 is not a valid Win32 application.
My postgres account has "log on locally" and "log on as a service"
rights granted, so I don't think that this is the problem.

Marek Aleksander Dabek

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Win32 users?

2005-06-27 Thread Marco Colombo
On Sun, 2005-06-26 at 10:59 -0700, Bob Pawley wrote:
> I'll date myself so you all may know from where I am coming. My first lesson 
> in binary math took place in a classroom in 1958. Since then I have 
> witnessed a lot that has since swept under the bridge.
> 
> 
> 
> This thread reminds me of the discussion that surrounded the complexity of 
> using the first spreadsheets (the precursor to today's Databases)
^^

What? Well, I admit I was not there in the 60's, but AFAIK databases
predate spreadsheets by far.
[...]
> In my 47 years of being somewhat aligned to the software industry this story 
> was repeated over and over again.
>
> There is a lesson to be learned from the Wang experience.
>
> People want tools to do tasks. They do not want to spend their own time (or 
> their own money to hire others) to build, design, repair or change the tools 
> that they need just in order to accomplish their own work - the work they 
> know best and from which they make a living. A good tool - a perfect tool - 
> is like a hammer. Its use is immediately known and it can be deployed 
> quickly, accurately and with little or no specialized training.

Pardon me, but the last sentence sounds ridiculous. Think of a master
smith making a perfect katana. Guess which tool he's going to use
mostly? The hammer. Explain to me how "its use is immediately known and
it can be deployed quickly, accurately and with little or no specialized
training" applies here. Especially the last part. Do you _really_ think
that just looking at even a remarkably simple tool such as a hammer
makes you able to accomplish ("accurately"!) _anything_ that can be done
with it? Before answering, think about Michelangelo or Da Vinci.

> So I caution all to not make light of newbies who are searching for good 
> tools (not even perfect tools - yet) to do the work that needs doing. The 
> world will not sit by and continue to pay for Wang operators.

There are tasks that require the human brain in order to overcome the
lack of raw performance of computers. Those tasks are not complex; it's
the computer that is slow. Give the computer industry 20 years of
advances, and those tasks will require no human brain at all, for the
computers will be fast enough.

But not all tasks are like that. Some will grow with the computers: the
bigger the processing power, the more data you want to process. Some are
just complex at the human-brain level, _no matter what tool they involve_.
We have word processors these days, but they don't turn us all into great
poets and writers, even if they are vastly better than quills.

There is some (brain) complexity in computer systems in general, and
there is in databases. It's just that some tasks are not for newbies, be
the tool a hammer or an RDBMS. When the tool is aimed mostly at such
tasks, there's little need to make it too newbie-friendly.

.TM.
-- 
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/  [EMAIL PROTECTED]




[GENERAL] Web application development on Win32, deployment on Linux

2005-06-27 Thread Joe Abbate
I currently maintain a website which is deployed on LAMP, but it is developed on 
Windows (WAMP?).  I'm considering moving both to use PostgreSQL.  I'm curious as 
to whether others are following a similar development/deployment model and if 
so, what pitfalls/gotchas they have found.  For example, I refresh production 
tables on a weekly basis from the development database.  Linux MySQL is 
case-sensitive about table names but Windows isn't, so I have to modify table 
names like 'whatsnew' on Windows to 'WhatsNew' before I can run the reload 
script on Linux.  I'm also interested in any replication experience in such a 
scenario since the current export/modify/import process is done manually, in 
particular the determination of which tables changed from week to week.


Another concern about the move to Postgres is that the website's current Search 
capability is based on MySQL's full-text search functionality [SELECT ... WHERE 
MATCH (...) AGAINST (... IN BOOLEAN MODE)] .  PostgreSQL doesn't appear to have 
something resembling that.  I'm curious as to third-party solutions or 
development plans.


Joe



Re: [GENERAL] Postmaster Out of Memory

2005-06-27 Thread Jeff Gold

Tom Lane wrote:

Found it --- the actual leak is in index_create, not in TRUNCATE or
CLUSTER at all; and it's been there a really long time.
Patch for 7.4 branch attached.


Excellent!  Does index_create refer to something that is invoked as a 
consequence of CREATE INDEX?  I'm looking through the code on our side 
and can't find any obvious places where we recreate indexes, but I might 
just be missing something.  Might it be a consequence of repopulating a 
recently truncated table instead?  I'll confer with Joe on this new 
information tomorrow.  Thanks for taking the time to look into this.


Jeff





Re: [GENERAL] automating backup ?

2005-06-27 Thread Andreas

Zlatko Matic schrieb:


Thanks Fuhr.
Anybody can tell me how to do it on Windows XP ?
Thanks.



That's again a pure Windows issue, but a commonly needed one.
I guess it should be in the Windows PG-FAQ, too.

I'll describe the way you go with Windows 2000.
If you are lucky, WinXP Pro does it the same way; if you have only 
WinXP Home, you might be lucky if it has the needed timer service at 
all. Please try it and come back to the list to tell us what you did 
and whether it solved your problem.



First you write a text file that contains all the commands you want 
executed: all those which you would otherwise type yourself on the 
command line.
Call this file zlatko_backup.bat, just so that it has a unique name.

In Windows' Start menu you find "Programs".
There is a subfolder whose English name I don't know, but it holds, 
among other things, the calculator and a system-tools folder.

In this system-tools folder you should find "Scheduled Tasks" as a folder.
Open the "Scheduled Tasks" folder. It is empty.
Right-click in it and select "New" --> "Scheduled Task".
Select it and you will get a settings dialog for the task, where you can 
define what program should run and when.

Enter the path to zlatko_backup.bat there.

Provided Windows' Scheduled Tasks service is running, everything is set now.
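For what it's worth, the same idea carries over to any scheduler: the scheduled job only has to run pg_dump (or pg_dumpall) and keep the output somewhere. Here is a minimal sketch in Python rather than a .bat file, so it is platform-neutral; the database name and file name are placeholders, and it assumes pg_dump is on the PATH and that authentication is already handled (pg_hba.conf or PGPASSWORD):

```python
import datetime
import subprocess
import sys

def build_dump_command(dbname, outfile):
    """Assemble the pg_dump invocation; kept separate so it is easy to inspect."""
    return ["pg_dump", "--format=plain", "--file", outfile, dbname]

def main():
    # Timestamped file name so weekly runs do not overwrite each other.
    stamp = datetime.date.today().isoformat()
    outfile = "backup-%s.sql" % stamp
    cmd = build_dump_command("mydb", outfile)  # 'mydb' is a placeholder
    result = subprocess.run(cmd)
    sys.exit(result.returncode)

if __name__ == "__main__":
    main()
```

Point the Scheduled Task (or cron, on Unix) at this script instead of a batch file and the weekly backup runs unattended.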




[GENERAL] number of records returned by cursors

2005-06-27 Thread Zac

Hi,
does anyone know if there is a way (in plpgsql) to obtain the number of 
records returned by a cursor without fetching them all?


Using "FOUND" and "GET DIAGNOSTICS row_count" variables doesn't help me: 
the only way seems to be fetching all the records.


I try to explain it better:

CREATE OR REPLACE FUNCTION test_cur() RETURNS void AS
$$
DECLARE
    c CURSOR FOR SELECT 'foo' UNION SELECT 'bar';
    i integer;
    t text;
BEGIN
    OPEN c;
    RAISE INFO 'FOUND: %', FOUND;
    GET DIAGNOSTICS i = row_count;
    RAISE INFO 'row_count: %', i;

    FETCH c INTO t;
    RAISE INFO '1 row (of 2) fetched: %', t;
    RAISE INFO 'FOUND: %', FOUND;
    GET DIAGNOSTICS i = row_count;
    RAISE INFO 'row_count: %', i;

    FETCH c INTO t;
    RAISE INFO '2 row (of 2) fetched: %', t;
    RAISE INFO 'FOUND: %', FOUND;
    GET DIAGNOSTICS i = row_count;
    RAISE INFO 'row_count: %', i;

    CLOSE c;
END;
$$ LANGUAGE plpgsql;

SELECT test_cur();
INFO:  FOUND: f
INFO:  row_count: 0
INFO:  1 row (of 2) fetched: bar
INFO:  FOUND: t
INFO:  row_count: 0
INFO:  2 row (of 2) fetched: foo
INFO:  FOUND: t
INFO:  row_count: 0
 test_cur
----------

(1 row)
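The behaviour shown above is expected: a cursor is a lazily evaluated stream, and the server does not know the result size until the rows are actually fetched. A plain Python generator (nothing Postgres-specific assumed) illustrates the same property:

```python
def row_stream():
    # Simulates a cursor: rows are produced lazily, one at a time.
    yield 'bar'
    yield 'foo'

cur = row_stream()
# A generator, like a cursor, has no length up front; len(cur) would raise
# a TypeError. The only way to learn the count is to consume the stream:
count = sum(1 for _ in cur)
print(count)  # prints 2
```

On the database side, the usual workarounds are a separate SELECT count(*) with the same WHERE clause, or fetching the rows and counting them.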


Thank you.
Zac



Re: [GENERAL] How to compare the schemas ?

2005-06-27 Thread Milorad Poluga
DB Comparer tool seems very useful.
Thank you for your suggestions and links.

Milorad Poluga                        
HK CORES Beograd, Makenzijeva 31
[EMAIL PROTECTED]              
+381-11-30-80-461              



Re: [GENERAL] automating backup ?

2005-06-27 Thread Marco Colombo
On Sun, 2005-06-26 at 11:18 +0200, Zlatko Matic wrote:
> How to automate backup, so that Postgres automatically backups, for example, 
> once in a week ?
> The same question about vacuum ?
> 
> Concerning backup, how to prevent that someone makes a copy (for example 
> pg_dumpall) of a database, 

pg_dumpall is no special case. It just runs some SQL queries. Your users
are subject to the normal permission checking.

> then installs new instance of Postgres, create 
> the same user acount that was the original owner and then restore the 
> database. In that case all restrictions would be overriden, right ? 

No, because pg_dumpall doesn't override any restriction.

Of course, if someone gets his hands on _your_ backups (made with full
permissions), he can access everything, unless you encrypted them.

.TM.
-- 
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/  [EMAIL PROTECTED]




Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Alban Hertroys

Uwe C. Schroeder wrote:

in the where clause use something like (requires the earthdistance contrib to 
be installed):

geo_distance(point([origin longitude], [origin latitude]), point([target 
longitude column], [target latitude column]))::int <= 50
 


I don't suppose geo_distance really returns a number in miles, does it?




Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Oleg Bartunov

How big is your data ? There are rather sophisticated and
very effective methods in astronomy. For example,
http://www.sai.msu.su/~megera/oddmuse/index.cgi/SkyPixelization,
http://www.sai.msu.su/~megera/oddmuse/index.cgi/pg_sphere


Oleg
On Mon, 27 Jun 2005, Janning Vygen wrote:


On Monday, 27 June 2005 01:40, CSN wrote:

If I have a table of items with latitude and longitude
coordinates, is it possible to find all other items
that are within, say, 50 miles of an item, using the
geometric functions
(http://www.postgresql.org/docs/8.0/interactive/functions-geometry.html)?
If so, how?


I did it without any special features or datatypes some time ago. Feel free
to modify it and use it for your own purposes. It should give you an idea of
how to do it.

   SELECT
 c1.zip,
 c2.zip,
 6378.388 *
 acos(
   sin(radians(c1.latitude)) * sin(radians(c2.latitude))
   + cos(radians(c1.latitude)) * cos(radians(c2.latitude))
   * cos(radians(c1.longitude - c2.longitude))
 ) AS distance
   FROM
 coordinates AS c1
 CROSS JOIN coordinates AS c2

I had some problems with the calculation inside acos() sometimes being greater
than 1, which mathematically should not occur; it is floating-point rounding.
If you hit the same problem, clamp the whole acos() argument:
   CASE WHEN expr > 1 THEN 1 WHEN expr < -1 THEN -1 ELSE expr END
where expr is the sin/cos expression above.

kind regards,
janning




Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



[GENERAL] vacuum error

2005-06-27 Thread Ulrich Wisser

Hi,

after months of flawless operation I get the following error when 
running VACUUM ANALYZE


vacuumdb: vacuuming database "CLIX2"
vacuumdb: vacuuming of database "CLIX2" failed: ERROR:  left link 
changed unexpectedly


System: Fedora Core 2, 2.6.10-1.771_FC2
Postgres 7.4.2

What does that mean? Can it be fixed? How?

TIA

Ulrich


--
Ulrich Wisser  / System Developer

RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden
Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46)86789769

http://www.relevanttraffic.com

---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Uwe C. Schroeder

In the WHERE clause, use something like this (requires the earthdistance 
contrib module to be installed):

geo_distance(point([origin longitude], [origin latitude]), point([target 
longitude column], [target latitude column]))::int <= 50


On Sunday 26 June 2005 04:40 pm, CSN wrote:
> If I have a table of items with latitude and longitude
> coordinates, is it possible to find all other items
> that are within, say, 50 miles of an item, using the
> geometric functions
> (http://www.postgresql.org/docs/8.0/interactive/functions-geometry.html)?
> If so, how?
>
> Thanks,
> CSN
>

-- 
UC

--
Open Source Solutions 4U, LLC   2570 Fleetwood Drive
Phone:  +1 650 872 2425 San Bruno, CA 94066
Cell:   +1 650 302 2405 United States
Fax:+1 650 872 2417



Re: [GENERAL] Finding points within 50 miles

2005-06-27 Thread Janning Vygen
On Monday, 27 June 2005 01:40, CSN wrote:
> If I have a table of items with latitude and longitude
> coordinates, is it possible to find all other items
> that are within, say, 50 miles of an item, using the
> geometric functions
> (http://www.postgresql.org/docs/8.0/interactive/functions-geometry.html)?
> If so, how?

I did it without any special features or datatypes some time ago. Feel free 
to modify it and use it for your own purposes. It should give you an idea of 
how to do it.

SELECT
    c1.zip,
    c2.zip,
    6378.388 *
    acos(
        sin(radians(c1.latitude)) * sin(radians(c2.latitude))
        + cos(radians(c1.latitude)) * cos(radians(c2.latitude))
        * cos(radians(c1.longitude - c2.longitude))
    ) AS distance
FROM
    coordinates AS c1
    CROSS JOIN coordinates AS c2;

I had some problems with the calculation inside acos() sometimes being greater 
than 1, which mathematically should not occur; it is floating-point rounding. 
If you hit the same problem, clamp the whole acos() argument:
   CASE WHEN expr > 1 THEN 1 WHEN expr < -1 THEN -1 ELSE expr END 
where expr is the sin/cos expression above.

kind regards,
janning
