Re: [postgis-users] Missing GEO data

2011-12-20 Thread Dan Putler

Hi Mikal,

I assume you are talking about results you are getting from the PostGIS
TIGER geocoder. The US Census Bureau does not provide address points,
which is what would be needed for "rooftop" geocoding, and under Title
13 of the US Code it legally can't, for privacy reasons. What the US
Census Bureau does provide is street segments with address range
information as part of its TIGER/Line product. However, this address
range information has been obfuscated to a certain extent to be
compliant with Title 13. What this means is that the geo-positioning
will be a bit off for most addresses, will always be along the street
and not on the rooftop, and will purposely have some missing addresses.
If you truly need rooftop geolocations (which are needed for emergency
response, but few other applications), you will need to use something
other than the TIGER data. If you want them for a small area, say a county
or the Borough or City of Anchorage, you can consider building an address
point data set from a local parcel layer along with the parcel situs
addresses obtained from the relevant local government agencies (likely
both the GIS group and the Assessor's Office). Address point data can
also be obtained from commercial vendors, such as NAVTEQ.
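To see the street-segment interpolation (and its quality rating) directly, one can call the TIGER geocoder's geocode() function; this sketch assumes the tiger geocoder schema and TIGER data are loaded:

```sql
-- Each candidate comes back with a rating (0 = best); segment-based
-- matches are interpolated along the street, never at the rooftop.
SELECT g.rating,
       pprint_addy(g.addy) AS matched_address,
       ST_X(g.geomout)     AS lon,
       ST_Y(g.geomout)     AS lat
FROM geocode('1301 W 100th Avenue, Anchorage, AK 99515', 5) AS g
ORDER BY g.rating;
```

A non-zero rating, like the trailing 9 and 11 in the output Mikal posted, signals an interpolated or partial match rather than an exact one.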


News reports indicate that the Census Bureau is willing to release the 
Master Address File to the public, but this would require Congress to 
amend Title 13 of the US Code.


Dan

On 12/20/2011 01:02 PM, Mikal Laster wrote:

Hello,

I apologize if someone has already posted this, but when trying to 
geocode some addresses:
('1301 W 100th Avenue Anchorage, AK 99515', '651 E 100th Avenue, 
Anchorage, AK 99515',

). I am getting street-level coding and not the rooftop.

("(,E,100th,Ave,,,Anchorage,AK,99515,t)",010120AD10083BC5AAC1BB62C095986725AD904E40,9)
("(,W,100th,Ave,,,Anchorage,AK,99515,t)",010120AD101EDB32E0ACBC62C04815C5ABAC904E40,9)
("(,E,100th,Ave,,,Anchorage,AK,99507,t)",010120AD1092CA147310B962C0B6476FB88F904E40,11)


Is there a known issue with not all of the valid addresses being 
inserted or even recorded by the Census? These addresses exist. Please 
advise.




___
postgis-users mailing list
postgis-users@postgis.refractions.net
http://postgis.refractions.net/mailman/listinfo/postgis-users


[postgis-users] Parcel polygons to address points

2011-12-12 Thread Dan Putler

Hi all,

This is more a cartography question than a PostGIS question, but this 
seems like the right list to ask on. I've got parcel data from several 
counties that are in US state plane feet projections and I want to 
create address points that are in NAD83 geographic coordinates. There 
are three possible answers about how to proceed: (A) get the centroids 
from the US state plane feet parcels, and then re-project the centroids 
to NAD83 geographic coordinates; (B) re-project the parcels to the 
NAD83 geographic projection and then get the centroids of the 
re-projected parcels; or (C) pick one since the locational differences 
between the two approaches will amount to rounding error. My best guess 
is that (C) is the correct answer, but I want to use the approach in (A) 
for reasons of computational efficiency.
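In PostGIS terms the two orderings look like this (the parcels table name is an assumption; substitute your county's state plane SRID for the source projection):

```sql
-- (A) centroid first in state plane feet, then re-project to NAD83 lon/lat
SELECT ST_Transform(ST_Centroid(geom), 4269) AS addr_pt FROM parcels;

-- (B) re-project the parcel first, then take the centroid
SELECT ST_Centroid(ST_Transform(geom, 4269)) AS addr_pt FROM parcels;
```

One side note: for irregular (concave) parcels, ST_PointOnSurface may be preferable to ST_Centroid, since a centroid can fall outside the polygon entirely.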


Dan


Re: [postgis-users] Tigerdata for AZ, AS and VI

2011-12-12 Thread Dan Putler
To complete Steve's list, the Northern Mariana Islands (abbreviation MP, 
FIPS 69) also do not have "addr" files.


Dan

On 12/12/2011 06:50 AM, Stephen Woodbridge wrote:

On 12/12/2011 9:18 AM, Ravi ada wrote:

Hello All,

Has anyone experienced loading TIGER data into a PostGIS database for
Arizona, American Samoa and the Virgin Islands? I am getting “*addr.dbf” cannot
find errors. All the other states are loaded fine. I tried to download
the shape files again thinking that they might have been corrupted
during the transmission, but even after that I am getting the same error.

Any ideas?

My download of Tiger has all the *addr* files for Arizona and I believe
I have accessed them all without a problem.


In general, the *addr* files are optional, and there are none for Guam,
American Samoa and Virgin Islands.

Typically if the county or county equivalent does not have roads with
address ranges in it, then it will not have any *addr* files. So it is
possible that a county in Arizona in say the desert might not have any
address ranges and therefore not have that file, but looking at the list
of counties in Arizona it looks like they all have those files.

-Steve W


Re: [postgis-users] Tiger Line 2010 - Edges

2011-10-18 Thread Dan Putler

Hi Rene,

The edges also include TFIDL and TFIDR fields. These are the foreign 
keys that identify the topological faces on the left- and right-side of 
an edge. If you combine this with the attribute data of the TIGER FACE 
layers (the topological faces), you can then determine the "place" 
(city, town, or Census Designated place) FIPS code associated with the 
topological face that is bounded by a road edge. To get the name of the 
"place", you then need to use the TIGER PLACE layers. However, many road 
segments don't have road segments that fall into a "PLACE", so you need 
to devise another strategy to deal with them.
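With the standard TIGER loader table names (an assumption; adjust for state-prefixed tables like ca_edges, and note that some vintages suffix the FIPS columns, e.g. placefp10), the chain Dan describes looks roughly like:

```sql
-- Resolve the place name for the face on the right-hand side of each
-- road edge; repeat the join on e.tfidl for the left-hand side.
SELECT e.fullname, p.name AS place_name
FROM edges e
JOIN faces f ON f.tfid = e.tfidr
JOIN place p ON p.statefp = f.statefp
            AND p.placefp = f.placefp;
```

Faces outside any incorporated place or CDP carry a NULL placefp, which is exactly the "devise another strategy" case.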


All in all, working with TIGER data is not straightforward.

Dan

On 10/18/2011 09:24 AM, René Fournier wrote:

Having imported the Edges shape files, I'm able to quickly find the closest 
street to a given latlng point (reverse-geocode). From this row, I get the 
street name and house number ranges, and state (FIPS code) -- but not the city 
name. Any suggestions on the best way to find the town/city?

…Rene





Re: [postgis-users] Distances off in the Southern US

2011-08-15 Thread Dan Putler

Hi Mike and Nicolas,

The fact that Mike's two calculations resulted in the same value to 12 
decimal places is more than a little fishy (note that both of the queries 
he posted select the same ids, 26251 and 67, which would explain the 
identical results). I seem to remember a similar issue coming up on this 
list some time ago (roughly a year ago, if my faulty memory serves). The 
difference between southern and northern cities in the US just shouldn't 
be an issue if the data was read in correctly.


Dan

On 08/15/2011 06:18 AM, Nicolas Ribot wrote:

On 15 August 2011 12:50, Mike Hostetler  wrote:

Hello,
I'm somewhat new to GIS and I have a problem that I thought appeared to be
simply using a wrong projection or datum, but it seems to be a bit more
subtle than that.
I have a table of cities in the US and I'm trying to find distances between
them. When I use a city that is in the northern US, it works fine.  When I
try to find the distance between two cities in the Southern US, the distance
becomes way off.
I setup a Geometry in my cities table and populated it like the following:
select AddGeometryColumn('cities','geom',32661,'POINT',2);
UPDATE cities SET  geom=transform(setsrid(makepoint(longitude,
latitude),4269), 32661)
(I find the latitude and longitude from the Yahoo Geocode service)
A distance calc from McHenry, IL to Dallas, TX is calculated as:
select distance( (select geom from cities where id=26251), (select geom from
cities where id=67) )*0.000621371192 as miles;
  miles
--
  996.717850542391
(Google Maps reads as 972, off by 25 miles or off around 4%)

But Birmingham, AL, to Miami, FL is calculated as:
leader=# select distance( (select geom from cities where id=26251), (select
geom from cities where id=67) )*0.000621371192 as miles;
   miles
--
  996.717850542391
(Google Maps reads as 767, off by 120 miles, or 13%).
I can handle a little error, as long as it's somewhat small (<5%).  But this
is way off.
Again, it smells to me like a datum or projection issue, but I'm not
sure how to find the sweet spot to be accurate everywhere.
Your input is appreciated.


Hi Mike,

Some remarks:
• Are you sure Yahoo! Geocoder Service returns degree expressed on
4269 coordinate system ? I read it is 4326.
• How did you get the distances with Google Maps ? Did you use travel
directions ? If so, the 972 miles is by driving on roads, not by
flying the shortest distance (great circle).

The following service:
http://www.geobytes.com/CityDistanceTool.htm?loadpage
gave me a direct distance of 808 miles or 1300km for McHenry - Dallas
and a distance of 658 miles or 1059 km for Birmingham - Miami.

Then, by running these queries the distance between cities seems to be good:

select foo.city, bar.city, st_distance(foo.geom,
bar.geom)*0.000621371192 as miles
from
(select 'McHenry' as city, 'IL' as state, 'POINT(-88.267314
42.326215)'::geography as geom) as foo,
(select 'Dallas' as city, 'TX' as state, 'POINT(-96.795404
32.778155)'::geography as geom) as bar;

city   |  city  |  miles
-++--
  McHenry | Dallas | 807.029700174124

select foo.city, bar.city, st_distance(foo.geom,
bar.geom)*0.000621371192 as miles
from
(select 'Birmingham' as city, 'AL' as state, 'POINT(-86.811504
33.520295)'::geography as geom) as foo,
(select 'Miami' as city, 'FL' as state, 'POINT(-80.237419
25.728985)'::geography as geom) as bar

city| city  |  miles
+---+--
  Birmingham | Miami | 666.318599210394

The slight differences between CityDistanceTool and Yahoo! services
come from the cities coordinates: the two services do not position
cities at the same coordinates.

Also please note I'm using the new GEOGRAPHY Postgis type that allows
direct distance computation: no need to transform data back to a
planar coordinate system.
If you use geometry type with long/lat coordinates, you may have a
look at st_distanceSphere and st_distanceSpheroid.
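If you keep geometry columns in lon/lat rather than casting to geography, a sketch of the spherical-distance route (the function was spelled ST_Distance_Sphere in PostGIS of that era; later versions rename it ST_DistanceSphere):

```sql
-- Great-circle distance in meters between two lon/lat points,
-- converted to miles; coordinates are those quoted above.
SELECT ST_Distance_Sphere(
         ST_SetSRID(ST_MakePoint(-88.267314, 42.326215), 4326),  -- McHenry, IL
         ST_SetSRID(ST_MakePoint(-96.795404, 32.778155), 4326)   -- Dallas, TX
       ) * 0.000621371192 AS miles;
```

The spherical result differs slightly from the geography (spheroidal) figure above, since it ignores the Earth's flattening.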

Nicolas


Re: [postgis-users] Newbie Geocoding Error

2011-07-10 Thread Dan Putler

Hi Steve,

Thanks for the clarification. Are you getting the census place name via 
a TLID-to-TFID translation?


Dan

On 07/10/2011 12:55 PM, Stephen Woodbridge wrote:

Dan,

Thanks for pointing out that I augmented the Tiger data. But it should
be noted that I only added the preferred postal name for the zipcode as
an alias place name. So if the zipcode on the record is bad then I add a
potentially bad alias place name. I do join the tiger records and look
up the census place, county sub-division, and county names for each
street record so PAGC can fall back on those if the zip is bad.

-Steve

On 7/10/2011 3:25 PM, Dan Putler wrote:

Hi,

The zip code is just wrong; 94115 is located in San Francisco. PAGC
can make a good probabilistic match on the address largely because Steve's
PAGC service is working with an augmented TIGER database that has
both the state and place appended to the edges, which PAGC can work with
for what we call the "macro" address component. With just the "stock"
TIGER edges, the only macro address component available is the five
digit zipcode, and if that is wrong, the PostGIS TIGER geocoder
struggles. PAGC can actually handle the address string 477 Camino del
Rio South, 94115, and get the correct address as the most likely
candidate, but this is likely to be luck as much as anything else.

Dan

On 07/10/2011 12:05 PM, Paragon Corporation wrote:

Mike,


1. the upgrade_geocoder script ran with most of the sql returning

'already exists' errors. Do I need to modify the script to drop before
recreating?
No you should be fine. The already exists and already exists skipping
issue can be ignored. I didn't want to destroy some of these
structures if they existed because they could be tied to data.
That is why I don't have the whole upgrade script in a transaction
because it would fail the whole thing when it doesn't need to.
Unfortunately PostgreSQL doesn't yet support CREATE IF NOT EXISTS
for a lot of things, like the new columns I needed to add to lookup tables, etc.


2. My original error went away.
So, now onto another address that does not geocode. The results

don't even include the street in the query.

477 Camino del Rio South, San Diego, CA 94115
SELECT * from normalize_address returns
address | predirabbrev | streetname     | streettypeabbrev | postdirabbrev | internal | location  | stateabbrev | zip   | parsed
477     |              | Camino del Rio |                  | S             |          | San Diego | CA          | 94115 | true
I don't have enough knowledge of postGIS right now to know if it's a

bug or operator error.
As Steve mentioned in another post, it could be a data issue since the
normalization looks right. I looked at the results
for California and I see what you mean that the street appears nowhere
in results.
If I look at my underlying Tiger edges and featnames for California.
ca_edges, ca_featnames
There is a Camino del Rio Ct, but it has its zip listed as 93308, which is
nowhere near 94115 as far as numeric distance goes.
If I look at the street ranges in ca_addr for that tlid, it gives me
ranges from 2400 - 3199, again nowhere near what it should be. So
your address
fails the only Camino del Rio match it could possibly match to in a
big way.
In fact, when I look through all the street names in ca_edges with zip
94115, none start with Camino anything.
So even taking off the zip doesn't help. It does seem to be a data issue.
-- Steve,
I don't think it's the zip issue per se, because I do have that zip
listed in my ca_zip_state and, as I recall, I don't think I have
my loader generate the zips from the zcta5 file, since that would be
inaccurate since zips aren't really polygons and also aren't updated
as frequently as you stated.
It could be that this street is known by another name and Tiger doesn't
have this particular name listed in its featnames aliases, or it somehow
missed this place entirely.
Leo says his friend works around there :) so I guess it's an important
place to miss.
Hope that helps,
Regina and Leo
http://www.postgis.us

Message: 10
Date: Sat, 9 Jul 2011 14:23:09 -0400
From: "Paragon Corporation" <l...@pcorp.us>
Subject: Re: [postgis-users] Newbie Geocoding Error
To: "'PostGIS Users Discussion'"
<postgis-users@postgis.refractions.net>
Message-ID:<20AE3B391EAA4F1A8CBE0347002383E2@J>
Content-Type: text/plain; charset="us-ascii"

Mike,

Which version are you running with? For the newer ones -- if you look at
the normalize_address function , you should see a stamp in the
beginning of
the code that has

--$Id: normalize_address.sql 7616 2011-07-07 12:41:13Z robe $-

That is the latest version stamp. Really old versions don't even have a
stamp. If you are running something older than a week or 2 ago, that is
probably why you are having these issues.
The easiest way to upgrade to the latest is:
1) Download the PostGIS 2.0 tar ball from here --
http://www.postgis.org/download/ (the tiger_geocoder is

Re: [postgis-users] Newbie Geocoding Error

2011-07-10 Thread Dan Putler

Hi,

The zip code is just wrong; 94115 is located in San Francisco. PAGC
can make a good probabilistic match on the address largely because Steve's
PAGC service is working with an augmented TIGER database that has
both the state and place appended to the edges, which PAGC can work with
for what we call the "macro" address component. With just the "stock"
TIGER edges, the only macro address component available is the five
digit zipcode, and if that is wrong, the PostGIS TIGER geocoder
struggles. PAGC can actually handle the address string 477 Camino del
Rio South, 94115, and get the correct address as the most likely
candidate, but this is likely to be luck as much as anything else.


Dan

On 07/10/2011 12:05 PM, Paragon Corporation wrote:

 Mike,

> 1. the upgrade_geocoder script ran with most of the sql returning 
'already exists' errors.  Do I need to modify the script to drop 
before recreating?
No you should be fine.  The already exists and already exists 
skipping issue can be ignored.  I didn't want to destroy some of these 
structures if they existed because they could be tied to data.
That is why I don't have the whole upgrade script in a transaction 
because it would fail the whole thing when it doesn't need to.  
Unfortunately PostgreSQL doesn't yet support CREATE IF NOT EXISTS
for a lot of things, like the new columns I needed to add to lookup tables, etc.


> 2. My original error went away.

> So, now onto another address that does not geocode.  The results 
don't even include the street in the query.

> 477 Camino del Rio South, San Diego, CA 94115

> SELECT * from normalize_address returns
> address | predirabbrev | streetname     | streettypeabbrev | postdirabbrev | internal | location  | stateabbrev | zip   | parsed
> 477     |              | Camino del Rio |                  | S             |          | San Diego | CA          | 94115 | true


> I don't have enough knowledge of postGIS right now to know if it's a 
bug or operator error.
 As Steve mentioned in another post, it could be a data issue since 
the normalization looks right.  I looked at the results
for California and I see what you mean that the street appears nowhere 
in results.
If I look at my underlying Tiger edges and featnames for California.  
ca_edges, ca_featnames
There is a Camino del Rio Ct, but it has its zip listed as 93308, which
is nowhere near 94115 as far as numeric distance goes.
If I look at the street ranges in ca_addr for that tlid, it gives me
ranges from 2400 - 3199, again nowhere near what it should be. So
your address
fails the only Camino del Rio match it could possibly match to in a 
big way.
In fact, when I look through all the street names in ca_edges with zip 
94115, none start with Camino anything.

So even taking off the zip doesn't help.  It does seem to be a data issue.
-- Steve,
I don't think it's the zip issue per se, because I do have that zip
listed in my ca_zip_state and, as I recall, I don't think I have
my loader generate the zips from the zcta5 file, since that would be 
inaccurate since zips aren't really polygons and also aren't updated

as frequently as you stated.
It could be that this street is known by another name and Tiger doesn't
have this particular name listed in its featnames aliases, or it somehow
missed this place entirely.
Leo says his friend works around there :) so I guess it's an important
place to miss.

Hope that helps,
Regina and Leo
http://www.postgis.us

Message: 10
Date: Sat, 9 Jul 2011 14:23:09 -0400
From: "Paragon Corporation" <l...@pcorp.us>
Subject: Re: [postgis-users] Newbie Geocoding Error
To: "'PostGIS Users Discussion'"
<postgis-users@postgis.refractions.net>

Message-ID: <20AE3B391EAA4F1A8CBE0347002383E2@J>
Content-Type: text/plain; charset="us-ascii"

Mike,

Which version are you running with?  For the newer ones -- if you look at
the normalize_address function , you should see a stamp in the 
beginning of

the code that has

--$Id: normalize_address.sql 7616 2011-07-07 12:41:13Z robe $-

That is the latest version stamp. Really old versions don't even have a
stamp.  If you are running something older than a week or 2 ago, that is
probably why you are having these issues.
The easiest way to upgrade to the latest is:
1) Download the PostGIS 2.0  tar ball from here --
http://www.postgis.org/download/  (the tiger_geocoder is in
extras/tiger_geocoder/tiger_2010)
-- (This new version requires PostGIS 1.5+ and PostgreSQL 8.4+)

2) Edit the  upgrade_geocoder.sh file with your postgres settings and then
run it.

3) Run the Missing_indexes_Generate_Script() to get the commands to build
indexes you may be missing.

http://www.postgis.org/documentation/manual-svn/Missing_Indexes_Generate_Scr
ipt.html
and then execute that generated script


--
I think this is a bug I might have fixed.  When I run normalize on this I
don't get an error, and yours seems to be breaking in the normalize step.

When I run
SELECT * FROM norm

Re: [postgis-users] Recommendations with GRASS v.generalize() on Tiger Data

2011-06-29 Thread Dan Putler

Hi Chris,

Your question on QGIS and GRASS was posted to the PostGIS user list. 
While you might get a response, this seems like a question for the QGIS 
user list, since it doesn't actually involve PostGIS.


Dan

On 06/29/2011 03:55 PM, Christian Guirreri wrote:
Narrowed down some tests to using the "Hermite" algorithm, but I'm 
having some bizarre issues with it.


In the attached gif of California counties from Tiger data, from left 
to right, I have the following tolerance values:

 - original
 - 1.0
 - 0.08
 - 0.01
 - 0.1

Why do counties disappear entirely as I decrease the tolerance?

Setup is QuantumGIS with GRASS. In the Grass Tools I choose the 
v.generalize function. I choose Boundary as the feature type (though 
I've tried checking others and it doesn't seem to change anything). 
Everything else is default, except for tolerance as noted above.


When I originally tested this on only Arkansas and Mississippi, I got 
really nice results. I then tried it on the entire US and had the 
missing counties problem. So I tried only California, and still have 
the same issue.


I've tried other algorithms, but this has so far given me what I want. 
Any thoughts?

 - Chris


On Wed, Jun 29, 2011 at 5:10 PM, Christian Guirreri 
<christ...@guirreri.com> wrote:


I'm currently going through the GRASS v.generalize() function's
various parameters in QuantumGIS. There's so many options, I'm not
entirely sure what's best. Has anyone tried this on Tiger 2010
counties or district data? Any particular recommendations?

What's most important to me, as mentioned in a previous thread, is
that there are no gaps between counties/districts in a similar
fashion to MapShaper.

Thanks,
 - Chris






Re: [postgis-users] Convert between SRID units and meters

2011-05-20 Thread Dan Putler

Hi Peter,

Here is a link to an OSGeo workshop that should be a helpful primer on the 
issue in general and provides a solution to your problem through the use 
of PostGIS's geography type: 
http://workshops.opengeo.org/postgis-intro/geography.html


If you are dealing with a fairly well-defined area, say the West Coast of 
the US and Canada, then you might consider reprojecting your data out of 
NAD83 (North American Datum of 1983) geographic (lon/lat) coordinates 
(EPSG: 4269) into Euclidean coordinates (such as NAD83 UTM 10N), and 
then you can use the standard st_area tool and the result will be in square meters.
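Both routes in a sketch (the my_table name is an assumption; EPSG:26910 is NAD83 / UTM zone 10N):

```sql
-- Geography route: area computed on the spheroid, in square meters,
-- with no explicit reprojection needed
SELECT ST_Area(geom::geography) FROM my_table;

-- Planar route: re-project 4269 lon/lat into a metric CRS first
SELECT ST_Area(ST_Transform(geom, 26910)) FROM my_table;
```

The planar route is faster but only accurate within the chosen projection's zone; the geography route works anywhere.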


Dan

On 05/20/2011 11:57 AM, Peter Hsu wrote:

I have a simple question.  I've loaded up shape files from the tigerline US 
census bureau.  Everything works fine, but I'm trying to find the area of a 
geometry in meters.  However, I'm only able to get the area in SRID units.

How can I convert from SRID units to meters?  Is this a linear conversion?  Or 
is it dependent on the latitude of the area in question?

The SRID for the tigerline files is 4269

Peter


Re: [postgis-users] Geocoder (from extras)

2011-05-07 Thread Dan Putler

Hi Johnathan,

Yes it is. The web site is in drastic need of updating, but development is 
ongoing. Recent development has focused on expanding the set of data 
stores. As Steve indicates, we have added SQLite, and are working on 
including Postgres.


Dan

On 05/07/2011 04:29 PM, Johnathan Leppert wrote:
Do you know what the status of the PAGC project is? Is it still under 
active development? The last post was in 2008.


On Sat, May 7, 2011 at 5:00 PM, Stephen Woodbridge 
<wood...@swoodbridge.com> wrote:


Stephen,

In a geocoder I wrote in 2000, I mapped city, state and/or zipcode
into a list of counties, then only searched those counties. This
greatly reduced the 3300 counties in Tiger to a small handful that
needed to be searched. The FIPS-4 document has a mapping of all
place names to counties. Zipcodes are easier to handle because you
can just index the records with zipcodes but if you wanted to
widen the search, then map the zipcode to a place name and then to
counties.

My first thought was to use inherited tables where each state or
state equivalent is in a separate table and filter the tables by
the state name or abbreviation. It sounds like you might be
already doing something like this.

I think trigrams would provide some benefits. I have used double
metaphone, but only used the first code for fuzzy searches because
this is a better phonetic search key than soundex, in my opinion,
because it matches things that sound alike even if they do not
start with the same letter. Since I used it as a key I only used
the first 4 characters of the metaphone code.
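The trigram idea mentioned above can be sketched with PostgreSQL's pg_trgm extension (the street_names table and column are assumptions for illustration):

```sql
-- pg_trgm ships with PostgreSQL; CREATE EXTENSION works from 9.1 on
CREATE EXTENSION pg_trgm;

-- Rank candidate street names by trigram similarity to the input;
-- the % operator filters by the extension's similarity threshold
SELECT name, similarity(name, 'Camino del Rio') AS sim
FROM street_names
WHERE name % 'Camino del Rio'
ORDER BY sim DESC
LIMIT 5;
```

A GIN or GiST index using gin_trgm_ops on the name column lets the % filter use the index instead of scanning every row.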

I wrote my own tiger geocoder in C and have since abandoned that
in favor of PAGC. PAGC has just recently refactored the code to
support SQLite as a datastore and has had some work done to
support Postgresql in addition to the previous support for
Berkeley DB. One idea I had was that, with appropriate support
for database backing stores, the PAGC library might be
wrapped in a stored procedure, which would then provide a
high-quality and high-performance geocoder in the database.

Here is an instance that I created using SQLite as the backing
store and Tiger 2010 data:

http://imaptools.com:8080/geocode/

The single line parsing is broken at the moment, I have a ticket
to look into that and hope to have it resolved shortly.

PAGC project can be found here:

http://www.pagcgeo.org/

Regards,
 -Steve



On 5/6/2011 10:36 PM, Stephen Frost wrote:

* Johnathan Leppert (johnathan.lepp...@gmail.com) wrote:

Ok, I have it working, but it's very very slow. It takes
about two seconds
to geocode a single record. This isn't very realistic for
large datasets.
Anyone have any ideas on possible optimization? It appears
to have created
proper indexes on all the tables.


Yeah, there's a few things I've been working on.  For one, a
lot of the
time is spent planning that massive query at the heart of the
geocoding,
when it really doesn't need to be re-planned for every query.
 It's done
the way it is currently because the tables are partitioned up
by state.
I've got someone working on simply splitting that query up to
be 53 (or
however many...) queries, one for each state/territory, which
will allow
them to be planned once and then the plans re-used inside the same
session.  That should improve things.

I've also been looking into using trigrams for things instead
of the
current indexes..  I think they're perform better with regard
to speed,
just need to make sure it still returns good results, etc..

I'm very interested in anyone else working on this and any
suggestions
people have for how to improve it..  I've not been able to
work on it
much recently and while I've tried to delegate it to others,
it's a
pretty complex system which is, in particular, hard to test well..

   Thanks,

   Stephen







--
/Johnathan/
Software Architect & Developer
Columbus, Ohio
/Follow me on Twitter: @iamleppert /

Re: [postgis-users] 2010 Census

2011-03-08 Thread Dan Putler

Hi Eric,

I'm not aware of any tutorial. Moreover, the redistricting data is just 
now being released. However, I'm in the process of doing this at the 
moment. I'm actually starting with the 2009 5 year summary for the 
American Community Survey (or ACS) at the Block Group and Census Tract 
levels. Really off topic for this list, but the long form was not used 
in the 2010 Census. As a result, there are no housing, income, poverty, 
commuting time, and a host of other variables reported in the 2010 
Census. Instead, this information now is part of the ACS.


We can chat a bit more off list if you are interested.

Dan

On 03/08/2011 06:10 PM, Eric Aspengren wrote:
I know this has likely been covered on this list before. So, pardon if 
this is redundant, I just signed up. I'm looking for a good tutorial 
to get the new 2010 Census data for a state loaded into a PostGIS 
database, including all the recent Tiger files and demographic data.


Is there a good one out there?

--
Eric Aspengren
Data Manager
Planned Parenthood of the Heartland
(402) 478-VOTE
ericas...@gmail.com 




Re: [postgis-users] Deleting many points falling outside a large polygon (but within the bbox)

2010-07-22 Thread Dan Putler

Hi Tim,

A point-in-polygon test approach would seem to make sense. I did a quick 
Google search and ran into a posting on Paul Ramsey's Clever Elephant blog 
that seems to describe exactly your situation: 
http://blog.cleverelephant.ca/2008/09/point-in-polygon-shortcuts.html


A few more comments on this from Paul about implementation would likely 
be helpful, at least for me.
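For what it's worth, in later PostGIS versions (2.2+, well after this thread) the chop-up-the-big-polygon idea from that post can be expressed directly with ST_Subdivide; table names here are assumptions, and both tables are assumed to share an SRID:

```sql
-- Split the huge ROI polygon into small pieces so each point is tested
-- against a tiny ring instead of the full many-vertex boundary, letting
-- the spatial index do most of the work.
CREATE TABLE roi_pieces AS
  SELECT ST_Subdivide(geom, 128) AS geom FROM roi;
CREATE INDEX roi_pieces_gix ON roi_pieces USING gist (geom);

-- Cull every grid point that intersects no piece of the ROI
DELETE FROM grid_points p
WHERE NOT EXISTS (
  SELECT 1 FROM roi_pieces r WHERE ST_Intersects(p.geom, r.geom)
);
```

This is essentially the quad-tree emulation Tim describes, done once on the polygon side rather than recursively on the point side.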


Dan

On 07/22/2010 12:44 PM, Tim Keitt wrote:

I have a very large table of points; basically I pushed a raster
extending over much of the western hemisphere at 1km resolution into
the db as xyz (actually 2d points + z in another column). Not a crazy
as it sounds as you can do a lot of interesting things intersecting
these grid points with other geometries, and I have a lot of RAM and
disk space available. The region of interest however is much smaller
than the original raster and is defined by a large polygon (certain
continent margins) composed of many vertices. Deleting the points
outside the bounding box of the roi is reasonably quick, however there
are many points that remain within the bbox, but outside the polygon.
These take forever to cull as it appears the entire polygon has to be
searched for each point. It's looking like this will take at least
days, perhaps much more, on a fairly fast machine.

I'm curious if anyone has a reasonable solution. I was thinking of
dumping the roi polygon as points and then recursively subdividing the
bounding box, building quad-polygons on the way down. Those quads that
contain roi points are split into 4 while those that contain no points
remain. After a few recursion levels, you figure out which
quad-polygons are disjoint from the roi polygon and delete any
enclosed grid points. Points intersecting quads that are within the
roi polygon are not touched. Grid points within the remaining quads
would have to be searched one-by-one, but that should be a small
fraction of the total. Basically the idea is to emulate a kind of
quad-tree index in pure sql. Or alternatively I'll just come back in a
few weeks and see if the brute force query is done...

THK

   




Re: [postgis-users] 'Clustering' records in space and time

2010-07-16 Thread Dan Putler

Hi Will,

Yes, DBSCAN is a much better choice for what you want to do. However, 
how to include the temporal element becomes something of an issue with 
DBSCAN or PAM/CLARA. Then there are the issues of selecting an epsilon 
radius and min pts for the DBSCAN algorithm. At this point, given the 
nature of your questions, you are likely to find the R-sig-Geo mailing list 
a better choice than the PostGIS-User list. Here is the link to 
subscribe to that list: https://stat.ethz.ch/mailman/listinfo/r-sig-geo


Dan

On 07/16/2010 09:59 AM, William Furnass wrote:

Thanks Pierre and Dylan for your helpful replies.  FYI my dataset is
90K records describing events that occurred over 14 years over an area
of 50^2m.

The suggestion of using R's PAM and CLARA functions for clustering
led me to the 'dbscan' algorithm, which may well be a better choice
for my needs as one doesn't need to know in advance how many clusters
require identification.  "Clusters require a minimum no of points
(MinPts) within a maximum distance (eps) around one of its members
(the seed). Any point within eps around any point which satisfies the
seed condition is a cluster member (recursively). Some points may not
belong to any clusters."
(http://bm2.genes.nig.ac.jp/RGM2/R_current/library/fpc/man/dbscan.html).
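The rule quoted above can be sketched as a toy implementation (a naive O(n^2) Python illustration only; for a real 90K-row dataset the fpc package's dbscan in R is the sensible route):

```python
from math import hypot

def dbscan(points, eps, min_pts):
    """Toy DBSCAN: returns a label per point (0, 1, ... for clusters,
    -1 for noise).  O(n^2) distance scans -- illustrative only."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if hypot(px - qx, py - qy) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:        # not a seed: provisionally noise
            labels[i] = -1
            continue
        cluster += 1                   # i satisfies the seed condition
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:                   # grow the cluster recursively
            j = queue.pop()
            if labels[j] == -1:        # noise reachable from a seed joins
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts: # j is itself a seed: expand further
                queue.extend(j_nbrs)
    return labels
```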

Another approach I'm considering is to discretize 2D space and time
(three dimensions) into a cellular matrix, associate each event with a
cell and amalgamate all records that have the same cell reference.
This would of course fail to cluster 'close' events that happen to
fall either side of a cell divide but _might_ be easy to implement
using, say, PL/pgSQL.
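As a sketch of that discretize-and-amalgamate idea (in Python purely to show the bookkeeping; the `amalgamate` helper is hypothetical), each event is keyed by its space-time cell and each occupied cell is replaced by its centroid event:

```python
from collections import defaultdict

def amalgamate(events, cell, window):
    """Bin (x, y, t) events into cell x cell x window boxes and replace
    each occupied bin by its centroid event.  Events straddling a cell
    boundary end up in different bins -- the limitation noted above."""
    bins = defaultdict(list)
    for x, y, t in events:
        key = (int(x // cell), int(y // cell), int(t // window))
        bins[key].append((x, y, t))
    out = []
    for members in bins.values():
        n = len(members)
        out.append((sum(m[0] for m in members) / n,
                    sum(m[1] for m in members) / n,
                    sum(m[2] for m in members) / n))
    return out
```

Since two 'close' events across a boundary land in different bins, one common mitigation is to repeat the binning with the grid shifted by half a cell and merge the results.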

For reference it appears that a clustering function for PostGIS has
already been proposed:

http://opengeo.org/products/coredevelopment/postgis/bi-utilities/

Thanks again for pointing me towards PAM/CLARA.

Cheers,

Will

On 15 July 2010 21:33, Pierre Racine  wrote:
   

I would suggest you ask your question to the r-sig-geo mailing list. You will 
get a R solution. You can then get your PostGIS table from R using the gdal/ogr 
package or use PL/R in PostgreSQL.

Pierre

 

-Original Message-
From: Dylan Beaudette [mailto:debeaude...@ucdavis.edu]
Sent: 15 juillet 2010 16:02
To: Pierre Racine
Cc: PostGIS Users Discussion; w...@thearete.co.uk
Subject: Re: [postgis-users] 'Clustering' records in space and time

On Thursday 15 July 2010, Pierre Racine wrote:
   

What should happen when event A is at a distance n - epsilon from B, B
is at a distance n - epsilon from C, but A is at a distance 2*n - epsilon from
C? Should A and C be in the same cluster with B?

Pierre
 

Interesting. The choice of clustering algorithm would need to be based on the
questions the OP was trying to answer. Without much thought (warning!) I
pictured a 3D space (x, y, time) partitioned around medoids (PAM algorithm)
of data.

In this very simple case chunks of data in (x, y, time) space would be
collected based on their proximity. For this to work, space and time
coordinates would need to be standardized accordingly... For x and y, I think
that subtracting the mean and dividing by the standard deviation should do. I
am not sure about the standardization of time... maybe the same thing, but
applied to the number of seconds | minutes | hours | days elapsed since the
start of the experiment?
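That standardization can be sketched as follows (Python, with time expressed as seconds elapsed since the first event, which is one of the options floated above; helper names are illustrative):

```python
from statistics import mean, stdev

def standardize(values):
    """Z-score: subtract the mean, divide by the sample standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Events as (x, y, timestamp-in-seconds).  Standardizing each axis makes
# Euclidean distance in (x, y, t) weigh all three comparably before the
# coordinates are fed to PAM/CLARA or DBSCAN.
def standardize_events(events):
    xs, ys, ts = zip(*events)
    t0 = min(ts)
    elapsed = [t - t0 for t in ts]  # seconds since start of the experiment
    return list(zip(standardize(xs), standardize(ys), standardize(elapsed)))
```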

Dylan


   

-Original Message-
From: postgis-users-boun...@postgis.refractions.net [mailto:postgis-users-
boun...@postgis.refractions.net] On Behalf Of Dylan Beaudette
Sent: 15 juillet 2010 15:10
To: w...@thearete.co.uk; PostGIS Users Discussion
Subject: Re: [postgis-users] 'Clustering' records in space and time

Hi,

Can you give us some hints about your data?

1. how many records
2. temporal domain (i.e. 1 year?)
3. spatial domain (local, regional, continental?)

If you don't have too much data, you may be able to standardize them, and
apply an algorithm like PAM, or CLARA (see cluster package in R).

Cheers,
Dylan

On Thursday 15 July 2010, William Furnass wrote:
   

I have a PostGIS table of records describing events, so the
table has a timestamp attribute.  I wish to replace 'clusters' of
events that occur within an m-hour window and a spatial radius of n
with single events which have the mean timestamp and central position
of the cluster.  I understand that I can quantize my data spatially
using the St_SnapToGrid function but using this function alone I lose
some of the distinct events that occurred at the same point in space
but at very different times (it's my understanding that St_SnapToGrid
only allows one point to be stored at each node in the grid).  Also, I
am unsure as to how I could use St_SnapToGrid in such a way so as not
to relocate points that are unique within the aforementioned spatial
and temporal window boundaries.

Has anyone any suggestions as to how this can be achieved
programmatically using SQL (rather than a graphical tool)?  Should I
perhaps be looking

Re: [postgis-users] List pre-emption.

2010-05-18 Thread Dan Putler
You are not alone. Both this list and another I am on that is hosted by
Refractions Research will go through periods where message delivery is
substantially delayed.

Dan

On Tue, 2010-05-18 at 20:39 -0700, Ben Madin wrote:
> G'day all,
> 
> I have many problems, (and don't want to dwell on them) but I am wondering if 
> I am the only person who routinely receives list emails in the wrong order - 
> for instance, I have just replied to a request which arrived at 11:24 (only a 
> few minutes ago) only to discover that others have also replied, at 11:09, 
> 10:44 and 10:24.
> 
> Just being in the Southern Hemisphere doesn't explain this, nor being 
> Australian. Is it a gold membership thing to have your questions answered 
> before you submit them...where do I sign up? More seriously, I haven't 
> noticed this effect with other (albeit lower volume) lists that I subscribe 
> to?
> 
> cheers
> 
> Ben
> 
> 
-- 
Dan Putler
Sauder School of Business
University of British Columbia



Re: [postgis-users] TIGER geocoder with Census 2009 shapefiles

2010-03-02 Thread Dan Putler
Hi Kevin,

For non-disclosure reasons, the address range data has been
"fuzzed-up" (i.e., if the actual range is 901 to 957 Main St, they will
give the range as 901 to 961 Main St) so the match isn't exact. In
addition, if there are too few housing or business units on a road
segment, they do not include the address range for that segment, again
for non-disclosure reasons.

Dan Putler

On Tue, 2010-03-02 at 10:58 -0500, Kevin Galligan wrote:
> I actually bought an early access copy of the book.  I work in linux
> and have been playing around with different geocoders and the tiger
> files.  Most recently with a ruby geocoder, for no other reason than
> I'm trying to find one that is fairly complete and functional.
> 
> 
> Any idea how "production quality" this particular one is?  If it's
> fairly high, I'll probably put some time in to get it working on
> linux.  I have the full 2009 tiger dataset on an EC2 block drive,
> waiting to import into a different database.
> 
> 
> Right now I'm using zip+4 data to get a rough geocode, which is good
> enough for what we're doing, but it only gets 92% of our non-PO Box
> data.  From my experience with the tiger data, it only adds a couple
> percent at most above that, but the geocoders I've used have been
> pretty hacky, so it's possible that was the issue.  Also, some of them
> seem to not be concerned with stuff like matching "Main St" when
> you're looking for "Main Ln", which is pretty terrible.
> 
> 
> On the plus side, if there is major work going on with this geocoder
> (or any tiger geocoder), I have a huge national data volume that will
> help stress test the system.
> 
> 
> Recently I've been toying with USC's free geocoder project.  In some
> areas it actually gets about half of the data I previously could not,
> which is impressive.
> 
> 
> The really frustrating thing is, in general, the first 90% is
> cheap/free.  The next 3-4% is marginally expensive.  The rest is
> really pricey.
> 
> 
> Is there any idea how complete the tiger data is, and why there is
> this apparent lack of data in there?  I find it strange.  Some streets
> are just missing.  Stuff like that.
> 
> 
> Rambling.  Anyway, will take a look later.  Thoughts on the quality of
> the geocoder appreciated.
> 
> 
> -Kevin
> 
> On Fri, Feb 26, 2010 at 11:52 PM, Paragon Corporation 
> wrote:
> David,
> 
> As a matter of fact we've been working on that for chapter 10
> of our
> upcoming book and think we have it all working.  As a part of
> the example
> generation process for our chapter 10, we had to come up with
> a way to load
> the tables that works on both windows and Linux.
>  Unfortunately we haven't
> had a chance to test the Linux loading approach, but it is pretty
> much a parallel of the Windows approach.
> 
> To do so we started out with Steve's code, added some
> additional skeleton
> tables and a database function that generates a command line
> script for the
> respective OS.  Hopefully it all makes sense from the readme
> file we have
> packaged.
> 
> We also changed one of the functions because there was an
> error in it and
> revised slightly to work with Tiger 2009 data.  You can
> download our slightly
> hacked version of Steve's code from our chapter 10 page.
> 
> Steve -- if you are listening we are hoping to remerge your
> version with our
> loader part and bring back into the PostGIS distribution as
> part of PostGIS
> 1.5.1 or 2.0 release.
> 
> http://www.postgis.us/chapter_10
> 
> 
> Leo and Regina
> http://www.postgis.us/
> 
> 
> 
> -Original Message-
> From: postgis-users-boun...@postgis.refractions.net
> [mailto:postgis-users-boun...@postgis.refractions.net] On
> Behalf Of Dave
> Fuhry
> Sent: Friday, February 26, 2010 3:04 PM
> To: PostGIS Users Discussion
> Subject: [postgis-users] TIGER geocoder with Census 2009
> shapefiles
> 
> I'm trying to set up the TIGER geocoder from
> http://www.snowman.net/git/tiger_geocoder/ which is new and
> aims to work
> with the new TIGER shapefiles.  I'm trying with the 2009
> shapefiles from
> www2.census.gov/geo/tiger/TIGER2009

Re: [postgis-users] hi, kevin, I'll explain my initial motivation for clustering.

2010-02-10 Thread Dan Putler
Sorry forgot to paste in the link to connecting R with PostGIS:
http://wiki.intamap.org/index.php/PostGIS#Connecting_PostGIS_with_R

Dan

On Wed, 2010-02-10 at 14:09 -0800, Dan Putler wrote:
> I have to agree with Kevin. My advice would be to do the clustering in
> R. Here is a link on how to read a PostGIS table into R (either via ODBC
> or GDAL/OGR). I haven't done it, but my guess is that doing it via
> GDAL/OGR (via the rgdal package) will be less painful since it will read
> the data into an sp class object in R. Once the data is clustered you
> can get its "shape" via creating a polygon by taking the convex hull of
> the points.
> 
> Having done this type of thing a fair amount, I can tell you that
> K-Means is really a poor choice for this type of analysis (it is biased
> in two dimensional space to creating circular clusters). I would look at
> a clustering algorithm called DBSCAN for this type of application. In
> addition, you will want to look into cluster validation techniques to
> determine the appropriate clustering parameters (e.g., the number of
> clusters in K-Means or the epsilon radius used in DBSCAN).
> 
> Instead of reading the data into R, you can try using PL/R as David
> suggests, but I think this is not the way to go since the ability to
> determine the appropriate clustering parameters using cluster validation
> methods would be very cumbersome following this route.
> 
> Dan
> 
> On Wed, 2010-02-10 at 13:44 -0800, Kevin Neufeld wrote:
> > You want to extend the SQL language by adding the CLUSTERING keyword and 
> > then extend PostGIS to somehow implement the 
> > clustering?
> > 
> > Sorry, but I think this would be nothing less than an 
> > enormous/colossal/massive undertaking.
> > 
> > I'm totally guessing here, but at a very high level I think the steps might 
> > be
> > 
> > 1. Add the keyword CLUSTERING to PostgreSQL's Parser
> >   http://www.postgresql.org/docs/8.3/static/parser-stage.html
> >   (the Postgresql development lists should be able to help you out here)
> > 
> > 2. Somehow add an operator (or set of operators) that define CLUSTERING.  
> > As I understand it, ORDER BY uses the typical 
> > ordering operators (<, >, =, <=, etc.) to perform its operations.  
> > Similarly, GROUP BY uses the = operator.  PostGIS 
> > implements these operators as function calls that operate on the bounding 
> > box of geometries.  This ultimately allows you 
> > to issue "ORDER BY"- or "GROUP BY"-style queries against geometries.  I 
> > would think you would need to do something 
> > similar for your CLUSTERING idea.  Create some operator[s] that only the 
> > CLUSTERING keyword understands. Again, try the 
> > Postgresql development lists for help here.
> > 
> > 3. Dive into PostGIS source and implement the CLUSTERING operator[s] you 
> > defined in step 2, in which I assume you're going to 
> > implement something like k-means clustering (which by itself is not 
> > trivial).
> > 
> > 4. Polygonize your clustered points using PostGIS.  This is the easiest as 
> > it's a single sql query using the 
> > ST_ConvexHull function.
> > 
> > 
> > Sorry to burst your enthusiasm, but I think you'd be far better off to try 
> > to implement k-means (or the clustering 
> > algorithm of your choice) external to the database using your favorite 
> > language (Java, C, Python, ...).  Extract data 
> > from the database, cluster, save the results back in the database, and 
> > polygonize.
> > 
> > -- Kevin
> > 
> > On 2/10/2010 12:36 PM, sunpeng wrote:
> > >
> > >  hi,Kevin,thanks for your help.
> > >
> > >  Now, I'll explain my initial motivation.
> > >
> > >  Suppose we have a table with:
> > >  create table houses (
> > >     NAME VARCHAR(128) not null
> > >  );
> > >  SELECT AddGeometryColumn('houses', 'location', 4214, 'POINT', 2);
> > >
> > >  if we want to do clustering(in data mining environment) the houses
> > > on location, that is, clustering those house near as a cluster(or a
> > > group), and then calculate each cluster's shape, we can not use the
> > > following sql:
> > >
> > >  select ST_Boundary(*)
> > >  from houses
> > >  group by location
> > >
> > >  I would like to extend the postgresql or postgis to support the
> > > following sql:

Re: [postgis-users] hi, kevin, I'll explain my initial motivation for clustering.

2010-02-10 Thread Dan Putler
  How could I do it? Any detailed steps? Like, should I modify kwlist.h to
> > support the "CLUSTERING" keyword, and what are the following steps?
> >
> >  Thanks!
> >  peng
> >
> > 
> > ---
> >
> > we all know
> >
> > I'm not sure I follow.  Can you explain what exactly you want to do?
> >
> > The following query will collect points into clusters (multipoints),
> > clustered on a 100x100 grid.
> >
> > -- generate a sample random point dataset
> > CREATE TABLE points AS
> > SELECT ST_MakePoint(random()*1000, random()*1000) AS geom
> > FROM generate_series(1, 10);
> >
> > -- create point clusters
> > SELECT st_collect(geom)
> > FROM points
> > GROUP BY
> >round(st_x(geom)/100)*100,
> >round(st_y(geom)/100)*100;
> >
> >
> >     Kevin
> >
> >
> > On 2/4/2010 11:26 PM, sunpeng wrote:
> >  > I want to write "cluster by" instead of "group by" on geospatial
> > point
> >  > data,should I write the code at postgresql or postgis ?
> >  > thanks
> >  >
> >  > peng
> >
> >
> >
> >
-- 
Dan Putler
Sauder School of Business
University of British Columbia



RE: [postgis-users] OT: Any one know of a Census list that discusses the Tiger data

2008-06-20 Thread Dan Putler
Actually, there is one (very inactive) list that does deal with the
StatsCan Road Network Files. It was created to deal with the lack of
local geography identifiers in the StatsCan RNF (e.g., like the Zip Code
information given in TIGER data). The only identifier is (implicitly)
the province, since the files are released on a province by province
basis, but I digress.

The list is can_rnf, and is hosted by OSGEO.

Dan

On Fri, 2008-06-20 at 15:25 -0700, [EMAIL PROTECTED]
wrote:
> On Fri, 20 Jun 2008, Obe, Regina wrote:
> > I suppose we can have just one list to cover all national data formats. 
> > Would be interesting to compare across all the nations how each tracks
> > similar things.  We might be able to learn a lot from each other.
> 
> Agreed that it's convenient to have all national data formats on topic on 
> a single list; since one thing I would have wanted to ask questions on
> in the past is aligning data at the political boundaries between 
> countries.
> 
> > I don't think the list will be all that trafficked anyway if it was just
> > Tiger. I've been playing with Tiger and Canada's equivalent for a project
> > I'm working on so having just one list to subscribe to would be nice.
> 
> Ah. Curious what (and where) Canada's equivalent is?  I could have
> used something like that a few years back and didn't see one.
> 
-- 
Dan Putler
Sauder School of Business
University of British Columbia



Re: [postgis-users] TIGER/Line Shapefiles released

2008-03-31 Thread Dan Putler
True to an extent. However, I've pulled up fe_2007_06085_addr.dbf (Santa
Clara County, California), and have discovered that one TLID 123180546
has 1246 different right side address records, while TLID 122928561 has
88 right side address records. Cleaning things up so that you could use
a standard address geocoder (which would need the *most-inclusive*
address range) would be an interesting challenge.
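A hedged sketch of that clean-up step: collapsing the many per-TLID records down to one most-inclusive range per (TLID, side). The record layout here is illustrative, not the literal TIGER/Line `_addr` schema:

```python
def most_inclusive_ranges(records):
    """Collapse (tlid, side, from_hn, to_hn) records into a single
    most-inclusive (lowest from, highest to) range per (tlid, side).
    Field names are illustrative, not the actual TIGER/Line schema."""
    ranges = {}
    for tlid, side, from_hn, to_hn in records:
        lo, hi = min(from_hn, to_hn), max(from_hn, to_hn)
        key = (tlid, side)
        if key in ranges:
            cur_lo, cur_hi = ranges[key]
            ranges[key] = (min(cur_lo, lo), max(cur_hi, hi))
        else:
            ranges[key] = (lo, hi)
    return ranges
```

Taking the overall min and max this way also bridges any deliberate gaps between sub-ranges, so a careful geocoder might keep the individual sub-ranges alongside the merged one.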

On Mon, 2008-03-31 at 22:18 -0400, Stephen Frost wrote:
> Dan,
> 
> * Dan Putler ([EMAIL PROTECTED]) wrote:
> > Unfortunately, the data isn't what one would hope for. It appears that
> > the 2007 release will not include address range or zip code information,
> > but later releases (2008?) will. Here is the link to the right point in
> > the FAQ: http://www.census.gov/geo/www/tiger/faq.html#18
> 
> Please don't take offense, but I think you're smoking something pretty
> good..  That FAQ is about the *most-inclusive* address range
> information.  The address ranges are there (I'm looking at them right
> now...), they've just been normalized out of the actual shape files and
> are only in .dbf files (eg: fe_2007_01133_addr.dbf).  There can now also
> be more than one address range for a given edge, which is what they're
> talking about in the second paragraph of that FAQ.  Actually, that was
> true in the old TIGER/Line data, but it was in RT-6.
> 
> I'd encourage you to read the TIGER/Shapefiles relationship documentation
> found here:
> http://www.census.gov/geo/www/tiger/rel_file_desc.pdf
> and here:
> http://www.census.gov/geo/www/tiger/rel_file_desc.txt
> 
> I'm still working on importing the data, so perhaps I've missed
> something, but I don't think so..
> 
>   Thanks,
> 
>   Stephen
> 
> > On Mon, 2008-03-31 at 20:57 -0400, Stephen Frost wrote:
> > > * Peter Foley ([EMAIL PROTECTED]) wrote:
> > > > For those of you who have been waiting, the Census bureau finally 
> > > > released
> > > > the new TIGER/Line shapefiles.
> > > > The information page and download links are here:
> > > > http://www.census.gov/geo/www/tiger/tgrshp2007/tgrshp2007.html
> > > 
> > > Yup, *finally*.
> > > 
> > > > This download page is more wget-friendly:
> > > > http://www2.census.gov/geo/tiger/TIGER2007FE/
> > > 
> > > I think they may have also upgraded their pipe..  I got about 1.41MB/s
> > > (11 Mb/s) for the whole transfer.  It's about 22G all told.  I'll
> > > probably be trying to load it up into PG on one of our servers tomorrow.
> > > It was a bit over 4 hours for me to pull down off of their
> > > ftp2.census.gov ftp site.
> > > 
> > > lftp ftp2.census.gov:/geo/tiger/TIGER2007FE> mirror .
> > > Total: 3313 directories, 56534 files, 0 symlinks
> > > New: 56520 files, 0 symlinks
> > > 22363894773 bytes transferred in 15137 seconds (1.41M/s)
> > > 
> > >   Enjoy,
> > > 
> > >   Stephen
> > -- 
> > Dan Putler
> > Sauder School of Business
> > University of British Columbia
> > 
-- 
Dan Putler
Sauder School of Business
University of British Columbia



Re: [postgis-users] TIGER/Line Shapefiles released

2008-03-31 Thread Dan Putler
Unfortunately, the data isn't what one would hope for. It appears that
the 2007 release will not include address range or zip code information,
but later releases (2008?) will. Here is the link to the right point in
the FAQ: http://www.census.gov/geo/www/tiger/faq.html#18

Dan

On Mon, 2008-03-31 at 20:57 -0400, Stephen Frost wrote:
> * Peter Foley ([EMAIL PROTECTED]) wrote:
> > For those of you who have been waiting, the Census bureau finally released
> > the new TIGER/Line shapefiles.
> > The information page and download links are here:
> > http://www.census.gov/geo/www/tiger/tgrshp2007/tgrshp2007.html
> 
> Yup, *finally*.
> 
> > This download page is more wget-friendly:
> > http://www2.census.gov/geo/tiger/TIGER2007FE/
> 
> I think they may have also upgraded their pipe..  I got about 1.41MB/s
> (11 Mb/s) for the whole transfer.  It's about 22G all told.  I'll
> probably be trying to load it up into PG on one of our servers tomorrow.
> It was a bit over 4 hours for me to pull down off of their
> ftp2.census.gov ftp site.
> 
> lftp ftp2.census.gov:/geo/tiger/TIGER2007FE> mirror .
> Total: 3313 directories, 56534 files, 0 symlinks
> New: 56520 files, 0 symlinks
> 22363894773 bytes transferred in 15137 seconds (1.41M/s)
> 
>   Enjoy,
> 
>   Stephen
-- 
Dan Putler
Sauder School of Business
University of British Columbia



Re: [postgis-users] Thematic mapping with PostGIS

2008-02-09 Thread Dan Putler
Hi Jim,

QGIS (which is what I assume you mean by "Quantum") allows you to create
thematic maps using PostGIS layers. Go to the properties of a layer and
muck with the legend type. Alternatively, you can load PostGIS layers
into uDig (http://udig.refractions.net), and alter the style of a layer.

Dan

On Sat, 2008-02-09 at 19:08 -0500, Doug Foster wrote:
> I am new to PostGIS, and want a good Windows desktop mapping tool to
> view and thematically map boundary data from my PostGIS database.  I
> have Quantum, which is a nice viewer, but it doesn’t seem to do
> thematic mapping.  I am a heavy MapInfo and Maptitude user, but they
> don’t read PostGIS spatial boundaries.  I wish they would.
> 
>  
> 
> Is there a free/inexpensive tool to view and do some nice thematics?
> I would also like to have the “natural break” routine, which I use by
> default since it’s the best way to break up the categories.
> 
>  
> 
> I have been doing all my database work in SQL Server and then linking
> from MapInfo and linking with equivalent boundary files (census block
> groups and zip codes for all USA).  That’s for the birds when I’m
> running routines in SQL Server and want to view the results
> graphically on an interactive basis.  It’s very clumsy.  So
> PostgreSQL/PostGIS is a much better solution but I haven’t found a good
> way to view the results spatially.
> 
>  
> 
> Thanks…..
> 
>  
> 
> Doug 
> 
>  
> 
>  
> 
> 
-- 
Dan Putler
Sauder School of Business
University of British Columbia