Re: [GRASS-user] r.stream* add-ons for GRASS 7 missing?

2013-12-04 Thread William Kyngesburye
For matplotlib (my package) installation, see the readme - it says how to 
install pyparsing.  In a future update I plan to make this automatic.  I think 
Michael has a copy of pyparsing in his GRASS app, so that's why it isn't 
missing in GRASS 7.

On Dec 4, 2013, at 12:55 PM, Carlos Grohmann wrote:

> Hi
> 
> I'm trying to run r.basin in GRASS 7 (Michael Barton's package for OSX) but 
> some of the add-ons are missing:
> 
> r.stream.angle
> r.stream.extract
> 
> Also, these two modules want to install a new wxPython, giving an error 
> message that it couldn't find a suitable wx (version 2.8 or above). Can I 
> override this?
> 
> r.ipso
> r.wf
> 
> 
> BTW, in GRASS 6.4.3 (kyngchaos's package) I could install all add-ons, but 
> then I can't run matplotlib because it complains of missing pyparsing, 
> although I don't get that error in GRASS 7.
> 
> 
> best
> 
> Carlos

-
William Kyngesburye 
http://www.kyngchaos.com/

"We are at war with them. Neither in hatred nor revenge and with no particular 
pleasure I shall kill every ___ I can until the war is over. That is my duty."

"Don't you even hate 'em?"

"What good would it do if I did? If all the many millions of people of the 
allied nations devoted an entire year exclusively to hating the  it 
wouldn't kill one ___ nor shorten the war one day."

 "And it might give 'em all stomach ulcers."

- Tarzan, on war

___
grass-user mailing list
grass-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/grass-user


Re: [GRASS-user] DEM : Point cloud DEM fusion?

2013-12-04 Thread Valentin Wittich
Hi there, 

check r.in.xyz to convert your point cloud into a raster and r.mapcalc to make
your "fusion".

Regards Valentin



-

Geographical web design  - vsign.de 
--
View this message in context: 
http://osgeo-org.1560.x6.nabble.com/DEM-Point-cloud-DEM-fusion-tp5092716p5092816.html
Sent from the Grass - Users mailing list archive at Nabble.com.
___
grass-user mailing list
grass-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/grass-user


Re: [GRASS-user] Organizing spatial (time series) data for mixed GIS environments

2013-12-04 Thread Blumentrath, Stefan
Hi Sören,

First of all thank you very much for the excellent temporal framework! It is 
really great work!
Thank you also for your answers. They are already very helpful too!

I will test the solution with external GeoTIFFs.

Updates of the GeoTIFFs by external software are to be expected (possibly via 
cron jobs), so I have to think about a strategy for updating all downstream 
space-time datasets that depend on an updated file (decade, year, month, 
whatever...).
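
What I have in mind for such a cron-driven update step is roughly the
following sketch (paths and map names are placeholders, and it assumes the
GRASS maps are plain r.external links; not tested yet):

import grass.script as gs

# GeoTIFFs that the external software has rewritten since the last run
updated_tiffs = {
    "/data/climate/temperature_2013_11.tif": "temperature_2013_11",
}

for path, mapname in updated_tiffs.items():
    # Re-link the changed file; the GRASS map keeps its name, so the
    # space-time dataset it is registered in still refers to it
    gs.run_command("r.external", input=path, output=mapname, overwrite=True)

# Any derived space-time datasets (monthly/yearly aggregates, etc.) computed
# from these maps still hold the old values and would need to be recomputed
# by whatever t.rast.* workflow produced them in the first place.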

I'll report back after some first tests...

Best regards
Stefan


-Original Message-
From: Sören Gebbert [mailto:soerengebb...@googlemail.com] 
Sent: 4 December 2013 18:02
To: Blumentrath, Stefan
Cc: grass-user@lists.osgeo.org list
Subject: Re: [GRASS-user] Organizing spatial (time series) data for mixed GIS 
environments

Hi Stefan,

2013/12/3 Blumentrath, Stefan :
> Dear all,
>
>
>
> On our Ubuntu server we are about to reorganize our GIS data in order 
> to develop a more efficient and consistent solution for data storage 
> in a mixed GIS environment.
>
> By “mixed GIS environment” I mean that we have people working with 
> GRASS, QGIS, PostGIS but also many people using R and maybe the 
> largest fraction using ESRI products; furthermore we have people using 
> ENVI, ERDAS and some others. Only a few people (like me) actually work 
> directly on the server…
>
> Until now I stored “my” data mainly in GRASS (6/7) native format which 
> I was very happy with. But I  guess our ESRI- and PostGIS-people would 
> not accept that as a standard…
>
>
>
> However, especially for time series data we cannot have several copies 
> in different formats (tailor-made for each and every software).
>
>
>
> So I started thinking: what would be the most efficient and convenient 
> solution for storing a large amount of data (e.g. high resolution 
> raster and vector data with national extent plus time series data) in 
> a way that it is accessible for all (at least most) remote users (with 
> different GIS software). As I am very fond of the temporal framework 
> in GRASS 7 it would be a precondition that I can use these tools on 
> the data without unreasonable performance loss. Another precondition 
> would be that users at remote computers in our (MS Windows) network can have 
> access to the data.
>
>
>
> In general, four options come into my mind:
>
> a)  Stick to GRASS native format and have one copy in another format
>
> b)  Use the native formats the data come in (e.g. temperature and
> precipitation comes in zipped ascii-grid format)
>
> c)   Use PostGIS as a backend for data storage (raster / vector) (linked
> by r./v.external.*)
>
> d)  Use another GDAL/OGR format for data storage (raster / vector)
> (linked by r./v.external.*)
>
>
>
> My question(s) are:
>
> What solutions could you recommend or what solution did you choose?

I would suggest using r.external and uncompressed GeoTIFF files for raster 
data. But you have to make sure that external software does not modify these 
files, or, if it does, that the temporal framework is triggered to update 
dependent space-time raster datasets.
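
As an illustration only (the file pattern, dataset name, timestamps and
increment are invented, and module/parameter names follow the current GRASS 7
manuals), linking and registering such GeoTIFFs could look roughly like this:

import glob
import os
import grass.script as gs

# Link the GeoTIFFs instead of importing them
maps = []
for path in sorted(glob.glob("/data/precipitation/*.tif")):
    name = os.path.splitext(os.path.basename(path))[0]
    gs.run_command("r.external", input=path, output=name)
    maps.append(name)

# Create a space-time raster dataset and register the linked maps in it;
# the -i flag derives start/end intervals from the increment
gs.run_command("t.create", type="strds", output="precip_monthly",
               temporaltype="absolute", title="Monthly precipitation",
               description="Linked external GeoTIFFs")
gs.run_command("t.register", flags="i", input="precip_monthly",
               maps=",".join(maps), start="2013-01-01", increment="1 months")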

For vector data, I would suggest using the native GRASS format; hence vector 
data needs to be copied. But maybe PostgreSQL with topology support could be a 
solution? I think Martin Landa may have an opinion here.

>
> Who is having experience with this kind of data management challenge?

No experience here from my side.

> How do externally linked data series perform compared to GRASS native?

It will be slower than the native format for sure. But I don't know how much 
slower.

>
>
> I searched a bit the mailing list and found this:
> (http://osgeo-org.1560.x6.nabble.com/GRASS7-temporal-GIS-database-questions-td5054920.html)
> where Sören recommended “postgresql as temporal 
> database backend”. However I am not sure if that was meant only for 
> the temporal metadata and not the rasters themselves…

My recommendation was related to the temporal metadata only. The SQLite 
database will not scale very well for select requests if you have more than 
30,000 maps registered in your temporal database.
PostgreSQL will be much faster for select requests, but PostgreSQL performs 
very badly at managing (insert, update, delete) many maps. I am not sure what 
the reason for this is, but from my experience PostgreSQL has a scaling 
problem with many tables. Hence, if you do not modify your data often, 
PostgreSQL is your temporal database backend of choice. Otherwise I would 
recommend SQLite, even if it is slower for select requests.
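
For completeness, switching the temporal metadata backend is done with
t.connect; a small sketch (the PostgreSQL connection string is a placeholder):

import grass.script as gs

# Print the current temporal database connection
gs.run_command("t.connect", flags="p")

# Store the temporal metadata in PostgreSQL instead of the default SQLite file
# (the raster and vector data themselves are not affected by this setting)
gs.run_command("t.connect", driver="pg",
               database="dbname=grass_temporal user=gisuser")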

> Furthermore in the idea collection for the Temporal framework 
> (http://grasswiki.osgeo.org/wiki/Time_series_development, Open issues

This discussion is pretty old and does not reflect the current temporal 
framework implementation. Please have a look at the new TGRASS paper:
https://www.sciencedirect.com/science/article/pii/S136481521300282X?np=y
and the Geostat workshop:
http://geostat-course.org/Topic_Gebbert

[GRASS-user] GRASS GIS through PyWPS bridge

2013-12-04 Thread Nitendra Gautam
Hi all,

I want to make a simple web page that uses the web processing capabilities of 
GRASS as a back end and displays an image on the web page. I am using the CGI 
interface of Python and the PyWPS bridge to connect them.
Right now my web page (http://134.129.125.124/) displays a normal JPG image.
I want to write a simple Python script which uses GRASS GIS to convert a TIFF 
image to PNG or other formats. This script should run as a back end.
I have pasted my script (http://pastie.org/8529097), but it is not working 
right now.
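
What I have in mind is roughly something like the sketch below (GISBASE, the
database/location/mapset and all file paths are placeholders that have to
match the actual server setup, and the CGI environment may also need library
paths set):

#!/usr/bin/env python
import os
import sys

gisbase = "/usr/local/grass-7.0.svn"                    # GRASS installation
sys.path.append(os.path.join(gisbase, "etc", "python"))

import grass.script as gs
import grass.script.setup as gsetup

# Start a non-interactive GRASS session for the CGI process
gsetup.init(gisbase, "/srv/grassdata", "mylocation", "PERMANENT")

# Import the TIFF, match the region to it, and export it as PNG
gs.run_command("r.in.gdal", input="/srv/data/input.tif", output="webmap",
               overwrite=True)
gs.run_command("g.region", raster="webmap")
gs.run_command("r.out.png", input="webmap",
               output="/var/www/html/webmap.png", overwrite=True)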

Can you please give me suggestions?

Thank You
Nitendra




___
grass-user mailing list
grass-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/grass-user

[GRASS-user] r.stream* add-ons for GRASS 7 missing?

2013-12-04 Thread Carlos Grohmann
Hi

I'm trying to run r.basin in GRASS 7 (Michael Barton's package for OSX) but
some of the add-ons are missing:

r.stream.angle
r.stream.extract

Also, these two modules want to install a new wxPython, giving an error
message that it couldn't find a suitable wx (version 2.8 or above). Can I
override this?

r.ipso
r.wf


BTW, in GRASS 6.4.3 (kyngchaos's package) I could install all add-ons, but
then I can't run matplotlib because it complains of missing pyparsing,
although I don't get that error in GRASS 7.


best

Carlos




-- 
Prof. Carlos Henrique Grohmann
Institute of Geosciences - Univ. of São Paulo, Brazil
- Digital Terrain Analysis | GIS | Remote Sensing -

http://carlosgrohmann.com
http://orcid.org/-0001-5073-5572

Can’t stop the signal.
___
grass-user mailing list
grass-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/grass-user

Re: [GRASS-user] Organizing spatial (time series) data for mixed GIS environments

2013-12-04 Thread Sören Gebbert
Hi Stefan,
there is a FOSS4G presentation online as well:
http://elogeo.nottingham.ac.uk/xmlui/handle/url/288

Best regards
Soeren

2013/12/4 Sören Gebbert :
> Hi Stefan,
>
> 2013/12/3 Blumentrath, Stefan :
>> Dear all,
>>
>>
>>
>> On our Ubuntu server we are about to reorganize our GIS data in order to
>> develop a more efficient and consistent solution for data storage in a mixed
>> GIS environment.
>>
>> By “mixed GIS environment” I mean that we have people working with GRASS,
>> QGIS, PostGIS but also many people using R and maybe the largest fraction
>> using ESRI products; furthermore we have people using ENVI, ERDAS and some
>> others. Only a few people (like me) actually work directly on the server…
>>
>> Until now I stored “my” data mainly in GRASS (6/7) native format which I was
>> very happy with. But I  guess our ESRI- and PostGIS-people would not accept
>> that as a standard…
>>
>>
>>
>> However, especially for time series data we cannot have several copies in
>> different formats (tailor-made for each and every software).
>>
>>
>>
>> So I started thinking: what would be the most efficient and convenient
>> solution for storing a large amount of data (e.g. high resolution raster and
>> vector data with national extent plus time series data) in a way that it is
>> accessible for all (at least most) remote users (with different GIS
>> software). As I am very fond of the temporal framework in GRASS 7 it would
>> be a precondition that I can use these tools on the data without
>> unreasonable performance loss. Another precondition would be that users at
>> remote computers in our (MS Windows) network can have access to the data.
>>
>>
>>
>> In general, four options come into my mind:
>>
>> a)  Stick to GRASS native format and have one copy in another format
>>
>> b)  Use the native formats the data come in (e.g. temperature and
>> precipitation comes in zipped ascii-grid format)
>>
>> c)   Use PostGIS as a backend for data storage (raster / vector) (linked
>> by r./v.external.*)
>>
>> d)  Use another GDAL/OGR format for data storage (raster / vector)
>> (linked by r./v.external.*)
>>
>>
>>
>> My question(s) are:
>>
>> What solutions could you recommend or what solution did you choose?
>
> I would suggest using r.external and uncompressed GeoTIFF files for
> raster data. But you have to make sure that external software does not
> modify these files, or, if it does, that the temporal framework is
> triggered to update dependent space-time raster datasets.
>
> For vector data, I would suggest using the native GRASS format; hence
> vector data needs to be copied. But maybe PostgreSQL with topology
> support could be a solution? I think Martin Landa may have an opinion
> here.
>
>>
>> Who is having experience with this kind of data management challenge?
>
> No experience here from my side.
>
>> How do externally linked data series perform compared to GRASS native?
>
> It will be slower than the native format for sure. But I don't know
> how much slower.
>
>>
>>
>> I searched a bit the mailing list and found this:
>> (http://osgeo-org.1560.x6.nabble.com/GRASS7-temporal-GIS-database-questions-td5054920.html)
>> where Sören recommended “postgresql as temporal database backend”. However I
>> am not sure if that was meant only for the temporal metadata and not the
>> rasters themselves…
>
> My recommendation was related to the temporal metadata only. The
> SQLite database will not scale very well for select requests if you
> have more than 30,000 maps registered in your temporal database.
> PostgreSQL will be much faster for select requests, but PostgreSQL
> performs very badly at managing (insert, update, delete) many maps. I
> am not sure what the reason for this is, but from my experience
> PostgreSQL has a scaling problem with many tables. Hence, if you do not
> modify your data often, PostgreSQL is your temporal database backend of
> choice. Otherwise I would recommend SQLite, even if it is slower for
> select requests.
>
>> Furthermore in the idea collection for the Temporal framework
>> (http://grasswiki.osgeo.org/wiki/Time_series_development, Open issues
>
> This discussion is pretty old and does not reflect the current
> temporal framework implementation. Please have a look at the new
> TGRASS paper:
> https://www.sciencedirect.com/science/article/pii/S136481521300282X?np=y
> and the Geostat workshop:
> http://geostat-course.org/Topic_Gebbert
>
>> section) limitations were mentioned regarding the number of files in a
>> folder, which would possibly be a problem for file-based storage. The
>> ext2 file system had a “soft” upper limit of about 10-15k files in a
>> single directory, but theoretically many more were possible. Other file
>> systems may allow for more, I guess… Will usage of such big directories
>> (> 10,000 files) lead to performance problems?
>
> Modern file systems should not have problems with many files. I am
> using ext4 and the temporal framework with 100,000 maps without
> noticeable performance issues.

Re: [GRASS-user] Organizing spatial (time series) data for mixed GIS environments

2013-12-04 Thread Sören Gebbert
Hi Stefan,

2013/12/3 Blumentrath, Stefan :
> Dear all,
>
>
>
> On our Ubuntu server we are about to reorganize our GIS data in order to
> develop a more efficient and consistent solution for data storage in a mixed
> GIS environment.
>
> By “mixed GIS environment” I mean that we have people working with GRASS,
> QGIS, PostGIS but also many people using R and maybe the largest fraction
> using ESRI products; furthermore we have people using ENVI, ERDAS and some
> others. Only a few people (like me) actually work directly on the server…
>
> Until now I stored “my” data mainly in GRASS (6/7) native format which I was
> very happy with. But I  guess our ESRI- and PostGIS-people would not accept
> that as a standard…
>
>
>
> However, especially for time series data we cannot have several copies in
> different formats (tailor-made for each and every software).
>
>
>
> So I started thinking: what would be the most efficient and convenient
> solution for storing a large amount of data (e.g. high resolution raster and
> vector data with national extent plus time series data) in a way that it is
> accessible for all (at least most) remote users (with different GIS
> software). As I am very fond of the temporal framework in GRASS 7 it would
> be a precondition that I can use these tools on the data without
> unreasonable performance loss. Another precondition would be that users at
> remote computers in our (MS Windows) network can have access to the data.
>
>
>
> In general, four options come into my mind:
>
> a)  Stick to GRASS native format and have one copy in another format
>
> b)  Use the native formats the data come in (e.g. temperature and
> precipitation comes in zipped ascii-grid format)
>
> c)   Use PostGIS as a backend for data storage (raster / vector) (linked
> by r./v.external.*)
>
> d)  Use another GDAL/OGR format for data storage (raster / vector)
> (linked by r./v.external.*)
>
>
>
> My question(s) are:
>
> What solutions could you recommend or what solution did you choose?

I would suggest using r.external and uncompressed GeoTIFF files for
raster data. But you have to make sure that external software does not
modify these files, or, if it does, that the temporal framework is
triggered to update dependent space-time raster datasets.

For vector data, I would suggest using the native GRASS format; hence
vector data needs to be copied. But maybe PostgreSQL with topology
support could be a solution? I think Martin Landa may have an opinion
here.

>
> Who is having experience with this kind of data management challenge?

No experience here from my side.

> How do externally linked data series perform compared to GRASS native?

It will be slower than the native format for sure. But I don't know
how much slower.

>
>
> I searched a bit the mailing list and found this:
> (http://osgeo-org.1560.x6.nabble.com/GRASS7-temporal-GIS-database-questions-td5054920.html)
> where Sören recommended “postgresql as temporal database backend”. However I
> am not sure if that was meant only for the temporal metadata and not the
> rasters themselves…

My recommendation was related to the temporal metadata only. The
SQLite database will not scale very well for select requests if you
have more than 30,000 maps registered in your temporal database.
PostgreSQL will be much faster for select requests, but PostgreSQL
performs very badly at managing (insert, update, delete) many maps. I
am not sure what the reason for this is, but from my experience
PostgreSQL has a scaling problem with many tables. Hence, if you do not
modify your data often, PostgreSQL is your temporal database backend of
choice. Otherwise I would recommend SQLite, even if it is slower for
select requests.

> Furthermore in the idea collection for the Temporal framework
> (http://grasswiki.osgeo.org/wiki/Time_series_development, Open issues

This discussion is pretty old and does not reflect the current
temporal framework implementation. Please have a look at the new
TGRASS paper:
https://www.sciencedirect.com/science/article/pii/S136481521300282X?np=y
and the Geostat workshop:
http://geostat-course.org/Topic_Gebbert

> section) limitations were mentioned regarding the number of files in a
> folder, which would possibly be a problem for file-based storage. The
> ext2 file system had a “soft” upper limit of about 10-15k files in a
> single directory, but theoretically many more were possible. Other file
> systems may allow for more, I guess… Will usage of such big directories
> (> 10,000 files) lead to performance problems?

Modern file systems should not have problems with many files. I am
using ext4 and the temporal framework with 100,000 maps without
noticeable performance issues.

>
> The “Working with external data in GRASS 7” – wiki entry
> (http://grasswiki.osgeo.org/wiki/Working_with_external_data_in_GRASS_7)
> covers the technical part (and to some degree performance issues) very well.
> Would it be worth adding a part on the strat

[GRASS-user] DEM : Point cloud DEM fusion?

2013-12-04 Thread image
Dear all,

I have two point cloud DEMs. One of them is our altimetric reference
(LIDAR). The other one is our DEM data that we would like to improve thanks
to the LIDAR values. So we would like to generate a raster DEM by fusing
those two point cloud DEMs.

In your opinion, is it possible to do that in GRASS?

With kind regards. 





--
View this message in context: 
http://osgeo-org.1560.x6.nabble.com/DEM-Point-cloud-DEM-fusion-tp5092716.html
Sent from the Grass - Users mailing list archive at Nabble.com.
___
grass-user mailing list
grass-user@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/grass-user