I really would like to learn more about the SQL command-based
operations in GRASS. To summarize: I have created a new mapset
where the driver is set to SQLite. I have created second layers for
all of my centroids (the tables in the second layers have a
different name, like "Db_runivar"). I used 'v.db.addtable' to
have GRASS create the columns that will hold the extended
statistics from the r.univar text files (e.g., Db_runivar.txt) for
each centroid.
Now I have been trying to get 'db.execute' to work. One
suggestion was to use a loop for automated updating (I am assuming
that I have to type this in the Terminal). By looking at the
'r.univar' manual, I came up with the statement below to create
an SQL command file that can be used in 'db.execute':
sed -e '1d' Dogubayazit_runivar.txt | awk -F'|' '{
  print "UPDATE Dogubayazit_runivar SET " \
        "non_null_cells = "$2", null_cells = "$3", min = "$4", max = "$5", " \
        "range = "$6", mean = "$7", mean_of_abs = "$8", stddev = "$9", " \
        "variance = "$10", coeff_var = "$11", sum = "$12", sum_abs = "$13", " \
        "first_quart = "$14", median = "$15", third_quart = "$16", " \
        "perc_90 = "$17" WHERE cat = "$1";"
}'
The above will create a list of UPDATE statements, one for each row
in the runivar.txt file. You can't feed db.execute a list of
statements on the command line, only one at a time, so you'll have
to dump the output of the command above into an intermediate file,
then use the db.execute "input" parameter to point at that file of
SQL commands.
It might be worth noting that if you have many UPDATEs, doing them
one by one, with an individual call to the database each time, can
take a while. In that case, going back to sqlite, you could wrap all
the UPDATE statements between a BEGIN ... COMMIT pair, then feed that
file to db.execute (or directly to sqlite3); it will open one
connection and push all the updates at once. For many thousands of
rows this should be much faster.
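Something along these lines, as a sketch (update_stats.sql is the
illustrative file from above, and sqlite.db is assumed to be the
mapset's SQLite file):

    # wrap all the generated statements in one transaction
    { echo "BEGIN;"; cat update_stats.sql; echo "COMMIT;"; } > update_stats_tx.sql

    # one connection, all updates pushed at once
    db.execute input=update_stats_tx.sql
    # ...or feed it straight to sqlite:
    sqlite3 sqlite.db < update_stats_tx.sql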
But when I try to run it with 'db.execute' I am getting a syntax
error:
DBMI-SQLite driver error:
Error in sqlite3_prepare():
near "sed": syntax error
ERROR: Error while executing: 'sed -e '1d' ............
What seems to be the problem?
Thank you for your time.
Bulent
On Wed, Jan 4, 2012 at 9:16 AM, Micha Silver <mi...@arava.co.il> wrote:
On 01/03/2012 10:44 PM, Bulent Arikan wrote:
Thank you! This seems to be an interesting solution to the problem,
especially because it involves changing the driver from DBF to
SQLite. I ended up using 'v.rast.stats', which adds columns for
extended statistics and uploads the values all at once. The only
thing is that the data are recorded as part of the same layer
(i.e., Layer 1). So, I cheated!
Yes, v.rast.stats is definitely the way to go if you want
raster univariate statistics pushed into a polygon vector.
However, I started trying your method. I created a new mapset to use
SQLite and copied some vector maps from a mapset where the default
driver is DBF. I defined SQLite as the new driver in the new mapset
using 'db.connect' (driver=sqlite,
database='$GISDBASE/$LOCATION_NAME/$MAPSET/sqlite.db', no flags
checked). Then I wanted to use 'v.db.connect' for a vector map, but
I cannot select a table name etc. I also tried just using
'v.db.connect', but I still cannot see anything under Table. So, I
am confused about how to define a new driver in a mapset.
I do as follows:
eval `g.gisenv`
(This creates the environment variables for GISDBASE,
etc.)
Now:
db.connect driv=sqlite database=$GISDBASE/$LOCATION_NAME/$MAPSET/sqlite.db
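You can confirm that the mapset is now pointing at the SQLite file by
printing the current connection settings:

db.connect -p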
Next:
g.copy vect=old,new
to create a new copy of the original vector. The original will still
have its attribute table in DBF; the new copy will have an
SQLite-based attribute table.
Now you can do:
v.db.addcol <new_vector> col="..., ..."
BTW, I think that v.rast.stats will automatically create the needed
columns.
Cheers,
Micha
Thank you again,
Bulent
On Tue, Jan 3, 2012 at 9:13 PM, Micha Silver <mi...@arava.co.il> wrote:
On 01/03/2012 04:50 PM, Bulent Arikan wrote:
Dear List,
I rasterized a centroid and ran 'r.univar', whose extended
statistics are saved as a text file. Then I ran 'v.db.addtable' to
create a second layer in the attribute table of the centroid, and I
added columns to this second layer using 'v.db.addcol'. I want
GRASS to upload the values from the text file, but I am not sure how
to perform this task.
Is there a shorter way of creating a second layer and uploading
values from the text file? Or, what should I do next so that the
columns in the second layer are populated from the text file I have?
I'm not sure about a shorter way, but here's
an option:
First set up your mapset to save attributes to sqlite:
> v.db.connect centroid driver=sqlite database=....
Now add the columns which will accept the r.univar values to the
sqlite table:
> v.db.addcol centroid col="n double, null_cells double, cells double, min double, max double, range double, mean double, mean_of_abs double, stddev double, variance double, coeff_var double, sum double"
Run r.univar on your raster, putting results
into a text file
> r.univar your_rast -g > univar.txt
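The -g output is a series of key=value lines, one per statistic,
which is what the cut calls below rely on. It looks roughly like this
(the numbers here are only placeholders):

    n=1
    null_cells=0
    cells=1
    min=152.7
    max=152.7
    range=0
    mean=152.7
    mean_of_abs=152.7
    stddev=0
    variance=0
    coeff_var=0
    sum=152.7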
Now do this loop to update values for the
centroid:
> while read l; do
    col=`echo $l | cut -d= -f1`
    val=`echo $l | cut -d= -f2`
    sqlite3 sqlite.db "UPDATE centroid SET ${col}=${val};"
  done < univar.txt
Assuming you want to run this for several centroids/areas, you'll
probably want to add a WHERE clause to the UPDATE statement so that
the values go into only one particular row (centroid), as in the
sketch below.
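A minimal sketch of that, assuming one r.univar output file per
centroid, named after its category value (e.g., univar_3.txt for
cat=3; the file naming and the list of cats are only illustrations):

    for cat in 1 2 3; do
      while read l; do
        col=`echo $l | cut -d= -f1`
        val=`echo $l | cut -d= -f2`
        sqlite3 sqlite.db "UPDATE centroid SET ${col}=${val} WHERE cat=${cat};"
      done < univar_${cat}.txt
    done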