Hello,

On Wednesday, June 5, 2013 at 8:31 PM, Paul Ramsey wrote:

> Well, your objects are larger than the page size, so you're getting them out 
> of the toast tables, not directly out of main storage. You may also have your 
> type declared as 'main' storage, which means it's zipped up, so it's being 
> unzipped before you can access it, that's also an overhead.

Good to know. I'll check it.  
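
I guess something like this should show how the column is currently stored (just a 
sketch, assuming the column is called 'rast' as below; attstorage is 'p' = plain, 
'm' = main, 'e' = external, 'x' = extended):

SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'my_postgis_raster_table'::regclass
AND attname = 'rast';

-- and, if compression turns out to be the problem, something like this should
-- switch new values to uncompressed out-of-line storage (existing rows keep
-- their current storage until they are rewritten):
ALTER TABLE my_postgis_raster_table ALTER COLUMN rast SET STORAGE EXTERNAL;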

>  
> For metadata retrieval, the thing to do is store the metadata at the head of 
> the object (which I'm not looking into pgraster to see if you do, but I'll 
> assume for now) and then use PG_DETOAST_DATUM_SLICE in the metadata accessor 
> function, so that you only pull the bytes you want, rather than detoasting 
> the whole object just to get the header information.
>  
Ok. I'll check the PostGIS Raster functions too.
  
>  
> You may be causing further pain by having all the metadata functions 
> separate, so that in fact the object is being read 9 separate times by your 
> different functions. It'll float into cache quickly enough, but the 
> uncompression step at each access will still be there. You might want to 
> stuff the query through a sampling profiler (OSX Shark!) and confirm, but I 
> would guess you'll find a lot of cycles spinning in zlib for this query.
>  
Yes, you're right. Replacing the separate calls with a single ST_Metadata call 
and unpacking the record on the client side is indeed much faster. Many thanks!
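
In case it's useful to anyone else, this is roughly what I ended up with (a sketch; 
ST_Metadata returns one composite value per row, which I unpack on the client side, 
so the raster value should only be detoasted once per row):

SELECT rid, ST_Metadata(rast) AS md
FROM my_postgis_raster_table;

-- md is a record holding the upper-left coords, width, height, scale,
-- skew, srid and number of bands, instead of nine separate function
-- calls over the same raster.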

And many thanks for the pointer to Shark too. It looks great.

Best regards,
Jorge

  
>  
> Paul  
>  
> --  
> Paul Ramsey
> http://cleverelephant.ca
> http://postgis.net
>  
>  
> On Wednesday, June 5, 2013 at 11:22 AM, Jorge Arévalo wrote:
>  
> > Hello,  
> >  
> > I'm running this PostGIS Raster query
> >  
> > select  
> > st_scalex(rast),  
> > st_scaley(rast),  
> > st_skewx(rast),  
> > st_skewy(rast),  
> > st_width(rast),  
> > st_height(rast),  
> > rid,  
> > st_upperleftx(rast),  
> > st_upperlefty(rast),  
> > st_numbands(rast)  
> > from  
> > my_postgis_raster_table
> >  
> >  
> >  
> > I want to remark that, even though 'rast' is a complex type and can be really 
> > big, I'm only getting metadata, not the whole 'rast' column. Anyway, the 
> > average size of a 'rast' value is about 600x400 pixels (8 bits per 
> > pixel), so not that big (about 234 KB per rast object).  
> >  
> > My table has 1257 rows, and this query takes about 45 secs to execute 
> > (45646 msecs). I think that's too slow, given that I'm only getting metadata, 
> > not the whole 'rast' object, as mentioned.  
> >  
> > This is the explain analyze output
> >  
> > Seq Scan on my_postgis_raster_table (cost=0.00..198.85 rows=1257 width=36) 
> > (actual time=86.867..51861.495 rows=1257 loops=1)
> > Total runtime: 51863.919 ms
> >  
> >  
> >  
> > So, basically a sequential scan. As expected, I guess (I'm not a Postgres 
> > expert, so sorry if I'm talking nonsense).
> >  
> > I've calculated the effective transfer rate for this table
> >  
> > SELECT pg_size_pretty(CAST(pg_relation_size('my_postgis_raster_table') / 
> > 45646 * 1000 as int8)) AS bytes_per_second;  
> >  
> > The result is about 27 kB per second. Isn't that a slow rate? Is there any kind 
> > of index I could create to speed up this query? Maybe use some kind of cache system?
> >  
> > Many thanks in advance,
> >  
> > --  
> > Jorge Arevalo
> > Freelance developer
> >  
> > http://www.krop.com/jorgearevalo
> > http://about.me/jorgeas80
> >  
> > Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
> >  
> >  
> >  
> >  
>  




