Re: [gdal-dev] Anti-Aliasing

2009-10-05 Thread Adam Nowacki

Chris Emberson wrote:
The rasters do have colour tables applied; these are expressed as RGB 
values using gdaldem:

75% 107 0 0
50% 196 109 27
25% 247 195 72
0% 252 233 109
nv 0 0 0 0

They are not derived from vector data, however. I am unsure what you 
require in terms of the source of the raster data. All of the pixel 
values have been created and either fall between 1 and 4 or contain no 
data.


The commands I am running on the files are:
gdal_translate -of GTiff -co "TILED=YES" -b 1 -b 1 -b 1 fileA.asc fileB.tif


Upsample fileB.tif here:
gdalwarp -r bilinear -ts newwidth newheight fileB.tif fileB2.tif
Where newwidth and newheight are a few times larger than the original.


gdaldem color-relief fileB.tif rgb.txt fileC.tif -alpha
gdalwarp -s_srs EPSG:27700 -t_srs EPSG:900913 -r bilinear fileC.tif fileD.tif


An example of how the bilinear resampling is behaving (fileD.tif) can be 
seen in the attached file. I have tried running the gdalwarp command 
above prior to gdaldem but this makes little difference.


TIA,
Chris



 > Date: Mon, 5 Oct 2009 16:46:55 +0530
 > Subject: Re: [gdal-dev] Anti-Aliasing
 > From: chaitanya...@gmail.com
 > To: chrisember...@hotmail.com
 > CC: s...@pricepages.org; gdal-dev@lists.osgeo.org
 >
 > Chris,
 >
 > The images look like they have a colour table instead of RGB bands.
 > Also, they seem to be converted from vector to raster. Correct me if I
 > am wrong.
 >
 > Anti-aliasing images with colour tables is a little more involved and
 > depends on the type of result required. Can you provide the source for
 > these images?
 >
 > One method would be to convert the image into RGB, apply bilinear or
 > any other interpolation method, and convert it back to the colour
 > table from RGB. If the source is a vector, we can smooth the shapes by
 > using curves in place of straight lines.
 >
 > On Mon, Oct 5, 2009 at 3:59 PM, Chris Emberson
 >  wrote:
 > > Seth,
 > >
 > > Thanks for the reply. The attached image hopefully will clarify the
 > > problem I am having. On the left of the image is the output I would
 > > like (I have produced this using a GIS). On the right is the output
 > > using mapnik and gdal. The problem is the pixellated effect between
 > > the raster values on the RHS.
 > >
 > > The GIS isn't that great as far as batch processing is concerned,
 > > hence why I am trying to use the GDAL utilities that ship with
 > > mapnik. The option in the GIS that produces the desired effect is
 > > "anti-aliasing (bilinear interpolation)", but it appears as though
 > > there is a more sophisticated smoothing algorithm being employed? I
 > > have tried running the various interpolation techniques with gdalwarp
 > > but none of them produce a crisp output. I have also tried increasing
 > > the resolution of the raster by 5x (both width and height) using
 > > "gdal_translate" and "-outsize 500% 500%" but this doesn't help
 > > either.
 > >
 > > Any suggestions gratefully received,
 > >
 > > Chris
 > >
 > >
 > >
 > >
 > >> Date: Fri, 2 Oct 2009 12:36:24 -0500
 > >> Subject: Re: [gdal-dev] Anti-Aliasing
 > >> From: s...@pricepages.org
 > >> To: chrisember...@hotmail.com
 > >> CC: gdal-dev@lists.osgeo.org
 > >>
 > >> Could you give an example of the problem? Anti-aliasing generally only
 > >> applies when you are going from vector line drawings to rasterized
 > >> images.
 > >>
 > >> If I was trying to smooth things I would first look at cubic or
 > >> bilinear. Any other resampling may enhance any noise in your image.
 > >> ~Seth
 > >>
 > >> On Fri, October 2, 2009 11:16 am, Chris Emberson wrote:
 > >> >
 > >> > I am looking for an anti-aliasing function to smooth out the
 > >> > pixellated effect between raster values. I have tried the various
 > >> > interpolation methods with gdalwarp and have also increased the
 > >> > resolution of the raster to try and minimise the effect, to no avail.
 > >> > Is this a function to be added soon?
 > >> >
 > >> > Thanks,
 > >> > Chris
 > >> >
 > >>
 > >>
 > >
 > >
 >
 >
 >
 > --
 > Best regards,
 > Chaitanya kumar CH.




--

Re: [gdal-dev] Building a Mercator Image from Un-projected data.. Major confusion and help requested.

2009-11-02 Thread Adam Nowacki

Cassanova, Bill wrote:

result = GDALGenImgProjTransform(hTransformArg, TRUE, 1, &x, &y,
                                 NULL, &success);
std::cout << std::fixed;
std::cout << result << " " << lower_right.first << ", "
          << lower_right.second << " = " << x << ", " << y << std::endl;


// 1 -65.656000, 49.718600 = -0.000590, 0.000450

// This is NOT right, so apparently I am doing something wrong or not 
// understanding the concept here


The 2nd argument to GDALGenImgProjTransform is TRUE, so you are 
reprojecting from, not to, Mercator.
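
Something like this should go the other way (an untested sketch, reusing 
hTransformArg from the code above and passing z explicitly):

// Pass FALSE as the 2nd argument (bDstToSrc) to transform forward,
// i.e. from the source side into the Mercator destination space.
double x = -65.656000, y = 49.718600, z = 0.0;
int success = 0;
int result = GDALGenImgProjTransform(hTransformArg, FALSE, 1, &x, &y,
                                     &z, &success);
// On success, x and y now hold coordinates in the Mercator output space.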



Re: [gdal-dev] Building a Mercator Image from Un-projected data..Major confusion and help requested.

2009-11-02 Thread Adam Nowacki

Cassanova, Bill wrote:

Well, I sorta tried that:
gdalwarp -s_src '+proj=WGS84' -t_srs '+proj=merc' hiradff_200910121800_f180_weather_radar.tiff foo.tiff


-s_srs not -s_src
Please, before posting on this list, at least check for typos!


[gdal-dev] Re: Motion: Commit Access for Adam Nowacki

2010-02-25 Thread Adam Nowacki

I agree with RFC 3: GDAL Committer Guidelines.

Even Rouault wrote:

Motion: Extend GDAL/OGR Commit Access to Adam Nowacki.

---

Hi,

Adam is the author of the GDAL WMS driver, which he contributed during the 2007 
Google Summer of Code. Since then, he has regularly taken part in discussions 
related to the driver and contributed patches, such as:


http://trac.osgeo.org/gdal/ticket/3224
http://trac.osgeo.org/gdal/ticket/2750
http://trac.osgeo.org/gdal/ticket/2646
http://trac.osgeo.org/gdal/ticket/3420

Adam is also using GDAL heavily in the context of his professional 
activities.


It would be convenient to give him direct commit access to subversion.

Adam, could you reply to this message indicating your agreement to the
guidelines listed in:

http://trac.osgeo.org/gdal/wiki/rfc3_commiters

I'll start voting with my support:

+1

Best regards,






Re: [gdal-dev] GDAL WMS Driver - Image Size

2010-06-29 Thread Adam Nowacki
The default block size is 1024x1024, so the 2000x2000 image is split into 4 
requests: 1024x1024, 976x1024, 1024x976 and 976x976.


Travis Kirstine wrote:

I am doing some testing of the WMS driver with gdal and have a
question.  If I define the <SizeX> and <SizeY> parameters as 2000px I
would expect that the WMS request would ask for an image width and
height of 2000.  When I checked the log files GDAL is requesting a
976x976 image and then rescaling the returned image.






Re: [gdal-dev] Re: Gdal_translate, WMS source and own certificate

2010-12-21 Thread Adam Nowacki
Can you test http://trac.osgeo.org/gdal/changeset/21304/trunk ? Add 
<UnsafeSSL>true</UnsafeSSL> inside <GDAL_WMS>.


Jukka Rahkonen wrote:

Jukka Rahkonen  mmmtike.fi> writes:

We would like to use gdal_translate for reading from our WMS service, 
which is secured by our own certificate and thus not automatically 
trusted.  Our developer had a quick look at the GDAL source code and 
tried to find how the options CURLOPT_SSL_VERIFYPEER (1) and CURLOPT_CAPATH
(2) are used there.  The numbers refer to the document 
http://curl.haxx.se/docs/sslcerts.html

The conclusion was that those options are not used and thus it is 
impossible to use gdal_translate with our server.  Is this true, and
if yes, would it be worth filing a ticket?


We got help for our immediate need by making our own modified build. We also
opened a new ticket about fixing the thing permanently
(http://trac.osgeo.org/gdal/ticket/3882) because our developer wrote me:
"Building gdal with curl for Windows is non-trivial".

-Jukka Rahkonen-








Re: [gdal-dev] How to create TIFF with overview from a TMS server ?

2011-01-20 Thread Adam Nowacki
Easiest would be to just gdal_translate -co COPY_SRC_OVERVIEWS=YES from a 
properly set up GDAL WMS source, see http://gdal.org/frmt_wms.html . Local 
files can be accessed by using "file:///..." in ServerURL.
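
For reference, a rough, untested sketch of such a description file, 
loosely following the TMS example in the frmt_wms.html documentation; the 
URL pattern, extent, tile level and band count below are placeholders, 
not values from this thread:

<GDAL_WMS>
  <Service name="TMS">
    <!-- placeholder tile tree; file:/// works here for local tiles -->
    <ServerURL>file:///data/tiles/${z}/${x}/${y}.png</ServerURL>
  </Service>
  <DataWindow>
    <UpperLeftX>-20037508.34</UpperLeftX>
    <UpperLeftY>20037508.34</UpperLeftY>
    <LowerRightX>20037508.34</LowerRightX>
    <LowerRightY>-20037508.34</LowerRightY>
    <TileLevel>18</TileLevel>
    <TileCountX>1</TileCountX>
    <TileCountY>1</TileCountY>
    <YOrigin>top</YOrigin>
  </DataWindow>
  <Projection>EPSG:900913</Projection>
  <BlockSizeX>256</BlockSizeX>
  <BlockSizeY>256</BlockSizeY>
  <BandsCount>3</BandsCount>
</GDAL_WMS>

Since the driver exposes the tile levels as overviews, gdal_translate 
-co COPY_SRC_OVERVIEWS=YES on such a file should carry them into the 
output GTiff.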


On 2011-01-19 22:14, Jean-Claude Repetto wrote:

Hello,

I am looking for a method to create a pyramidal TIFF file containing
tiles downloaded from a TMS server.

I thought I could achieve that goal with the gdal_translate command and
the COPY_SRC_OVERVIEWS=YES option, but unfortunately it doesn't work (cf
).

Is there another way ?

Thanks,
Jean-Claude






Re: [gdal-dev] gdal_translate compress is larger??

2011-06-15 Thread Adam Nowacki
It's just how compression works: if some data is reduced in size, other 
data has to grow. Try -co compress=deflate -co PREDICTOR=2


On 2011-06-15 19:43, Matt Wilkie wrote:

Hi Folks,

I've run into a strange thing, "gdal_translate -co compress=lzw" is
creating files that are larger than the source uncompressed image.

Input:

04/04/2006 08:39 AM 959,236,993 A1.tif

Results of "gdal_translate -co compress=[lzw,packbits,none]"

15/06/2011 10:35 AM 1,003,987,364 _lzw.tif
15/06/2011 10:31 AM 958,908,507 _none.tif
15/06/2011 10:30 AM 965,940,411 _packbits.tif

gdalinfo report is attached.
source file at http://files.environmentyukon.ca/matt/gdal-trans-lzw/ (in
about 20 minutes).

GDAL 1.8.0, released 2011/01/12

what's up?







Re: [gdal-dev] RE: progressive rendering

2008-08-23 Thread Adam Nowacki

Norman Barker wrote:
I have created RFC 24 on 


http://trac.osgeo.org/gdal/wiki/rfc24_progressive_data_support


I'd suggest creating a completely new interface for this.
Why is this better (imo):
- the application decides when and how often the updates occur, minimal 
threading issues
- clearly defined synchronization points: NextAsyncRasterIOMessage(), 
LockBuffer(), UnlockBuffer()
- requests can be aborted with EndAsyncRasterIO()


class GDALAsyncRasterIOMessage {
  GDALAsyncRasterIO *asyncrasterio;
  void *userptr;
  int what; // GARM_UPDATE, GARM_COMPLETE, GARM_ERROR, ...
  int xoff, yoff;
  int xsize, ysize;
  // ...
};

class GDALAsyncRasterIO {
  // lock the whole buffer
  void LockBuffer();
  // lock only a block
  void LockBuffer(int xbufoff, int ybufoff, int xbufsize, int ybufsize);
  void UnlockBuffer();
};

class GDALAsyncDataset : public GDALDataset {
  GDALAsyncRasterIO *AsyncRasterIO( /* same as RasterIO */ , void *userptr);
  void EndAsyncRasterIO(GDALAsyncRasterIO *);
  // if there are no new messages: return NULL when wait is false,
  // or block until a new message arrives when wait is true
  GDALAsyncRasterIOMessage *NextAsyncRasterIOMessage(bool wait);
  void ReleaseAsyncRasterIOMessage(GDALAsyncRasterIOMessage *m);
};




// ###
//  How to use it
// ###

GDALDataset *ds = GDALOpen( /* ... */, GA_ReadOnly);

// start asynchronous raster io, can have multiple running at the same time
GDALAsyncRasterIO *r = ds->AsyncRasterIO(GF_Read, xoff, yoff, xsize, ysize,
    bufptr, bufxsize, bufysize, bufdatatype, 3, NULL, 0, 0, 0, userptr);

while (...) {
  GDALAsyncRasterIOMessage *m = ds->NextAsyncRasterIOMessage(true);
  if (m) {
    if (m->what == GARM_UPDATE) {
      // lock the buffer so there will be no updates while we read from it
      m->asyncrasterio->LockBuffer( /* ... */ );
      // display updated region
      m->asyncrasterio->UnlockBuffer();
    } else {
      // handle completion, display error message, ...
    }
    ds->ReleaseAsyncRasterIOMessage(m);
  }
}

ds->EndAsyncRasterIO(r);



Re: [gdal-dev] RE: progressive rendering

2008-08-28 Thread Adam Nowacki

Even Rouault wrote:
I don't know JPIP, but I can imagine that the driver would start a thread when 
AsyncRasterIO() is called. It communicates with the server and receives the 
updates with a polling loop. When it has received an update, it puts the 
received data, as well as the parameters describing the window, etc., in a 
structure (let's call it a ticket), pushes that ticket onto a stack and goes on 
pushing tickets, or waits for the ticket to be consumed by the reader (both 
are possible, though you can't keep pushing new tickets indefinitely as memory 
use will grow, so the working thread would have to go idle until the queue 
shrinks a bit).

The NextAsyncRasterIOMessage() call will check whether some message is available 
and unstack the first ticket. In fact, the LockBuffer() / UnlockBuffer() 
could probably be avoided at the API level. Of course the implementation of 
NextAsyncRasterIOMessage() needs an internal mutex to protect accesses to 
the queue.


My idea was to update the data buffer given to AsyncRasterIO immediately 
after receiving data and write only window coordinates into the queued 
messages. That way the queue will remain small, a few KBs at most. This 
is also why LockBuffer() / UnlockBuffer() is there: to protect the 
buffer from async updates while we read from it. LockBuffer(xoff, yoff, 
xsize, ysize) allows an almost no-wait operation if used with the coords 
from the queue.



Re: [gdal-dev] RE: progressive rendering

2008-08-29 Thread Adam Nowacki
I'll talk about my original proposal 
(http://lists.osgeo.org/pipermail/gdal-dev/2008-August/018088.html) 
instead of the one on trac 
(http://trac.osgeo.org/gdal/wiki/rfc24_progressive_data_support).


Tamas Szekeres wrote:

Honestly I didn't follow the observations that have turned things
into a different approach, however I agree that doing callbacks on
different threads might not be the most reliable option in some cases.

By reviewing the current proposal I'm a bit uncertain about how the
objective described there is related to the title of the document.
It seems like we are tending to switch to a sequential streaming
approach instead of providing an interface to the possible progressive
rendering modes. As far as I can see we would like to notify the user
about the section that has been (or should be) updated in the buffer,
then the user is responsible for putting the image together from the
chunks in order to visually render it on the screen or do some other
interesting job with that.


While this seems to be the case for Norman's proposal, it is not in mine. 
An image buffer is provided to the AsyncRasterIO call, the same as it 
would be with a normal RasterIO call. This buffer is later asynchronously 
updated as more data is received. An AsyncRasterIO call followed by a 
loop waiting for the GARM_COMPLETE message would be equivalent to a 
normal RasterIO call.



Assuming that we would indeed like to support progressive
rendering, in addition to the top-down image streaming mode we might
also consider that the data may be available in an order that may
not be described easily in the interface. For example I can imagine a
one-dimensional interlacing scheme where every 8th, 4th, 2nd... row
arrives in the subsequent iterations. Or in a 2D interlacing mode the
resolution of the image may be enhanced during the iterations. How can
we describe the modified sections in these more esoteric cases? From
the user's perspective I would rather like to see a fully prepared
intermediary image created by the driver at every stage of the
process. The user would provide the same buffer to the driver but the
driver would be responsible for modifying the data incrementally
according to the incoming segments. In some cases the whole buffer may
be modified in every iteration.


This is exactly what I have in mind. If we receive every 8th row, the 
driver could update a whole 8-row block by copying the row 8 times.
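
As an illustration only (the function name and the tightly packed 
single-band byte buffer are assumptions, not part of the proposal):

#include <cstddef>
#include <cstring>

// After receiving row nRow (a multiple of 8) into the buffer, replicate
// it over the next 7 rows so the buffer always holds a complete, if
// coarse, image.
static void FillInterlacedBlock(unsigned char *pabyBuf, int nBufXSize,
                                int nBufYSize, int nRow) {
  const unsigned char *pabySrc = pabyBuf + (size_t)nRow * nBufXSize;
  for (int i = 1; i < 8 && nRow + i < nBufYSize; i++)
    std::memcpy(pabyBuf + (size_t)(nRow + i) * nBufXSize, pabySrc,
                nBufXSize);
}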



The driver should also handle the required downsampling of the rasters
according to the requested resolution of the image; how has this been
addressed in the current proposal? How would the driver prepare a
1000x1000 image into a buffer of 300x300 pixels? Would the driver
store the whole image in an in-memory dataset and rely on the current
RasterIO to copy the data to the user, for example?


This is really internal to the driver. If the format has reduced-resolution 
sets, they should be used instead of the highest resolution image.



It seems we would like to serialize the incoming data into a message
queue using the same chunk structure as it was received by the driver;
however, from the user's perspective it's not too interesting to
receive the data in the same fragments as they arrived. For example the
user might want to set up a timer and render the actual snapshot of the
image every 100ms. Therefore the driver would be responsible for
putting the data together by collecting every segment that has arrived
in the meantime. From this aspect there's no need to collect every
segment in a message queue; the driver may also use an internal buffer
to collect the data between the stages and present the whole data
together to the user.


In my proposal the user could:
1) LockBuffer()
2) display the current snapshot
3) UnlockBuffer()
4) sleep for 100ms ignoring the update messages (but 
NextAsyncRasterIOMessage() still has to be called to get the 
GARM_COMPLETE message)



We have switched the previous pattern to another because we are afraid
of the negative impacts of multiple threads. But do we really need to
use multiple threads at all? Does the driver need to read the incoming
buffers of the socket as soon as the data has arrived and provide some
'real-time' action afterwards? I assume the socket library will safely
buffer the incoming data and the transmission control protocol will be
able to pause the transfer if this buffer eventually becomes full
temporarily. Therefore I guess it would be sufficient for the driver to
do all the action (like reading and preprocessing the TCP buffers) only
inside the RasterIO related functions called by the client.


My proposal also supports single-threaded implementations. AsyncRasterIO 
would initialize the request while subsequent NextAsyncRasterIOMessage() 
calls would communicate with the server and update the image buffer. 
NextAsyncRasterIOMessage() would either return as soon as possible, 
after all the data received since last call is

Re: [gdal-dev] RE: progressive rendering

2008-08-29 Thread Adam Nowacki

Tamas Szekeres wrote:

2008/8/29 Adam Nowacki <[EMAIL PROTECTED]>:

In my proposal the user could:
1) LockBuffer()
2) display the current snapshot
3) UnlockBuffer()
4) sleep for 100ms ignoring the update messages (but
NextAsyncRasterIOMessage() still has to be called to get the GARM_COMPLETE
message)

It seems you consider a multi-threaded driver behaviour by default.
Why should the driver tamper with the contents of the user's buffer in
the meantime at all? Wouldn't it be sufficient to update the buffer
contents in the NextAsyncRasterIOMessage call? I've mentioned that I'm
not totally sure about the necessity of using multiple threads, but
the driver could eventually collect the data in an internal temporary
buffer and copy the contents to the user buffer upon the subsequent
NextAsyncRasterIOMessage. However, for collecting the data in an
internal buffer I don't think we should duplicate what the TCP/IP
stack is actually doing at the OS level.


I'm trying to design an interface with the lowest overhead possible. The 
driver doesn't have to keep its own buffer and copy it later; received 
data can be dumped directly into the user buffer.



The RasterIO function is not renamed. A new function is added
(AsyncRasterIO) with different behavior than the current RasterIO function.
The RasterIO function doesn't change: full backwards compatibility.



I see no problem if the original RasterIO behaves differently in some
cases controlled explicitly by the user. Specifying at the dataset level
that RasterIO will operate in asynchronous mode would be enough (like
using a dataset creation option or a new flag on the dataset). I guess
the programmer should keep track of how the dataset was created. We
should indeed use the synchronous mode as the default setting.


The RasterIO function has been there for years. Changing its behavior via 
some hidden variable will definitely make it easier to use (or read) it 
the wrong way. I'm trying to protect programmers' feet :)



Related to the statements above - in my opinion - only the behaviour
of the existing RasterIO should be altered, which would provide an
alternative option (in a Win32 overlapped IO fashion), according to
the following example:

1. During the dataset creation the asynchronous behaviour of the
driver for this dataset could be specified as a new dataset creation
option (like RASTERIO_MODE=ASYNC)

With my proposal you can mix normal blocking RasterIO calls with
AsyncRasterIO calls.


Hmmm. I don't see the use case where RasterIO would need to be used in
synchronous and asynchronous mode with the same dataset in parallel.
However, the user has the freedom to create 2 datasets, specify a
different IO mode for each, and use them simultaneously.


Being able to mix both blocking and async RasterIO calls sure would help 
to 'progressively' upgrade code from RasterIO to AsyncRasterIO.

Opening 2 datasets would be a really bad thing: no shared cache.


2. The user would use the existing RasterIO method to initiate the
operation as well as to fetch the next available data. The RasterIO
would return immediately if the buffer contains a proper set of the
intermediary data. In this case RasterIO would return a special error
code (IO_PENDING) to denote that more data will be available and the
user will have to initiate a subsequent call to RasterIO. (If we don't
like to introduce new error codes we could also use a separate
function like GetAsyncResult for this purpose.)

3. The RasterIO (or GetAsyncResult) would return no error if the
operation has finished and no more updates will be available in the
buffer.

4. We could introduce a separate function like CancelIO to allow the
user to cancel the pending IO operation at any time.

While I like the simplicity there are a few 'problems'. Identifying
subsequent calls to RasterIO belonging to the same operation: would it be
the buffer pointer, together with the offset and size maybe? Possibly a lot
of variables that have to be remembered by both the driver and user side.
Searching a list of all open async raster io operations? Would the buffer
be updated with data received since the last RasterIO call, or with a
current snapshot? Each open async raster io would also require its own data
buffer, later copied in whole (or only the updated regions) into the user
buffer.



I consider that the IO context should be stored in the dataset as member
variables by the driver. I don't see a pressing reason to support
multiple raster IO operations at the same time with the same dataset.
I think a similar operation could safely be achieved by creating 2 or
more datasets and invoking their RasterIO simultaneously.
If the user specifies a different buffer or image section in
RasterIO then the driver would return an error or gracefully
initiate a new sequence to the server implicitly.


Consider a simple image browsing app with an overview box in a corner. 
Ha

Re: [gdal-dev] RE: progressive rendering

2008-09-02 Thread Adam Nowacki

Tamas Szekeres wrote:

Hi All,

Upon thinking about the issues I've come up with previously, I
consider that the following approach could be implemented easily either
at driver or at SWIG interface level. It requires a new class to be
implemented by drivers that support async IO, and a new method to be
added to GDALDataset and GDALRasterBand.
The user could implement the async IO by using the following pseudo sequence:


...



I like it :)

typedef enum {
  ...,
  CE_Again = 5 // timeout, buffer not updated, call RasterIO() again
} CPLErr;

typedef enum { // Async raster io status
  GRS_Complete = 0,   // raster io is complete, a call to
                      // GDALRasterIOContext->RasterIO() will return CE_Failure
  GRS_InProgress = 1, // raster io in progress, call
                      // GDALRasterIOContext->RasterIO() to get more data
  GRS_Error = 2       // error, a call to GDALRasterIOContext->RasterIO()
                      // will return CE_Failure
} GDALRasterIOStatus;

typedef enum { // Hints for GDALRasterIOContext->RasterIO(),
               // driver implementations may ignore any or all
  GRH_SingleBlock = 0x1,   // exit after a single block update, even if
                           // more data is immediately available
  GRH_Progressive = 0x2,   // update with lower resolution data / fill
                           // with interlaced rows
  GRH_WaitForTimeout = 0x4 // wait for the timeout even if some data was
                           // already updated
} GDALRasterIOHint;

class GDALRasterIOContext {
public:
  GDALRasterIOContext() {
    eStatus = GRS_InProgress;
    pszError = NULL;
  }

public:
  // Continue raster io, update the pData buffer with new data;
  // nUpdate??? = bounding box of all updated blocks.
  // return CE_None if updated with new data
  // return CE_Again on timeout
  // return CE_Failure on error
  virtual CPLErr RasterIO(void *pData, int nTimeoutMilliseconds = -1,
                          int nHint = 0);

public:
  GDALRasterIOStatus eStatus;
  char *pszError; // if eStatus == GRS_Error this should contain a human
                  // readable error message, NULL otherwise

  int nUpdateXOff;
  int nUpdateYOff;
  int nUpdateXSize;
  int nUpdateYSize;
};

class GDALDataset {
...
public:
  virtual GDALRasterIOContext *StartRasterIO(
      GDALRWFlag eRWFlag, int nXOff, int nYOff, int nXSize, int nYSize,
      int nBufXSize, int nBufYSize, GDALDataType eBufType,
      int nBandCount, int *panBandMap, int nPixelSpace, int nLineSpace,
      int nBandSpace);

  virtual void EndRasterIO(GDALRasterIOContext *poContext);
};
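
To make the intended flow concrete, an untested sketch of a client loop 
against this proposed interface (Redraw() and the buffer/size variables 
are illustrative; none of these names exist in released GDAL):

GDALRasterIOContext *ctx = ds->StartRasterIO(GF_Read, 0, 0, xsize, ysize,
    bufxsize, bufysize, GDT_Byte, 3, NULL, 3, 3 * bufxsize, 1);

while (ctx->eStatus == GRS_InProgress) {
  // wait up to 100 ms for more data, ask for progressive updates
  CPLErr err = ctx->RasterIO(buf, 100, GRH_Progressive);
  if (err == CE_None) {
    // redraw only the bounding box of the updated blocks
    Redraw(ctx->nUpdateXOff, ctx->nUpdateYOff,
           ctx->nUpdateXSize, ctx->nUpdateYSize);
  } else if (err == CE_Again) {
    continue; // timeout, buffer untouched
  } else {
    break; // CE_Failure: eStatus / pszError describe what happened
  }
}
ds->EndRasterIO(ctx);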



Re: [gdal-dev] VMAP GDAL driver

2008-09-15 Thread Adam Nowacki

http://gdal.org/ogr/drv_ogdi.html

Aurélien Kamel wrote:

Hi all,

 

For a project I’m working on I’m looking for a GDAL driver that could 
read (read only) VMAP files (level 0 and 1). Does such a driver already 
exist? The project is quite restrictive: we must use GDAL (and OGR) and 
we can’t do any conversion from VMAP to another format before opening it 
for reading.


 

I’m quite new to the VPF format so implementing the driver myself would 
take quite some time.

 


Any information would be very appreciated.




Re: [gdal-dev] Interleave Bands with RasterIO?

2008-10-18 Thread Adam Nowacki

int bands[4] = { 3, 2, 1, 4 };
GDALDataset::RasterIO(..., 4, bands, 32, 32 * xSize, 1);

but then you probably mixed bits with bytes, so:

int bands[4] = { 3, 2, 1, 4 };
GDALDataset::RasterIO(..., 4, bands, 4, 4 * xSize, 1);
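
Spelled out a bit more (a sketch against the GDAL 1.x RasterIO signature; 
poDS, xSize and ySize stand for your dataset and its dimensions):

// Read bands in B,G,R,A order directly into a tightly packed BGRA buffer.
// nPixelSpace = 4 bytes between pixels, nLineSpace = 4 * xSize bytes
// between rows, nBandSpace = 1 byte between bands within one pixel.
CPLErr ReadBGRA(GDALDataset *poDS, unsigned char *buf, int xSize, int ySize)
{
  int bands[4] = { 3, 2, 1, 4 }; // source bands: 1=R, 2=G, 3=B, 4=A
  return poDS->RasterIO(GF_Read, 0, 0, xSize, ySize,
                        buf, xSize, ySize, GDT_Byte,
                        4, bands, 4, 4 * xSize, 1);
}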

[EMAIL PROTECTED] wrote:
I have a BGRA buffer in memory, and a multi-band RGBA TIF image.  Is it 
possible to use RasterIO to read in values from each band and neatly 
interleave them in my buffer?

For example if I was loading an RGBA image I am hoping to be able to 
make 4 separate RasterIO calls.


Assuming a BGRA image looks like:

0 - blue byte
8 - green byte
16 - red byte
24 - alpha byte
32 - blue byte
40 - green byte
48 - red byte
56 - alpha byte
64 - 

The first RasterIO call would read in the blue values and insert them at 
0, 32, 64, etc making it look like:


0 - blue byte
8 -
16 -
24 -
32 - blue byte
40 -
48 -
56 -
64 - 

The second RasterIO call would read the green values and insert them at 
8, 40, etc making it look like:


0 - blue byte
8 - green byte
16 -
24 -
32 - blue byte
40 - green byte
48 -
56 -
64 - 

Rinse and repeat for red and alpha.

Is this possible, or is there a much much simpler way that I'm 
overlooking?  I've been looking at the parameters for RasterIO but can't 
seem to get it to work.  I do understand that I can just read the values 
into a separate buffer and memcpy them over to my image buffer.  I was 
trying to minimize the total number of times the same data has to be copied.


Thanks for the help.

Craig

 









Re: [gdal-dev] Bigtiff question

2009-03-05 Thread Adam Nowacki

Even Rouault wrote:

What raster format would you suggest then?

I mean, based on those requirements:

- Multiband;
- Large files;
- Good performance *reading* the data in pixel space. Not the "band as
usual" ;)


How about rotating the axes so:
stored x axis = data time axis,
stored y axis = data x axis,
stored z (band) axis = data y axis.
For your 5000x2500 with 312 bands example this would result in a 312x5000 
TIFF file with 2500 bands. Reading all 312 values for some point would 
mean reading a single row (scanline) from a single band, a very fast 
operation.
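
For example (a sketch against the GDAL 1.x API; poDS is the rotated file 
opened with GDALOpen, and x and y are coordinates in the original data 
space):

// With the rotated layout, the full time series of data pixel (x, y)
// is one scanline read: row x of band y + 1.
CPLErr ReadTimeSeries(GDALDataset *poDS, int x, int y, float *values)
{
  GDALRasterBand *poBand = poDS->GetRasterBand(y + 1);
  return poBand->RasterIO(GF_Read, 0, x, 312, 1,
                          values, 312, 1, GDT_Float32, 0, 0);
}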



Re: [gdal-dev] Availability of Lanczos and cubicspline in gdaladdo

2009-03-30 Thread Adam Nowacki

Seth Price wrote:

I actually hope to be addressing performance in my GSoC project. I'm
interested in rewriting the GDAL resampling code to CUDA, so the graphics
card does the hard work. For example, instead of processing one pixel at a
time, the latest GeForce GTX 260 would be able to process 216. I'm hoping
for CUDA to be 50 to 100 times faster.

Make sure you're using GDAL 1.6 or later, I recently rewrote the regular
Lanczos/cubicspline warping code to be much faster.
~Seth


You can achieve pretty amazing speeds on the CPU alone. I have an SSE3 
Lanczos sampler with a 4x4 kernel (it would be trivial to rewrite as 
plain old SSE, probably a bit slower) that runs at ~70 000 000 samples 
per second with the float data type on a 2.33GHz Intel Xeon E5410 (one 
thread). I was considering porting the sampler to GDAL but found that I 
would have to rewrite the whole warping code to get any useful speed 
boost, including heavily optimizing coordinate transformations.



Re: [gdal-dev] Transform from lat/long/elevation to 3d sphere

2009-07-03 Thread Adam Nowacki

Something like:
OGRSpatialReference sr;
sr.SetFromUserInput("+proj=geocent");
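
Expanded into a complete transform (a sketch; the +datum and +units 
parameters and the sample point are illustrative, and the geocentric 
x/y/z come back in metres):

#include "ogr_spatialref.h"

// WGS84 lon/lat/ellipsoidal height -> earth-centred (geocentric) x/y/z
OGRSpatialReference oGeog, oGeocent;
oGeog.SetWellKnownGeogCS("WGS84");
oGeocent.SetFromUserInput("+proj=geocent +datum=WGS84 +units=m");

OGRCoordinateTransformation *poCT =
    OGRCreateCoordinateTransformation(&oGeog, &oGeocent);

double x = 151.2, y = -33.9, z = 50.0; // lon, lat (degrees), height (m)
if (poCT != NULL && poCT->Transform(1, &x, &y, &z)) {
  // x, y, z now hold geocentric coordinates in metres
}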

Christopher Hunt wrote:

Sorry for the hopefully not so dumb question here.

If I want to transform from one projection to a projection of the earth 
in 3D, how would I set up my target srs?


I want to take a WGS-84 lat/long/elevation and have an x,y,z returned in 
metres.


Here's my present code:

  // Read the image's projection
  OGRSpatialReference projSpatialReference(dataset->GetProjectionRef());
  scoped_ptr<OGRSpatialReference> 
latLongSpatialReference(projSpatialReference.CloneGeogCS());
  transform = 
OGRCreateCoordinateTransformation(latLongSpatialReference.get(), 
&projSpatialReference);


...and of course that transforms from lat/long/elevation to the 
projection srs nicely. What I'm looking for is a way to specify the 
target srs so that it yields the x, y, z values for WGS84 3d sphere i.e. 
the earth.



Re: [gdal-dev] Precission problem with atof

2009-07-22 Thread Adam Nowacki

Jorge Arévalo wrote:

 fPixelSizeX = atof(PQgetvalue(hPGresult, 0, 2));

I get 0.899999976158142 instead of 0.9.


0.899999976158142... is the value closest to 0.9 that a single precision 
float variable can store.

Using 'double' should give 0.90000000000000002220...
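
A quick way to see this for yourself (the exact digits printed can vary 
slightly with the runtime):

#include <cstdio>
#include <cstdlib>

int main()
{
  float f = (float)atof("0.9");
  double d = atof("0.9");
  printf("float : %.17g\n", f); // ~0.89999997615814209
  printf("double: %.17g\n", d); // ~0.90000000000000002
  return 0;
}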



Re: [gdal-dev] Re: Strange things with gdalwarp ...

2009-08-22 Thread Adam Nowacki
Some rather counterintuitive gdalwarp behavior: the bigger 
dfWarpMemoryLimit (the -wm setting), the more CPU time will be wasted on 
warping non-existent pixels. Why? Warping begins with a destination 
window the size of the entire output image. If this window is larger 
than dfWarpMemoryLimit it is split in half along the longest axis, and 
any half that doesn't contain the currently processed source file is 
discarded. With a large dfWarpMemoryLimit this subdivision process stops 
early, still leaving large regions of pixels that lie outside the source 
image.


Hermann Peifer wrote:

Frank Warmerdam wrote

Jukka Rahkonen wrote:


Conclusion: Too slow to be useful with my settings.
Question 1: Should it work this way?
Question 2: If yes, is it worth trying to find the limits when it 
goes too slow

and perhaps file a ticket?



I hate to bring this up so late in the discussion, but have you tried
-wo SKIP_NOSOURCE=YES?  When mosaicing small images into large images it
prevents processing of chunks for which there is no source data.
Otherwise the whole output image is generated for each input image.

There is an open ticket on this issue to apply this option by default.

BTW, in combination with SKIP_NOSOURCE it can be helpful to keep the -wm
option to a modest size so the tiles aren't too large. 200MB should be
sufficient - especially if the input and output image are tiled.



Just to repeat my earlier observations: over the last days, I tried with 
all kinds of -wm and GDAL_CACHEMAX settings, and also added the options 
TILED=YES and SKIP_NOSOURCE=YES. I haven't documented the performance 
changes systematically, but my overall impression was that the changes 
were rather small. I also tried -multi on my dual processor machine, but 
I ended up with a fatal error.


What really helped was to follow Felix's approach, who wrote in the first 
mail of this thread:

Now, a second script, pasting first "slices" of 5*30° together,
and then merging all the slices...

This way, one can reduce the processing time to some 10-20% compared to 
the "all in one run" approach. Concerning the size of the intermediate 
slices: the smaller they are, the faster it goes. The final step of 
building a VRT from the intermediate slices with gdalbuildvrt and 
gdal_translating it to the final output.tif is also very fast.


In order to avoid NODATA gaps in the final output.tif, it seems to be 
safer if the intermediate slices are slightly overlapping rather than 
just adjacent.


Hermann






Re: [gdal-dev] Re: Strange things with gdalwarp ...

2009-08-24 Thread Adam Nowacki

You can test with http://pastebin.com/m45e46f53
Also remember to use -wo SKIP_NOSOURCE=YES

Hermann Peifer wrote:

Adam Nowacki wrote:
Some rather counterintuitive gdalwarp behavior: the bigger 
dfWarpMemoryLimit (the -wm setting), the more CPU time will be wasted on 
warping non-existent pixels. Why? Warping begins with a destination 
window the size of the entire output image. If this window is larger 
than dfWarpMemoryLimit it is split in half along the longest axis, and 
any half that doesn't contain the currently processed source file is 
discarded. With a large dfWarpMemoryLimit this subdivision process stops 
early, still leaving large regions of pixels that lie outside the source 
image.




Given Adam's explanations above, could someone tell me if my below 
assumptions are correct? Thanks.


Just to repeat, the input files are ASTER GTiffs in WGS84, 3601x3601 
pixels each. The output is a single mosaic of all tiles, a 100m GTiff, 
in LAEA projection



gdalwarp -wm 300 --config GDAL_CACHEMAX 300 --debug on ... reports:

Src=0,0,3601x3601 Dst=36375,31125,12125x10375

I understand this as follows: the source image of 3601x3601 pixels is 
read as one chunk (which after reprojection and resampling should be 
around 800x1100 pixels). However, 12125x10375 pixels are actually 
written to the output file. So 99.5% of the destination window must be 
completely unrelated to my input image.


If I counter-intuitively reduce the memory limit to, say, 40 MB and 
leave everything else unchanged, then gdalwarp behaviour changes to


Src=0,0,3601x3601 Dst=45468,38906,1516x2594

This tells me that the destination window is much smaller and gdalwarp 
will not waste time writing out so many irrelevant pixels, correct?


Perhaps gdalwarp should not only test if the destination window fits 
into memory, but also check what would be the minimum destination window 
for warping the input image. This could speed up the mosaicing of small 
tiles into a bigger output file.


Hermann



