On 11/20/2016 09:44 PM, Hillf Danton wrote:
On Saturday, November 19, 2016 3:41 AM Jens Axboe wrote:
> We ran into a funky issue, where someone doing 256K buffered reads saw
> 128K requests at the device level. Turns out it is read-ahead capping
> the request size, since we use 128K as the default setting. This doesn't
> make a lot of sense - if someone is issuing 256K reads, they should see
> 256K reads, regardless of the read-ahead settings.
On 11/16/2016 08:12 AM, Jens Axboe wrote:
On 11/16/2016 12:17 AM, Hillf Danton wrote:
On Wednesday, November 16, 2016 12:31 PM Jens Axboe wrote:
> @@ -369,10 +369,25 @@ ondemand_readahead(struct address_space *mapping,
>                    bool hit_readahead_marker, pgoff_t offset,
>                    unsigned long req_size)
>  {
> -       unsigned long max = ra->ra_pages;
> +       unsig[...]
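
The quoted hunk is trimmed in the archive, so as a rough guide to the intent, here is a standalone model of the sizing decision being changed. The helper names and the example numbers are made up for illustration; the "new" branch follows the logic the mainline version of this change ended up with (let the effective window grow past ra_pages for large requests, bounded by the device's optimal I/O size), not necessarily this exact revision of the patch.

#include <stdio.h>

/* Toy model of the readahead window calculation; sizes are in 4K pages. */

static unsigned long old_window(unsigned long ra_pages, unsigned long req_size)
{
	/* Current behaviour: the request is always capped at the readahead setting. */
	return req_size < ra_pages ? req_size : ra_pages;
}

static unsigned long new_window(unsigned long ra_pages, unsigned long io_pages,
				unsigned long req_size)
{
	unsigned long max_pages = ra_pages;

	/*
	 * Proposed behaviour: if the request exceeds the readahead window,
	 * let it grow up to the device's optimal I/O size (io_pages).
	 */
	if (req_size > max_pages && io_pages > max_pages)
		max_pages = req_size < io_pages ? req_size : io_pages;

	return req_size < max_pages ? req_size : max_pages;
}

int main(void)
{
	unsigned long ra_pages = 32;   /* 128K default readahead window */
	unsigned long io_pages = 128;  /* e.g. a device with a 512K optimal I/O size */
	unsigned long req = 64;        /* the 256K buffered read from the report */

	printf("old window: %lu pages (%luK)\n",
	       old_window(ra_pages, req), old_window(ra_pages, req) * 4);
	printf("new window: %lu pages (%luK)\n",
	       new_window(ra_pages, io_pages, req),
	       new_window(ra_pages, io_pages, req) * 4);
	return 0;
}

With the example numbers above, the old calculation caps the 256K read at 128K (the reported symptom), while the new one lets the full 256K through.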
Hi,

We ran into a funky issue, where someone doing 256K buffered reads saw
128K requests at the device level. Turns out it is read-ahead capping
the request size, since we use 128K as the default setting. This doesn't
make a lot of sense - if someone is issuing 256K reads, they should see
256K reads, regardless of the read-ahead settings.
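
For anyone reproducing the report, the window doing the capping is the per-device read_ahead_kb value in sysfs (128 by default). A minimal check looks like the sketch below; "sda" is only a placeholder device name, adjust it for the disk under test.

#include <stdio.h>

/* Print the current readahead window, in KB, for one block device. */
int main(void)
{
	const char *path = "/sys/block/sda/queue/read_ahead_kb";
	unsigned long ra_kb;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%lu", &ra_kb) != 1) {
		fprintf(stderr, "could not parse %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("%s: %lu KB\n", path, ra_kb);
	return 0;
}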