Since my last email I have made pretty decent progress and am overall pretty
pleased with the results - I have achieved a 2-3 fold decrease in boot
time, however at the cost of increased memory usage. Please see my detailed
results and some comments below. I also think there is still room for
On Wed, Nov 22, 2017 at 6:43 PM, Waldek Kozaczuk
wrote:
> I think that, rather than implementing your suggestion, which might not be
> all that trivial, I would rather implement some kind of proper LRU-based
> solution which would work like so:
>
This can work. Interestingly, in the first patch wh
Alternatively, if the LRU cache idea is too complicated to implement, we
could instead have a simple hashmap-by-node-id type of cache with some TTL
after which a memory buffer would eventually be freed. This would be unbounded
in terms of how many inodes are accessed, but it would still be limited by how big
individual
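That simpler TTL-based alternative could be sketched roughly like this (a minimal sketch under my own naming assumptions - `ttl_buffer_cache`, `sweep()`, and the eviction policy are illustrations, not OSv code; a real implementation would run the sweep from some periodic task):

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical hashmap-by-inode cache where each buffer carries a
// time-to-live; expired entries are freed on the next sweep.
class ttl_buffer_cache {
public:
    using clock = std::chrono::steady_clock;

    explicit ttl_buffer_cache(clock::duration ttl) : _ttl(ttl) {}

    void insert(uint64_t inode_no, std::vector<uint8_t> data) {
        _map[inode_no] = {std::move(data), clock::now() + _ttl};
    }

    // Returns the cached buffer for this i-node, or nullptr if absent
    // or already past its TTL.
    std::vector<uint8_t>* lookup(uint64_t inode_no) {
        auto it = _map.find(inode_no);
        if (it == _map.end() || clock::now() >= it->second.expires) {
            return nullptr;
        }
        return &it->second.data;
    }

    // Frees every buffer whose TTL has elapsed; meant to be invoked
    // periodically (e.g. from a timer).
    void sweep() {
        auto now = clock::now();
        for (auto it = _map.begin(); it != _map.end();) {
            if (now >= it->second.expires) {
                it = _map.erase(it);   // frees the expired buffer
            } else {
                ++it;
            }
        }
    }

private:
    struct entry {
        std::vector<uint8_t> data;
        clock::time_point expires;
    };
    clock::duration _ttl;
    std::unordered_map<uint64_t, entry> _map;
};
```

As noted above, this keeps the per-file memory bounded only by the TTL, not by a hard capacity.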
I think that, rather than implementing your suggestion, which might not be
all that trivial, I would rather implement some kind of proper LRU-based
solution which would work like so:
- Every time mfs_read() is called it would try to find the corresponding
buffer in its LRU cache (the i-node is the key)
On Wed, Nov 22, 2017 at 8:01 AM, Waldek Kozaczuk
wrote:
> I managed to implement my change to make mfs_read() use the device strategy
> routine directly to bulk-read however many blocks at once the uio parameter
> to mfs_read() specifies, and as expected the java run now takes ~850 ms
> (down from 35
So in short one page fault now involves a single exit to the host instead of 8.
> On Nov 22, 2017, at 01:01, Waldek Kozaczuk wrote:
>
> I managed to implement my change to make mfs_read() use the device strategy
> routine directly to bulk-read however many blocks at once the uio
I managed to implement my change to make mfs_read() use the device strategy
routine directly to bulk-read however many blocks at once the uio parameter
to mfs_read() specifies, and as expected the java run now takes ~850 ms
(down from 3500-4500 ms), which means it still spends ~600 ms reading ~14 MB
of
On Tue, Nov 21, 2017 at 3:43 PM, Waldek Kozaczuk
wrote:
>
> I am still interested in how to make the OSv page manager request multiple
> pages.
>
It's been a while since I touched any of this code :-(
Unfortunately it's not yet clear to me how to do this cleanly. We could
change file_vma::fault() (
On Tue, Nov 21, 2017 at 3:43 PM, Waldek Kozaczuk
wrote:
> An idea for a small improvement (8 times maybe). Given that bread() is
> blocking, requires one exit to the host, and reads only 1 block = 512
> bytes at a time
>
I didn't realize this was the case :-(
> we could change mfs_read()
I am assuming that the disk driver strategy() function can be called with
arguments that make it read many blocks at a time.
On Tue, Nov 21, 2017 at 8:43 AM, Waldek Kozaczuk
wrote:
> An idea for a small improvement (8 times maybe). Given that bread() is
> blocking and requires one exit to the host and i
An idea for a small improvement (8 times maybe). Given that bread() is
blocking, requires one exit to the host, and reads only 1 block = 512
bytes at a time, we could change mfs_read() to bypass bread() and directly
operate at the bio level - bio->bio_dev->driver->devops->strategy(). At least
o
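The difference between a per-block bread() loop and a single strategy()-level request can be illustrated with a toy model (the structs and the strategy signature below are simplified stand-ins I made up for this sketch, not OSv's real bio API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr size_t BLOCK_SIZE = 512;

// Toy stand-in for a bio request handed to a driver strategy routine.
struct bio_request {
    uint64_t first_block;
    size_t   block_count;     // a single request may span many blocks
    uint8_t* buffer;
};

struct toy_device {
    std::vector<uint8_t> disk;  // memory-backed "disk" contents
    size_t exits = 0;           // counts simulated exits to the host

    // One strategy() call = one exit, regardless of block_count.
    void strategy(const bio_request& req) {
        ++exits;
        std::memcpy(req.buffer, disk.data() + req.first_block * BLOCK_SIZE,
                    req.block_count * BLOCK_SIZE);
    }
};

// bread()-style: one block per call, so 8 calls (exits) per 4096-byte page.
void read_page_per_block(toy_device& dev, uint64_t first, uint8_t* out) {
    for (size_t i = 0; i < 8; i++) {
        dev.strategy({first + i, 1, out + i * BLOCK_SIZE});
    }
}

// strategy()-style bulk read: the whole page in a single request.
void read_page_bulk(toy_device& dev, uint64_t first, uint8_t* out) {
    dev.strategy({first, 8, out});
}
```

Both paths return identical data; the bulk path just costs one exit per page instead of eight, which is where the 8x figure comes from.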
See my further question below.
On Monday, November 20, 2017 at 2:26:45 PM UTC-5, Nadav Har'El wrote:
>
> Hi, I read through your interesting measurements in this thread.
>
> I agree with you that the problem is likely the reading of 4096 bytes at a
> time. The reason why this is slow may be not j
Hi, I read through your interesting measurements in this thread.
I agree with you that the problem is likely the reading of 4096 bytes at a
time. The reason why this is slow may be not just that small reads have
more overhead; the problem is also that the reads are completely synchronous:
1. OSv gets
If you look at the attached log you will see that most mfs_read() calls
read 4096 bytes (8 blocks), which makes me think these are probably called
when handling page faults, which would happen if the file was mmapped (which
I think is the case for .so files - void file::load_segment(const Elf64_Phdr&
p
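A back-of-the-envelope model of that page-fault-driven read pattern, using the libjvm-stripped.so size from the dd output below (the fault handler here is a simulation for counting purposes, not OSv's actual fault path):

```cpp
#include <cassert>
#include <cstddef>

constexpr size_t PAGE_SIZE = 4096;

// Simulated demand paging: touching every page of an mmapped file
// triggers one fault, and each fault reads exactly one 4096-byte page.
size_t faults_to_read(size_t file_bytes) {
    return (file_bytes + PAGE_SIZE - 1) / PAGE_SIZE;  // round up
}
```

So paging in the whole ~17 MB shared object takes on the order of 4200 faults, and with 512-byte bread() calls each fault costs 8 exits to the host, which is consistent with the thousands of small mfs_read() calls in the log.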
Just for reference, I tried to measure the disk performance of the host
I am using (a 4-year-old laptop).
dd if=libjvm-stripped.so of=/dev/null iflag=direct ibs=512 //default dd
block size
dd: error reading 'libjvm-stripped.so': Invalid argument
33534+1 records in
33534+1 records out
17169688
Symlinks were added to ramfs not so long ago in
https://github.com/cloudius-systems/osv/commit/18a2e45c13d61832de4b2fc37131035cb36fa568.
Mentioning that just in case it helps - I noticed
+#define mfs_symlink ((vnop_symlink_t)vop_nullop), but didn't look deeper into the code.
> with much printing t
For reference I am including the link to my mfs branch
- https://github.com/wkozaczuk/osv/tree/readonly_mfs - that includes all my
changes.
Also I forgot to mention that I discovered that mfs does not support links
to directories. I had to manually tweak the java image.
On Sunday, November 19, 2017
On Wed, Nov 15, 2017 at 5:21 AM, Waldek Kozaczuk
wrote:
> A better solution could be to not have a read-write ramdisk at all, but
> rather a simple read-only filesystem on the disk image where we only read
> file blocks as they are needed, and also cache them somehow for the next
> reads.
>
>>
>>
Nadav,
Thanks for the reply.
Please see my extra comments below.
On Tuesday, November 14, 2017 at 4:49:02 PM UTC-5, Nadav Har'El wrote:
>
> Hi,
>
> On Fri, Nov 10, 2017 at 12:26 AM, Waldek Kozaczuk wrote:
>
>> I found this article on OSv blog -
>> http://blog.osv.io/blog/2017/06/12/serverle
Hi,
On Fri, Nov 10, 2017 at 12:26 AM, Waldek Kozaczuk
wrote:
> I found this article on OSv blog -
> http://blog.osv.io/blog/2017/06/12/serverless-computing-with-OSv/ - very
> interesting and inspiring.
>
Thanks :-)
> Besides the claim that OSv is a perfect platform for "serverless" it made
>
> For some reason I failed to create and run an image with a java app,
> possibly due to the image size limit - I think the build process got stuck
> at some point.
I remember having to do a clean build (I think I just rm ./build/) after
changing lzkernel_base. Does this help?
> Is it read-only?
I co
I found this article on OSv blog
- http://blog.osv.io/blog/2017/06/12/serverless-computing-with-OSv/ - very
interesting and inspiring. Besides the claim that OSv is a perfect platform
for "serverless" it made me think that "stateless" does not apply only to
"serverless" but also to many long-li