Quoth ron minnich :
> This is why 9p starts to perform poorly in networks with high
> bandwidth*delay products -- if you watch the net traffic, you see each
> T op on fid blocked by the previous Reply (by devmnt).
>
> I never figured out a way to fix this without fixing devmnt -- by
> removing
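To put rough numbers on that serialization (the msize and RTT figures below are made-up examples, not measurements): with only one outstanding request per fid, that fid moves at most msize bytes per round trip, so its throughput ceiling is msize/RTT.

package main

import "fmt"

func main() {
	// One T-message per fid waits for the previous R-message,
	// so a single fid moves at most msize bytes per round trip.
	const msize = 8192.0 // bytes per Rread; example value
	for _, rttMs := range []float64{0.1, 1, 10, 50} {
		rtt := rttMs / 1000 // seconds
		fmt.Printf("rtt %5.1f ms -> ceiling %10.1f KB/s\n", rttMs, msize/rtt/1024)
	}
}

At 50 ms of round-trip delay the ceiling is about 160 KB/s per fid no matter how fat the pipe is, which is exactly the bandwidth*delay effect described above.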
On 6/1/22, Steve Simon wrote:
> for performance testing why not copy from ramfs on one machine to ramfs on
> another?
ramfs is single-process and thus quite slow.
> the suggestion from a 9con passim was to have fossil/cwfs/hjfs etc add a Qid
> type flag to files indicating they are from backing
for performance testing why not copy from ramfs on one machine to ramfs on
another?
the suggestion from a 9con passim was to have fossil/cwfs/hjfs etc add a Qid
type flag to files indicating they are from backing store (QTSTABLE?) and thus
may be copied in parallel. devices and synthetic would
hjfs is not exactly known for its speed[0]. Running a cwfs
without a worm[1] is likely a more interesting comparison.
I also would recommend using kvik's clone[2] for copying
in parallel.
Would be curious how that stacks up.
Thanks,
moody
[0] http://fqa.9front.org/fqa4.html#4.3.6
[1]
In case this is not immediately clear: theoretically preventable 1-RTT
minimum delays are much less bad than the practically unbounded
maximum delays in congested networks.
Put another way: making a few things fast is much easier than
making sure that everything else doesn't get
I don't think the reason nobody is doing this is that it's difficult per se.
Fcp also achieves parallelism without any changes to 9p.
And POSIX file systems also share some of our statefulness.
A file system can have offsets; readahead can help.
Other synthetic file systems need different tricks, but we can
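For anyone who has not read it, the trick fcp(1) uses needs no change to 9p at all: it simply keeps several reads and writes in flight at different offsets, so the round trips overlap. Below is a rough Go sketch of that technique, not fcp's actual code; the block size, worker count and file paths are arbitrary example values.

package main

import (
	"io"
	"log"
	"os"
	"sync"
)

const (
	blockSize = 128 * 1024 // bytes per request; example value
	nworkers  = 8          // requests kept in flight; example value
)

// copyParallel copies size bytes from src to dst, keeping nworkers
// reads/writes in flight at distinct offsets (the fcp-style trick).
func copyParallel(src, dst *os.File, size int64) error {
	// Queue every block offset up front so the workers just drain it.
	offsets := make(chan int64, (size+blockSize-1)/blockSize)
	for off := int64(0); off < size; off += blockSize {
		offsets <- off
	}
	close(offsets)

	errs := make(chan error, nworkers)
	var wg sync.WaitGroup
	for i := 0; i < nworkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			buf := make([]byte, blockSize)
			for off := range offsets {
				n, err := src.ReadAt(buf, off)
				if err != nil && err != io.EOF {
					errs <- err
					return
				}
				if _, err := dst.WriteAt(buf[:n], off); err != nil {
					errs <- err
					return
				}
			}
		}()
	}
	wg.Wait()
	close(errs)
	return <-errs // nil when no worker reported an error
}

func main() {
	src, err := os.Open("/tmp/src") // example paths
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()
	dst, err := os.Create("/tmp/dst")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()
	fi, err := src.Stat()
	if err != nil {
		log.Fatal(err)
	}
	if err := copyParallel(src, dst, fi.Size()); err != nil {
		log.Fatal(err)
	}
}

The same overlap is what readahead, whether done by the client or by a smarter devmnt, would provide transparently for plain sequential reads.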
On Tue, May 31, 2022 at 11:29 AM hiro <23h...@gmail.com> wrote:
>
> so virtiofs is not using 9p any more?
>
> and with 10 million parallel requests, why shouldn't 9p be able to
> deliver 10GB/s ?!
Everyone always says this. I used to say it too.
9p requires a certain degree of ordering -- as
And fcp?
On 6/1/22, Bakul Shah wrote:
> On May 31, 2022, at 9:14 AM, ron minnich wrote:
>>
>> On Mon, May 30, 2022 at 12:21 AM Bakul Shah wrote:
>>> 9p itself is low performance but that is a separate issue.
>>
>> Bakul, what are the units? It might be helpful to quantify this
>> statement.
On May 31, 2022, at 9:14 AM, ron minnich wrote:
>
> On Mon, May 30, 2022 at 12:21 AM Bakul Shah wrote:
>> 9p itself is low performance but that is a separate issue.
>
> Bakul, what are the units? It might be helpful to quantify this
> statement. Are you possibly conflating Plan 9 file systems
Quoth hiro <23h...@gmail.com>:
>
> and with 10 million parallel requests, why shouldn't 9p be able to
> deliver 10GB/s ?!
the tag field is 16 bits.
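To make that concrete (the msize below is an assumed example; real connections negotiate it in Tversion): a 16-bit tag allows at most 65535 outstanding requests on one connection, since NOTAG (0xFFFF) is reserved, nowhere near 10 million. What those 64K tags buy you, very roughly:

package main

import "fmt"

func main() {
	const (
		tags  = (1 << 16) - 1 // usable tags; NOTAG is reserved
		msize = 8192          // bytes per request; example value
	)
	inflight := float64(tags * msize) // bytes in flight at once
	fmt.Printf("max in flight: %.0f MB\n", inflight/1e6)
	// To sustain a given bandwidth the data in flight must cover
	// bandwidth * RTT, so solve for the largest workable RTT.
	for _, gbps := range []float64{1, 10} {
		bw := gbps * 1e9 // bytes/s
		fmt.Printf("%2.0f GB/s needs RTT <= %.1f ms\n", gbps, inflight/bw*1000)
	}
}

With an 8K msize roughly half a gigabyte can be in flight, so 10 GB/s would need the round trip to stay under about 54 ms, and a larger msize or more connections to go beyond that.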
so virtiofs is not using 9p any more?
and with 10 million parallel requests, why shouldn't 9p be able to
deliver 10GB/s ?!
On 5/31/22, ron minnich wrote:
> On Mon, May 30, 2022 at 12:21 AM Bakul Shah wrote:
>> 9p itself is low performance but that is a separate issue.
>
> Bakul, what are the
On Mon, May 30, 2022 at 12:21 AM Bakul Shah wrote:
> 9p itself is low performance but that is a separate issue.
Bakul, what are the units? It might be helpful to quantify this
statement. Are you possibly conflating Plan 9 file systems being slow
and 9p being slow?
As Rob pointed out in 2013,
> 9p itself is low performance but that is a separate issue.
wrong
> the challenge is that 9p is stateful, so all servers must
> replay the same messages in the same order
no, not all servers.
9p state could be faked, that's not the main problem here.
the main problem is the higher layer application logic per server.
this is both good and bad.
e.g. some very few
On 5/30/22, Bakul Shah wrote:
> On May 29, 2022, at 10:01 PM, o...@eigenstate.org wrote:
>>
>> the challenge is that 9p is stateful, so all servers must
>> replay the same messages in the same order; this means that
>> if one of the replicas fails or returns a result that is not
>> the same as
On May 29, 2022, at 10:01 PM, o...@eigenstate.org wrote:
>
> the challenge is that 9p is stateful, so all servers must
> replay the same messages in the same order; this means that
> if one of the replicas fails or returns a result that is not
> the same as the other, the front falls off.
>
>
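A concrete case of "the front falls off" (the numbers here are invented for illustration): forward Topen tag 1 fid 2 to two replicas; one answers Ropen with a qid and iounit 8192, while the other, freshly rebooted and missing the fid, answers Rerror "unknown fid". Whichever R-message the multiplexer forwards, the client's idea of fid 2 now matches only one replica, and every later Tread on that fid deepens the divergence.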
Quoth Bakul Shah :
>
> Some variation of this would be interesting for a clustered
> or distributed filesystem. The challenge would be doing this
> in an understandable way, cleanly and with good performance.
> Probably using separate namespaces for control & management
> operations.
the
On May 28, 2022, at 9:02 AM, fge...@gmail.com wrote:
>
> Has anybody considered (or maybe even implemented) a 9p server to
> multiply incoming 9p messages to 2 or more 9p servers?
> Maybe with 2 different strategies for responding to the original request?
> 1. respond as soon as at least 1
s/over 9p/higher than 9p/
On 5/29/22, fge...@gmail.com wrote:
> As a first approximation - assuming identical namespaces - this
> multiplier 9p server (9plier? multi9plier?) could be trivially(?)
> useful, used with recover(4) on all connections and with an
> independent synchronization
Thanks yes, this would be one use-case.
On 5/28/22, ron minnich wrote:
> not for 9p, but in 1993, when Gene Kim interned with me at the
> Supercomputing Research Center, we did this:
>
As a first approximation - assuming identical namespaces - this
multiplier 9p server (9plier? multi9plier?) could be trivially(?)
useful, used with recover(4) on all connections and with an
independent synchronization mechanism, in case the states fall out
of sync.
Furthermore I would not rule
not for 9p, but in 1993, when Gene Kim interned with me at the
Supercomputing Research Center, we did this:
https://www.semanticscholar.org/paper/Bigfoot-NFS-%3A-A-Parallel-File-Striping-NFS-Server-(-Kim/19cb61337bab7b4de856fcbf29b55965647be091,
similar in spirit to your idea.
The core idea was
Interesting idea!
This assumes the downstream servers have identical namespace hierarchies, right?
State management could be messy or impossible unless some sort of
transaction structure is imposed on the {walk, [open/create,
read/write]|[stat/wstat], clunk} sequences, where the server that
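For readers who have not internalized it, the sequence Bakul refers to is just the ordinary lifetime of a fid; reading one file looks roughly like this (fids, tags, qids and counts are made-up example values):

	Twalk  tag 1 fid 0 newfid 1 nwname 2 "lib" "profile"
	Rwalk  tag 1 nwqid 2 ...
	Topen  tag 2 fid 1 mode OREAD
	Ropen  tag 2 qid ... iounit 8192
	Tread  tag 3 fid 1 offset 0 count 8192
	Rread  tag 3 count 517 data ...
	Tclunk tag 4 fid 1
	Rclunk tag 4

Everything between the walk and the clunk is bound to that fid on that one server, which is why splitting or replaying the sequence across servers needs the kind of transaction structure Bakul describes.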
Has anybody considered (or maybe even implemented) a 9p server to
multiply incoming 9p messages to 2 or more 9p servers?
Maybe with 2 different strategies for responding to the original request?
1. respond as soon as at least 1 response from one of the 9p servers
is received,
2. respond only after
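Both strategies are easy to sketch; the hard part is everything around them, as the rest of this thread shows. Below is a minimal Go sketch of just the fan-out, where Msg and Server are placeholder types invented for the example; a real multiplexer would sit on an actual 9P library and would still have to deal with fids, tags and divergent replies.

package main

import "fmt"

// Msg stands in for a decoded 9P message; purely illustrative.
type Msg struct {
	Tag  uint16
	Body string
}

// Server stands in for one downstream 9P connection:
// send a T-message, get its R-message back.
type Server interface {
	RPC(t Msg) Msg
}

// firstReply forwards t to every server and answers with whichever
// R-message arrives first (strategy 1 above).
func firstReply(servers []Server, t Msg) Msg {
	replies := make(chan Msg, len(servers))
	for _, s := range servers {
		go func(s Server) { replies <- s.RPC(t) }(s)
	}
	return <-replies
}

// allReplies forwards t to every server and waits for all of them
// (strategy 2); the caller must then decide what to do when the
// R-messages disagree, which is where the state problems start.
func allReplies(servers []Server, t Msg) []Msg {
	replies := make(chan Msg, len(servers))
	for _, s := range servers {
		go func(s Server) { replies <- s.RPC(t) }(s)
	}
	rs := make([]Msg, 0, len(servers))
	for range servers {
		rs = append(rs, <-replies)
	}
	return rs
}

// fakeServer answers every request immediately; it exists only so
// the sketch compiles and runs.
type fakeServer struct{ name string }

func (f fakeServer) RPC(t Msg) Msg {
	return Msg{Tag: t.Tag, Body: "R from " + f.name}
}

func main() {
	servers := []Server{fakeServer{"fs0"}, fakeServer{"fs1"}}
	t := Msg{Tag: 1, Body: "Tversion"}
	fmt.Println("first:", firstReply(servers, t))
	fmt.Println("all:  ", allReplies(servers, t))
}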