Re: [Mono-dev] Compiling mono with --with-gc=sgen Ubuntu 8.04

2009-11-10 Thread Alden Torres
Hello Mark,

With your indications, it's working now. Thanks a lot.

Alden

--- On Tue, 11/10/09, Mark Probst  wrote:

From: Mark Probst 
Subject: Re: [Mono-dev] Compiling mono with --with-gc=sgen Ubuntu 8.04
To: "Alden Torres" 
Cc: "Mono-devel-list" 
Date: Tuesday, November 10, 2009, 8:29 PM

On Tue, Nov 10, 2009 at 4:57 PM, Alden Torres  wrote:
> I'm trying to compile mono from the latest revision in the trunk with sgen 
> GC. My OS is Ubuntu 8.04 in a 1and1 VPS. I'm getting the following error:

Could you please try configuring mono with the additional option
"--with-minimal=aot" and comment out or delete the line "ENABLE_AOT=1"
in mcs/build/config.make?  I forgot to mention that for whatever
reason we have some issues with AOT and SGen.

Mark
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Compiling mono with --with-gc=sgen Ubuntu 8.04

2009-11-10 Thread Mark Probst
On Tue, Nov 10, 2009 at 4:57 PM, Alden Torres  wrote:
> I'm trying to compile mono from the latest revision in the trunk with sgen 
> GC. My OS is Ubuntu 8.04 in a 1and1 VPS. I'm getting the following error:

Could you please try configuring mono with the additional option
"--with-minimal=aot" and comment out or delete the line "ENABLE_AOT=1"
in mcs/build/config.make?  I forgot to mention that for whatever
reason we have some issues with AOT and SGen.

Mark
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] [PATCH] null keys in Lookup<>

2009-11-10 Thread ermau
So, note to self, don't write BCL code when distracted and you should be
doing other things. Added a few more tests and fixed Contains and
GetEnumerator.

On Tue, Nov 10, 2009 at 15:43, ermau  wrote:

> While reviewing the removal of the else, I realized there was a bug to begin with.
> Made your other requested changes, added a test for the bug, and fixed it.
>
> On Tue, Nov 10, 2009 at 13:36, Jb Evain  wrote:
>
>> Hey,
>>
>> On 11/10/09, ermau  wrote:
>> > .NET supports null keys for groupings in
>> > Enumerable.ToLookup()/Lookup<>, here's a patch for review
>> > to improve Mono compatibility.
>>
>> +   Assert.IsTrue (l[null].Contains ("2"));
>>
>> Please add a space before indexing, like you do before calling a method.
>>
>> +   }
>> +   else if (!dictionary.TryGetValue (key, out
>> list)) {
>>
>> Put the else on the same line as the }
>>
>> +   if (key == null && nullGrouping != null)
>> +   return nullGrouping;
>> +   else
>> +   {
>> +   IGrouping group;
>> +   if (groups.TryGetValue (key, out
>> group))
>> +   return group;
>> +   }
>>
>> Remove the else and move the code to the same level as the if.
>>
>> When it's done please go ahead and commit to trunk and mono-2-6.
>>
>> Thanks!
>>
>> --
>> Jb Evain  
>>
>
>


System.Core.diff
Description: Binary data
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


[Mono-dev] Interactive C# shell for server monitoring

2009-11-10 Thread pablosantosl...@terra.es
Hi,

Does anyone have experience using the Interactive C# shell
(http://www.mono-project.com/CsharpRepl) embedded in a server process
for monitoring purposes (accessible through a socket, maybe)?

Thanks,

pablo
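
A minimal sketch of one way to wire this up, assuming the static
Mono.CSharp.Evaluator API that ships with the csharp REPL (Evaluator.Evaluate
taking a string); the port number, class name, and single-connection loop are
illustrative assumptions, not a recommendation:

// Hypothetical sketch: expose the embedded evaluator on a local TCP port.
// Assumes Mono.CSharp.dll and its static Evaluator.Evaluate (string) entry
// point; adjust to the actual API of the Mono version you build against.
// Bind to loopback only and add authentication before using anything like this.
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using Mono.CSharp;

class MonitorShell
{
    static void Main ()
    {
        var listener = new TcpListener (IPAddress.Loopback, 9000);
        listener.Start ();
        while (true) {
            using (TcpClient client = listener.AcceptTcpClient ())
            using (NetworkStream stream = client.GetStream ())
            using (var reader = new StreamReader (stream))
            using (var writer = new StreamWriter (stream) { AutoFlush = true }) {
                string line;
                while ((line = reader.ReadLine ()) != null) {
                    try {
                        // One expression or statement per line; multi-line input
                        // would need the REPL's partial-input handling.
                        object result = Evaluator.Evaluate (line);
                        writer.WriteLine (result);
                    } catch (Exception e) {
                        writer.WriteLine (e.Message);
                    }
                }
            }
        }
    }
}
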
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Soft Debugger Patch for Windows

2009-11-10 Thread Zoltan Varga
Hi,

  Looks ok.

Zoltan

On Tue, Nov 10, 2009 at 3:49 PM, Jonathan Chambers wrote:

> Hello,
>  Attached is a patch for supporting the soft debugger on Windows. The
> biggest changes IMO are not to the debugger, but to the mono-*
> synchronization utilities. The semaphores for example, will be used is other
> places in the runtime since MONO_HAS_SEMAPHORES is now defined. I'd like
> some input in this area. Also, all the utilities are currently done as
> macros. It seemed they might easier be done as functions, especially the not
> quite working conditional variables since there is no direct equivalent in
> Win32 (until Vista).
>
> FYI, these changes let me run 40/45 on the soft debugger unit tests, and I
> could debug using MD on Windows as well.
>
> Thanks,
> Jonathan
>
> ___
> Mono-devel-list mailing list
> Mono-devel-list@lists.ximian.com
> http://lists.ximian.com/mailman/listinfo/mono-devel-list
>
>
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] [PATCH] null keys in Lookup<>

2009-11-10 Thread ermau
While reviewing the removal of the else, I realized there was a bug to begin with.
Made your other requested changes, added a test for the bug, and fixed it.

On Tue, Nov 10, 2009 at 13:36, Jb Evain  wrote:

> Hey,
>
> On 11/10/09, ermau  wrote:
> > .NET supports null keys for groupings in
> > Enumerable.ToLookup()/Lookup<>, here's a patch for review
> > to improve Mono compatibility.
>
> +   Assert.IsTrue (l[null].Contains ("2"));
>
> Please add a space before indexing, like you do before calling a method.
>
> +   }
> +   else if (!dictionary.TryGetValue (key, out
> list)) {
>
> Put the else on the same line as the }
>
> +   if (key == null && nullGrouping != null)
> +   return nullGrouping;
> +   else
> +   {
> +   IGrouping group;
> +   if (groups.TryGetValue (key, out
> group))
> +   return group;
> +   }
>
> Remove the else and move the code to the same level as the if.
>
> When it's done please go ahead and commit to trunk and mono-2-6.
>
> Thanks!
>
> --
> Jb Evain  
>


System.Core.diff
Description: Binary data
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Steve Bjorg
An updated version of ChunkedMemoryStream has been checked in.  It's  
not production ready yet due to insufficient test coverage.  But it  
should provide some insights into how the code handles both the  
variable and fixed buffer cases, and consolidates the chunks when  
needed.  Also, the code is now available under both Apache License 2.0  
and X11.

Feedback welcome.

http://viewvc.mindtouch.com/public/dream/trunk/src/mindtouch.dream/IO/
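
For readers who just want the shape of the idea, here is a minimal sketch of
the consolidate-on-GetBuffer approach discussed in this thread. It is not the
checked-in MindTouch code: the names and the 16 KB chunk size are invented,
and it ignores the "caller mutates the returned buffer" contract debated
elsewhere in the thread.

// Minimal, illustrative chunked buffer: writes append into fixed-size chunks,
// and the chunks are consolidated into one array only when a flat view is
// requested. The consolidated array is cached so repeated calls don't recopy.
using System;
using System.Collections.Generic;

class ChunkedBuffer
{
    const int ChunkSize = 16 * 1024;
    readonly List<byte[]> chunks = new List<byte[]> ();
    int usedInLastChunk;     // bytes written into the last chunk
    byte[] consolidated;     // cached result of GetBuffer ()

    public long Length {
        get {
            return chunks.Count == 0
                ? 0
                : (long) (chunks.Count - 1) * ChunkSize + usedInLastChunk;
        }
    }

    public void Write (byte[] data, int offset, int count)
    {
        consolidated = null;                     // the cached flat view is stale
        while (count > 0) {
            if (chunks.Count == 0 || usedInLastChunk == ChunkSize) {
                chunks.Add (new byte[ChunkSize]);
                usedInLastChunk = 0;
            }
            int n = Math.Min (count, ChunkSize - usedInLastChunk);
            Buffer.BlockCopy (data, offset, chunks[chunks.Count - 1], usedInLastChunk, n);
            usedInLastChunk += n;
            offset += n;
            count -= n;
        }
    }

    public byte[] GetBuffer ()
    {
        if (consolidated == null) {
            consolidated = new byte[Length];
            int written = 0;
            for (int i = 0; i < chunks.Count; i++) {
                int n = (i == chunks.Count - 1) ? usedInLastChunk : ChunkSize;
                Buffer.BlockCopy (chunks[i], 0, consolidated, written, n);
                written += n;
            }
        }
        return consolidated;
    }
}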

- Steve

--
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch

On Nov 10, 2009, at 10:15 AM, Steve Bjorg wrote:

> I have an updated ChunkedMemoryStream implementation that mimics  
> MemoryStream behavior up to the default chunk size, at which point  
> it starts to use chunks.  GetBuffer() consolidates all chunks into  
> the main buffer, which means that any direct changes would be  
> reflected.
>
> Where can I find the mono unit tests for MemoryStream?  That will  
> save me some time from having to write my own.  At that point, I  
> would encourage everyone to look at it and try it out with real  
> world data.
>
> - Steve
>
> --
> Steve G. Bjorg
> http://mindtouch.com
> http://twitter.com/bjorg
> irc.freenode.net #mindtouch
>
> On Nov 10, 2009, at 9:56 AM, Avery Pennarun wrote:
>
>> On Tue, Nov 10, 2009 at 12:42 PM, Robert Jordan   
>> wrote:
>>> An algorithm based on a MemoryStream implemented with chunks will
>>> perform better on average. I fully agree with that.
>>>
>>> The problem is that one method (GetBuffer) *will be* unexpectedly
>>> slower,
>>
>> I just don't believe this is true.  I think we're moving the slowness
>> from "add to buffer" into GetBuffer().  However, it is not
>> *additional* slowness.  It is simply displaced slowness, and it's
>> potentially *less* slowness overall.
>>
>> I'm not sure I can imagine a program that would be negatively  
>> affected
>> by this.  Doesn't the gc cause random slowness sometimes anyway?
>>
>>> and another one, much harder to fix: it is allowed to change
>>> the buffer even before the stream has been closed. This means that
>>> after every GetBuffer call, the implementation must behave  
>>> differently
>>> because it must somehow deal with a changed underlying buffer.
>>
>> I don't think this is a problem either.  Since you're now using the
>> returned buffer as your one-and-only chunk, you can use it just as  
>> you
>> always would.  If someone then pushes so much new data into the  
>> stream
>> that you would exceed the buffer size, you would have to do what you
>> would do in the non-chunked implementation; either a) reject it, or  
>> b)
>> not guarantee that it ends up in the array from the earlier
>> GetBuffer().  I'm not sure which is the correct behaviour, but both
>> are easily implemented in the chunked implementation too,  
>> particularly
>> since it has to support user-supplied fixed-length buffers anyhow.
>>
>> Perhaps I'm missing something...
>>
>> Avery
>> ___
>> Mono-devel-list mailing list
>> Mono-devel-list@lists.ximian.com
>> http://lists.ximian.com/mailman/listinfo/mono-devel-list
>

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Miguel de Icaza
Hello,

> I agree (especially thinking about the chunk-pool I mentioned) having
> separate classes can be better, so that everyone can choose.

Well, this is a case where we can actively improve the class libraries
to do the right thing behind the scenes.  

I do not see the problem with having the *default* just act *better*.
Yes, code needs to be written, but that beats having dozens of people
who are using MemoryStreams get a bad experience and not be able to
figure out that this is the source of their memory growth.

miguel

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] [PATCH] null keys in Lookup<>

2009-11-10 Thread Jb Evain
Hey,

On 11/10/09, ermau  wrote:
> .NET supports null keys for groupings in
> Enumerable.ToLookup()/Lookup<>, here's a patch for review
> to improve Mono compatibility.

+   Assert.IsTrue (l[null].Contains ("2"));

Please add a space before indexing, like you do before calling a method.

+   }
+   else if (!dictionary.TryGetValue (key, out 
list)) {

Put the else on the same line as the }

+   if (key == null && nullGrouping != null)
+   return nullGrouping;
+   else
+   {
+   IGrouping group;
+   if (groups.TryGetValue (key, out group))
+   return group;
+   }

Remove the else and move the code to the same level as the if.

When it's done please go ahead and commit to trunk and mono-2-6.

Thanks!

-- 
Jb Evain  
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Steve Bjorg
I have an updated ChunkedMemoryStream implementation that mimics  
MemoryStream behavior up to the default chunk size, at which point it  
starts to use chunks.  GetBuffer() consolidates all chunks into the  
main buffer, which means that any direct changes would be reflected.

Where can I find the mono unit tests for MemoryStream?  That will save  
me some time from having to write my own.  At that point, I would  
encourage everyone to look at it and try it out with real world data.

- Steve

--
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch

On Nov 10, 2009, at 9:56 AM, Avery Pennarun wrote:

> On Tue, Nov 10, 2009 at 12:42 PM, Robert Jordan   
> wrote:
>> An algorithm based on a MemoryStream implemented with chunks will
>> perform better on average. I fully agree with that.
>>
>> The problem is that one method (GetBuffer) *will be* unexpectedly
>> slower,
>
> I just don't believe this is true.  I think we're moving the slowness
> from "add to buffer" into GetBuffer().  However, it is not
> *additional* slowness.  It is simply displaced slowness, and it's
> potentially *less* slowness overall.
>
> I'm not sure I can imagine a program that would be negatively affected
> by this.  Doesn't the gc cause random slowness sometimes anyway?
>
>> and another one, much harder to fix: it is allowed to change
>> the buffer even before the stream has been closed. This means that
>> after every GetBuffer call, the implementation must behave  
>> differently
>> because it must somehow deal with a changed underlying buffer.
>
> I don't think this is a problem either.  Since you're now using the
> returned buffer as your one-and-only chunk, you can use it just as you
> always would.  If someone then pushes so much new data into the stream
> that you would exceed the buffer size, you would have to do what you
> would do in the non-chunked implementation; either a) reject it, or b)
> not guarantee that it ends up in the array from the earlier
> GetBuffer().  I'm not sure which is the correct behaviour, but both
> are easily implemented in the chunked implementation too, particularly
> since it has to support user-supplied fixed-length buffers anyhow.
>
> Perhaps I'm missing something...
>
> Avery
> ___
> Mono-devel-list mailing list
> Mono-devel-list@lists.ximian.com
> http://lists.ximian.com/mailman/listinfo/mono-devel-list

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Avery Pennarun
On Tue, Nov 10, 2009 at 12:42 PM, Robert Jordan  wrote:
> An algorithm based on a MemoryStream implemented with chunks will
> perform better on average. I fully agree with that.
>
> The problem is that one method (GetBuffer) *will be* unexpectedly
> slower,

I just don't believe this is true.  I think we're moving the slowness
from "add to buffer" into GetBuffer().  However, it is not
*additional* slowness.  It is simply displaced slowness, and it's
potentially *less* slowness overall.

I'm not sure I can imagine a program that would be negatively affected
by this.  Doesn't the gc cause random slowness sometimes anyway?

> and another one, much harder to fix: it is allowed to change
> the buffer even before the stream has been closed. This means that
> after every GetBuffer call, the implementation must behave differently
> because it must somehow deal with a changed underlying buffer.

I don't think this is a problem either.  Since you're now using the
returned buffer as your one-and-only chunk, you can use it just as you
always would.  If someone then pushes so much new data into the stream
that you would exceed the buffer size, you would have to do what you
would do in the non-chunked implementation; either a) reject it, or b)
not guarantee that it ends up in the array from the earlier
GetBuffer().  I'm not sure which is the correct behaviour, but both
are easily implemented in the chunked implementation too, particularly
since it has to support user-supplied fixed-length buffers anyhow.

Perhaps I'm missing something...

Avery
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Robert Jordan
Avery Pennarun wrote:
> On Tue, Nov 10, 2009 at 11:24 AM, Robert Jordan 
> wrote:
>> Right, but MemoryStream is pretty prevalent and one of its frequent
>>  usage pattern is:
>> 
>> var ms = new MemoryStream () or MemoryStream(somepredictedsize);
>> // fill ms with some stream APIs
>> ms.Close ();
>> var bytes = ms.GetBuffer ();
>> // pass `bytes' to byte[] APIs (e.g. unmanaged world)
> 
> But my argument is that your line
> 
> // fill ms with some stream APIs
> 
> might or might not result in the array being reallocated even in the 
> *naive* implementation.  Each reallocation will cause a copy of the 
> entire buffer every time.
> 
> Conversely, a chunked implementation would reallocate-and-copy the 
> data at most once, when you call GetBuffer().  So it is strictly 
> equal-or-better than the naive implementation, in terms of 
> reallocations and copies.

An algorithm based on a MemoryStream implemented with chunks will
perform better on average. I fully agree with that.

The problem is that one method (GetBuffer) *will be* unexpectedly
slower, and another one, much harder to fix: it is allowed to change
the buffer even before the stream has been closed. This means that
after every GetBuffer call, the implementation must behave differently
because it must somehow deal with a changed underlying buffer.

Robert

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Avery Pennarun
On Tue, Nov 10, 2009 at 11:24 AM, Robert Jordan  wrote:
> Right, but MemoryStream is pretty prevalent and one of its frequent
> usage pattern is:
>
> var ms = new MemoryStream () or MemoryStream(somepredictedsize);
> // fill ms with some stream APIs
> ms.Close ();
> var bytes = ms.GetBuffer ();
> // pass `bytes' to byte[] APIs (e.g. unmanaged world)

But my argument is that your line

  // fill ms with some stream APIs

might or might not result in the array being reallocated even in the
*naive* implementation.  Each reallocation will cause a copy of the
entire buffer every time.

Conversely, a chunked implementation would reallocate-and-copy the
data at most once, when you call GetBuffer().  So it is strictly
equal-or-better than the naive implementation, in terms of
reallocations and copies.

The only exception is if someone provides a huge somepredictedsize; if
you decide that "gosh, that's way too big for a single chunk!" and
allocate less than the predicted size, and then they use up the whole
predicted size so you allocate more chunks, and then they call
GetBuffer, you will be slower because you do one copy instead of zero.
However, this is avoidable by simply honouring somepredictedsize and
allocating the initial chunk to be the requested size.  If an app does
that and gets tons of fragmentation, well, they can stop requesting
such huge buffers.

>> For example, the first call to GetBuffer() could "coagulate" the
>> chunks into a single big array (perhaps with extra space at the end),
>> and then *keep that array*.  Subsequent calls to GetBuffer() could
>> avoid the copy.
>
> GetBuffer () is usually called only once per instance.

The argument in this thread is that "usually" is not good enough.  If
some programs call GetBuffer() more than once and the chunked stream
is inefficient in that case, it would be unacceptable.  I'm not
endorsing the behaviour of calling GetBuffer over and over, but simply
saying that it's easy to implement a chunked stream where this problem
is avoided (and I've done so in the past; in fact it's the most
obvious way to implement it).

Have fun,

Avery
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Robert Jordan
Hey,

Avery Pennarun wrote:
> On Tue, Nov 10, 2009 at 8:48 AM, Robert Jordan  wrote:
>> MemoryStream.GetBuffer's docs indirectly suggest that no copy
>> will be performed:
>>
>> "Note that the buffer contains allocated bytes which might be unused.
>> For example, if the string "test" is written into the MemoryStream
>> object, the length of the buffer returned from GetBuffer is 256, not 4,
>> with 252 bytes unused. To obtain only the data in the buffer, use the
>> ToArray method; however, ToArray creates a copy of the data in memory."
>>
>> So MemoryStream.GetBuffer must remain an O(1) operation in any case,
>> defeating any kind of optimization a chunked memory stream
>> implementation may introduce.
> 
> Although this might be strictly true if you want to react exactly as
> Microsoft's documentation claims (I thought 100% compatibility with
> .Net was not the primary goal of mono?), there may be other options
> that result in similar performance

Right, but MemoryStream is pretty prevalent and one of its frequent
usage pattern is:

var ms = new MemoryStream () or MemoryStream(somepredictedsize);
// fill ms with some stream APIs
ms.Close ();
var bytes = ms.GetBuffer ();
// pass `bytes' to byte[] APIs (e.g. unmanaged world)

> For example, the first call to GetBuffer() could "coagulate" the
> chunks into a single big array (perhaps with extra space at the end),
> and then *keep that array*.  Subsequent calls to GetBuffer() could
> avoid the copy.

GetBuffer () is usually called only once per instance.

Robert

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread James P Michels III
I use GetBuffer() quite often, and I tend to think of MemoryStream as a
more OO mapping to a vanilla byte[]. ChunkedStream is something else.

I do not challenge the potential benefits of a ChunkedStream
implementation. However, this more advanced implementation should not be
forced onto the default case unless guarantees can be made for
equivalent performance and behavior in ALL cases. If these guarantees
cannot be made, it should be an opt-in improvement.

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


[Mono-dev] Compiling mono with --with-gc=sgen Ubuntu 8.04

2009-11-10 Thread Alden Torres
Hello,

I'm trying to compile mono from the latest revision in the trunk with sgen 
GC. My OS is Ubuntu 8.04 in a 1and1 VPS. I'm getting the following error:

make[8]: Entering directory `/root/mcs/class/System'
** Warning: System.dll built without parts that depend on: Mono.Security.dll 
System.Configuration.dll
MCS [net_2_0] System.dll
Stacktrace:

  at (wrapper managed-to-native) object.__icall_wrapper_mono_value_copy 
(intptr,intptr,intptr) <0x00051>
  at (wrapper managed-to-native) object.__icall_wrapper_mono_value_copy 
(intptr,intptr,intptr) <0x00051>
  at System.Reflection.MonoMethodInfo.GetMethodInfo (intptr) <0x00074>
  at System.Reflection.MonoMethodInfo.GetAttributes (intptr) <0x0002a>
  at System.Reflection.MonoCMethod.get_Attributes () <0x00014>
  at System.Reflection.MethodBase.get_IsPublic () <0x00016>
  at System.Activator.CreateInstance (System.Type,bool) <0x000a6>
  at System.Activator.CreateInstance (System.Type) <0xe>
  at System.Collections.Generic.EqualityComparer`1..cctor () <0x0009d>
  at (wrapper runtime-invoke) object.runtime_invoke_void 
(object,intptr,intptr,intptr) <0x00049>
  at System.Collections.Generic.Dictionary`2.Init 
(int,System.Collections.Generic.IEqualityComparer`1) <0x>
  at System.Collections.Generic.Dictionary`2.Init 
(int,System.Collections.Generic.IEqualityComparer`1) <0x00053>
  at System.Collections.Generic.Dictionary`2..ctor (int) <0x00018>
  at Mono.CSharp.ConsoleReportPrinter..cctor () <0x00083>
  at (wrapper runtime-invoke) object.runtime_invoke_void 
(object,intptr,intptr,intptr) <0x00049>
  at Mono.CSharp.Driver.Main (string[]) <0x>
  at Mono.CSharp.Driver.Main (string[]) <0x0005c>
  at (wrapper runtime-invoke) .runtime_invoke_int_object 
(object,intptr,intptr,intptr) <0x00054>

Native stacktrace:

    /root/mono/mono/mini/mono [0x489b9b]
    /root/mono/mono/mini/mono [0x4d175d]
    /lib/libpthread.so.0 [0x2a960557d0]
    /root/mono/mono/mini/mono(mono_gc_wbarrier_value_copy+0x23) [0x56a573]
    [0x400142d1]

Debug info from gdb:


=
Got a SIGSEGV while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries 
used by your application.
=

/bin/sh: line 1:  7662 Aborted 
MONO_PATH="./../../class/lib/net_2_0:$MONO_PATH" 
/root/mono/runtime/mono-wrapper ./../../mcs/gmcs.exe /codepage:65001 -optimize 
-d:NET_1_1 -d:NET_2_0 -debug /noconfig -nowarn:618 -d:CONFIGURATION_2_0 -unsafe 
-resource:resources/Asterisk.wav -resource:resources/Beep.wav 
-resource:resources/Exclamation.wav -resource:resources/Hand.wav 
-resource:resources/Question.wav -r:PrebuiltSystem=../lib/net_2_0/System.dll 
-d:XML_DEP -r:System.Xml.dll -target:library 
-out:../../class/lib/net_2_0/tmp/System.dll @System.dll.sources

Thanks
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


[Mono-dev] [PATCH] null keys in Lookup<>

2009-11-10 Thread ermau
.NET supports null keys for groupings in Enumerable.ToLookup()/Lookup<>;
here's a patch for review to improve Mono compatibility.
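
For context, a small self-contained illustration of the behaviour the patch is
after (this is what MS.NET does; it is an example, not part of the patch):

// ToLookup () groups elements under a null key, and both the indexer and
// Contains () accept null.
using System;
using System.Linq;

class NullKeyLookupDemo
{
    static void Main ()
    {
        string[] words = { "1", null, "2", null };
        var lookup = words.ToLookup (w => w);        // null is a valid grouping key
        Console.WriteLine (lookup[null].Count ());   // 2
        Console.WriteLine (lookup.Contains (null));  // True
    }
}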


System.Core NullKeyLookup.diff
Description: Binary data
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Thad Thompson
Hey Y'all,
I'm all about performance, but there's more to Robert Jordan's point
than just the access time. As it's always been described and
implemented, a MemoryStream is an abstraction over an array of bytes.
GetBuffer is an escape interface which allows me to drop and resume that
abstraction whenever it's convenient. It would be unfortunate if this
pattern were broken everywhere it is currently used:


-
static void Main(string[] args)
{
    var m = new MemoryStream();
    var b = Encoding.UTF8.GetBytes("HELO World");
    m.Write(b, 0, b.Length);

    var vBuf = m.GetBuffer();

    // Swap
    var t = vBuf[0];
    vBuf[0] = vBuf[1];
    vBuf[1] = t;

    // Prints "EHLO World"
    System.Console.WriteLine(Encoding.UTF8.GetString(m.ToArray()));
}

-

If we're really worried about memory buffers and remoting performance,
I'd humbly suggest that perhaps the
System.ServiceModel.Channels.BufferManager class could use some lovin.
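
For reference, a usage sketch of the BufferManager API mentioned above; the
pool and buffer sizes are arbitrary example values, and it needs a reference
to System.ServiceModel.dll:

// Take a pooled buffer (possibly larger than requested) and return it when done.
using System.ServiceModel.Channels;

class BufferManagerDemo
{
    static void Main ()
    {
        BufferManager manager =
            BufferManager.CreateBufferManager (1024 * 1024, 64 * 1024);

        byte[] buffer = manager.TakeBuffer (16 * 1024);   // at least 16 KB
        try {
            // ... fill and use the buffer ...
        } finally {
            manager.ReturnBuffer (buffer);                // hand it back to the pool
        }
    }
}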

Regards,
-Thad

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Gladish, Jacob
It seems that the original motivation was to deal with fragmentation. In my 
opinion, work should be concentrated on sgen and not on cherry-picking framework 
classes. If efficient memory buffering is required in the framework for 
remoting, etc., then why not have an internal implementation of the chunked 
stream that does exactly what's needed to enhance performance, instead of trying 
to force MemoryStream to be all things to all people?

After looking at the MSDN docs for MemoryStream, my feeling is that the 
intention of MemoryStream is to provide an adapter for byte arrays in places 
where the framework APIs require a Stream, and not to provide an efficient way 
to deal with very large buffers and fragmentation.

I also think introducing things like zeroing of buffers and pooling is adding 
functionality where it doesn't belong.

I know there isn't a hard requirement for precise compatibility with MS.NET, but 
MemoryStream is a "core" class that people moving from one environment to another 
are going to expect to perform and behave in a very similar manner. Again, 
referring to the MSDN docs, it's very clear that MemoryStream is very naïve, and 
that it's just an API on top of an array.

Quote from the GetBuffer() docs: "To create a MemoryStream instance with a 
publicly visible buffer, use MemoryStream, MemoryStream(Byte[], Int32, Int32, 
Boolean, Boolean), or MemoryStream(Int32). If the current stream is resizable, 
two calls to this method do not return the same array if the underlying byte 
array is resized between calls. For additional information, see Capacity."


Just my two cents.

-jake


> -Original Message-
> From: mono-devel-list-boun...@lists.ximian.com [mailto:mono-devel-list-
> boun...@lists.ximian.com] On Behalf Of Avery Pennarun
> Sent: Tuesday, November 10, 2009 10:05 AM
> To: Robert Jordan
> Cc: mono-devel-list@lists.ximian.com
> Subject: Re: [Mono-dev] Should we replace MemoryStream?
>
> On Tue, Nov 10, 2009 at 8:48 AM, Robert Jordan  wrote:
> > MemoryStream.GetBuffer's docs indirectly suggest that no copy
> > will be performed:
> >
> > "Note that the buffer contains allocated bytes which might be unused.
> > For example, if the string "test" is written into the MemoryStream
> > object, the length of the buffer returned from GetBuffer is 256, not 4,
> > with 252 bytes unused. To obtain only the data in the buffer, use the
> > ToArray method; however, ToArray creates a copy of the data in memory."
> >
> > So MemoryStream.GetBuffer must remain an O(1) operation in any case,
> > defeating any kind of optimization a chunked memory stream
> > implementation may introduce.
>
> Although this might be strictly true if you want to react exactly as
> Microsoft's documentation claims (I thought 100% compatibility with
> .Net was not the primary goal of mono?), there may be other options
> that result in similar performance
>
> For example, the first call to GetBuffer() could "coagulate" the
> chunks into a single big array (perhaps with extra space at the end),
> and then *keep that array*.  Subsequent calls to GetBuffer() could
> avoid the copy.
>
> In the event that your initial chunk wasn't big enough when pushing
> data into the buffer in the first place, a non-chunked implementation
> would have had to make an extra copy *anyway* at the time of the push.
>  So in the chunked implementation, the extra copy on the first
> GetBuffer() is actually not an *extra* copy at all vs. the naive
> single-buffer implementation.
>
> (I've written an efficient implementation of chunked buffering in C++,
> and these were the conclusions I drew after a lot of benchmarking of
> my library.  YMMV in C#, etc.)
>
> Have fun,
>
> Avery
> ___
> Mono-devel-list mailing list
> Mono-devel-list@lists.ximian.com
> http://lists.ximian.com/mailman/listinfo/mono-devel-list

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Avery Pennarun
On Tue, Nov 10, 2009 at 8:48 AM, Robert Jordan  wrote:
> MemoryStream.GetBuffer's docs indirectly suggest that no copy
> will be performed:
>
> "Note that the buffer contains allocated bytes which might be unused.
> For example, if the string "test" is written into the MemoryStream
> object, the length of the buffer returned from GetBuffer is 256, not 4,
> with 252 bytes unused. To obtain only the data in the buffer, use the
> ToArray method; however, ToArray creates a copy of the data in memory."
>
> So MemoryStream.GetBuffer must remain an O(1) operation in any case,
> defeating any kind of optimization a chunked memory stream
> implementation may introduce.

Although this might be strictly true if you want to behave exactly as
Microsoft's documentation claims (I thought 100% compatibility with
.NET was not the primary goal of Mono?), there may be other options
that result in similar performance.

For example, the first call to GetBuffer() could "coagulate" the
chunks into a single big array (perhaps with extra space at the end),
and then *keep that array*.  Subsequent calls to GetBuffer() could
avoid the copy.

In the event that your initial chunk wasn't big enough when pushing
data into the buffer in the first place, a non-chunked implementation
would have had to make an extra copy *anyway* at the time of the push.
 So in the chunked implementation, the extra copy on the first
GetBuffer() is actually not an *extra* copy at all vs. the naive
single-buffer implementation.
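
A tiny stand-alone illustration of that point, assuming the default growth
behaviour of a resizable MemoryStream (the exact capacity steps are an
implementation detail):

// Each time Capacity changes during these writes, the growable MemoryStream
// has allocated a new backing array and copied whatever was already written -
// the same copy work a chunked stream would defer to GetBuffer ().
using System;
using System.IO;

class GrowthDemo
{
    static void Main ()
    {
        var ms = new MemoryStream ();
        var block = new byte[1024];
        int lastCapacity = -1;

        for (int i = 0; i < 1024; i++) {
            ms.Write (block, 0, block.Length);
            if (ms.Capacity != lastCapacity) {
                Console.WriteLine ("capacity grew to {0} bytes after {1} KB written",
                    ms.Capacity, i + 1);
                lastCapacity = ms.Capacity;
            }
        }
    }
}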

(I've written an efficient implementation of chunked buffering in C++,
and these were the conclusions I drew after a lot of benchmarking of
my library.  YMMV in C#, etc.)

Have fun,

Avery
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


[Mono-dev] Soft Debugger Patch for Windows

2009-11-10 Thread Jonathan Chambers
Hello,
Attached is a patch for supporting the soft debugger on Windows. The
biggest changes, IMO, are not to the debugger but to the mono-*
synchronization utilities. The semaphores, for example, will be used in other
places in the runtime since MONO_HAS_SEMAPHORES is now defined. I'd like
some input in this area. Also, all the utilities are currently done as
macros. It seemed they might more easily be done as functions, especially the
not-quite-working condition variables, since there is no direct equivalent in
Win32 (until Vista).

FYI, these changes let me pass 40/45 of the soft debugger unit tests, and I
could debug using MD on Windows as well.

Thanks,
Jonathan


soft_debug_commit.diff
Description: Binary data
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Robert Jordan
Leszek Ciesielski wrote:
> Choice is not always good, and I think this is one of the cases when 
> the default (i.e. the MemoryStream implementation) should make the 
> choices instead of presenting them to the user. Though I agree that the
> case of constructing a MemoryStream from an existing byte[] would 
> require a special path in the code, as this is a stream that most 
> likely won't be resized and in this case users are expecting the 
> constructor to have a complexity of O(1) and GetBuffer to also be 
> O(1). The same expectation is probably also true with a fixed size 
> MemoryStream.

MemoryStream.GetBuffer's docs indirectly suggest that no copy
will be performed:

"Note that the buffer contains allocated bytes which might be unused.
For example, if the string "test" is written into the MemoryStream
object, the length of the buffer returned from GetBuffer is 256, not 4,
with 252 bytes unused. To obtain only the data in the buffer, use the
ToArray method; however, ToArray creates a copy of the data in memory."

So MemoryStream.GetBuffer must remain an O(1) operation in any case,
defeating any kind of optimization a chunked memory stream
implementation may introduce.

Robert

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread pablosantosl...@terra.es
Hi,

> For variable, a chunked implementation will work better once the stream
> exceeds the maximum size for the first chunk.  Additional considerations
> for this case are: (a) should the first chunk have a smaller size
> initially to be more efficient for short streams, (b) should chunks be
> reusable and thus bypass the alloc/free cycle, (c) should a call to
> GetBuffer() automatically reset the first chunk with the newly created
> byte array?

I think (b) can be great, but obviously it can always be a
PooledChunkedMemoryStream class, or a different constructor.




> On Nov 10, 2009, at 4:47 AM, Steve Bjorg wrote:
> 
>> Allowing the first chunk to be variable sized doesn't make the code
>> that much more complex.  This would mean in read-only cases, all
>> operations would remain O(1) since the original byte array would be
>> preserved.  For write operations, new chunks would be allocated as
>> needed.  Determining which chunk to read from or write to would need
>> to take into account the first chunk size, but that's it.
>>
>> For the case where someone initializes the ChunkedMemoryStream with an
>> existing byte array, then appends to it, and then calls GetBuffer(),
>> we would end up with the same overhead as before since the
>> MemoryStream would have needed to reallocate the byte array when the
>> first append operation occurred, whereas the ChunkedMemoryStream does
>> it on GetBuffer().  However, if the array needed to be extended
>> multiple times due to many append operations, then the
>> ChunkedMemoryStream will come out ahead again  since it only
>> reallocated the buffer once.  At which point, the reallocated buffer
>> could replace the first chunk so we don't do this again for repeated
>> calls to GetBuffer().
>>
>>
>> On Nov 10, 2009, at 4:21 AM, Leszek Ciesielski wrote:
>>
>>> Choice is not always good, and I think this is one of the cases when
>>> the default (i.e. the MemoryStream implementation) should make the
>>> choices instead of presenting them to the user. Though I agree that the
>>> case of constructing a MemoryStream from an existing byte[] would
>>> require a special path in the code, as this is a stream that most
>>> likely won't be resized and in this case users are expecting the
>>> constructor to have a complexity of O(1) and GetBuffer to also be
>>> O(1). The same expectation is probably also true with a fixed size
>>> MemoryStream.
>>>
>>> On Tue, Nov 10, 2009 at 1:09 PM, pablosantosl...@terra.es
>>>  wrote:
 I agree (especially thinking about the chunk-pool I mentioned) having
 separate classes can be better, so that everyone can choose.

 Andreas Nahr wrote:
> I'm still not sure this is a good idea. A lot of this depends on the
> use-case for MemoryStream.
> If
> 1) A MemoryStream is created with a parameterless constructor and
> then a lot
> of data written to it multiple times the ChunkedStream will be better
> always.
> 2) If a MemoryStream is created with a parameterless constructor
> and only
> gets a few bytes long ChunkedStream might bring considerable overhead.
> 3) If MemoryStream is created with a fixed size then ChunkedStream
> will be
> somewhat, but acceptably slower and have a higher overhead. But it
> will be
> totally abysmal once GetBuffer comes into play.
> 4) If MemoryStream is constructed from a (large) byte array (in the
> scientific field I'm coming from this is by far the most common
> usage I've
> seen; that is basically using MemoryStream as a (read-only)
> Stream-Wrapper
> around a byte array) then performance will be abysmal when
> constructing (if
> you chunkify e.g. a 500MB byte array) AND again with GetBuffer
> (recreate the
> array). So would be O (n) or even O (2*n) instead of O (0).
>
> It might be possible to create an implementation that can deal with
> all this
> (would need to have variable sized buffers, keep things it gets
> passed in
> the constructor alive with small overhead, etc.), but it will be quite
> complex and come with a large base overhead. And even then the
> GetBuffer
> O(n) problem remains in a few scenarios.
>
> Maybe it would be better to just leave the class as is and document
> that for
> certain scenarios alternative implementations are available that do
> a MUCH
> better job. Everybody can easily replace the use of MemoryStream
> with an
> alternative implementation if needed. But nobody expects this class to
> behave completely different from how it originally did (and seems
> to do in
> MS.Net).
>
> Andreas
>
>
 ___
 Mono-devel-list mailing list
 Mono-devel-list@lists.ximian.com
 http://lists.ximian.com/mailman/listinfo/mono-devel-list

>>> ___
>>> Mono-devel-list mailing list
>>> Mono-devel-list@

Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Steve Bjorg
After taking a closer look at the constructors for MemoryStream, I  
need to amend my earlier response.  In all cases where a byte array is  
passed into the constructor, the MemoryStream has fixed capacity.  So  
the case described below where a MemoryStream would start off with a  
byte array and then be appended to cannot happen.

In short, MemoryStream can operate in two modes: fixed and variable.   
For fixed, there is nothing to be gained by using chunks.  The  
existing implementation is optimal for all cases.  This is the  
situation that Andreas is referring to.

For variable, a chunked implementation will work better once the  
stream exceeds the maximum size for the first chunk.  Additional  
considerations for this case are: (a) should the first chunk have a  
smaller size initially to be more efficient for short streams, (b)  
should chunks be reusable and thus bypass the alloc/free cycle, (c)  
should a call to GetBuffer() automatically reset the first chunk with  
the newly created byte array?

Am I missing anything?
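
On (b), the reuse idea can be prototyped with something as small as the
following sketch; it is single-threaded, has no size cap, and the names and
chunk size are invented here:

// Illustrative chunk pool: recycle fixed-size chunks instead of allocating a
// fresh array per chunk. Not thread-safe; a real pool would need locking and
// an upper bound on how many chunks it keeps.
using System.Collections.Generic;

static class ChunkPool
{
    public const int ChunkSize = 16 * 1024;
    static readonly Stack<byte[]> free = new Stack<byte[]> ();

    public static byte[] Rent ()
    {
        return free.Count > 0 ? free.Pop () : new byte[ChunkSize];
    }

    public static void Return (byte[] chunk)
    {
        if (chunk != null && chunk.Length == ChunkSize)
            free.Push (chunk);   // caller must stop using the chunk after returning it
    }
}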


On Nov 10, 2009, at 4:47 AM, Steve Bjorg wrote:

> Allowing the first chunk to be variable sized doesn't make the code  
> that much more complex.  This would mean in read-only cases, all  
> operations would remain O(1) since the original byte array would be  
> preserved.  For write operations, new chunks would be allocated as  
> needed.  Determining which chunk to read from or write to would need  
> to take into account the first chunk size, but that's it.
>
> For the case where someone initializes the ChunkedMemoryStream with  
> an existing byte array, then appends to it, and then calls GetBuffer 
> (), we would end up with the same overhead as before since the  
> MemoryStream would have needed to reallocate the byte array when the  
> first append operation occurred, whereas the ChunkedMemoryStream  
> does it on GetBuffer().  However, if the array needed to be extended  
> multiple times due to many append operations, then the  
> ChunkedMemoryStream will come out ahead again  since it only  
> reallocated the buffer once.  At which point, the reallocated buffer
> could replace the first chunk so we don't do this again for repeated  
> calls to GetBuffer().
>
>
> On Nov 10, 2009, at 4:21 AM, Leszek Ciesielski wrote:
>
>> Choice is not always good, and I think this is one of the cases when
>> the default (i.e. the MemoryStream implementation) should make the
>> choices instead of presenting them to the user. Though I agree that the
>> case of constructing a MemoryStream from an existing byte[] would
>> require a special path in the code, as this is a stream that most
>> likely won't be resized and in this case users are expecting the
>> constructor to have a complexity of O(1) and GetBuffer to also be
>> O(1). The same expectation is probably also true with a fixed size
>> MemoryStream.
>>
>> On Tue, Nov 10, 2009 at 1:09 PM, pablosantosl...@terra.es
>>  wrote:
>>> I agree (especially thinking about the chunk-pool I mentioned)  
>>> having
>>> separate classes can be better, so that everyone can choose.
>>>
>>> Andreas Nahr wrote:
 I'm still not sure this is a good idea. A lot of this depends on  
 the
 use-case for MemoryStream.
 If
 1) A MemoryStream is created with a parameterless constructor and  
 then a lot
 of data written to it multiple times the ChunkedStream will be  
 better
 always.
 2) If a MemoryStream is created with a parameterless constructor  
 and only
 gets a few bytes long ChunkedStream might bring considerable  
 overhead.
 3) If MemoryStream is created with a fixed size then  
 ChunkedStream will be
 somewhat, but acceptably slower and have a higher overhead. But  
 it will be
 totally abysmal once GetBuffer comes into play.
 4) If MemoryStream is constructed from a (large) byte array (in the
 scientific field I'm coming from this is by far the most common  
 usage I've
 seen; that is basically using MemoryStream as a (read-only)
 Stream-Wrapper
 around a byte array) then performance will be abysmal when  
 constructing (if
 you chunkify e.g. a 500MB byte array) AND again with GetBuffer  
 (recreate the
 array). So would be O (n) or even O (2*n) instead of O (0).

 It might be possible to create an implementation that can deal  
 with all this
 (would need to have variable sized buffers, keep things it gets  
 passed in
 the constructor alive with small overhead, etc.), but it will be  
 quite
 complex and come with a large base overhead. And even then the  
 GetBuffer
 O(n) problem remains in a few scenarios.

 Maybe it would be better to just leave the class as is and  
 document that for
 certain scenarios alternative implementations are available that  
 do a MUCH
 better job. Everybody can easily replace the use of MemoryStream  
 with an
 alternative implemen

Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Steve Bjorg
Allowing the first chunk to be variable sized doesn't make the code  
that much more complex.  This would mean in read-only cases, all  
operations would remain O(1) since the original byte array would be  
preserved.  For write operations, new chunks would be allocated as  
needed.  Determining which chunk to read from or write to would need  
to take into account the first chunk size, but that's it.
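
A sketch of that bookkeeping, with invented names: positions inside the first
(variable-sized) chunk map to chunk 0, and everything past it maps into
fixed-size overflow chunks.

// Illustrative chunk-addressing arithmetic for a variable-sized first chunk.
using System;

static class ChunkMath
{
    public static void Locate (long position, int firstChunkSize, int chunkSize,
        out int chunkIndex, out int offset)
    {
        if (position < firstChunkSize) {
            chunkIndex = 0;
            offset = (int) position;
        } else {
            long rest = position - firstChunkSize;
            chunkIndex = 1 + (int) (rest / chunkSize);
            offset = (int) (rest % chunkSize);
        }
    }

    static void Main ()
    {
        int chunk, offset;
        // first chunk wraps a 100-byte array, overflow chunks hold 64 bytes each
        Locate (170, 100, 64, out chunk, out offset);
        Console.WriteLine ("chunk {0}, offset {1}", chunk, offset);   // chunk 2, offset 6
    }
}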

For the case where someone initializes the ChunkedMemoryStream with an  
existing byte array, then appends to it, and then calls GetBuffer(),  
we would end up with the same overhead as before since the  
MemoryStream would have needed to reallocate the byte array when the  
first append operation occurred, whereas the ChunkedMemoryStream does  
it on GetBuffer().  However, if the array needed to be extended  
multiple times due to many append operations, then the  
ChunkedMemoryStream will come out ahead again  since it only  
reallocated the buffer once.  At which point, the reallocated buffer
could replace the first chunk so we don't do this again for repeated  
calls to GetBuffer().


On Nov 10, 2009, at 4:21 AM, Leszek Ciesielski wrote:

> Choice is not always good, and I think this is one of the cases when
> the default (i.e. the MemoryStream implementation) should make the
> choices instead of presenting them to the user. Though I agree that the
> case of constructing a MemoryStream from an existing byte[] would
> require a special path in the code, as this is a stream that most
> likely won't be resized and in this case users are expecting the
> constructor to have a complexity of O(1) and GetBuffer to also be
> O(1). The same expectation is probably also true with a fixed size
> MemoryStream.
>
> On Tue, Nov 10, 2009 at 1:09 PM, pablosantosl...@terra.es
>  wrote:
>> I agree (especially thinking about the chunk-pool I mentioned) having
>> separate classes can be better, so that everyone can choose.
>>
>> Andreas Nahr wrote:
>>> I'm still not sure this is a good idea. A lot of this depends on the
>>> use-case for MemoryStream.
>>> If
>>> 1) A MemoryStream is created with a parameterless constructor and  
>>> then a lot
>>> of data written to it multiple times the ChunkedStream will be  
>>> better
>>> always.
>>> 2) If a MemoryStream is created with a parameterless constructor  
>>> and only
>>> gets a few bytes long ChunkedStream might bring considerable  
>>> overhead.
>>> 3) If MemoryStream is created with a fixed size then ChunkedStream  
>>> will be
>>> somewhat, but acceptably slower and have a higher overhead. But it  
>>> will be
>>> totally abysmal once GetBuffer comes into play.
>>> 4) If MemoryStream is constructed from a (large) byte array (in the
>>> scientific field I'm coming from this is by far the most common  
>>> usage I've
>>> seen; that is basically using MemoryStream as a (read-only) Stream-
>>> Wrapper
>>> around a byte array) then performance will be abysmal when  
>>> constructing (if
>>> you chunkify e.g. a 500MB byte array) AND again with GetBuffer  
>>> (recreate the
>>> array). So would be O (n) or even O (2*n) instead of O (0).
>>>
>>> It might be possible to create an implementation that can deal  
>>> with all this
>>> (would need to have variable sized buffers, keep things it gets  
>>> passed in
>>> the constructor alive with small overhead, etc.), but it will be  
>>> quite
>>> complex and come with a large base overhead. And even then the  
>>> GetBuffer
>>> O(n) problem remains in a few scenarios.
>>>
>>> Maybe it would be better to just leave the class as is and  
>>> document that for
>>> certain scenarios alternative implementations are available that  
>>> do a MUCH
>>> better job. Everybody can easily replace the use of MemoryStream  
>>> with an
>>> alternative implementation if needed. But nobody expects this  
>>> class to
>>> behave completely different from how it originally did (and seems  
>>> to do in
>>> MS.Net).
>>>
>>> Andreas
>>>
>>>
>> ___
>> Mono-devel-list mailing list
>> Mono-devel-list@lists.ximian.com
>> http://lists.ximian.com/mailman/listinfo/mono-devel-list
>>
> ___
> Mono-devel-list mailing list
> Mono-devel-list@lists.ximian.com
> http://lists.ximian.com/mailman/listinfo/mono-devel-list


- Steve

--
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch


___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Leszek Ciesielski
Choice is not always good, and I think this is one of the cases when
the default (i.e. the MemoryStream implementation) should make the
choices instead of presenting them to the user. Though I agree that the
case of constructing a MemoryStream from an existing byte[] would
require a special path in the code, as this is a stream that most
likely won't be resized and in this case users are expecting the
constructor to have a complexity of O(1) and GetBuffer to also be
O(1). The same expectation is probably also true with a fixed size
MemoryStream.

On Tue, Nov 10, 2009 at 1:09 PM, pablosantosl...@terra.es
 wrote:
> I agree (especially thinking about the chunk-pool I mentioned) having
> separate classes can be better, so that everyone can choose.
>
> Andreas Nahr wrote:
>> I'm still not sure this is a good idea. A lot of this depends on the
>> use-case for MemoryStream.
>> If
>> 1) A MemoryStream is created with a parameterless constructor and then a lot
>> of data written to it multiple times the ChunkedStream will be better
>> always.
>> 2) If a MemoryStream is created with a parameterless constructor and only
>> gets a few bytes long ChunkedStream might bring considerable overhead.
>> 3) If MemoryStream is created with a fixed size then ChunkedStream will be
>> somewhat, but acceptably slower and have a higher overhead. But it will be
>> totally abysmal once GetBuffer comes into play.
>> 4) If MemoryStream is constructed from a (large) byte array (in the
>> scientific field I'm coming from this is by far the most common usage I've
>> seen; that is basically using MemoryStream as a (read-only) Stream-Wrapper
>> around a byte array) then performance will be abysmal when constructing (if
>> you chunkify e.g. a 500MB byte array) AND again with GetBuffer (recreate the
>> array). So would be O (n) or even O (2*n) instead of O (0).
>>
>> It might be possible to create an implementation that can deal with all this
>> (would need to have variable sized buffers, keep things it gets passed in
>> the constructor alive with small overhead, etc.), but it will be quite
>> complex and come with a large base overhead. And even then the GetBuffer
>> O(n) problem remains in a few scenarios.
>>
>> Maybe it would be better to just leave the class as is and document that for
>> certain scenarios alternative implementations are available that do a MUCH
>> better job. Everybody can easily replace the use of MemoryStream with an
>> alternative implementation if needed. But nobody expects this class to
>> behave completely different from how it originally did (and seems to do in
>> MS.Net).
>>
>> Andreas
>>
>>
> ___
> Mono-devel-list mailing list
> Mono-devel-list@lists.ximian.com
> http://lists.ximian.com/mailman/listinfo/mono-devel-list
>
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread pablosantosl...@terra.es
I agree (especially thinking about the chunk-pool I mentioned) that having
separate classes can be better, so that everyone can choose.

Andreas Nahr wrote:
> I'm still not sure this is a good idea. A lot of this depends on the
> use-case for MemoryStream.
> If 
> 1) A MemoryStream is created with a parameterless constructor and then a lot
> of data written to it multiple times the ChunkedStream will be better
> always.
> 2) If a MemoryStream is created with a parameterless constructor and only
> gets a few bytes long ChunkedStream might bring considerable overhead.
> 3) If MemoryStream is created with a fixed size then ChunkedStream will be
> somewhat, but acceptably slower and have a higher overhead. But it will be
> totally abysmal once GetBuffer comes into play.
> 4) If MemoryStream is constructed from a (large) byte array (in the
> scientific field I'm coming from this is by far the most common usage I've
> seen; that is basically using MemoryStream as a (read-only) Stream-Wrapper
> around a byte array) then performance will be abysmal when constructing (if
> you chunkify e.g. a 500MB byte array) AND again with GetBuffer (recreate the
> array). So would be O (n) or even O (2*n) instead of O (0).
> 
> It might be possible to create an implementation that can deal with all this
> (would need to have variable sized buffers, keep things it gets passed in
> the constructor alive with small overhead, etc.), but it will be quite
> complex and come with a large base overhead. And even then the GetBuffer
> O(n) problem remains in a few scenarios.
> 
> Maybe it would be better to just leave the class as is and document that for
> certain scenarios alternative implementations are available that do a MUCH
> better job. Everybody can easily replace the use of MemoryStream with an
> alternative implementation if needed. But nobody expects this class to
> behave completely different from how it originally did (and seems to do in
> MS.Net).
> 
> Andreas
> 
> 
___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list


Re: [Mono-dev] Should we replace MemoryStream?

2009-11-10 Thread Andreas Nahr
I'm still not sure this is a good idea. A lot of this depends on the
use-case for MemoryStream.
If 
1) A MemoryStream is created with a parameterless constructor and then a lot
of data written to it multiple times the ChunkedStream will be better
always.
2) If a MemoryStream is created with a parameterless constructor and only
gets a few bytes long ChunkedStream might bring considerable overhead.
3) If MemoryStream is created with a fixed size then ChunkedStream will be
somewhat, but acceptably slower and have a higher overhead. But it will be
totally abysmal once GetBuffer comes into play.
4) If MemoryStream is constructed from a (large) byte array (in the
scientific field I'm coming from this is by far the most common usage I've
seen; that is basically using MemoryStream as a (read-only) Stream-Wrapper
around a byte array) then performance will be abysmal when constructing (if
you chunkify e.g. a 500MB byte array) AND again with GetBuffer (recreate the
array). So it would be O (n) or even O (2*n) instead of O (0).
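
For reference, the wrap-an-existing-array usage in case 4 looks like this with
the current class (a sketch using the 500 MB figure from the example; note
that GetBuffer() on this overload throws because the buffer is not publicly
visible):

// MemoryStream as a read-only Stream wrapper over an existing array: wrapping
// is O(1) today, and a chunked implementation would have to preserve that.
using System.IO;

class WrapperCase
{
    static void Main ()
    {
        byte[] big = new byte[500 * 1024 * 1024];       // e.g. a large data set
        using (var ms = new MemoryStream (big, false))  // read-only wrapper
        using (var reader = new BinaryReader (ms)) {
            int first = reader.ReadInt32 ();            // stream APIs over the array
        }
    }
}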

It might be possible to create an implementation that can deal with all this
(would need to have variable sized buffers, keep things it gets passed in
the constructor alive with small overhead, etc.), but it will be quite
complex and come with a large base overhead. And even then the GetBuffer
O(n) problem remains in a few scenarios.

Maybe it would be better to just leave the class as is and document that for
certain scenarios alternative implementations are available that do a MUCH
better job. Everybody can easily replace the use of MemoryStream with an
alternative implementation if needed. But nobody expects this class to
behave completely different from how it originally did (and seems to do in
MS.Net).

Andreas

___
Mono-devel-list mailing list
Mono-devel-list@lists.ximian.com
http://lists.ximian.com/mailman/listinfo/mono-devel-list