On 11/27/2017 01:05 PM, Nicolas B. Pierron wrote:
On 11/26/2017 12:45 AM, smaug wrote:
On 11/24/2017 06:35 PM, Eric Rescorla wrote:
On Thu, Nov 23, 2017 at 4:00 PM, smaug <sm...@welho.com> wrote:
On 11/23/2017 11:54 PM, Botond Ballo wrote:
I think it makes sense to hide a 'new' call in a Make* function when
you're writing an abstraction that handles allocation *and*
deallocation.
So MakeUnique makes sense, because UniquePtr takes ownership of the
allocated object, and will deallocate it in the destructor. MakeRefPtr
would make sense for the same reason.
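Concretely, something like this (just a sketch; Foo is a placeholder type):

  // Sketch: allocation and deallocation both live inside the abstraction.
  #include "mozilla/UniquePtr.h"

  struct Foo {
    explicit Foo(int aValue) : mValue(aValue) {}
    int mValue;
  };

  void Example() {
    // MakeUnique forwards to Foo's constructor and returns a UniquePtr<Foo>.
    mozilla::UniquePtr<Foo> foo = mozilla::MakeUnique<Foo>(42);
    // No explicit delete: ~UniquePtr() frees the Foo when it goes out of scope.
  }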
I almost agree with this, but all these Make* variants hide the fact that
they are allocating, and since allocation is slow, it is usually better to
know when allocation happens.
I guess I'd prefer UniquePtr::New() over MakeUnique, to be clearer about the
functionality.
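Roughly this spelling, just to illustrate (hypothetical -- MFBT's UniquePtr
has no New() today, and the behaviour would be the same as MakeUnique):

  #include <utility>

  // Hypothetical sketch, not the MFBT class: a static factory on the
  // pointer type itself, so the word "New" appears at the call site.
  template <typename T>
  class UniquePtr {
   public:
    template <typename... Args>
    static UniquePtr New(Args&&... aArgs) {
      return UniquePtr(new T(std::forward<Args>(aArgs)...));
    }
    UniquePtr(UniquePtr&& aOther) : mPtr(aOther.mPtr) { aOther.mPtr = nullptr; }
    UniquePtr(const UniquePtr&) = delete;
    UniquePtr& operator=(const UniquePtr&) = delete;
    ~UniquePtr() { delete mPtr; }
    T* get() const { return mPtr; }
   private:
    explicit UniquePtr(T* aPtr) : mPtr(aPtr) {}
    T* mPtr;
  };

  // Call site: auto p = UniquePtr<Foo>::New(42); instead of MakeUnique<Foo>(42).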
This seems like a reasonable argument in isolation, but I think it's more
important to mirror the standard C++ mechanisms, and C++14 already defines
std::make_unique.
Is it? Isn't it more important to write less error-prone code, and code
where the reader sees the possible performance issues?
Recent-ish C++ additions have brought quite a few bad concepts which we
shouldn't use without considering whether they actually fit our needs.
Something being new and hip and cool doesn't mean it is actually a good thing.
One example is that the standard has a single std::function type which
always allocates, and which cannot report allocation failures because we
disable exceptions.
In this particular case I would recommend having a stack-function wrapper
and a heap-function wrapper. By diverging from the standard we could avoid
bugs, for example by refusing stack references to lambdas being wrapped in a
heap-function (use-after-free / unwind), and by adding extra checks so that a
stack-function wrapper can only be a field of stack-only classes. C++
compilers cannot understand the code well enough to provide similar lifetime
errors.
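The class of bug I have in mind looks like this (sketch; gCallback, Register
and Later are made-up names):

  #include <functional>

  std::function<int()> gCallback;    // heap-lifetime storage for the callable

  void Register() {
    int local = 5;
    // The lambda captures a stack variable by reference, then outlives it:
    gCallback = [&local]() { return local; };
  }                                  // 'local' dies here

  int Later() {
    return gCallback();              // dangling stack reference is used
  }

  // A distinct stack-function wrapper could statically reject being stored
  // in heap-lifetime locations like gCallback, while a heap-function wrapper
  // could refuse reference captures of stack data (hypothetical types).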
(This broken record keeps reminding people about the security issues and
memory leaks that auto and ranged-for have caused.)
Do you have pointers? Is this basically about cases where we have a
non-standard implementation of begin/end methods which do allocation?
ranged-for causes issues when you modify the data structure while iterating.
This used to be bad in the nsTArray case at least; now we just crash safely.
And auto hides the type, so reading the code and understanding lifetime
management becomes harder. This has caused security-sensitive crashes and
leaks. (And there was some other type of error too recently, but it escapes
my mind now.)
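For the ranged-for case, the pattern is roughly this (sketch; Process is a
made-up function):

  #include "nsTArray.h"

  void Process(nsTArray<int>& aValues) {
    for (auto value : aValues) {     // 'auto' hides that each element is an
                                     // int copied out of the array
      if (value == 0) {
        aValues.AppendElement(1);    // mutating while iterating: this used to
      }                              // walk reallocated memory, now it crashes
    }                                // safely instead
  }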
As a post-script: given that we can now use C++14, can we globally replace
the MFBT clones of C++14 mechanisms with the standard ones?
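For example, where a drop-in exists (sketch; Foo is a placeholder, and
whether every MFBT clone has an exact standard equivalent is the open
question):

  #include <memory>
  #include "mozilla/UniquePtr.h"

  struct Foo { explicit Foo(int) {} };

  void Example() {
    // MFBT clone:
    mozilla::UniquePtr<Foo> a = mozilla::MakeUnique<Foo>(1);
    // C++14 standard equivalent:
    std::unique_ptr<Foo> b = std::make_unique<Foo>(1);
  }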
-Ekr
-Olli
But in cases where a library facility is not taking ownership of the
object, and thus the user will need to write an explicit 'delete', it
makes sense to require that the user also write an explicit 'new', for
symmetry.
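Something like this made-up non-owning Registry, for instance:

  // Sketch: the Registry never deletes what it is given, so the caller
  // writes both the 'new' and the matching 'delete'.
  class Observer {};

  class Registry {
   public:
    void Add(Observer* aObserver) { mObserver = aObserver; }  // no ownership
    void Remove(Observer*) { mObserver = nullptr; }
   private:
    Observer* mObserver = nullptr;
  };

  void Example(Registry& aRegistry) {
    Observer* obs = new Observer();   // explicit new ...
    aRegistry.Add(obs);
    // ... use it ...
    aRegistry.Remove(obs);
    delete obs;                       // ... matched by an explicit delete
  }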
NotNull is a bit of a weird case because it can wrap either a raw
pointer or a smart pointer, so it doesn't clearly fit into either
category. Perhaps it would make sense for MakeNotNull to only be
usable with smart pointers?
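To make the two cases concrete (sketch; the smart-pointer MakeNotNull usage
is hypothetical here, the exact spelling depends on how it ends up being
specified):

  #include "mozilla/NotNull.h"

  class Foo {};

  void Example() {
    // Raw-pointer case: the allocation stays visible and the caller still
    // owns the eventual delete; NotNull only encodes the non-null invariant.
    mozilla::NotNull<Foo*> raw = mozilla::WrapNotNull(new Foo());
    delete raw.get();

    // Restricting MakeNotNull to smart pointers would mean it only returns
    // e.g. NotNull<RefPtr<Foo>> or NotNull<UniquePtr<Foo>>, so allocation
    // and deallocation both stay inside the abstraction (hypothetical):
    //   auto owned = MakeNotNull<RefPtr<Foo>>();
  }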
Cheers,
Botond
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform