On 7/25/16 9:19 PM, Charles Hixson via Digitalmars-d-learn wrote:
On 07/25/2016 05:18 PM, ketmar via Digitalmars-d-learn wrote:
On Monday, 25 July 2016 at 18:54:27 UTC, Charles Hixson wrote:
Are there reasons why one would use rawRead and rawWrite rather than
fread and fwrite when doing binary random I/O?  What are the advantages?

In particular, if one is reading and writing structs rather than
arrays or ranges, are there any advantages?

yes: keeping API consistent. ;-)

for example, my stream i/o modules work with anything that has
`rawRead`/`rawWrite` methods, but don't bother to check for anything else.

besides, `rawRead` just looks cleaner, even with all the `(&a)[0..1]`
noise.
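
for illustration, here's a complete toy example of that pattern (the `Rec` struct and the file name are invented):

import std.stdio;

struct Rec { int id; double val; }   // any plain-old-data struct

void main()
{
   auto f = File("data.bin", "w+b");
   Rec a = Rec(1, 3.14);
   f.rawWrite((&a)[0 .. 1]);   // write one Rec as raw bytes
   f.rewind();
   Rec b;
   f.rawRead((&b)[0 .. 1]);    // read it back into b
   assert(b == a);
}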

so, a question of style.

OK.  If it's just a question of "looking cleaner" and "style", then I
will prefer the core.stdc.stdio approach.  I find its appearance far
cleaner, and even that is understating things.  I'll probably wrap those
routines in a struct to ensure things like files being properly closed,
and to avoid explicit pointers persisting over large areas of code.
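
A rough sketch of such a wrapper, assuming core.stdc.stdio and with placeholder names and error handling, might look like this:

import core.stdc.config : c_long;
import core.stdc.stdio;

struct BinFile
{
   private FILE* fp;

   this(const(char)* name, const(char)* mode)
   {
      fp = fopen(name, mode);
      assert(fp !is null, "could not open file");
   }

   ~this()   // closes the file when the struct goes out of scope
   {
      if (fp !is null) { fclose(fp); fp = null; }
   }

   @disable this(this);   // prevent copies that would share (and double-close) fp

   void readItem(T)(ref T item)
   {
      fread(&item, T.sizeof, 1, fp);
   }

   void writeItem(T)(ref const T item)
   {
      fwrite(&item, T.sizeof, 1, fp);
   }

   void seek(c_long offset, int origin = SEEK_SET)
   {
      fseek(fp, offset, origin);
   }
}

With the copy constructor disabled, the FILE* can't be duplicated by accident, so the destructor closes it exactly once.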

It's more than just that. Having a bounded array is safer than separate pointer/length parameters. Indeed, rawRead and rawWrite are inferred @safe, whereas fread and fwrite are not.

But D is so nice with UFCS, you don't have to live with APIs you don't like. Allow me to suggest adding a helper function to your code:

void rawReadItem(T)(File f, ref T item) @trusted
{
   f.rawRead((&item)[0 .. 1]);
}
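
With that in place the call site reads as if it were a member (here `Rec` stands for whatever struct you're storing):

Rec r;
f.rawReadItem(r);   // UFCS: reads one Rec straight into r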

-Steve
