and, finally, this works using the taskPool.map, as in the
std.parallelism example. So, the trick appears to be that the
call to chomp is needed.
auto lineRange = File(fn).byLineCopy();
auto chomped = std.algorithm.map!"a.chomp"(lineRange);
auto nums =
Unfortunately, this is not a very good example for
std.parallelism, since the measured times are better using the
std.algorithm.map calls. I know from past experience that
std.parallelism routines can work well when the work is spread
out correctly, so this example could be improved.
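A minimal sketch of the taskPool.map variant described above, assuming the lines still carry their terminators (an in-memory array stands in for File(fn).byLineCopy() so the sketch is self-contained):

```d
import std.string : chomp;
import std.conv : to;
import std.parallelism : taskPool;
import std.stdio : writeln;

void main()
{
    // stand-in for File(fn).byLineCopy() with terminators kept
    auto lineRange = ["1\n", "2\n", "3\n"];
    // without chomp, to!int would throw on the trailing '\n'
    auto nums = taskPool.map!(a => a.chomp.to!int)(lineRange);
    long sum = 0;
    foreach (n; nums)
        sum += n;
    writeln(sum); // 6
}
```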
This is
I'm a bit confused by the documentation of the ctfe limitations
wrt static arrays due to these seemingly conflicting statements,
and the examples didn't seem to clear anything up. I was
wondering if anyone has examples of clever things that might be
done with static arrays and pointers using
On Sunday, 24 May 2015 at 18:14:19 UTC, anonymous wrote:
Static array has a special meaning. It does not mean static
variable with an array type. Static arrays are those of the
form Type[size]. That is, the size is known statically.
Examples:
1) static int[5] x; -- x is a static variable
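Following the quoted distinction, a small sketch (variable names are illustrative):

```d
import std.stdio : writeln;

int[5] fixed;           // a static array: the length 5 is part of the type
int[] dynamic = [1, 2]; // a dynamic array: the length can change at run time

void main()
{
    static int[5] x; // "static" here only gives static storage duration;
                     // int[5] alone already makes it a static array
    writeln(fixed.length, " ", dynamic.length, " ", x.length); // 5 2 5
}
```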
On Friday, 22 May 2015 at 05:24:15 UTC, Jay Norwood wrote:
first result uses
if (((x-1) & (x|0x8000))==0)
00F81005 mov eax,edx
00F81007 lea ecx,[edx-1]
00F8100A or eax,8000h
00F8100F test ecx,eax
Above is what a Microsoft C++ compiler does
This formula measures a little faster on dmd. Release build,
three tests, find all values for 0..uint.max.
first result uses
if (((x-1) & (x|0x8000))==0)
second result uses
if ((x & (x - 1) | !x) == 0)
D:\pow2\pow2\pow2\Release>pow2
duration(msec)=10259
duration(msec)=10689
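For reference, the second test above (with the `&` that the forum rendering dropped restored) can be sketched and sanity-checked like this:

```d
// Power-of-two test: x & (x-1) clears the lowest set bit, so it is zero
// only when at most one bit is set; the !x term rules out x == 0.
bool isPow2(uint x)
{
    return ((x & (x - 1)) | !x) == 0;
}

void main()
{
    assert(isPow2(1) && isPow2(2) && isPow2(0x8000_0000));
    assert(!isPow2(0) && !isPow2(3) && !isPow2(6));
}
```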
Very nice.
I wonder about representation of references, and perhaps
replication, inheritance. Does SDL just punt on those?
This library allows specifying the internal base of the
arbitrary-precision numbers (decimal by default), as well as the
precision of floating-point values. Each floating-point number's
precision can be read with .precision().
It also supports specification of rounding
On Monday, 24 November 2014 at 15:27:19 UTC, Gary Willoughby
wrote:
Just browsing reddit and found this article posted about D.
Written by Andrew Pascoe of AdRoll.
From the article:
The D programming language has quickly become our language of
choice on the Data Science team for any task that
On Monday, 24 November 2014 at 23:32:14 UTC, Jay Norwood wrote:
Is this related?
https://github.com/dscience-developers/dscience
This seems good too. Why the comments in the discussion about
lack of libraries?
https://github.com/kyllingstad/scid/wiki
On Wednesday, 24 September 2014 at 10:28:05 UTC, Suliman wrote:
string path = thisExePath()
Seems like dirName in std.path is a good candidate ;)
http://dlang.org/phobos/std_path.html#.dirName
You'll find many other path manipulation functions there.
Thanks! But if I want to strip it, how I
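A small sketch of what dirName and baseName do to a thisExePath()-style path (the paths here are illustrative, not from the thread):

```d
import std.path : dirName, baseName;

void main()
{
    // dirName strips the last component; baseName keeps only it
    assert(dirName("/usr/bin/app") == "/usr/bin");
    assert(baseName("/usr/bin/app") == "app");
}
```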
On Friday, 26 September 2014 at 03:32:46 UTC, Jay Norwood wrote:
On Wednesday, 24 September 2014 at 10:28:05 UTC, Suliman wrote:
string path = thisExePath()
Seems like dirName in std.path is a good candidate ;)
http://dlang.org/phobos/std_path.html#.dirName
You'll find many other path
I have a use case that requires repeating performance
measurements of blocks of code that do not coincide with function
start and stop. For example, a function will be calling several
sub-operations, and I need to measure the execution from the
call statement until the execution of the
On Friday, 25 July 2014 at 21:10:56 UTC, monarch_dodra wrote:
Functionally nothing more than an alias? EG:
{
alias baz = foo.bar;
...
}
Yes, it is all just alias. So
with ( (d,e,a,b,c) as (ar.rm.a, ar.rm.b, ar.r.a, ar.r.b, ar.r.c)){
d = a + c;
e = (c==0)?0:(a+b)/c;
}
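D has no `as` renaming, but the built-in `with` statement covers the prefix-dropping part of the sketch above; a runnable approximation (struct names and values are illustrative):

```d
struct Inner { int a, b, c; }
struct Outer { Inner r; }

void main()
{
    auto ar = Outer(Inner(4, 6, 2));
    int d, e;
    with (ar.r) // inside the block, a, b, c mean ar.r.a, ar.r.b, ar.r.c
    {
        d = a + c;
        e = (c == 0) ? 0 : (a + b) / c;
    }
    assert(d == 6 && e == 5);
}
```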
On Friday, 25 July 2014 at 01:54:53 UTC, Jay Norwood wrote:
I don't recall the exact use case for the database expressions,
but I believe they were substituting a simple symbol for the
fully qualified object.
The sql with clause is quite a bit different than I remembered.
For one thing, I
I was playing around with use of the dual WITH statement. I
like the idea, since it makes the code within the with cleaner.
Also, I got the impression from one of the conference
presentations ... maybe the one on the ARM debug ... that there
are some additional optimizations available that
On Tuesday, 22 April 2014 at 15:25:04 UTC, monarch_dodra wrote:
Yeah, that's because join actually works on RoR, R, rather
than R, E. This means if you feed it a string[], string,
then it will actually iterate over individual *characters*. Not
only that, but since you are using char[], it will
Wow, joiner is much slower than join. Such a small choice can
make this big of a difference. Not at all expected, since the
lazy calls, I thought, were considered to be more efficient.
This is with ldc2 -O2.
jay@jay-ubuntu:~/ec_ddt/workspace/diamond/source$ ./main 1 > /dev/null
brad: time:
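The eager/lazy distinction being measured above can be sketched like this: join allocates the whole result once, while joiner yields a lazy range that must be walked (and converted here for comparison):

```d
import std.array : join;
import std.algorithm : joiner;
import std.conv : to;

void main()
{
    auto parts = ["ab", "cd", "ef"];
    string eager = parts.join("\n");              // one allocation, done
    string lazied = parts.joiner("\n").to!string; // lazy range, then realized
    assert(eager == "ab\ncd\nef");
    assert(eager == lazied);
}
```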
On Monday, 21 April 2014 at 08:26:49 UTC, monarch_dodra wrote:
The two key points here, first, is to avoid using appender.
Second, instead of having two buffers (the stars with and
without a trailing '\n') and doing two slice copies, to have
only one buffer, and to do one slice copy and a single '\n' write. At
On Tuesday, 25 March 2014 at 08:42:30 UTC, monarch_dodra wrote:
Interesting. I'd have thought the extra copy would be an
overall slowdown, but I guess that's not the case.
I installed ubuntu 14.04 64 bit, and measured some of these
examples using gdc, ldc and dmd on a corei3 box. The
On Sunday, 23 March 2014 at 20:33:15 UTC, Daniel Murphy wrote:
It still needs a lot of work, but it's functional.
Is there a test suite that you have to pass to declare it fully
functional?
Interesting. I'd have thought the extra copy would be an
overall slowdown, but I guess that's not the case.
I also tried your strategy of adding '\n' to the buffer, but I
was getting some bad output on windows. I'm not sure why \n\n
works though. On *nix, I'd have also expected a double
These are times on ubuntu. printDiamond3 was slower than
printDiamond.
brad: time: 12387[ms]
printDiamond1: time: 373[ms]
printDiamond2: time: 722[ms]
printDiamond3: time: 384[ms]
jay1: time: 62[ms]
sergei: time: 3918[ms]
jay2: time: 28[ms]
diamondShape: time: 2725[ms]
printDiamond: time:
On Tuesday, 25 March 2014 at 15:31:12 UTC, monarch_dodra wrote:
I love how D can achieve *great* performance, while still
looking readable and maintainable.
Yes, I'm pretty happy to see the appender works well. The
parallel library also seems to work very well in my few
experiences with
This is a first attempt at using parallel, but no improvement in
speed on a corei7. It is about 3x slower than the prior
versions. Probably the join was not a good idea. Also, no
foreach_reverse for the parallel, so it requires extra
calculations for the reverse index.
void
On Wednesday, 26 March 2014 at 04:47:48 UTC, Jay Norwood wrote:
This is a first attempt at using parallel, but no improvement
oops. scratch that one. I tested a pointer to the wrong function.
This corrects the parallel example range in the second foreach.
Still slow.
void printDiamonde2cpa(in uint N)
{
    size_t N2 = N/2;
    char[] p = uninitializedArray!(char[])(N2 + N);
    p[0 .. N2] = ' ';
    p[N2 .. $] = '*';
    char[] nl = uninitializedArray!(char[])(1);
    nl[] = '\n';
Very nice example. I'll test on ubuntu later.
On windows ...
D:\diamond\diamond\diamond\Release>diamond 1 > nul
brad: time: 19544[ms]
printDiamond1: time: 1139[ms]
printDiamond2: time: 1656[ms]
printDiamond3: time: 663[ms]
jay1: time: 455[ms]
sergei: time: 11673[ms]
jay2: time: 411[ms]
not through yet with the diamond. This one is a little faster.
Appending the newline to the stars and calculating the slice
backward from the end would save a w.put for the newlines ...
probably faster. I keep looking for a way to create a dynamic
array of a specific size, filled with the
These were times on ubuntu. I may have printed debug build times
previously, but these are dmd release build. I gave up trying to
figure out how to build ldc on ubuntu. The dmd one click
installer is much appreciated.
brad: time: 12425[ms]
printDiamond1: time: 380[ms]
printDiamond2: time:
Hmmm, looks like stderr.writefln requires format specs, else it
omits the additional parameters. (not so with derr.writefln)
stderr.writefln("time: %s%s", sw.peek().msecs, "[ms]");
D:\diamond\diamond\diamond\Release>diamond 1 > nul
time: 16[ms]
time: 44[ms]
I converted the solution examples to functions, wrote a test to
measure each 100 times with a diamond of size 1001. These are
release build times. timon's crashed so I took it out. Maybe I
made a mistake copying ... have to go back and look.
D:\diamond\diamond\diamond\Release>diamond 1 > nul
A problem with the previous brad measurement is that his solution
creates a diamond of size 2n+1 for an input of n. Correcting the
size input for brad's function call, and re-running, I get this.
So the various solutions can have overhead computation time of
40x difference, depending on the
On Sunday, 23 March 2014 at 17:30:20 UTC, bearophile wrote:
The task didn't ask for a computationally efficient solution
:-) So you are measuring something that was not optimized for.
So there's lot of variance.
Bye,
bearophile
Yes, this is just for my own education. My builds are
These were the times on ubuntu 64 bit dmd. I added diamondShape,
which is slightly modified to be consistent with the others ..
just removing the second parameter and doing the writeln calls
within the function, as the others have been done. This is still
with dmd. I've downloaded ldc.
The computation times of different methods can differ a lot.
How do you suggest to measure this effectively without the
overhead of the write and writeln output? Would a count of
11 and stubs like below be reasonable, or would there be
something else that would prevent the optimizer
I decided to redirect stdout to nul and print the stopwatch
messages to stderr.
So, basically like this.
import std.stdio;
import std.datetime;
import std.cstream;

StopWatch sw;
sw.start();
// ... measured code ...
sw.stop();
derr.writefln("time: ", sw.peek().msecs, "[ms]");
Then, windows results
On Friday, 21 March 2014 at 00:31:58 UTC, bearophile wrote:
This is a somewhat common little exercise: Write a function
Bye,
bearophile
I like that replicate but easier for me to keep track of the
counts if I work from the center.
int[] blanks;
blanks.length = n;
int[] stars;
stars.length
This one calculates, then outputs subranges of the ba and sa char
arrays.
int n = 11;
int[] blanks;
blanks.length = n;
int[] stars;
stars.length = n;
char[] ba;
ba.length = n;
ba[] = ' '; // fill full ba array
char[] sa;
sa.length = n;
sa[] = '*'; // fill full sa array
int c = n/2; // center
How funny ...
The win7 gui copy and paste failed to copy a folder with a deep
directory structure ... putting up a message dialog about the
pathnames being too long!
A d parallel copy rewrite works.
On Friday, 14 March 2014 at 15:44:24 UTC, Bruno Medeiros wrote:
A new version of DDT - D Development tools is out.
This has really nice source browsing... much better than the
VisualD. I end up using both because the debugging support is
still better in VisualD.
One browsing issue I noticed
On Tuesday, 18 March 2014 at 02:02:04 UTC, Vladimir Panteleev
wrote:
http://d.puremagic.com/issues/show_bug.cgi?id=8967
ok, thanks. I was able to work around my issues
The basic solution, as you indicated, is to prefix any long
paths. Also the prefix is only usable on an absolute path.
I ran into a problem with the std.file.remove() operation being
limited by the windows ascii maxpath of around 260 characters,
even though the low level code is calling the unicode version of
windows delete, which has the capability to go up to 32k. The
trick appears to be that the unicode
I updated to 2.065, and using visualD for the build on Windows.
VisualD finds this setAttributes call in file.d, but the build
fails to find it in the library. Does this build for someone
else?
import std.file;
void clrReadOnly(in char[] name)
{
uint oldAtt = getAttributes(name);
Sorry, this is my fault. I had an old installation of 2.064
still in the path.
I ran into this Kepler bug trying to update. The work-around is
stated, which involves renaming your eclipse.exe. Worked for me.
https://bugs.eclipse.org/bugs/show_bug.cgi?id=55
http://rosettacode.org/wiki/Factorial#D
to whomever is maintaining these:
Need to change all ints to longs in this example to get the
displayed results since the 15! result requires more than 32 bits.
After changing to longs, I made some test loops, and on release
build dmd, pc, these are the relative times I measured for the
different versions of factorial in that example. So the
iterative wins, and the functional style results in 4x penalty in
this case.
duration factorial (hnsecs)=98
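A minimal sketch of the overflow point above: with int accumulators 15! wraps, so a long accumulator is needed (this is the iterative form, which measured fastest in that comparison):

```d
// 15! = 1_307_674_368_000, which exceeds uint.max, hence the long.
long factorialIter(uint n)
{
    long r = 1;
    foreach (i; 2 .. n + 1)
        r *= i;
    return r;
}

void main()
{
    assert(factorialIter(15) == 1_307_674_368_000);
}
```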
http://www.reddit.com/r/programming/comments/1yts5n/facebook_open_sources_flint_a_c_linter_written_in/
Somewhere in that thread was a mention of facebook moving away
from git because it was too slow. I thought it was interesting
and found this info on the topic ... They rewrote some sections
On Friday, 21 January 2011 at 20:50:39 UTC, Jonathan M Davis
wrote:
On Friday, January 21, 2011 12:36:23 Ary Manzana wrote:
On 1/20/11 5:48 PM, Jacob Carlborg wrote:
On 2011-01-20 21:34, Steven Schveighoffer wrote:
On Thu, 20 Jan 2011 15:03:55 -0500, Jacob Carlborg
d...@me.com wrote:
On
On Friday, 8 February 2013 at 06:22:18 UTC, Denis Shelomovskij
wrote:
06.02.2013 19:40, bioinfornatics wrote:
On Wednesday, 6 February 2013 at 13:20:58 UTC, bioinfornatics
wrote:
I agree the spec format is really bad but it is heavily used
in biology
so i would like a fast parser to develop
On Wednesday, 13 February 2013 at 17:39:11 UTC, monarch_dodra
wrote:
On Tuesday, 12 February 2013 at 22:06:48 UTC, monarch_dodra
wrote:
On Tuesday, 12 February 2013 at 21:41:14 UTC, bioinfornatics
wrote:
Sometimes fastq files are compressed to gz, bz2 or xz, as
they are often huge files.
Maybe we
I see comments about enums being somehow implemented as tuples,
and comments about tuples somehow being implemented as structs,
but I couldn't find examples of static initialization of arrays
of either.
Finally after playing around with it for a while, it appears this
example below works for
Yes, thanks, that syntax does work for the initialization.
The C syntax that failed for me was using the curly brace form
shown in the following link.
http://www.c4learn.com/c-programming/c-initializing-array-of-structure/
Also, I think I was trying forms of defining the struct and
On Sunday, 8 December 2013 at 22:30:25 UTC, bearophile wrote:
Try:
member.writeln;
Bye,
bearophile
yeah, that's pretty nice.
module main;
import std.stdio;
void main()
{
struct Suit { string nm; int val; int val2; string shortNm; }
static Suit[5] suits = [
It looks like the writeln() does a pretty good job, even for enum
names.
I also saw a prettyprint example that prints the structure member
name, and compared its output.
http://forum.dlang.org/thread/ip23ld$93u$1...@digitalmars.com
module main;
import std.stdio;
import std.traits;
void
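For the static-initialization topic of this thread, one form that does compile is the struct-literal call syntax (the field values here are illustrative, not the thread's actual data):

```d
import std.stdio : writeln;

struct Suit { string nm; int val; int val2; string shortNm; }

// static initialization of an array of structs, using struct-literal
// calls rather than the C curly-brace form that the linked article shows
immutable Suit[2] suits = [
    Suit("spades", 1, 6, "spd"),
    Suit("hearts", 2, 7, "hrt"),
];

void main()
{
    foreach (s; suits)
        writeln(s.nm, " ", s.shortNm);
}
```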
Thanks. That's looking pretty clean.
I had already tried the shorter enum names without using the with
statement, and it failed to compile. I thought it might work
since the struct definition already specifies the enum type for
the two members.
with (Suit) with (SuitShort)
{
I notice that if Suit and SuitShort have an enum with the same
name, then you still have to fully qualify the enum names when
using the with statement. So, for example, if spd in SuitShort
was renamed spades, the first entry in the array initialization
would have to be {Suit.spades, 1, 6,
In Ali Çehreli very nice book there is this example of iterating
over enum range which, as he notes, fails
http://ddili.org/ders/d.en/enum.html
enum Suit { spades, hearts, diamonds, clubs }
foreach (suit; Suit.min .. Suit.max) {
writefln("%s: %d", suit, suit);
}
spades: 0
Thanks. This is exactly what I was looking for.
I tried this iteration below, based on the example shown in the
std.traits documentation, and the int values are not what I
expected, but your example works fine.
http://dlang.org/phobos/std_traits.html#.EnumMembers
import std.traits;
void
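The EnumMembers iteration that works (visiting clubs, which the Suit.min .. Suit.max loop skips) can be sketched as:

```d
import std.traits : EnumMembers;
import std.stdio : writefln;

enum Suit { spades, hearts, diamonds, clubs }

void main()
{
    // unlike Suit.min .. Suit.max, EnumMembers also visits the last member
    foreach (suit; EnumMembers!Suit)
        writefln("%s: %d", suit, suit);
}
```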
On Friday, 6 December 2013 at 07:23:45 UTC, Jacob Carlborg wrote:
I'm not sure which library/package that is part of. There's a
bunch of other Eclipse related repertoires in the
d-widget-toolkit github organization[1]. Is it part of any of
those?
Anyway, I don't have enough time to focus
long x = 0x123456789;
writef("%0x", x);
prints 123456789
Seems like it has enough info to fill out to 16 hex digits...
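Right: "%0x" carries no field width, so there is nothing to pad to; an explicit width fills out the 16 hex digits:

```d
import std.format : format;

void main()
{
    long x = 0x123456789;
    // "%0x" has no width, so nothing is padded; "%016x" pads to 16 digits
    assert(format("%x", x)    == "123456789");
    assert(format("%016x", x) == "0000000123456789");
}
```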
I'm reading a SystemC lecture which describes use of operator
overloading of comma in their syntax to support concatenation.
So, for example, they support the data operations below
sc_uint<4> a, b, d, e;
sc_uint<8> c;
c = (a, b);
(d, e) = c;
As I understand it, SystemC is C++.
I didn't find
While the interactive exploratory aspects of the pandas are
attractive, in my case the interaction has just been a crutch to
discover how to correctly use their api.
Once through that api learning curve, I'd mainly be interested in
repeating the operations that worked correctly. The
I've been using swt in java, and we use swtbot to do testing. Is
there an app with similar functionality that has been used for
testing dwt capabilities?
I've been playing with the python pandas app, which enables
manipulation of tables of data in their dataframe structure,
which they say is similar to the structures used in R.
It appears pandas has laid claim to being a faster version of R,
but is doing so basically limited to what they
On Wednesday, 3 July 2013 at 08:23:40 UTC, monarch_dodra wrote:
On Wednesday, 3 July 2013 at 06:18:28 UTC, Jonathan M Davis
wrote:
On Wednesday, July 03, 2013 08:11:50 Josh wrote:
Long story short, I think both would be a great addition to
phobos/D. I'd personally really want to play with
On Saturday, 6 April 2013 at 14:50:50 UTC, Bruno Medeiros wrote:
Interesting thread. I've been working on a hand-written D
parser (in Java, for the DDT IDE) and I too have found a slew
of grammar spec issues. Some of them more serious than the ones
you mentioned above. In same cases it's
I also wrote a copy version that orders file sequence on disk
efficiently, using write through, and posted it. This speeds
up any
subsequent file system operations done in the directory order
as if you
have done a defrag. Great for hard drives, but not needed for
ssd.
On Thursday, 24 January 2013 at 07:41:23 UTC, Jacob Carlborg
wrote:
Someone posted code in these newsgroups of a parallel
implementation of copy and remove.
I posted a parallel implementation a while back, and also put it
on github.
The parallel trick is to create the folder structure
I was looking at the xtend example 4 Distances here, and see
that their new generation capability includes the ability to
write 3.cm and 10.mm, and these result in calls to cm(3) and mm(10).
http://blog.efftinge.de/
I see that similar capability was discussed for D previously at
the link below.
I see from this other discussions that it looks like 2.059 ( or
maybe 2.060) does support something like 3.cm(). Not sure from
the discussion if it would also accept 3.cm as in the xtext/xtend
example.
http://forum.dlang.org/thread/smoniukqfxerutqrj...@forum.dlang.org
On Sunday, 20 May 2012 at 15:48:31 UTC, Stewart Gordon wrote:
On 19/05/2012 16:13, maarten van damme wrote:
Yes, that's a common optimisation. Faster still would be to
test 6k-1 and 6k+1 for each positive integer k. Indeed, I've
done more than this in my time: hard-coded all the primes up to
On Wednesday, 16 May 2012 at 09:26:45 UTC, Tiberiu Gal wrote:
hi
many claim their code solves the problem in order of ms (
c/pascal/haskell code)
I used the blockwise parallel sieve described here, and measured
nice speed-ups as described in his blog. It completes
calculations within
On Friday, 18 May 2012 at 22:10:36 UTC, Arne wrote:
According to:
http://dlang.org/phobos/std_path.html#globMatch
it is possible to use wildcards spanning multiple directories.
assert(globMatch(`foo/foo\bar`, "f*b*r"));
But wildcards with dirEntries() seem less powerful.
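globMatch itself does let a wildcard span separators (the first assert below is the example from the docs), whereas dirEntries applies its pattern per entry to the base name; so cross-directory patterns have to be checked manually with globMatch on each entry's full path:

```d
import std.path : globMatch;

void main()
{
    // a single '*' happily crosses directory separators in globMatch
    assert(globMatch("foo/sub/bar.txt", "foo*bar*"));
    assert(globMatch(`foo/foo\bar`, "f*b*r"));
    assert(!globMatch("foo/sub/bar.txt", "baz*"));
}
```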
On Monday, 23 April 2012 at 11:27:40 UTC, Steven Schveighoffer
wrote:
I think using std.string.icmp is the best solution. I would
expect it to outperform even schwartz sort.
-Steve
icmp took longer... added about 1 sec vs 0.3 sec (for
schwartzSort ) to the program execution time.
bool
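A sketch of the schwartzSort approach being timed here: the toLower transform runs once per element rather than once per comparison (the data below is illustrative):

```d
import std.algorithm : schwartzSort;
import std.string : toLower;

void main()
{
    auto names = ["Delta", "alpha", "Charlie", "bravo"];
    // the transform is precomputed per element, then the sort compares
    // the cached lowercase keys instead of calling toLower repeatedly
    schwartzSort!(a => a.toLower)(names);
    assert(names == ["alpha", "bravo", "Charlie", "Delta"]);
}
```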
On Sunday, 22 April 2012 at 09:33:59 UTC, Marco Leise wrote:
So when you did your first measurements, with 160 seconds for
rmd, did you wait for the I/O to complete? Sorry if that's a
stupid question :p but that's the obvious difference when using
write-through from what the documentation
Table 5.1 in this article, and some surrounding description,
indicate that ntfs converts to upper case when doing directory
inserts, so if you want to optimize the disk order for the order
processed by directory entry it seems toUpper would be a better
choice.
On Sunday, 22 April 2012 at 02:29:45 UTC, Jonathan M Davis wrote:
Regardless of whether it's the Big(O) complexity or the
constant factor that's
the problem here, clearly there's enough additional overhead
that it's causing
problems for Jay's particular case. It's also the sort of thing
that
On Sunday, 22 April 2012 at 06:26:42 UTC, Jonathan M Davis wrote:
You can look at the code. It checks each of the characters in
place. Unlike
toLower, it doesn't need to generate a new string. But as far
as the
comparison goes, they're the same - hence that line in the docs.
- Jonathan M
On Sunday, 22 April 2012 at 00:36:19 UTC, bearophile wrote:
Performing the toLower every time the cmp function is called
doesn't change the O complexity. In Phobos there is an
alternative sorting (Schwartzian sorting routime) that applies
a function to each item before sorting them, usually is
I was able to achieve similar efficiency to the defrag result on
ntfs by using a modified version of std.file.write that uses
FILE_FLAG_WRITE_THROUGH. The ntfs rmdir of the 2GB layout takes 6
sec vs 161 sec when removing the unzipped layout. I posted the
measurements in D.learn, as well as
Below are measured times on operations on an unzipped 2GB layout.
My observation is that use of a slightly modified version of
std.file.write for the creation of the unzipped files results in
a folder that is much more efficient for sequential file system
operations. In particular, the ntfs
While playing with sorting the unzip archive entries I tried use
of the last example in
http://dlang.org/phobos/std_algorithm.html#sort
std.algorithm.sort!("toLower(a.name) <
toLower(b.name)", std.algorithm.SwapStrategy.stable)(entries);
It was terribly slow for sorting the 34k entries in my
On Saturday, 21 April 2012 at 23:54:26 UTC, Jonathan M Davis
wrote:
Yeah. toLower would be called on both strings on _every_
compare. And since
that involves a loop, that would make the overall call to sort
an order of
magnitude worse than if you didn't call toLower at all. I'm not
sure if
On Monday, 16 April 2012 at 09:16:09 UTC, Kagamin wrote:
Do you use FILE_FLAG_SEQUENTIAL_SCAN too?
The std.file.write does use FILE_FLAG_SEQUENTIAL_SCAN
void write(in char[] name, const void[] buffer)
my experimental code to create the empty file also uses it, but
it doesn't write any
I'm trying to figure out how to achieve folder deletion times
close to the times achieved with the parallel rmd after myDefrag
sortByName on a folder. It takes less than 3.5 secs for a 2G
layout that has been sorted, and with the rmd configured so that
it also works on a sorted list. This is a
On Tuesday, 3 April 2012 at 14:10:32 UTC, Jesse Phillips wrote:
Most of his code isn't available as it was kind of under
Microsoft. However I revived Juno for D2 awhile ago (still need
to play with it myself). Juno provides some nice tools and API.
On Wednesday, 11 April 2012 at 22:17:16 UTC, Eldar Insafutdinov
wrote:
example http://eldar.me/candydoc/algorithm.html . Among new
The outline panel links work fine on Google Chrome, but not on
IE8.
On Sunday, 8 April 2012 at 13:55:21 UTC, Marco Leise wrote:
Maybe the kernel caches writes, but synchronizes deletes? (So
the seek times become apparent there, and not in the writes)
Also check the file creation flags, maybe you can hint Windows
to the final file size and they wont be
I hacked up one of the file.d functions to create a function that
returns the first Logical Cluster Number for a regular file.
I've tested it on the 2GB layout that has been defragged with the
myDefrag sortByName() operation, and it works as expected.
Values of 0 mean the file was small
On Sunday, 8 April 2012 at 01:18:49 UTC, Jay Norwood wrote:
in it. Same 3.7 second delete. I'll have to analyze what is
happening, but this is a huge improvement. If it is just the
sequential LCN order of the operations, it may be that I can
just pre-sort the delete operations by the file
On Sunday, 8 April 2012 at 09:21:43 UTC, Somedude wrote:
Hi,
You seem to have done a pretty good job with your parallel
unzip. Have
you tried a parallel zip as well ?
Do you think you could include this in std.zip when you're done
?
I'm going to do a parallel zip as well. There is already
On Sunday, 8 April 2012 at 16:14:05 UTC, Jay Norwood wrote:
There are signficant improvements also in copy operations as a
result of defrag by Name. 43 seconds vs 1 min 43 secs for xcopy
of sorted 2GB vs unsorted.
this is the 2GB folder defragged with sorted LCN by pathname
G:\cmd /v:on /c
On Sunday, 8 April 2012 at 22:17:43 UTC, Somedude wrote:
Well, you can always do something like this:
version (parallel)
{
import std.parallelism;
// multithreaded
...
}
else
{
// single thread
...
}
Or rather:
// single thread zip
...
version (parallel)
{
import
On Saturday, 7 April 2012 at 05:02:04 UTC, dennis luehring wrote:
7zip took 55 secs _on the same file_.
that is ok but he still compares different implementations
7zip is the program. It unzips many formats, with the standard
zip format being one of them. The parallel d program is three
On Saturday, 7 April 2012 at 11:41:41 UTC, Rainer Schuetze wrote:
Maybe it is the trim command being executed on the sectors
previously occupied by the file.
No, perhaps I didn't make it clear that the rmdir slowness is
only an issue on hard drives. I can unzip the 2GB archive in
about
On Saturday, 7 April 2012 at 17:08:33 UTC, Jay Norwood wrote:
The mydefrag program uses the ntfs defrag api. There is an
article at the following link showing how to access it to get
the Logical Cluster Numbers on disk for a file. I suppose you
could sort your file operations by start LCN
These are measured times to unzip and then delete a 2GB folder in
Win7. Both are using the parallel rmd to remove the directory on
a regular hard drive. The first measurement is for an unzip of
the archive. The second is remove of the folder when no defrag
has been done. The third is unzip
On Wednesday, 4 April 2012 at 19:41:21 UTC, Jay Norwood wrote:
The work-around was to convert all the file operations to use
std.stream equivalents, and that worked well, but I see in the
bug reports that even that was only working correctly on
windows. So I'm on windows, and ok for me