[creduce-dev] branch for upcoming release

2023-06-13 Thread John Regehr
The other day Yang merged the PR from berolinux for LLVM 16 and also 
fixed a few problems (thanks to both!). Since then I've been working 
toward a version that we can release; it is here:


https://github.com/csmith-project/creduce/tree/creduce-2.11

Building with CMake, things look good on both macOS and Linux, which are 
the only two platforms that I plan to test.


I'm in the middle of moving our old web site from embed.cs.utah.edu into 
markdown in the repo. The other thing I want to do is get CMake to build 
and run the C-Reduce tests; currently it doesn't do any of that. At that 
point we can remove the autoconf stuff. Besides these items, I don't 
have any big plans for this release. Progress will be sporadic since I 
have some travel coming up and other things to work on.


John


Re: [creduce-dev] creduce with yarpgen

2023-06-02 Thread John Regehr
The key here, I think, is to ensure that both of your files #include a 
header that has a prototype for the test function, and then to reject 
any variant that triggers the compiler warning about the prototype and 
the definition having different numbers of arguments.
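A minimal sketch of that rejection check, as a fragment of an interestingness test (the warning patterns and log-file layout here are assumptions about the compiler's wording; adjust them to what your clang build actually prints):

```shell
# Hedged sketch: reject any variant whose compiler diagnostics show the
# prototype and a call disagreeing about argument counts. The regex is
# an assumption about the diagnostic text, not C-Reduce's own logic.
no_mismatch() {
  # $1: file holding the compiler's stderr for this variant
  ! grep -Eq 'conflicting types|too (many|few) arguments' "$1"
}

# Demo on canned diagnostics:
printf 'func.c:3:1: warning: too many arguments in call to test\n' > diag.log
if no_mismatch diag.log; then
  echo "keep variant"
else
  echo "reject variant"
fi
```

In the real test, diag.log would be the captured stderr of compiling the variant.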


Another solution would be to merge the driver and function files (by 
hand), though in some cases doing this makes the compiler bug go away.


But also, yarpgen has built-in support for invoking C-Reduce with a 
known-good interestingness test. I am CC'ing the main yarpgen developer 
to see if he has anything to add here.


John


On 6/2/23 3:21 AM, Alessandro Mantovani wrote:

Dear all,

I am using C-Reduce to reduce some C test cases produced by yarpgen 
that result in miscompilations, i.e., my target compiler (clang-based) 
produces two binaries, compiled at -O0 and -O3 respectively, that, when 
executed, print different checksums.


I have written an interestingness test that looks quite complete to 
me. Basically, I started with the one suggested on the C-Reduce 
webpage (https://embed.cs.utah.edu/creduce/using/wrong1/test1.sh) and 
then added a compilation stage with ASan, UBSan, and MSan (for both the 
-O0 and -O3 optimization levels). I have also tried enabling the 
flag "-Werror=uninitialized", even though it could be redundant given 
that I have MSan.


The command line that I use then is simply:
creduce ./test_interestingness.sh driver.c func.c init.h
where driver.c, func.c and init.h are the files generated by yarpgen.

That said, the problem I face is that when C-reduce ends its job I 
obtain two reduced driver.c and func.c files where the test() function 
invocation is messed up. Usually in the driver.c it is declared with 
the following signature:


void test()

and then invoked in the driver.c itself with a certain number N of 
parameters. Surprisingly, at least for me, in the func.c file, the 
same test() function is invoked with a number M of parameters where N 
!= M.


I should underline that I am testing the ARM backend of the compiler, 
and thus the ELF binaries I obtain need to run on an AArch64 
emulator, while the sanitized versions of the same code 
(ASan, UBSan, MSan) run on x86. I attach to this email the 
interestingness script and the reduced test cases I obtain. If you want 
to generate the original yarpgen test cases, the yarpgen version is:


yarpgen version 2.0 (build fc8851a on 2022:10:30)

invoked with the --std=c flag, and the seed is 2149690884. 
Unfortunately I cannot share the target compiler as it is private; 
sorry for that. Is there any additional check or flag I can add to my 
interestingness test to enforce that the test() function is declared 
and invoked coherently?


Best Regards,

Alessandro


Re: [creduce-dev] How does C-reduce work for other languages?

2021-08-31 Thread John Regehr
An example of a pass that works broadly across languages is the one that 
removes entire lines from a file.


Something you might do is reduce a large Fortran program (or whatever); 
when C-Reduce finishes, it tells you which passes actually worked, so 
those are the ones you might want to look at more closely.
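The idea behind a language-agnostic line-removal pass can be sketched as a toy loop (this is not C-Reduce's actual implementation, and the grep is a stand-in for a real interestingness test):

```shell
# Toy sketch of a line-removal reduction step: try deleting each line,
# and keep the deletion only if the "interestingness" check still
# passes. Here the check is a stand-in: the file must still contain
# the word "bug". No knowledge of the language's grammar is needed.
reduce_lines() {
  f=$1
  n=$(grep -c '' "$f")          # number of lines
  i=1
  while [ "$i" -le "$n" ]; do
    sed "${i}d" "$f" > "$f.try"
    if grep -q bug "$f.try"; then   # stand-in interestingness test
      mv "$f.try" "$f"
      n=$((n - 1))                  # file shrank; retry same index
    else
      rm -f "$f.try"
      i=$((i + 1))
    fi
  done
}

printf 'a\nbug here\nb\n' > toy.txt
reduce_lines toy.txt
cat toy.txt    # only the line containing "bug" survives
```

The same skeleton works on Fortran, C, or any text file, which is why line-based passes transfer across languages.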


John


On 8/31/21 4:55 AM, Rajesh K Singh wrote:

Hello,

I came to know about this magical tool C-Reduce very recently. I read 
somewhere that it works well with languages other than C/C++. I have 
downloaded its sources and am trying to understand how it works 
(at least partially) for other languages (Fortran, etc.). I understand 
that C-Reduce works in a number of passes, but I am not able to find 
which pass is actually responsible for handling languages irrespective 
of their grammar.


Thanks in advance.

Regards,
RKS


Re: [creduce-dev] Delta source code

2020-12-28 Thread John Regehr

Hi, this appears to be a mirror of it:

https://github.com/mpflanzer/delta

John


On 12/28/20 4:18 PM, Volker Weißmann wrote:

Hello,


on your website (https://embed.cs.utah.edu/creduce/) you compare creduce
to a tool called "Delta". Unfortunately, the link
http://delta.tigris.org/ is dead.

Does anyone have the source code of Delta?


Greetings

Volker Weißmann


Re: [creduce-dev] question on creduce usage

2020-08-06 Thread John Regehr
Can you also bootstrap gcc with address sanitizer?  That might help 
detect the error more reliably...?


This is a good idea.

Also, in my experience, restarting creduce runs from scratch after 
improving your oracle script etc. is kind of part of the territory...


Yep!

John



Re: [creduce-dev] question on creduce usage

2020-08-06 Thread John Regehr
Hi Jack, I believe the --save-temps command line option will give you 
the paper trail that you want here; could you give that a try? Do this 
on a machine with plenty of free disk space.


The version of the file being reduced that is in the original location 
where you invoked C-Reduce is guaranteed (unless C-Reduce contains bugs, 
but we know of no such bug) to have passed the interestingness test at 
least once. But the probability of passing may be very low. My guess is 
that this is what happened to you: the test became less and less likely 
to pass, but it happened to pass once and the file got copied over.


The real answer here is to avoid nondeterminism, for example by 
disabling ASLR on your machine. I'm not sure you're going to get a 
useful result out of C-Reduce here if you don't do that.
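Short of eliminating the nondeterminism, one hedged mitigation for a ~40%-reproducible failure is to accept a variant only when it fails several times in a row; a sketch, with a deterministic stand-in for the real compile-and-check step:

```shell
# Sketch: only call a variant interesting if the failure reproduces N
# times in a row. "reproduces_once" is a stand-in for the real
# compile-and-run check; here it always succeeds so the demo is
# deterministic. This trades more rejected true failures for a final
# test case that fails much more reliably.
reproduces_once() { true; }

interesting() {
  n=$1
  i=0
  while [ "$i" -lt "$n" ]; do
    reproduces_once || return 1   # a single miss rejects the variant
    i=$((i + 1))
  done
  return 0
}

interesting 5 && echo "interesting"
```

With a 40% per-run failure rate, requiring even 3 consecutive reproductions makes the kept variants far less likely to "go stale" the way Jack's did.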


John



On 8/6/20 2:01 PM, Jack wrote:

Hello all,

I've been using creduce to help track down a stack-smashing bug in gcc 
when it compiles a particular file in the boost library.  I'm reducing 
the boost file, and the interestingness test just checks for "stack 
smashing" in the gcc output.  One problem is that the bug is not fully 
repeatable - sometimes only happening about 40% of the time.  My 
understanding is that this may cause creduce to reject some true 
failures as uninteresting, but should still produce valid results.  As 
far as I know, the test.sh does not produce any false positives, just 
some false negatives.


As a result of Isaiah, I lost power recently (all OK now; I'm in 
southeast CT), and when I started up the system just now, the cpp file in 
the working directory does not fail at all (over 100 tests). So, if I'm 
reducing test-file.cpp, what (if anything) can I assume about the 
version of test-file.cpp at any point during the run? If it is not 
necessarily an "interesting" file, then I don't understand how creduce 
keeps track of where it is.  If it should always be an "interesting" 
file, then what might have gone wrong for it to now NOT be interesting?


The last few lines of the log file (captured by a "tee" command) are
(90.8 %, 299086 bytes)
(90.8 %, 299079 bytes)
(90.8 %, 299056 bytes)
===< pass_clex :: rm-toks-4 >===
(90.8 %, 299028 bytes)
(90.9 %, 299002 bytes)
(90.9 %, 298990 bytes)
and the remaining test cpp file is 298990 bytes long.

Thanks for any suggestions or pointers, even if just to the "fine 
manual" I need to read.


For now, I suppose I'll just have to start over, losing close to 80 
hours of work.


Separate suggestion: have creduce save some marker of its current state, 
so it can pick up where it left off, in case of a power failure or some 
other interruption.


Jack


Re: [creduce-dev] C-Vise project introduction

2020-04-23 Thread John Regehr

Hi Martin, this looks great.

and when he announced a final comparable port (2 years ago) there was 
still no feedback.


This isn't quite right, we did discuss this with Moritz. Our 
reservations about adopting the big patch centered around a few missing 
features in it, a question about who would maintain the Python code 
moving forward, and also our reluctance to take on a Python code base 
when none of us actively writes much Python.


We hope you'll continue to contribute changes back to clang_delta, it 
would be a shame for that utility to be meaningfully forked. In the 
future, we could perhaps look into splitting clang_delta out into its 
own repository, if that makes it easier to share the maintenance burden 
and avoid forking it.


(This reply is just from me-- perhaps Eric and Yang will want to chime 
in separately.)


John


Re: [creduce-dev] halfempty algorithm for creduce?

2019-06-14 Thread John Regehr

Thanks Vegard! This is very useful to know.

I'll just add that the top-level result here, where C-Reduce produces a 
smaller final output, but takes longer to do it, is basically the entire 
point of C-Reduce. We spent a lot of effort making it aggressive, and 
quite a lot less effort making it fast.


John



On 6/14/19 12:37 AM, Vegard Nossum wrote:

I did a creduce vs. halfempty benchmark at some point. These were my results:

"""
Same input (739 bytes C++), ~same test script, 1 vs 32 threads (on 32 cores).

Halfempty speedup = ~2.7 (-63%),
creduce speedup = ~8.7 (-89%).

at 32 cores the two programs were within 8 seconds of each other (!),
whereas on 1 core, halfempty took 7m27 and creduce took 22m57

The final file sizes were:

460 bytes for halfempty,
317 for c-reduce
"""

File dump from back then at:
https://gist.github.com/vegard/e79b96cefffbfb753da17c4646132fab


Vegard

On Thu, 13 Jun 2019 at 22:57, Nico Weber  wrote:


Hi,

creduce often takes more than an hour to run, with most cores being idle. 
https://github.com/googleprojectzero/halfempty is an approach to doing delta 
debugging in parallel. Could that approach be implemented in creduce as well?

Thanks,
Nico


Re: [creduce-dev] halfempty algorithm for creduce?

2019-06-14 Thread John Regehr
Hi Nico, it would be great if you could give us some speedup numbers 
from your big machine. It could easily be time to increase the default 
number of cores that we use. Limiting C-Reduce to 4 cores by default was 
based on a few observations we made on some quite old hardware. Also, 
the actual speedup depends a lot on how slow the interestingness test is.


There's an obvious and possibly-important improvement left for parallel 
C-Reduce, which is that currently it produces variants in the main 
process and only runs interestingness tests in parallel. This kept 
things nice and simple. The alternative, of course, is to also produce 
the variants in parallel. I'm not sure how much of a difference this 
will make, and I'm not sure how hard it is to implement-- I haven't 
thought hard about this part of C-Reduce for a few years.


The only other change I can think of that would help is to reconsider 
the first few passes that C-Reduce runs. These are always the most 
important: if the initial passes can kill a lot of text, then the 
reduction goes quickly; if not, C-Reduce gets stuck trying out 
little stuff for a long time.


John




On 6/14/19 5:55 AM, Nico Weber wrote:

Thanks for sharing the data and the test scripts!

The inputs we usually use with creduce are 2-4MB (basically what clang 
writes when it hits an assert -- it's a single cpp file with all .h 
files inlined via -frewrite-includes -E), so orders of magnitude larger. 
I'd expect the speedup to be much larger. If so, I suppose I could run 
halfempty first and then creduce second.


Your reply made me go and look up the default value for --n, 
and apparently it's 4 
(https://github.com/csmith-project/creduce/blob/a1aa2a3601addc4d8d22c203c1ddecdbdde3df6e/creduce/creduce_utils.pm#L151), 
which means I'm using only 10% of my cores when running creduce. I 
wasn't aware of that, so thanks for making me look it up. Maybe just 
passing --n 48 will make things much faster already.


I'll do a comparison of my last bisect and will report numbers.

On Fri, Jun 14, 2019 at 2:38 AM Vegard Nossum wrote:


I did a creduce vs. halfempty benchmark at some point. These were my
results:

"""
Same input (739 bytes C++), ~same test script, 1 vs 32 threads (on
32 cores).

Halfempty speedup = ~2.7 (-63%),
creduce speedup = ~8.7 (-89%).

at 32 cores the two programs were within 8 seconds of each other (!),
whereas on 1 core, halfempty took 7m27 and creduce took 22m57

The final file sizes were:

460 bytes for halfempty,
317 for c-reduce
"""

File dump from back then at:
https://gist.github.com/vegard/e79b96cefffbfb753da17c4646132fab


Vegard

On Thu, 13 Jun 2019 at 22:57, Nico Weber wrote:
 >
 > Hi,
 >
 > creduce often takes more than an hour to run, with most cores
being idle. https://github.com/googleprojectzero/halfempty is an
approach to doing delta debugging in parallel. Could that approach
be implemented in creduce as well?
 >
 > Thanks,
 > Nico



Re: [creduce-dev] halfempty algorithm for creduce?

2019-06-13 Thread John Regehr

Thanks again, this was all stuff that needed fixing for sure!!


Ugh, sorry, replied to wrong mail here, plz ignore.

John



Re: [creduce-dev] halfempty algorithm for creduce?

2019-06-13 Thread John Regehr

Thanks again, this was all stuff that needed fixing for sure!!

John


On 6/13/19 3:00 PM, John Regehr wrote:

https://blog.regehr.org/archives/749

Note that the performance numbers are way old. If you have a few slow 
reductions, how about posting them here? Maybe I can find time to play 
around with increasing the speedup.


John


On 6/13/19 2:56 PM, John Regehr wrote:
Hi Nico, C-Reduce invented that approach years before halfempty 
existed :).


You can increase how many cores it's willing to use with a command 
line option, please give this a try and let us know if this gives any 
more speedup.


John


On 6/13/19 2:54 PM, Nico Weber wrote:

Hi,

creduce often takes more than an hour to run, with most cores being 
idle. https://github.com/googleprojectzero/halfempty is an approach 
to doing delta debugging in parallel. Could that approach be 
implemented in creduce as well?


Thanks,
Nico


Re: [creduce-dev] halfempty algorithm for creduce?

2019-06-13 Thread John Regehr

https://blog.regehr.org/archives/749

Note that the performance numbers are way old. If you have a few slow 
reductions, how about posting them here? Maybe I can find time to play 
around with increasing the speedup.


John


On 6/13/19 2:56 PM, John Regehr wrote:

Hi Nico, C-Reduce invented that approach years before halfempty existed :).

You can increase how many cores it's willing to use with a command line 
option, please give this a try and let us know if this gives any more 
speedup.


John


On 6/13/19 2:54 PM, Nico Weber wrote:

Hi,

creduce often takes more than an hour to run, with most cores being 
idle. https://github.com/googleprojectzero/halfempty is an approach to 
doing delta debugging in parallel. Could that approach be implemented 
in creduce as well?


Thanks,
Nico


Re: [creduce-dev] halfempty algorithm for creduce?

2019-06-13 Thread John Regehr

Hi Nico, C-Reduce invented that approach years before halfempty existed :).

You can increase how many cores it's willing to use with a command line 
option, please give this a try and let us know if this gives any more 
speedup.


John


On 6/13/19 2:54 PM, Nico Weber wrote:

Hi,

creduce often takes more than an hour to run, with most cores being 
idle. https://github.com/googleprojectzero/halfempty is an approach to 
doing delta debugging in parallel. Could that approach be implemented in 
creduce as well?


Thanks,
Nico


[creduce-dev] visualization of c-reduce's development history

2019-06-09 Thread John Regehr

https://www.youtube.com/watch?v=2NSX5Gr_bYo

:)


[creduce-dev] paper using c-reduce as a baseline

2019-06-07 Thread John Regehr

"Compared to two state-of-the-art program reducers C-Reduce and Perses, 
which time out on 6 programs and 2 programs respectively in 12 hours, 
Chisel runs up to 7.1x and 3.7x faster and finishes on all programs."

https://www.cis.upenn.edu/~mhnaik/papers/ccs18.pdf


Re: [creduce-dev] creduce - ignore a pass

2019-05-07 Thread John Regehr

Great, yes, that's definitely the quick way to make this happen.

I can add a command-line option for skipping passes but I won't be able 
to do that this week.


John


On 5/7/19 6:48 AM, Martin Liška wrote:

On 5/7/19 2:31 PM, Konstantin Tokarev wrote:



07.05.2019, 15:27, "Martin Liška" :

Hi.

Sometimes I see it handy to skip rename pass. Is there any
option I can use? If not, can anybody help me how to write a patch
for it?


You can comment out any pass in the main creduce script, it has a table of
all active passes. Look for @all_methods



Thank you, works for me.

Martin



Re: [creduce-dev] C-Reduce 2.9.0 Released

2019-05-05 Thread John Regehr

Yay!!! Thanks for pushing this out, Eric.

John


On 5/5/19 8:08 PM, Eric Eide wrote:

C-Reduce 2.9.0 is released!

Get it from GitHub:
   https://github.com/csmith-project/creduce/releases/tag/creduce-2.9.0

Or if you want the official tarball:
   http://embed.cs.utah.edu/creduce/creduce-2.9.0.tar.gz

Notable improvements/changes since the previous release:

   + Supports and requires LLVM 7
 (Thank you to Ray Donnelly!)

   + New pass to remove constant `#if` blocks
 (Thank you to Amy Huang!)

   + New pass to remove `#if` blocks
 (Thank you to Amy Huang!)

   + New pass to remove `#line` directives
 (Thank you to Amy Huang!)

   + New binary-search pass for removing C++-style comments
 (Thank you to Amy Huang!)

   + Automatically run parallel "interestingness" tests on FreeBSD

   + New `--version` command-line option reports version and exits
 (Thank you to Pranav Kant!)

   + Numerous bug fixes
 (Thank you to Michal Janiszewski and Jakub Wilk!)

Eric.



Re: [creduce-dev] development stuff

2019-02-23 Thread John Regehr

I guess it depends on what people want.  Can we talk about this next week?


Sure, but I'll tell you what I want: if you can make time for this, we 
do a C-Reduce release based on LLVM 7, since that will benefit distros 
that are stuck on that version for a while, which is pretty common. Then 
we also do an LLVM 8 release soon after.


I've merged the markdown and the LLVM 7 PR. We should look at the rest 
of the PRs and see if there's anything to merge.


John



Re: [creduce-dev] development stuff

2019-02-22 Thread John Regehr
I meant to add that I think it's time to nuke the autoconf stuff, unless 
there's some compelling reason to keep it.


John



On 2/22/19 10:29 PM, John Regehr wrote:
Eric, do you envision an LLVM 7 based C-Reduce release or should I start 
pushing changes for LLVM 8?


I'm moving our README and INSTALL files to markdown, had wanted to do 
that for a long time.


John


[creduce-dev] development stuff

2019-02-22 Thread John Regehr
Eric, do you envision an LLVM 7 based C-Reduce release or should I start 
pushing changes for LLVM 8?


I'm moving our README and INSTALL files to markdown, had wanted to do 
that for a long time.


John


Re: [creduce-dev] LLVM 7 and 8

2019-02-22 Thread John Regehr

The LLVM 8 release is imminent, so we sort of missed the boat on LLVM 7.

John


On 2/22/19 8:16 AM, Eric Eide wrote:

Martin Liška  writes:


Any expectation when a new release will happen?


No, I can't give you a specific date.  I'm sorry.

This is on my to-do list, and I want to get to it as soon as possible (because
I feel it is "overdue"), but it is behind several other tasks in my
professional and personal lives that have hard deadlines attached to them.

The "good news" is that the top of the master branch works with LLVM 7, and so
people can get work done.  The bad news is that some downstream packagers wait
until we have an official release.

Eric.



Re: [creduce-dev] [RFC] Adding passes for reducing clang crash reports

2019-02-13 Thread John Regehr

Hi Reid, that's odd, I got Amy's mail from the list as usual.

I reproduce it below.

John


---
Hi all,

Reducing crashes in clang is a common task for compiler developers, so 
we would like to make it faster. Clang produces crash reports using 
“clang -E -frewrite-includes”, which leaves behind macro defines, line 
markers, ifdefs, etc. It does this because sometimes diagnostics and 
crashes in diagnostic code depend on the pre-processor state controlled 
by these directives. For example,


- Sometimes when line markers are removed, errors in system headers are 
no longer suppressed; this prevents the original crash from occurring


- Sometimes the crash depends on macros

Some of the code patterns that -frewrite-includes produces are line 
markers, #defines, and #includes contained in this type of #if/#else block:


#if 0

#include "foo.h"

#else

// contents of foo.h

#endif

Currently C-Reduce removes the includes one by one at the beginning, and 
most of the other stuff in the lines pass, which is time-consuming, 
especially as they don't get collapsed by topformflat. Basically, what we 
want is some sort of pass to remove/simplify the line markers, macros, 
and ifs at the beginning. Maybe it could be added as an additional pass, 
or maybe as a sort of clang preprocessor step?


Any thoughts or suggestions?


Re: [creduce-dev] [RFC] Adding passes for reducing clang crash reports

2019-02-13 Thread John Regehr

Hi Amy,

I agree that reducing include-heavy C++ is clunky.

Before running C-Reduce, I usually preprocess using "-P" which leaves 
out the line markers.  I guess that doesn't help you if you are using a 
file produced by an automated crash reporter.


As Eric says, unifdef may help, it is already run very early in the pass 
schedule. There may be issues in this pass that could be cleaned up to 
make it do a better job.


Another pass that runs very early is pass_blank, which attempts to 
remove all lines starting with #. If this works it should give you what 
you want.
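A toy imitation of what a pass like pass_blank does, as described above (this is not C-Reduce's actual code):

```shell
# Toy version of a "drop all preprocessor lines" pass: remove every
# line that starts with '#' (optionally after leading whitespace).
printf '#include <stdio.h>\nint x;\n# 1 "foo.h"\nint y;\n' > in.c
grep -v '^[[:space:]]*#' in.c > out.c
cat out.c    # prints: int x; and int y;
```

If the reduced file is still interesting after this, all the line markers and directives disappear in one shot.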


I would suggest doing a bit of experimentation where you look at some of 
our passes that should in principle do the job you want, and run just 
those passes alone, and see if they're doing anything useful. You can 
run an individual pass by clearing C-Reduce's pass schedule and 
explicitly adding the pass you like (see the command line help for the 
details).


John


On 2/13/19 2:10 PM, Amy Huang wrote:

Hi all,


Reducing crashes in clang is a common task for compiler developers, so 
we would like to make it faster. Clang produces crash reports using 
“clang -E -frewrite-includes”, which leaves behind macro defines, line 
markers, ifdefs, etc. It does this because sometimes diagnostics and 
crashes in diagnostic code depend on the pre-processor state controlled 
by these directives. For example,


- Sometimes when line markers are removed, errors in system headers are 
no longer suppressed; this prevents the original crash from occurring


- Sometimes the crash depends on macros


Some of the code patterns that -frewrite-includes produces are line 
markers, #defines, and #includes contained in this type of #if/#else block:


#if 0

#include "foo.h"

#else

// contents of foo.h

#endif


Currently C-Reduce removes the includes one by one at the beginning, and 
most of the other stuff in the lines pass, which is time-consuming, 
especially as they don't get collapsed by topformflat. Basically, what we 
want is some sort of pass to remove/simplify the line markers, macros, 
and ifs at the beginning. Maybe it could be added as an additional pass, 
or maybe as a sort of clang preprocessor step?



Any thoughts or suggestions?



Re: [creduce-dev] Thanks letter

2019-01-21 Thread John Regehr

Hi Tony!

(1) We could add an option to let the user set how long the program is 
expected to run at most; once that time has passed, creduce can kill the 
process, treat it as one failed reduction, and just continue trying 
something else. The reason we need this is that sometimes, after 
reducing, the program goes into an infinite loop and creduce never ends; 
I have to kill it, rewrite the script, and kill the running step after a 
certain amount of time (using timeout) to tell creduce this is a failed 
reduction. Including this in the tool would let users write simpler 
invocation scripts.


This option exists, it is called "--timeout". Please see the output of 
"creduce --help".
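For reference, the script-side workaround described in (1) can be sketched with coreutils `timeout` (assuming a GNU userland; the 2-second bound is illustrative):

```shell
# Guard the compiled binary with `timeout` inside the interestingness
# test, so a variant that loops forever counts as uninteresting rather
# than hanging the whole reduction.
if timeout 2 sh -c 'sleep 5'; then
  echo "finished in time"
else
  echo "timed out"   # GNU timeout reports exit status 124 on a kill
fi
```

C-Reduce's own --timeout option makes this wrapper unnecessary, as noted above.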


(2) Another thing is more general. In my experience, this tool 
doesn't do very well when the source code has templates in it. I know 
it is called 'C' reduce, not C++ reduce, but since it can be used for 
reducing C++ as well, we might try to improve its handling of C++ 
templates to make the tool even better.


Yes, we have already put quite a bit of work into this, but much work 
remains. We don't have much time for it these days, but would 
welcome patches that make C-Reduce better at eliminating template code.


John



[creduce-dev] LLVM 7 and 8

2019-01-09 Thread John Regehr
Does anyone have time to update C-Reduce to LLVM 7? I think we should do 
an LLVM-7-based release even if LLVM 8 is coming out fairly soon (I 
imagine it is) since OS distributions may be stuck on LLVM 7 for a while.


John


Re: [creduce-dev] C-Reduce Script for an OpenGL program

2018-11-30 Thread John Regehr

Thanks Vegard. Adding a timeout of 3s to ./a.out did seem to go well. I gave
it about 4 or 5 days, but eventually lost patience after a day or so of
"pass_lines". (The preprocessed cpp file was 302211 lines btw.)


Well, ugh. I don't know of a faster way to do this: pass_lines is 
hierarchical (starts by removing large chunks and then moves to smaller 
ones) and that's our best idea for making reduction go quickly.



Is it possible to save each cpp file (and exe) which produces an interestingness
timeout?


There's no support for this but it would be a very easy change to make.

John



Re: [creduce-dev] Creduce example with potential improvements

2018-10-10 Thread John Regehr
I was sure we had a class -> struct transformer, but it looks like we 
don't; I'll see if we can do that one. We do a lot of similar stuff.


Russell, your list is great; the problem is we've all moved on to other 
jobs or projects and it's hard to find time for active C-Reduce development.


John


On 10/10/2018 10:13 AM, Reid Kleckner wrote:
On Wed, Oct 10, 2018 at 5:47 AM Russell Gallop wrote:


* Replacing class { public: ... } with struct { ... }. This should
be equivalent and reduces the number of tokens.


This last one is really simple, I do it a lot, and I'd love to see it 
upstream. The rest seem tricky, but would be great if someone finds the 
time.


Re: [creduce-dev] Avoiding syntax warnings and errors

2018-03-14 Thread John Regehr
Hi Vegard, I'll agree with Yang that the point of C-Reduce is basically 
to *not* worry about any of the things you mention, deferring those 
problems to the interestingness test.


I typically include some tests for warnings in the interestingness test, 
or else just manually fix up the warnings at the end, once reduction 
finishes. If the final output is less than a couple hundred bytes, this 
is usually easy.
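One possible shape for such a warnings check is to record a baseline warning count and reject variants that add new ones (the mitigation Vegard describes below). The compiler invocation is elided here; canned logs keep the sketch self-contained, and `count_warnings` is a hypothetical helper:

```shell
# Hedged sketch: reject a variant whose build produces more warnings
# than the original file's build did. In a real test, the .log files
# would be captured compiler stderr rather than canned text.
count_warnings() { grep -c 'warning:' "$1"; }

printf 'a.c:1: warning: unused variable\n' > baseline.log
printf 'a.c:1: warning: unused variable\na.c:9: warning: no return type\n' > variant.log

base=$(count_warnings baseline.log)
cur=$(count_warnings variant.log)
if [ "$cur" -le "$base" ]; then
  echo "keep"
else
  echo "reject"
fi
```

As Vegard notes below, this cannot force the count to only go down, and re-running the compiler for the count slows the reduction considerably.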


Those things said, you can probably disable some of the problematic 
passes, but I don't think any of us has ever tried to do that.


Regarding try-catch, yes, we'd like to have a pass for that. I added a 
note about this to our TODO list, but mostly we're too busy for adding 
new stuff these days.


John


On 3/14/18 12:47 AM, Yang Chen wrote:

Hi Vegard,


On 03/13/2018 12:08 PM, Vegard Nossum wrote:

Hi,

First of all, thanks for C-reduce! It's extremely useful and valuable.

I am trying to run C-reduce on a large number of (large-ish,
preprocessed) source files and I have run into the following problems
and/or minor annoyances:

Especially for sources that crash the compiler, programs often come
out with syntax errors and warnings, even though the syntax was fine
in the original program. One mitigation I have found that helps to a
certain degree is to test that the number of braces ("(", ")", "[",
"]", "{", "}", "<", ">") match up in the acceptance script, but this
breaks down if those characters are present and/or unbalanced, say, in
source comments or string literals.
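A naive version of that brace-counting check might look like this (curly braces only, for brevity; as noted, it miscounts braces inside comments and string literals):

```shell
# Reject variants whose curly braces don't match up. A crude syntactic
# sanity check, not a parser: counts '{' and '}' anywhere in the file.
balanced() {
  opens=$(tr -cd '{' < "$1" | wc -c)
  closes=$(tr -cd '}' < "$1" | wc -c)
  [ "$opens" -eq "$closes" ]
}

printf 'int f() { return 0; }\n' > ok.c
printf 'int g() { return 0;\n' > bad.c
balanced ok.c && echo "balanced"
balanced bad.c || echo "unbalanced"
```

Extending this to parentheses and square brackets is mechanical; angle brackets are harder since `<` and `>` also appear as comparison operators.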

There are other syntax errors which are introduced as well, for
example the removal of semicolons and return values (and return
statements). A mitigation for this is to count the number of warnings
in the original file, pass it to the acceptance script through, say,
an environment variable, and make sure that number never increases.
There are drawbacks, though: if there are any warnings to start with,
these could disappear and new ones be introduced (i.e. there is no
easy way to force this number to always go down). It also slows down
the whole reduction process by a ridiculous amount, going from maybe 1
minute to over 10 for the initial passes alone.

Another mitigation is of course passing -Werror to have those warnings
turned into errors, but again that only works if the original source
does not have those warnings to start with.

My final script uses a combination of brace counting, -Werror=
options, and stderr grepping, but it's far from perfect and it's
actually a huge slowdown compared to just using a dumb script and then
fixing up the resulting test case by hand (i.e. adding "void" or "int"
where the return type is missing, adding "return 0;" statements to the
end of functions, etc.).

I am not sure, but it seems like there might be a small number of
passes responsible for introducing the worst of these syntax errors.

Any hints, tips, or ideas for how to improve the situation? Could it
be possible to provide a command-line switch that will skip passes
that remove "too much"?


C-Reduce doesn't have a mode where it would only produce syntactically 
correct outputs, or work toward producing fewer invalid outputs. There 
are quite a few passes that can introduce syntax-related errors. At the 
moment, I can't think of a better way to avoid that except massaging 
the script as you did.  Sorry for the inconvenience.



Another thing I noticed is that C-reduce often leaves superfluous
try...catch statements. I've often found the final output to contain a
try...catch that could simply be substituted by either just the
try-block, just the catch-block(s), just the catch-declaration(s), or
the concatenation of a combination of them (without the "try" and
"catch" themselves obviously).




Yes, this is a known issue. Currently, C-reduce doesn't have a specific 
pass for processing try/catch statements.


- Yang


Re: [creduce-dev] LLVM 6

2018-03-09 Thread John Regehr
Eric, also, if you have time to make some sort of rudimentary release 
checklist for C-Reduce, I can take care of subsequent releases more 
effectively.


John


On 3/9/18 11:39 AM, Eric Eide wrote:

John Regehr  writes:


Eric, if you don't have time for a release I'll do it.


I still want to do this.  Obviously I wasn't very successful recently, but I
think that my load of other stuff is lifting and I should be able to do it
soon.



Re: [creduce-dev] LLVM 6

2018-03-09 Thread John Regehr

OK travis is happy again.  Push the button anytime, Eric.

John


On 3/9/18 11:39 AM, Eric Eide wrote:

John Regehr  writes:


Eric, if you don't have time for a release I'll do it.


I still want to do this.  Obviously I wasn't very successful recently, but I
think that my load of other stuff is lifting and I should be able to do it
soon.



Re: [creduce-dev] LLVM 6

2018-03-09 Thread John Regehr
I want you to do it too! But I want a release more than I want you to do 
the release.


John


On 3/9/18 11:39 AM, Eric Eide wrote:

John Regehr  writes:


Eric, if you don't have time for a release I'll do it.


I still want to do this.  Obviously I wasn't very successful recently, but I
think that my load of other stuff is lifting and I should be able to do it
soon.



[creduce-dev] LLVM 6

2018-03-09 Thread John Regehr
OK, we managed to sit out LLVM 5 without a C-Reduce release, but LLVM 6 
just came out so now would be a good time. I just pushed changes 
updating to LLVM 6 (nothing broke, it looks like). Later today when I'm 
on a Linux machine I'll update our docker stuff.


Eric, if you don't have time for a release I'll do it.

John



Re: [creduce-dev] cmake bug?

2018-01-30 Thread John Regehr

I have weak cmake skills but can try to help.

In any case I just added this as an item on the TODO list so we don't 
forget, in case we don't do it right away.


John


On 01/30/2018 08:31 PM, Eric Eide wrote:

John Regehr  writes:


In souper we have similar foo.in -> foo transformations doing some
substitutions that do get run at "make" time.  I don't know if there's
something special about C-Reduce that makes this unworkable.


Now that you mention it, I vaguely remember trying `configure_file` and
failing.  I forget why I failed.  Probably because I don't know what I'm doing.
Maybe I should try again!

I vaguely recall that one problem was that `configure_file` doesn't let one
specify that the output should be executable.  But I think I had problems with
the actual substitutions, too?

Eric.



Re: [creduce-dev] cmake bug?

2018-01-30 Thread John Regehr

Thanks, I guess I just hadn't run into this before!

In souper we have similar foo.in -> foo transformations doing some 
substitutions that do get run at "make" time.  I don't know if there's 
something special about C-Reduce that makes this unworkable.


This is the mechanism we use there:

https://cmake.org/cmake/help/v3.0/command/configure_file.html

John


On 01/30/2018 07:32 PM, Eric Eide wrote:

John Regehr  writes:


There's something weird going on where the "creduce" file isn't getting
generated from "creduce.in" after I modify the latter.  So I change
creduce.in, run "make install", and then I just get the old version of the
file.  I can get the newer version if I remove my build directory and run
cmake again.  Might this be related to recent changes?


Anyway, "not an unknown bug."  I guess it's not a feature since it's not
documented :-/.



[creduce-dev] cmake bug?

2018-01-30 Thread John Regehr
There's something weird going on where the "creduce" file isn't getting 
generated from "creduce.in" after I modify the latter.  So I change 
creduce.in, run "make install", and then I just get the old version of 
the file.  I can get the newer version if I remove my build directory 
and run cmake again.  Might this be related to recent changes?


I'm using:

cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/creduce-install 
-DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++


John


Re: [creduce-dev] Experiences building C-Reduce on Windows

2017-10-13 Thread John Regehr

Glad you got this to work, Reid!

John


On 10/13/17 1:46 PM, Reid Kleckner wrote:
I actually ended up getting flex.exe from gnuwin32. I put flex.exe on 
PATH, and then CMake finished.


I was able to build unifdef from source and place it manually in the 
creduce build tree where it would normally be produced, and that seems 
to work.


I mucked with cpan for a while and now msysgit's perl seems to do the 
right thing.


I had to comment out the "is executable" check from creduce to get it to 
accept my shell script, and now I have a reduction.


I guess it all worked out. :)

On Fri, Oct 13, 2017 at 11:41 AM, Dmitry Babokin wrote:


If you need flex/bison, it's quite difficult to get around it with
just "proper" configuration script ;-)

I would suggest trying either Cygwin (it does have pre-built
flex/bison and they should be relatively new) or the GnuWin32 tool set
(flex/bison in this one are quite old and this may be an issue).

Dmitry.

On Fri, Oct 13, 2017 at 11:04 AM, Reid Kleckner <r...@google.com> wrote:

Has anyone else tried this with any success?

My goal here was to reduce a disagreement between MSVC and Clang
down to some minimal example where one errors in a specific way
but the other does not. I think the shortest path to a solution
for me is going to be running MSVC under Wine on Linux, but I
figured I should send this email to document the state of the art.

The INSTALL instructions don't mention Windows at all, but the
CMakeLists.txt commit messages do, so I blithely ignored all
advice about dependencies and did the usual CMake pattern:

git clone ... creduce
cd creduce
mkdir build
cd build
cmake -GNinja ..

Worked OK, until it failed to find flex:
-- Could NOT find FLEX (missing: FLEX_EXECUTABLE)
CMake Error at clex/CMakeLists.txt:23 (FLEX_TARGET):
   Unknown CMake command "FLEX_TARGET".

OK, now I go get flex, which has moved to GitHub
(https://github.com/westes/flex). This wants me to run
autogen.sh, which relies on autoconf and automake, which I do
not have installed. I only have what mingw64 gives me: make &
GCC. Wasn't the point of autoconf to generate a shell script
that you check in so users don't need anything other than a shell?

So, let's ignore that build system. flex is like 10 .c files,
how hard can it be to build? Let's look at the README. Hm, looks
like it has a pretty hard dependency on bison "to generate
parse.c from parse.y". Was someone nice enough to check in
parse.c so that users could build without that dependence? Nope. =/

I'm two dependencies deep and wondering, how did mpflanzer do it?





Re: [creduce-dev] Work Before Next C-Reduce Release?

2017-03-20 Thread John Regehr

Push the button (as far as I'm concerned).

John


On 3/20/17 10:04 AM, Eric Eide wrote:

Is there work that should be committed before I make the next C-Reduce release?

Looking over the current issues and pull requests, I'm planning to put all of
these off until after the upcoming release.

So my pending work is basically just doing test builds on various platforms,
making a list of what's new, rolling the official tarball, etc.

Thanks ---



Re: [creduce-dev] Making creduce more parallel

2017-01-27 Thread John Regehr

I would suggest that you start out dealing only with one or two passes,
perhaps the line remover and the token remover.  These do some heavy
lifting, hopefully never crash, and are always easy to understand.


I think if you do this and don't worry about things like command-line 
arguments (or working across a network) this should end up being easy to 
get off the ground.


One area where you'll get a bit of a free pass is process management: 
the current scheme wants to asynchronously kill obviously-useless work 
and that is a bit painful.  If you have a merge strategy it doesn't 
sound like you'll ever want to kill off a test.


One place to be careful is in managing the pass state objects, which 
only make sense in the context of a variant.  You have to keep these 
paired up.  I could have included the variant in the pass state object, 
this would be much cleaner but would not be fast.


John



Re: [creduce-dev] Making creduce more parallel

2017-01-27 Thread John Regehr

How are the passes / reducers structured right now? If we can generate
all of a pass's potential reductions up front, then they can be
inserted into the queue in a random order to reduce the likelihood of
conflicts. If the passes don't separate generating a potential
reduction from testing it, then we may need to refactor more.


Let's talk about the line reduction pass at granularity 1 (each variant 
is created by removing one line).  We're running it on a 1000-line file.


The search tree here has 2^1000 leaves, so we certainly don't want to 
try to generate all variants up-front.


What we can do is speculate: assume the variants are unsuccessful 
(statistically this is the right guess) and now we only have 1000 
variants, so that is feasible, but not particularly fast since we're 
manipulating ~50 KB of text. Worse, this line of speculation becomes 
more and more out of sync with the current best reduction, assuming that 
some line removals succeed -- so merge conflicts are going to start 
happening.


The upshot is that while the coordinator can run ahead, it should run 
only far enough ahead that the queue doesn't empty out.


The API for a C-Reduce pass is purely functional. The pass takes a 
current test case and a pass state object, and either produces a new 
variant + pass state or else says that it has no more variants to 
generate.  This API is not designed to facilitate picking random variants.


However, a quick hack to get a random variant is to just repeatedly 
invoke the pass a random number of times.  This is not fast but it'll 
get things off the ground with very little effort.  Some experimentation 
will be required to determine how this parameter interacts with the 
likelihood of merge conflicts.
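The purely functional pass API and the repeated-invocation hack described above can be sketched as follows. This is a toy model under stated assumptions, not C-Reduce's actual Perl interface: the pass state is modeled as a plain line index, and only the granularity-1 line-removal pass is shown.

```python
"""Toy model of C-Reduce's purely functional pass API: a pass takes a
test case plus a state object and returns either (variant, new_state)
or None when it has no more variants to generate.  Modeled on the
line-removal pass at granularity 1; a sketch, not the real thing."""
import random


def line_remover(test_case, state):
    """state = index of the next line to try removing."""
    lines = test_case.splitlines(keepends=True)
    if state >= len(lines):
        return None                       # no more variants
    variant = "".join(lines[:state] + lines[state + 1:])
    return variant, state + 1


def random_variant(pass_fn, test_case, max_skips=10):
    """The quick hack from the discussion: advance the pass a random
    number of times and take whatever variant it lands on."""
    state, result = 0, None
    for _ in range(random.randint(1, max_skips)):
        step = pass_fn(test_case, state)
        if step is None:
            break
        result, state = step
    return result
```

With a 3-line input, successive calls to `line_remover` walk through the three single-line-removal variants in order, and `random_variant` picks one of them at random.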



I was imagining that the "orchestrator" process would spawn worker
threads that spawn-and-wait on interestingness processes and use
CSP-style channels to communicate with the main thread that owns the
queue. This would leverage the modularity of passes that
generate potential reductions and of the interestingness test.

Something like this diagram:
https://gist.githubusercontent.com/fitzgen/bf1acdc6dad217f2ed5accbabce9cf73/raw/981bff47be0f818e69041908eb63035fabb4e25a/orchestrator-diagram.txt

I was planning on prototyping in Rust, which has channels in its
standard library. Python's `queue.Queue` should also be able to handle
the job.

If you have other suggestions, I am all ears.


This sounds fine, the only suggestion I have is that you might consider 
using a network-friendly mechanism for the orchestration in case we 
wanted to see how reduction across multiple machines works. Or at least 
design things so that this isn't difficult if anyone wants to try it out 
later.


I would suggest that you start out dealing only with one or two passes, 
perhaps the line remover and the token remover.  These do some heavy 
lifting, hopefully never crash, and are always easy to understand.


John
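The queue-based orchestration discussed in this thread can be sketched with Python's standard `queue.Queue` and threads, as Nick suggested. This is a minimal toy under stated assumptions: the interestingness test is a stub function rather than an external script, and there is no merging or killing of stale work.

```python
"""Toy orchestrator in the spirit of the thread: the main thread owns
a work queue of candidate variants; worker threads pull from it, run
the interestingness test, and report over a results channel (a second
queue).  A real version would fork interestingness processes and merge
successful reductions; this sketch only finds the smallest winner."""
import queue
import threading


def orchestrate(variants, interesting, n_workers=4):
    work, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            v = work.get()
            if v is None:                 # sentinel: shut down
                return
            results.put((v, interesting(v)))
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for v in variants:
        work.put(v)
    work.join()                           # wait for all variants to be tested
    for _ in threads:
        work.put(None)
    for t in threads:
        t.join()

    passed = []
    while not results.empty():
        v, ok = results.get()
        if ok:
            passed.append(v)
    # "Current most-reduced test case": smallest interesting variant.
    return min(passed, key=len, default=None)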



Re: [creduce-dev] A reduction attempt that creduce handled poorly, where delta was able to make progress

2017-01-27 Thread John Regehr

Ah. I should have mentioned that I needed to use delta's -in_place
option to get delta to work. Does creduce have an equivalent or do I
have to manually do the backups and restoration on any build or test
failure?


To support parallelism I removed C-Reduce's ability to reduce in place; 
you'll need to manually copy variants to whatever directory they should 
be in.
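Concretely, the manual copy can live inside the interestingness test itself: copy the current variant into the externally hard-coded location, rebuild, and check for the failure. The sketch below is hypothetical; the destination path, build command, and marker string are placeholders, and because C-Reduce runs tests in parallel by default, a single shared destination path would require restricting C-Reduce to one process.

```python
"""Sketch of an interestingness test for an external build system that
hard-codes an absolute path to the file being reduced.  Copies the
variant into place, runs the build, and greps the output for a marker.
All names here (paths, command, marker) are illustrative placeholders."""
import shutil
import subprocess


def run_test(variant_path, dest_path, build_cmd, marker):
    shutil.copy(variant_path, dest_path)  # stomp the hard-coded file
    r = subprocess.run(build_cmd, capture_output=True, text=True)
    # Interesting iff the build output still shows the failure marker.
    return marker in (r.stdout + r.stderr)
```

Note that with multiple parallel workers, every worker would stomp the same `dest_path`, which is exactly the interference John describes elsewhere in this archive; run with one CPU if you try this.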



With some iterations of delta, manually removing unused functions, and
speculatively removing different arguments or calls I was able to get
a copy of SLPVectorizer.cpp down to 102 lines (though still
#include'ing 5 of the llvm headers) that gives the known-good result
with the good switch and segfaults with the bad, but it might also be
so small or full of UB that it isn't meaningful to determine what's
getting miscompiled. I'm not sure at this point.

Maybe if creduce had something like an in-place option to automate
this type of situation where there's an external build system that has
an absolute path to the file hard-coded, I would have gotten more
mileage out of it here.


Maybe!  But basically it's very difficult to reduce a file that is part 
of something like a compiler in a sensible way.  It may be better to 
approach this problem another way.


John



Re: [creduce-dev] A reduction attempt that creduce handled poorly, where delta was able to make progress

2017-01-26 Thread John Regehr

Hi Tony, thanks for the note and the examples.

That's a pretty serious interestingness test you have there!

One thing that might be going wrong is that C-Reduce is running multiple 
processes in parallel and they are cd-ing to /opt/llvm/whatever and 
stomping each other.  Can I ask you to re-run this reduction while 
telling C-Reduce to use just one CPU?


But also I'm not sure I understand the purpose of this reduction.  If 
you're making an LLVM source code file smaller, compiling it, and 
running it, then aren't you going to get killed by undefined behaviors 
introduced by the reducer?


Merging multiple source files is non-trivial!  I don't have a good way 
to do that for C++.  For C I use CIL:


https://github.com/cil-project/cil

John



Re: [creduce-dev] Making creduce more parallel

2017-01-25 Thread John Regehr

We put all potential reductions into a shared work queue, have a
worker per core which is pulling from this shared queue. We globally
maintain a current most-reduced test case.


I think this is reasonable.

At this point I should mention that Moritz Pflanzer has a 
re-implementation of C-Reduce in Python that we have tentatively planned 
to replace the Perl implementation with, at some suitable time.  It's 
possible that C-Reduce hacking of the type you are proposing should be 
done on that version.



However, if this reduction passes the interestingness test, but it is
not smaller than the current most-reduced global, then we add a new
potential reduction to the shared work queue: the merge of this
reduction and the current most-reduced global. The idea is that the
union of two reductions (a merge) is itself a potential reduction. If
there is a conflict in the merge, we discard it immediately and don't
even run the interestingness test.


Sounds right.

As I was saying, some randomness will need to be integrated into the 
passes to reduce the likelihood of conflicts.  The passes aren't really 
geared for that so that'll take a bit of thought.
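The merge-as-reduction idea can be made concrete on a simple patch representation. The model below is an assumption for illustration only: each successful reduction is a map from line index to replacement text (with `None` meaning the line was deleted), two patches conflict if they touch the same line differently, and conflicting merges are discarded without running the interestingness test, as proposed above.

```python
"""Sketch of merging two successful reductions against a common base.
A 'patch' maps line index -> replacement text (None = delete).  This
toy model is not C-Reduce code; real passes produce richer edits."""


def merge_patches(patch_a, patch_b):
    overlap = set(patch_a) & set(patch_b)
    if any(patch_a[i] != patch_b[i] for i in overlap):
        return None                       # conflict: discard immediately
    return {**patch_a, **patch_b}


def apply_patch(base_lines, patch):
    out = []
    for i, line in enumerate(base_lines):
        repl = patch.get(i, line)
        if repl is not None:              # None means the line was deleted
            out.append(repl)
    return out
```

Under this model the union of two non-conflicting reductions is itself a candidate reduction, which still has to pass the interestingness test before it can become the new most-reduced global.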



Because merges are just another kind of potential reduction, and there
is no waiting for merges to complete before trying any given potential
reduction, this scheme should better saturate all those cores.


Yes, definitely, though at least sometimes we'll be running up against 
limits other than cores, such as the fact that compilers use a lot of 
memory bandwidth.



It is worth pointing out that this is non-deterministic and racy:
merges depend on what happens to be the current most-reduced test case
and what the order of potential reductions in the queue happen to be.
Running creduce on the same input file twice won't guarantee the same
order of reductions or even the same results. For what it is worth, my
understanding of the current paradigm is that it has similar
properties.


Often the interestingness test itself is non-deterministic (due to 
timeouts) so it's very hard to truly avoid non-determinism.


There's one algorithmic choice in the parallel reducer where we have to 
decide whether to take the first process that terminates with an interesting 
test case or whether to pull these off the queue in order.  I can't remember 
for sure but I believe I went with the more conservative choice even 
though it sacrifices a bit of performance.


For a paper that we're working on about C-Reduce I have some experiments 
planned to evaluate the effect of determinism on reduction.  In other 
words, if we run the same reduction 100 times but with phase ordering 
randomized, what does the resulting distribution of final file sizes 
look like?  My guess is that in many cases the distribution will be 
fairly tight but that every now and then the randomness will find a much 
better solution.  Anyhow I'm looking forward to seeing the results of this!



I don't want to implement merging or rebasing patch files myself. What
if we leveraged git (or hg) for this? Each potential reduction would
locally clone a repository containing the test case into the temp dir,
commit the reduction's changes, and merging different reductions would
be outsourced to merging these commits with `git merge`.


My intuition is that git/hg would eat a lot of performance but I could 
be wrong.  It would certainly be amusing if C-Reduce ended up being a 
small pile of git hooks :).
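For what it's worth, the outsource-merging-to-git idea can be sketched with plain subprocess calls. Everything here is illustrative and hedged: branch and tag names are made up, each reduction is assumed to be committed on its own branch off the base test case, and as John notes the per-merge process overhead may well be prohibitive in practice.

```python
"""Sketch of using git as the merge engine for reductions: each
successful reduction lives on a branch forked from the base test
case; merging two reductions is `git merge`, and a conflict means
the merged candidate is discarded.  Names are illustrative only."""
import subprocess


def git(repo, *args):
    """Run a git command inside `repo`, capturing output."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True)


def try_merge(repo, branch_a, branch_b):
    """Merge branch_b into branch_a; on conflict, abort and discard."""
    git(repo, "checkout", "-q", branch_a)
    r = git(repo, "merge", "--no-edit", branch_b)
    if r.returncode != 0:                 # conflict: discard this merge
        git(repo, "merge", "--abort")
        return False
    return True
```

If the merge succeeds, the working tree holds the union of the two reductions and can be handed to the interestingness test; if it fails, nothing is run, matching the discard-on-conflict rule from the proposal.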



I am interested in polishing this idea, prototyping it, and if all
goes well contributing these changes to creduce. My hope is that
implementation mostly involves changes to orchestration and that
reductions can remain unchanged.


The thing that I'm proudest of in the C-Reduce implementation is the 
modularity.  The core is just not that complicated, most of the good 
stuff lives in passes that can be thought of as purely functional.  So 
the structure does lend itself to the kind of experimentation you're 
talking about.


Anyhow take a look at Moritz's implementation too:

https://github.com/mpflanzer/creduce/tree/python

What IPC mechanism do you have in mind for the work queue?

John


Re: [creduce-dev] Making creduce more parallel

2017-01-24 Thread John Regehr

Hi Nick!

This sounds like cool work.  It sounds like you're using C-Reduce as a 
fuzzer, basically?


I'm afraid to say that 6 hours is not really that bad when dealing with 
C++, I think a number of us here have seen reductions take quite a bit 
longer than that.


I'm not sure why C-Reduce isn't using all of your cores.  In some phases 
there just isn't a lot of parallelism available but the workhorse 
line-removal passes do have a lot of parallelism.


The only thing that C-Reduce runs in parallel is the interestingness 
test; the transformations and the C-Reduce core remain serial. How long 
is your interestingness test taking?  If it's fairly fast that might 
explain the loss of parallelism.


Anyhow the only easy solution I can offer you is to run multiple 
C-Reduce instances, solving different problems.  Is there any chance 
that fits what you're doing here?



Initially, I thought about a map-reduce-y kind of thing: map out
potential reductions and interestingness tests across all cores and
then merge them back together in pairs in parallel. If a merge
conflicts, take the smaller of the two. Retest interestingness after
each merge and if it fails take the smaller of the two test cases
(both of which we know already passed the interestingness test).
Repeat this process to a fixed point.


This is interesting, and is certainly a way to un-bottleneck C-Reduce. 
It would probably be necessary to randomize the transformations to 
prevent them from creating merge conflicts too often (by default 
transformations walk through the program).


When I originally parallelized C-Reduce I had something like this in 
mind, but I eventually reached the conclusion that you reached, which is 
that merges will kill it.


Your next idea I need to think about some more!  More later.

John


Re: [creduce-dev] Understanding options to framac in a creduce script

2016-11-27 Thread John Regehr

Faraz, I'd suggest just using tis-interpreter instead of Frama-C:

http://trust-in-soft.com/tis-interpreter/

But to answer your questions, you are running the value analysis plugin 
along with options that favor (relatively) fast concrete interpretation, 
as opposed to abstract interpretation.



*[-stop-at-first-alarm]*


Don't continue after finding an error.


*[-no-val-show-progress]*


Don't waste time printing intermediate statuses.


*[-obviously-terminates]*
*[-precise-unions]*


I forgot.

John


Re: [creduce-dev] Error building creduce

2016-09-13 Thread John Regehr
Thanks. Could you try building against 3.9? We have not yet made any 
effort to support a post-3.9 LLVM version.  We'll work on that after we 
roll out the next release, probably later this month.


John


On 09/13/2016 01:05 PM, Faraz Hussain wrote:

Thanks John. Here they are:

LLVM/Clang (I downloaded these yesterday and built them myself):

fhussain@machine1:~/repos/research/llvmcentral/build/bin$ llvm-config 
--version

*4.0.0svn*

fhussain@machine1:~/repos/research/llvmcentral/build/bin$ ./clang 
--version

*clang version 4.0.0 (trunk 281283) (llvm/trunk 281282)*

The source and build directories are different, as per the 
instructions here: http://clang.llvm.org/get_started.html


source dir: /home/fhussain/repos/research/llvmcentral/llvm
build dir: /home/fhussain/repos/research/llvmcentral/build


creduce: I have the latest version from github:

fhussain@machine1:~/repos/research/creduce$ git log --oneline -1
90cee54 Indent.



Faraz.


On Tue, Sep 13, 2016 at 11:13 AM, John Regehr <reg...@cs.utah.edu> wrote:


Faraz, you will need to tell us what version of C-Reduce you are
using, what version of LLVM/Clang you are using (specifically),
and what options you passed to the configure script.

John



On 9/13/16 11:08 AM, Faraz Hussain wrote:

Hi,

I get this error when executing creduce's make:

In file included from AggregateToScalar.h:18:0,
 from AggregateToScalar.cpp:15:
Transformation.h:18:35: fatal error: clang/AST/ASTConsumer.h:
No such
file or directory
compilation terminated.
Makefile:868: recipe for target
'clang_delta-AggregateToScalar.o' failed

I have  the latest llvm/clang from their svn, and creduce's
configure
 did find clang-format and llvm-config:

checking for llvm-config...
/home/fhussain/repos/research/llvmcentral/build/bin/llvm-config
checking for clang-format...
/home/fhussain/repos/research/llvmcentral/build/bin/clang-format

I think I may need the following in the include path to look
for the
headers: 
/home/fhussain/repos/research/llvmcentral/llvm/tools/clang/include

Where can I tell creduce to look for this?



Thanks,
Faraz.








Re: [creduce-dev] Cherry-Picking Commits?

2016-09-13 Thread John Regehr

It would be simpler, I imagine, to simply merge master into llvm-svn-compatible
as desired.


As you've probably noticed I do this periodically to keep the skew 
minimized.


John



Re: [creduce-dev] Error building creduce

2016-09-13 Thread John Regehr
Faraz, you will need to tell us what version of C-Reduce you are using, 
what version of LLVM/Clang you are using (specifically), and what 
options you passed to the configure script.


John


On 9/13/16 11:08 AM, Faraz Hussain wrote:

Hi,

I get this error when executing creduce's make:

In file included from AggregateToScalar.h:18:0,
 from AggregateToScalar.cpp:15:
Transformation.h:18:35: fatal error: clang/AST/ASTConsumer.h: No such
file or directory
compilation terminated.
Makefile:868: recipe for target 'clang_delta-AggregateToScalar.o' failed

I have  the latest llvm/clang from their svn, and creduce's configure
 did find clang-format and llvm-config:

checking for llvm-config...
/home/fhussain/repos/research/llvmcentral/build/bin/llvm-config
checking for clang-format...
/home/fhussain/repos/research/llvmcentral/build/bin/clang-format

I think I may need the following in the include path to look for the
headers:  /home/fhussain/repos/research/llvmcentral/llvm/tools/clang/include
Where can I tell creduce to look for this?



Thanks,
Faraz.





Re: [creduce-dev] release time

2016-09-07 Thread John Regehr
Can I get someone to add the nonsense to the INSTALL file about under 
which circumstances Clang must be used to build C-Reduce instead of GCC, 
and how to do that for autoconf and cmake?


Thanks!

John



On 09/07/2016 01:36 PM, Eric Eide wrote:

John Regehr  writes:


Since LLVM 3.9 is out and C-Reduce development seems quiet at the moment, we
should do a new release in the near future.  Is there anything specific that
needs to be done first?


I'm not aware of anything that "must" be done before a release.  AFAIK, all
currently pending merge requests can be deferred until after a release.

(And perhaps some of them are more easily handled after a release in any case.
Like, I'm assuming that some of the pending merge requests have a failing
Travis-CI build because the llvm-svn-compatible branch has a broken Travis-CI
build, which I haven't fixed, and which would be easiest to fix simply by doing
the release.)

Of course, there are things to do as part of making a release, which I expect
to be responsible for doing.

Eric.



Re: [creduce-dev] release time

2016-09-07 Thread John Regehr
Ok, I'll merge the llvm-svn-compatible branch into the trunk and then 
you can do your stuff, Eric.


John


On 9/7/16 1:36 PM, Eric Eide wrote:

John Regehr  writes:


Since LLVM 3.9 is out and C-Reduce development seems quiet at the moment, we
should do a new release in the near future.  Is there anything specific that
needs to be done first?


I'm not aware of anything that "must" be done before a release.  AFAIK, all
currently pending merge requests can be deferred until after a release.

(And perhaps some of them are more easily handled after a release in any case.
Like, I'm assuming that some of the pending merge requests have a failing
Travis-CI build because the llvm-svn-compatible branch has a broken Travis-CI
build, which I haven't fixed, and which would be easiest to fix simply by doing
the release.)

Of course, there are things to do as part of making a release, which I expect
to be responsible for doing.

Eric.



[creduce-dev] release time

2016-09-07 Thread John Regehr
Since LLVM 3.9 is out and C-Reduce development seems quiet at the 
moment, we should do a new release in the near future.  Is there 
anything specific that needs to be done first?


Thanks,

John


[creduce-dev] handling #defines

2016-08-11 Thread John Regehr
I just checked in a simple, very hacky pass for resolving #defines.  It 
only handles macros that lack arguments and misses lots of other cases. 
There may be additional bugs.  Writing this has strengthened my resolve 
not to reimplement CPP.  The right answer here is to reuse an external 
tool that resembles unifdef, but that handles selective macro expansion 
(I already asked Tony to support this in unifdef, maybe he'll do it 
sometime).


This is orthogonal to pull req #110 which Eric is looking at.

If you want to play with the macro expander by itself, you can invoke it 
like this:


  clex define 0 foo.c

This will expand the first #define found in foo.c

John


[creduce-dev] template hell

2016-08-02 Thread John Regehr
If anyone has some spare hacking time here's a spot where clang_delta 
could use a bit of upgrading:


https://github.com/csmith-project/creduce/issues/113

John


Re: [creduce-dev] reduction using dynamic information

2016-08-01 Thread John Regehr

Ok-- thanks again for doing this Yang.

I wrote the code to take advantage of this transformation (in a little 
standalone tool since it doesn't slot neatly into C-Reduce's framework) 
and it found 2 nice simplifications to do on an already fully-reduced 
test case that I'm working on.


I'll play with this some more and decide whether the support wants to go 
into C-Reduce or not.


John


On 07/24/2016 03:14 AM, Yang Chen wrote:

OK, I pushed a new pass that does pretty much the same as we discussed.
The pass has some simple analysis that is able to reduce the number of
trivial transformations. For example, given "foo(a+1, a+1)", we only
need to insert printf for one of these "a+1".

Furthermore, the pass tries to avoid transformations on the control-code
injected by C-Reduce. For example, the pass won't insert printf for
"!__creduce_printed_1" that appears in "if (!__creduce_printed_1)". This
extra work is actually quite important. Without it, it's almost certain
that we would end up repeatedly rewriting the first occurrence of
"!__creduce_printed_1", because "--counter=1" would always result in a
valid transformation.

The examples below demonstrate how to use the pass. In these examples, I
pass "1" to the counter, but you can image that any valid counter value
would also work.

$ cat foo.c
int foo() {
  int a = 1;
  return a+1;
}

* insert code that prints the value of the candidate expression

$ clang_delta --transformation=expression-detector --counter=1 foo.c
int printf(const char *format, ...);
int foo() {
  int a = 1;
  {
  int __creduce_expr_tmp_1 = a+1;
  static int __creduce_printed_1 = 0;
  if (!__creduce_printed_1) {
printf("creduce_value(%d)\n", __creduce_expr_tmp_1);
__creduce_printed_1 = 1;
  }

  return __creduce_expr_tmp_1;
  }
}


* replace the designated expression by a passed value

$ clang_delta --transformation=expression-detector --counter=1
--replacement=123 foo.c
int foo() {
  int a = 1;
  return 123;
}

* insert code that checks whether the candidate expression equals the
reference value

$ clang_delta --transformation=expression-detector --counter=1
--check-reference=123 foo.c
void abort(void);
int foo() {
  int a = 1;
  {
  int __creduce_expr_tmp_1 = a+1;
  static int __creduce_checked_1 = 0;
  if (!__creduce_checked_1) {
if (__creduce_expr_tmp_1 != 123) abort();
__creduce_checked_1 = 1;
  }

  return __creduce_expr_tmp_1;
  }
}


I am sure there are bugs in this new pass. I am just hoping there
wouldn't be too many.

- Yang



Re: [creduce-dev] reduction using dynamic information

2016-07-24 Thread John Regehr

Yang, excellent, thanks!


extra work is actually quite important. Without it, it's almost certain
that we would end up repeatedly rewriting the first occurrence of
"!__creduce_printed_1", because "--counter=1" would always result in a
valid transformation.


Nice, I didn't think of that.

I'm leaving for a trip tomorrow morning but will have some time to work 
while away, and will let you know how this goes!


John


Re: [creduce-dev] Expanding macros?

2016-07-19 Thread John Regehr

There is not, because nobody has written it yet.  A patch would be welcome.

Keep in mind that if at all possible you should use CPP to expand macros 
before running C-Reduce.


John



On 7/19/16 10:42 AM, Ori Brostovski wrote:

Is there a creduce pass that is responsible for expanding macros? If
not, why?

I.e. replacing

// BEGINNING
#include 

#define D(x) x x
#define E(x) x x x

D(E(4))
D(1)
E(2)
// ENDING

with

// BEGINNING
#include 

#define D(x) x x
#define E(x) x x x

4 4 4 4 4 4
1 1
2 2 2
// ENDING




Re: [creduce-dev] reduction using dynamic information

2016-07-14 Thread John Regehr

Yes, I think we can implement this. My concern is that searching only
the first program point where a wrong value is taken might not always
lead to optimal output. For example, we may get a smaller reduced
program by keeping the second failing point. In other words, we could
lose other search paths if we only focus on the first failing point.

On the other hand, this early-abort strategy could serve as a trade-off
between reduction speed and reduction rate. More importantly, we will
never know if we don't give it a shot :)


Yeah, I'm not sure if I've thought it through very well yet, but it's 
worth trying, and also the conversion from a printf-style output to 
calling abort() on wrong code is something that Zhendong Su and his 
people do manually, so it is perhaps useful on its own.


John



Re: [creduce-dev] reduction using dynamic information

2016-07-13 Thread John Regehr

A couple more thoughts:

- Recording dynamic information will be particularly useful in reducing 
bugs in file-processing programs, if we can manage to support arrays. 
In this scenario, a file is loaded from disk into an array, processed 
(wrongly), and then compared against a reference image.  We should be 
able to automatically eliminate all of the code that interacts with the 
filesystem.  Arrays will be more painful to implement and we shouldn't 
worry about this right now.


- The scheme we've been talking about so far is about cutting out 
computations that satisfy the inputs necessary for seeing a compiler 
bug.  But compiler bugs also have an output side, which is necessary to 
observe that the bug happened.  In Csmith there's the checksum code for 
example.  By printing the values of variables in buggy and not-buggy 
executions, C-Reduce can search for the first program point where some 
variable takes on the wrong value.  In this case the generated code 
becomes:


{ int tmp = expr1;
  static int _creduce_checked = 0;
  if (!_creduce_checked) {
 if (tmp != reference_value) abort();
 _creduce_checked = 1;
  }
  x = foo(tmp, expr2);}

Thus, we can also shorten the output side of compiler bugs.  It remains 
to be seen if C-Reduce will tend to just remove this code, instead of 
removing the code that follows it.  There may be a way to convince it to 
do the right thing, such as requiring that an interesting program 
contains an abort() call.


John



On 7/12/16 1:00 PM, Yang Chen wrote:

On 2016-07-12 08:36, John Regehr wrote:


I have two small tweaks to suggest. We should print the value only
once (or else we might get a lot of output for an expression that
lives in a loop) and we should make the value easy to recognize in
case the program prints other stuff.  So perhaps:

  { int tmp1 = expr1;
static int _creduce_printed = 0;
if (!_creduce_printed) {
  printf("creduce_value(%d)\n", tmp1);
  _creduce_printed = 1;
}
x = foo(tmp1, expr2);}

Also we'll have to add a prototype for printf() to the compilation
unit or maybe it's better to simply include stdio.h.



Got it. Thanks for the suggestion.

- Yang



Re: [creduce-dev] reduction using dynamic information

2016-07-12 Thread John Regehr
Yang, that would be awesome if you could prototype this!  Of course I'll 
do the perl side (which will be a slight pain since C-Reduce has to be 
taught to compile and run the program).  Your design sounds good.  We're 
definitely interested in the low-hanging fruit here, no need to worry 
about changing the behavior of programs in corner cases.


I have two small tweaks to suggest. We should print the value only once 
(or else we might get a lot of output for an expression that lives in a 
loop) and we should make the value easy to recognize in case the program 
prints other stuff.  So perhaps:


  { int tmp1 = expr1;
static int _creduce_printed = 0;
if (!_creduce_printed) {
  printf("creduce_value(%d)\n", tmp1);
  _creduce_printed = 1;
}
x = foo(tmp1, expr2);}

Also we'll have to add a prototype for printf() to the compilation unit 
or maybe it's better to simply include stdio.h.


I think this'll be helpful in working around C-Reduce's lack of constant 
folding and will help eliminate the setup code needed to observe 
miscompilations in the wild.


John


On 7/12/16 12:50 AM, Yang Chen wrote:

Hi John,

On 2016-07-11 15:21, John Regehr wrote:


So what I'm looking for is an easy way to enumerate the expressions in
a C/C++ compilation unit and then, for the specified one, rewrite the
program so that the first value taken on by that expression is printed
out. The other thing we need is to replace the expression with its
value -- creating a C-Reduce variant that can be run through the
interestingness test.



Let me try to hack it up this week. I will make this new transformation
follow our existing convention: it will process the expression based on
the counter value passed from the command line. For now, I may put the
following restrictions on the transformation:

  (1) restrict it on C programs;
  (2) print out the value of the expression only if the expression is of
integer or float type (e.g., char, int, float, double, etc);
  (3) a new local variable will be created for each expression of interest;

Item (3) may need more explanation. Assume that we have the following
code snippet (where expr1 is of type int):

  x = foo(expr1, expr2);

the transformation would turn the code into something like:

  { int tmp1 = expr1;
printf("%d\n", tmp1);
x = foo(tmp1, expr2);}

The reason is that expr1 may have side-effect. To make the
transformation have as little impact as possible on the original
semantic, expr1 should only be executed once (note that we may still
change the original semantics because expr2 can be executed before expr1
for the unmodified code).

Once we have the value for tmp1, we can use the following command to
replace the expr1 with the value:

  $ clang_delta --transformation=replace-expr --counter=1 --value=123 foo.c

With this command, we would get:

  x = foo(123, expr2);

Can you check if my description meets the requirement? Thanks.

- Yang



[creduce-dev] reduction using dynamic information

2016-07-11 Thread John Regehr
I've been reducing some difficult wrong code bugs lately and C-Reduce 
tends to get stuck sometimes, it just can't see the transformations that 
need to be done.


One idea I've had, that I think will help out, is to convince C-Reduce
to replace a variable (or other expression) with a value that is seen 
during actual execution.  If an expression takes on multiple values 
during execution, we can just pick one of them and try it out.


So what I'm looking for is an easy way to enumerate the expressions in a 
C/C++ compilation unit and then, for the specified one, rewrite the 
program so that the first value taken on by that expression is printed 
out. The other thing we need is to replace the expression with its value 
-- creating a C-Reduce variant that can be run through the 
interestingness test.


Doing this will be pretty slow, but I view it as a strategy of last 
resort that can be used when easier reduction methods have failed.


I imagine the right answer here is a clang plugin.  Does anyone have 
experience enumerating expressions (including all subexpressions) using 
a clang visitor, and also rewriting the subexpressions?


Thanks,

John


Re: [creduce-dev] Update to CMake-Based Build System

2016-06-28 Thread John Regehr
If I recall, cppp lacks a way to print the full list of possible 
definitions, which makes the indexing feature of a C-Reduce pass hard to 
implement.


This is the one I was looking at, I don't know anything about Mike 
Ernst's version:


http://www.muppetlabs.com/~breadbox/software/cppp.html

Anyhow my view is that this is a bit of a niche C-Reduce feature, 
perfectly fine to just disable it on Windows until we come up with a 
better solution!


John


On 6/28/16 9:20 PM, Moritz Pflanzer wrote:

I have not tested cccp but it seems to be from the GNU family. I suspect that 
it will not work on Windows either.

Instead I have found cppp 
(https://homes.cs.washington.edu/~mernst/software/cppp) which is a Perl script 
which should work on Windows. I will test later and come back with feedback.

Regards,

Moritz


On 28 Jun 2016, at 14:08, Eric Eide  wrote:

mor...@pflanzer.eu writes:


I can confirm that it works on OS X 10.11.


Thanks!


Unfortunately it does not work natively on Windows because "unifdef" cannot
be built with Visual Studio. Some dependencies are POSIX only.  I would
suggest making the build of "unifdef" conditional and showing a warning if
its build is deactivated. The user could then build it by themselves using
Cygwin etc.


Thanks!  I wonder if we could replace unifdef with a similar tool that does
build on Windows.  (I recall cccp, but I don't know if it is Windows-friendly.)

Eric.

--
---
Eric Eide   . University of Utah School of Computing
http://www.cs.utah.edu/~eeide/ . +1 (801) 585-5512 voice, +1 (801) 581-5843 FAX




Re: [creduce-dev] cache

2016-06-26 Thread John Regehr
I've been running some really long reductions and caching the results at 
pass granularity doesn't seem to be causing memory problems, so for now 
I'm going to stick with the current code, which is simple and non-invasive.


John


On 6/24/16 10:13 PM, Moritz Pflanzer wrote:

Hi,


I think it's a good idea. In terms of large memory consumption, perhaps we 
could start caching when the size of the input test drops to some value (e.g. 
10k)?


Or maybe an alternative would be to store the cached results as files on disk? 
Now that caching happens only at pass granularity it shouldn't be too much, I hope. In this
case a hash of the content could be used as key in the cache map, pointing to 
the location of the file (or rather the file name).

Another thought I had but which might be wrong: Take the position of the pass 
within an iteration -- currently $pass_num in the Perl code --, the pass name 
and argument, and a hash of the file contents before the pass has executed and 
store this information in a cache. If we now come to the point -- in a later 
iteration -- where this information is already in the cache when we want to 
store it, can we then abort the reduction immediately because apparently 
nothing has changed during a whole iteration?

Could that help terminate the last round of the reduction earlier, when nothing
changes? Or have I forgotten something that invalidates my logic?

Regards,

Moritz



Re: [creduce-dev] Nits in Test Reduction @ 9b0d493

2016-06-25 Thread John Regehr

void fn1() {
  a << 0;
}


I hear you but these days clang-format puts that kind of function on a 
single line.  We can provide it with a different style preference if we 
want, of course.


John



Re: [creduce-dev] Nits in Test Reduction @ 9b0d493

2016-06-25 Thread John Regehr

The parens are weird, I'll look into it.

This example looks pretty-printed to me, is there something unpretty 
about it?


John


On 6/25/16 10:41 PM, Eric Eide wrote:

At commit 9b0d493 (current master), reducing test #1 gives me this result:

-
long a;
void(fn1)() { a << 0; }
int main() { return 0; }
-

(It runs quite quickly now.  Bravo!!)  Two things:

  + The parens around fn1 are sort of weird.

  + Don't we pretty-print at the end any more?

Eric.



Re: [creduce-dev] Nits in Test Reduction @ 9b0d493

2016-06-25 Thread John Regehr

(I didn't run tests #4 and #5, because they require Frama-C and KCC,
respectively.)


Note that instead of Frama-C and lots of weird command-line options you 
can now just use tis-interpreter and zero command line options, and you 
do not even need to build lots of weird OCaml crap, just download the 
binary:


  http://trust-in-soft.com/tis-interpreter/

John



[creduce-dev] cache

2016-06-22 Thread John Regehr

Just thinking aloud a bit here...

C-Reduce used to have a cache for delta tests.  Since the test case 
usually gets smaller, the cache only hits when C-Reduce's execution 
becomes repetitive near the final fixpoint.  The cache sometimes used a 
lot of memory.  I ended up taking it out when it got in the way of 
debugging parallel execution.


But anyway I was thinking that most of the benefit could be gotten back 
by doing caching at the level of passes instead of individual 
transformations.  So when we're about to invoke a pass, check in the 
cache if this pass has seen this test input before.  If so, replace it 
with the output and move to the next pass.  Again, this'll only speedup 
the last round of execution, but that would be nice sometimes.


John


Re: [creduce-dev] [RFC] Switching from Perl to Python

2016-06-20 Thread John Regehr
Hi Moritz, I need to give your version a careful read and run it against 
the default version-- will get back to you.  Thanks!


John


On 6/19/16 9:08 PM, Moritz Pflanzer wrote:

Hi John,

It took me a good deal longer than I expected but now my Python version of C-Reduce is 
complete. I ported all passes to Python and except for the "skip key" feature 
all options are supported. The skip-key feature seems indeed to be quite a problem under 
Linux as there is no non-blocking read function which reads only a single character (not 
followed by ENTER). For now I stopped trying to implement the feature to see if there is 
further interest at all.

I pushed the current version to the "python" branch of my repository at: 
https://github.com/mpflanzer/creduce/tree/python
I added a second table to the spreadsheet for a speed comparison for the 
complete set of passes: 
https://docs.google.com/spreadsheets/d/1FIvuHr29X2T2H2wOrnGCU0BUM3NeQrvJY_GpKMVJRCA/edit?usp=sharing



Yes, absolutely, there's no requirement for a line-by-line rewrite. I'll have 
to look at the code to find things I'm unhappy with, but the short version is 
I've rewritten the C-Reduce core enough times that I'm pretty happy with it.


I think I found a few minor bugs in some passes which I fixed in the Python 
version. So if the Python version has no future on its own I will compare it 
against the Perl version and report the issues I found.

The only major change -- I think -- is that interestingness tests are now Python modules, i.e. Python scripts. Currently 
each script has to define a "run" and a "check" function. Both functions take a list of test cases 
as input, "check" has to return True or False depending on the interestingness of the test cases and 
"run" has to exit with either 0 or 1; also depending on the interestingness.
Except for the fact that it has to be Python, the "API" of the script could easily be 
changed. They have to be Python scripts because this allows using the "multiprocessing" 
module which is platform independent and supports everything that is needed to run the 
interestingness tests as separate process and to wait on any process to finish.



One thing I've wanted is a way to better specify the pass ordering, which is 
kind of hacky.  It would also be nice to allow users to customize the passes 
using a little file.  Also we'd like to customize the passes based on the 
language that is being reduced.


At the top level, passes are now represented in groups. This allows making 
different configurations based on languages etc. Each group consists of three 
subgroups (first, main, last) which itself contain a list with passes. The 
priority is now implied by the position in the list.
The downside is that the passes have to be repeated for each group but this 
layout makes it easy to take for instance a JSON file as input to define a 
custom group of passes (not implemented yet).



Regarding the passes, there's some pretty bad stuff in the regex-based passes.  
These are basically the original C-Reduce code from the mid/late 2000s.  It 
would be great to find better ways to do these rewrites.


I hope the nested regular expressions are now a bit nicer and maybe even 
faster. I had to write a custom parser for nested expressions as there seems to 
be no built-in approach. Currently it is more like a prototype -- though fully 
functional -- and I am sure it could be further improved. Both in terms of ease 
of use and performance.

I am happy about any questions or comments. In the meantime I will try to keep 
my version up-to-date with the master branch of C-Reduce.

Regards,

Moritz



Re: [creduce-dev] [RFC] Switching from Perl to Python

2016-05-26 Thread John Regehr

Oh, seems like I will have a hard time convincing you that Python is not too 
bad in the end. When I first used Python I was sceptical as well but after 
using it for some time it has become a convenient tool for tasks which require 
a little bit more than just a shell script. But I guess that is what you would 
use Perl for.


Well, I'm not especially opposed, it just has never grabbed me.


Just to be clear, I ran the same passes for both versions (Perl and Python). 
Because the Python version is not complete yet, I had to disable some of the 
passes in the Perl version. I also checked that both versions produce the same 
reduced file in the end.
It might be that not using the original test scripts but Python-based ones has 
caused the difference. But as long as performance is not a main criterion for 
you I would not spend more time analyzing this.


I see, very interesting.  Yes, probably the reduced forking is the win 
then.



I would be willing to do the rewriting. Though instead of just "translating" 
the Perl code into Python I would suggest to think about potential changes to improve the 
maintainability and readability. Maybe you have something that you always wanted to 
change anyway but never had the time?
I think that could be easily done in the style of small code reviews by 
splitting the work up into smaller chunks.


Yes, absolutely, there's no requirement for a line-by-line rewrite. 
I'll have to look at the code to find things I'm unhappy with, but the 
short version is I've rewritten the C-Reduce core enough times that I'm 
pretty happy with it.


One thing I've wanted is a way to better specify the pass ordering, 
which is kind of hacky.  It would also be nice to allow users to 
customize the passes using a little file.  Also we'd like to customize 
the passes based on the language that is being reduced.


Regarding the passes, there's some pretty bad stuff in the regex-based 
passes.  These are basically the original C-Reduce code from the 
mid/late 2000s.  It would be great to find better ways to do these rewrites.


In the meantime, it should be easy enough to just invoke the Perl 
interpreter to run these passes.  No need to touch them or even look at 
them!


John



Re: [creduce-dev] [RFC] Switching from Perl to Python

2016-05-26 Thread John Regehr
Hi Moritz, this is cool.  I've thought about the Perl vs Python issue a 
number of times and basically I just do not love Python no matter how 
many times I start writing it.  On the other hand I can probably get 
over this.


My guess is that the speedup you're seeing is mostly due to running 
fewer passes, since in general CPython is pretty suckily slow compared 
to Perl.  Probably not a big issue for C-Reduce, however, which is 
almost always bottlenecked by interestingness tests.


I do feel strongly that the abstraction boundary between the core and 
the passes and the interestingness tests should be a strong one, 
probably a process by default.


Anyway I need to think about it more and no doubt the other C-Reduce 
people will have opinions.  I'm open to moving to a different 
implementation of the C-Reduce core, but not until the replacement is 
feature complete (and I'm probably not going to have a lot of time to 
work on it myself, but I'm happy to do code reviews).


Keep in mind that the C-Reduce passes are not all equally useful and 
some merging and removing of functionality can probably be done without 
hurting the end results.


John


On 5/26/16 9:36 PM, Moritz Pflanzer wrote:

Hi all,

I am wondering if there might be interest in rewriting the C-Reduce core 
algorithm and the reduction passes in Python. Potential benefits could be:

- I suspect more people are familiar with Python than with Perl
- Python offers a larger set of features without the need to install additional 
modules (see below)
- The implementation seems to be a bit simpler and cross-platform compatibility 
seems to be easier (see below)
- Python is more actively maintained? (Here I am just guessing based on recent 
popularity)
- A Python based implementation could lead to smaller run-times (see below)

Feel free to add other points or to discuss about potential cons of switching. 
So far I could think of:

- Some effort is required to do the rewriting
- You guys might be more familiar with Perl?


To push a little bit more in the direction of switching over I created a first 
proof-of-concept Python version and compared (most of) the included tests 
between the existing Perl and my Python version. Because the Python version is 
not complete yet (see below) I had to disable a few passes to allow a fair 
comparison. And I ran only tests 0-3 and 6, 7 because 4 and 5 make use of KCC 
and Frama-C and I did not want to go through too much trouble setting everything 
up. ;-) (Running them wouldn't have been a problem, though.)

My detailed results can be found here: 
https://docs.google.com/spreadsheets/d/1FIvuHr29X2T2H2wOrnGCU0BUM3NeQrvJY_GpKMVJRCA/edit?usp=sharing

In short: On Linux my Python version takes only 62% of the time on average, on 
Windows there is not much of a difference. (This might be because the 
bottleneck on Windows is the process creation -- as opposed to forking on Linux 
-- and not the passes themselves.)
On Linux the Perl variant used the original shell test scripts, for the Python 
variant I converted the tests to equivalent Python function. In both cases each 
test was run as a separate process, so I guess the comparison is fair.
On Windows, since I could not run the shell scripts, both variants used the 
same Python scripts.


Some words about the Python version. First, it can be found here: 
https://github.com/mpflanzer/creduce/blob/python/creduce/creduce.py
- It took me about 10-20 hours to write this version -- hard to say how long 
exactly since I could always only work for short periods. I would estimate that 
it is about 70% complete with respect to the Perl version.
- I have written it in Python3 as it offers some convenient features over 
Python2 and the recommendation is to start new work with Python3 anyway.
- It does not use anything but the modules which come with the default Python 
installation (both Linux and Windows)
- I think the largest missing piece are the passes that remove matched 
parentheses, braces etc. Python has no built-in functionality so a small custom 
parser would have to be written -- should not be too difficult
- I have not yet figured out the best way to represent, load and execute the 
interestingness tests. Ideally I would like to have a base class from which 
each custom test could inherit. Each test would then be written in a separate 
Python script but dynamically imported into the C-Reduce script. Then it could 
be used as any other class. If that's not really feasible it is however no 
problem to just run them as independent scripts -- the same way as it is now 
in the Perl version.


I think that is all I can report for now. Please let me know what you think 
about the idea or if you need some more information. I might have missed 
something in this writeup.

Best regards,

Moritz



Re: [creduce-dev] OS X Configure-time Error (was Re: Run-time Perl Warnings from C-Reduce bc92ff4)

2016-04-18 Thread John Regehr

I have latest of everything incl xcode 7.3.

John


On 4/18/16 5:15 PM, Eric Eide wrote:

Eric Eide  writes:


clang: warning: no such sysroot directory:
'/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk'


Indeed, there seems to be something odd about the Xcode installed on my laptop.

I am running OS X 10.10.5, but I only have the 10.11 SDK installed?

Eric.



Re: [creduce-dev] OS X Configure-time Error (was Re: Run-time Perl Warnings from C-Reduce bc92ff4)

2016-04-18 Thread John Regehr

Curiously, I don't get quite the same error, but not very different either.

John



configure:15953: checking can compile with and link with LLVM(engine)
configure:15979: g++ -o conftest -g -O2 
-I/Users/regehr/clang+llvm-3.8.0-x86_64-apple-darwin/include -isysroot \
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk 
-fPIC -fvisi\
bility-inlines-hidden -std=c++11 -DNDEBUG -fno-exceptions -fno-rtti 
-D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MAC\
ROS -D__STDC_LIMIT_MACROS  conftest.cpp  -lLLVMX86Disassembler 
-lLLVMX86AsmParser -lLLVMX86CodeGen -lLLVMSelecti\
onDAG -lLLVMAsmPrinter -lLLVMCodeGen -lLLVMScalarOpts -lLLVMInstCombine 
-lLLVMInstrumentation -lLLVMProfileData \
-lLLVMTransformUtils -lLLVMBitWriter -lLLVMX86Desc -lLLVMMCDisassembler 
-lLLVMX86Info -lLLVMX86AsmPrinter -lLLVM\
X86Utils -lLLVMMCJIT -lLLVMExecutionEngine -lLLVMTarget -lLLVMAnalysis 
-lLLVMRuntimeDyld -lLLVMObject -lLLVMMCPa\
rser -lLLVMBitReader -lLLVMMC -lLLVMCore -lLLVMSupport 
-L/Users/regehr/clang+llvm-3.8.0-x86_64-apple-darwin/lib \
-Wl,-search_paths_first -Wl,-headerpad_max_install_names -lcurses 
-lpthread -lz -lm >&5
clang: warning: no such sysroot directory: 
'/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform\

/Developer/SDKs/MacOSX10.10.sdk'
In file included from conftest.cpp:24:
In file included from 
/Users/regehr/clang+llvm-3.8.0-x86_64-apple-darwin/include/llvm/IR/LLVMContext.h:18:
In file included from 
/Users/regehr/clang+llvm-3.8.0-x86_64-apple-darwin/include/llvm/Support/CBindingWrapping.h\

:17:
In file included from 
/Users/regehr/clang+llvm-3.8.0-x86_64-apple-darwin/include/llvm/Support/Casting.h:20:

/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/cassert\
:21:10: fatal error: 'assert.h' file not found
#include <assert.h>
 ^
1 error generated.


On 4/18/16 5:10 PM, Eric Eide wrote:

John Regehr  writes:


Also I still can't successfully configure C-Reduce against the LLVM 3.8
binaries distributed from the LLVM web site on OS X.


I tried this just now.  My attempt failed to configure, because the LLVM test
program failed to compile, because some header file seems to be missing, or
path misconfigured, or something (as yet undiagnosed).

Attached is the error that I get, copied from "config.log".  Is is the same as
the error that you see, John?

Eric.

-
configure:15953: checking can compile with and link with LLVM(engine)
configure:15979: g++ -o conftest -g -O2  
-I/z/cr/clang+llvm-3.8.0-x86_64-apple-darwin/include -isysroot 
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk
 -fPIC -fvisibility-inlines-hidden -std=c++11 -DNDEBUG -fno-exceptions -fno-rtti 
-D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS  conftest.cpp  
-lLLVMX86Disassembler -lLLVMX86AsmParser -lLLVMX86CodeGen -lLLVMSelectionDAG 
-lLLVMAsmPrinter -lLLVMCodeGen -lLLVMScalarOpts -lLLVMInstCombine 
-lLLVMInstrumentation -lLLVMProfileData -lLLVMTransformUtils -lLLVMBitWriter 
-lLLVMX86Desc -lLLVMMCDisassembler -lLLVMX86Info -lLLVMX86AsmPrinter -lLLVMX86Utils 
-lLLVMMCJIT -lLLVMExecutionEngine -lLLVMTarget -lLLVMAnalysis -lLLVMRuntimeDyld 
-lLLVMObject -lLLVMMCParser -lLLVMBitReader -lLLVMMC -lLLVMCore -lLLVMSupport 
-L/z/cr/clang+llvm-3.8.0-x86_64-apple-darwin/lib -Wl,-search_paths_first 
-Wl,-headerpad_max_install_names -lcurses -lpthread -lz -lm >&5
clang: warning: no such sysroot directory: 
'/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk'
In file included from conftest.cpp:24:
In file included from 
/z/cr/clang+llvm-3.8.0-x86_64-apple-darwin/include/llvm/IR/LLVMContext.h:18:
In file included from 
/z/cr/clang+llvm-3.8.0-x86_64-apple-darwin/include/llvm/Support/CBindingWrapping.h:17:
In file included from 
/z/cr/clang+llvm-3.8.0-x86_64-apple-darwin/include/llvm/Support/Casting.h:19:
In file included from 
/z/cr/clang+llvm-3.8.0-x86_64-apple-darwin/include/llvm/Support/type_traits.h:17:
In file included from 
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/type_traits:205:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/__config:23:10:
 fatal error: 'unistd.h' file not found
#include <unistd.h>
  ^
1 error generated.
-



Re: [creduce-dev] Run-time Perl Warnings from C-Reduce bc92ff4

2016-04-18 Thread John Regehr

Should be fixed, thanks.

John


On 4/18/16 4:59 PM, Eric Eide wrote:

John Regehr  writes:


Also I still can't successfully configure C-Reduce against the LLVM 3.8
binaries distributed from the LLVM web site on OS X.


Thanks for the reminder.  I haven't gotten to testing on OS X yet.



Re: [creduce-dev] Run-time Perl Warnings from C-Reduce bc92ff4

2016-04-18 Thread John Regehr

I'll look at it now.

Also I still can't successfully configure C-Reduce against the LLVM 3.8 
binaries distributed from the LLVM web site on OS X.


John


[creduce-dev] paper of interest to C-Reduce users

2016-04-12 Thread John Regehr

http://www.doc.ic.ac.uk/~afd/homepages/papers/pdfs/2016/IWOCL_CL-Reduce.pdf

John


Re: [creduce-dev] New planned release

2016-04-05 Thread John Regehr

Thanks Eric!  I have time to help in the near future.

There was some C++ hacking I was trying to get done for this release but 
since I'm having trouble getting back to it, let's push this out soon.


We could use testing on Windows!  Anyone have a machine handy?  I have 
access to zero Windows machines right now.  I think MS has free VM 
images these days but I probably don't have time to get one setup to the 
point where it can run C-Reduce.


John


On 4/5/16 4:44 PM, Eric Eide wrote:

Martin Liška  writes:


I would like to ask you when do you plan a next release? I would like
to make a new package for openSUSE that incorporates a commit that
implements multiple input files.


"Very soon."  Let me say, in the next two weeks.

We are working to make a new release that is compatible with LLVM 3.8.  It
will include other work to date, including multi-file reduction.

I am working on the release now --- see recent commits --- but I haven't had
much time to work on it in the last little while.  (FWIW, I was hoping that if
I waited a bit, the current problem with the Travis-CI build would fix itself, but
that hasn't worked out.)  Most of what remains to be done is testing on various
platforms/configs and a review of the included documentation, so we are close.

Thanks ---

Eric.



Re: [creduce-dev] towards a release

2016-03-15 Thread John Regehr

Ugh, maybe mail the maintainer or the list?

John


On 3/15/16 1:33 PM, Eric Eide wrote:

Let's work towards a release sometime in the next few weeks.


FWIW, today I discovered that the instructions for getting the nightly 3.8
packages, found at , do not work.

Probably ("hopefully") I should say, do not work yet.

This is annoying, because that is how I've been getting LLVM stuff for the
Travis-CI-hosted test builds.

Just sayin'.

I would like to resolve this before doing the release, but it's not strictly
necessary.  This email is just to report what I've discovered about the current
state of the world.



Re: [creduce-dev] towards a release

2016-03-11 Thread John Regehr
On Ubuntu 14.04, our current repo configures/builds/installs using both 
clang++ and g++ while linking against the distributed LLVM 3.8 binaries.


When linking against an LLVM 3.8 that I compiled myself, I get "cannot 
compile and link test program with selected LLVM".


So that is fun-- one thing works on OS X and the other works on Linux. 
Yay autotools!


John



On 3/10/16 10:21 PM, Eric Eide wrote:

John Regehr  writes:


Let's work towards a release sometime in the next few weeks.


It's on my list.  Note that I will be in Germany next week, so I likely won't
get to work on this much then.  Maybe on the long plane rides.


Eric, I noticed that while I can build C-Reduce against an LLVM 3.8 that I
compiled myself, I cannot build (on OS X) against the released clang+llvm
tarball from the LLVM web site.  configure fails when it tries to build a
trivial program using LLVM.


Yeah, I will be sure to test this.



Re: [creduce-dev] Nit

2016-03-10 Thread John Regehr
In general I agree but sometimes people move to the branch when close to 
a release since the new version becomes more important than the old one.


Sorry, I find it hard to get worked up about this sort of hygiene.  I'll 
be more professional once I'm being paid to write C-Reduce.


John


On 3/10/16 10:11 PM, Eric Eide wrote:

Looking at some recent git history, I gather that some recent "general" bug
fixes have happened on the llvm-svn-compatible branch, not the master branch.
E.g., 440b6ae.

Is that right?  Or are these fixes specific to the llvm-svn-compatible branch?

At some level it doesn't really matter, since the branches come back together
when we make a release.  But it might be a good idea to do all the dev/bug
fixing on master, and save llvm-svn-compatible for versionitis stuff.

My $0.02.



[creduce-dev] towards a release

2016-03-10 Thread John Regehr
Now that LLVM 3.8 is out I've merged the llvm-svn-compatible into 
master, and deleted llvm-svn-compatible to help people remember not to 
use it for now.


Let's work towards a release sometime in the next few weeks.

Eric, I noticed that while I can build C-Reduce against an LLVM 3.8 that 
I compiled myself, I cannot build (on OS X) against the released 
clang+llvm tarball from the LLVM web site.  configure fails when it 
tries to build a trivial program using LLVM.


Thanks everyone!

John


[creduce-dev] templates

2016-02-16 Thread John Regehr
It's great that people have been submitting pull requests lately (and 
that Yang has been accepting them)!


I was thinking about sitting down and writing a clang_delta pass that 
would take the example below (from C-Reduce issue #66) and push first 
the char * template parameter and second the int parameter through the code.


Actually we probably want two new passes: one that propagates parameters 
around, and another that eliminates unused template arguments.


But, I haven't hacked clang before and am not even sure how to get 
started.  Is there anyone here who hacks on clang who could give me a 
quick outline of what the implementation to these would probably look 
like?  What APIs to use, basically?


Thanks,

John



#include <iostream>
template <class T, int I> class cls {
public:
  static void foo(T);
};
template <class T, int I> void cls<T, I>::foo(T a) { std::cout << a; }

int main() { cls<const char *, 3>::foo("hello"); }


Re: [creduce-dev] a few crashes in llvm-svn-compatible

2016-01-24 Thread John Regehr
To everyone else: now is a good time to report any C-Reduce bugs since 
we'll be releasing a new version not too long after LLVM 3.8 comes out.


John



On 1/24/16 11:06 AM, John Regehr wrote:

Awesome, thanks Yang!

I wonder what's going on on your magical Mac?

John



On 1/24/16 10:11 AM, Yang Chen wrote:

OK. Fixed in master and llvm-svn-compatible.

- Yang

On 2016-01-19 02:22, John Regehr wrote:

I've updated the llvm-svn-compatible branch and am doing some testing
against the LLVM-3.8 release branch.  Attached are a few clang_delta
crashes, if someone has time to fix.

Thanks,

John






Re: [creduce-dev] a few crashes in llvm-svn-compatible

2016-01-24 Thread John Regehr

Awesome, thanks Yang!

I wonder what's going on on your magical Mac?

John



On 1/24/16 10:11 AM, Yang Chen wrote:

OK. Fixed in master and llvm-svn-compatible.

- Yang

On 2016-01-19 02:22, John Regehr wrote:

I've updated the llvm-svn-compatible branch and am doing some testing
against the LLVM-3.8 release branch.  Attached are a few clang_delta
crashes, if someone has time to fix.

Thanks,

John




Re: [creduce-dev] a few crashes in llvm-svn-compatible

2016-01-23 Thread John Regehr

Ok let's just make sure of a couple more things.

Can you make sure you are also building C-Reduce using Clang?  I do this 
via autoconf:


  ./configure CC=clang CXX=clang++ --prefix=$HOME/creduce-install

I doubt this matters but we might as well make sure.

Then run this on the attached file:

$ "/Users/regehr/creduce-install/libexec/clang_delta" 
--transformation=local-to-global --counter=1 main.cpp
Assertion failed: (RWBuf && "Empty RewriteBuffer!"), function 
outputTransformedSource, file Transformation.cpp, line 101.

Abort trap: 6
$

What is the output when you run this?  Does it do a useful transformation?

Eric can you follow these same steps to provide an extra data point?

John
#include <iostream>
struct Nothing;

template <class T> class cls {
  public: static void foo(T a);
};

template <class T> void cls<T>::foo(T a) {
  std::cout << a;
}
template <class T> void bar(T b) {
  cls<T>::foo(b);
}
int main(int argc, char **argv) {
  bar("hello");
}


Re: [creduce-dev] a few crashes in llvm-svn-compatible

2016-01-22 Thread John Regehr
Yang, let's try to eliminate some possible differences in our setups. 
Let's talk Mac for now.  I'm running 10.11.3 with I suppose the latest 
Xcode since there aren't any updates waiting to install.


What options are you passing to C-Reduce's configure?  I pass no 
arguments since I've put clang 3.8 as the first clang in my PATH.


You have built this LLVM, right?
  URL: http://llvm.org/svn/llvm-project/llvm/branches/release_38

What options are you passing to LLVM cmake?  I build LLVM like this:

cmake -DLLVM_TARGETS_TO_BUILD=host 
-DCMAKE_INSTALL_PREFIX=${HOME}/llvm-38-install 
-DLLVM_ENABLE_ASSERTIONS=1 -DCMAKE_C_COMPILER=clang 
-DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Release -G Ninja ..

ninja install

I wonder if your LLVM 3.8 is built without assertions and this allows 
clang_delta to sneak past some bugs?


John



On 1/22/16 7:17 AM, Yang Chen wrote:

Hmm, I couldn't reproduce the failures on my local systems, Ubuntu
14.04.3 LTS and Mac (Yosemite Darwin 14.4.0), with LLVM 3.8.

Something strange seems to be going on. I recall this is the third time
that I couldn't reproduce crashes on the Mac...

- Yang

On 2016-01-19 02:22, John Regehr wrote:

I've updated the llvm-svn-compatible branch and am doing some testing
against the LLVM-3.8 release branch.  Attached are a few clang_delta
crashes, if someone has time to fix.

Thanks,

John




Re: [creduce-dev] a few crashes in llvm-svn-compatible

2016-01-22 Thread John Regehr
Weird.  I got the crashes on a Mac, I'll try Linux and let you know what 
happens.


John


On 1/22/16 7:17 AM, Yang Chen wrote:

Hmm, I couldn't reproduce the failures on my local systems, Ubuntu
14.04.3 LTS and Mac (Yosemite Darwin 14.4.0), with LLVM 3.8.

Something strange seems to be going on. I recall this is the third time
that I couldn't reproduce crashes on the Mac...

- Yang

On 2016-01-19 02:22, John Regehr wrote:

I've updated the llvm-svn-compatible branch and am doing some testing
against the LLVM-3.8 release branch.  Attached are a few clang_delta
crashes, if someone has time to fix.

Thanks,

John




[creduce-dev] a few crashes in llvm-svn-compatible

2016-01-19 Thread John Regehr
I've updated the llvm-svn-compatible branch and am doing some testing 
against the LLVM-3.8 release branch.  Attached are a few clang_delta 
crashes, if someone has time to fix.


Thanks,

John


reduce.tar.gz
Description: GNU Zip compressed data


[creduce-dev] next release

2016-01-04 Thread John Regehr
Looks like LLVM 3.8 is scheduled for sometime in Feb.  Let's do a new 
C-Reduce release a few days after that.  I've been keeping the 
llvm-svn-compatible branch up to date so this should be hardly 
any work.


In the meantime if anyone else wants to use --die-on-pass-bug in 
day-to-day usage, this is a good way to help find problems.


John



[creduce-dev] work to do

2015-11-17 Thread John Regehr
I wrote a quick blog post about reducing non-preprocessed code, the 
example at the bottom shows that we still have plenty of low-hanging 
fruit for clang_delta to clean up.  Yang has long been heroically 
attacking these problems, does anyone else have time to take a stab?


John


http://blog.regehr.org/archives/1278


Re: [creduce-dev] parallel tuning

2015-11-17 Thread John Regehr

Yes. I run creduce on a 2x10x8 ppc64le machine often and I definitely
don't want to use 160 tests in parallel...


Aw come on, that would be awesome.  You should try it once just to see 
what happens.


But anyway I just pushed some code implementing the change I mentioned 
earlier today that (1) tries to detect physical instead of hyperthreaded 
cores and (2) backs off from the full N, maxing out at 4.  I think these 
are good defaults and Eric can pull out all the stops on the big iron 
using "-n".


John



Re: [creduce-dev] parallel tuning

2015-11-17 Thread John Regehr
I'll see what I can do.  Systems don't exactly expose this stuff in a 
nice way.


There's this but it doesn't work on my Mac:

http://search.cpan.org/dist/Unix-Processors-2.042/Processors.pm
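For what it's worth, on Linux one rough way to approximate the physical-core count is to parse /proc/cpuinfo and count unique (physical id, core id) pairs; the sketch below is just an illustration, not what C-Reduce ships, and macOS would need something like `sysctl -n hw.physicalcpu` instead:

```python
def count_physical_cores(cpuinfo_text):
    """Count unique (physical id, core id) pairs in /proc/cpuinfo text.

    Rough sketch only: assumes Linux-style cpuinfo where blank lines
    separate logical-CPU records; hyperthread siblings share a pair.
    """
    cores = set()
    phys = core = None
    for line in cpuinfo_text.splitlines():
        if not line.strip():
            phys = core = None  # blank line ends one logical CPU's record
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "physical id":
            phys = value
        elif key == "core id":
            core = value
        if phys is not None and core is not None:
            cores.add((phys, core))
    return len(cores)
```

On a box with two hyperthreads per core this reports half of what the logical-CPU count would.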

John


On 11/17/15 4:44 PM, Eric Eide wrote:

John Regehr  writes:


But also, I think it's most friendly to provide a good default for the common
case of a single-socket Core-i7, not some quad-socket monster Xeon thing.


Oh sure, I agree.

But we can do both?  If C-Reduce detects that it is running on a monster, it
can use a monsterish default.



Re: [creduce-dev] parallel tuning

2015-11-17 Thread John Regehr

Thanks!

John


On 11/17/15 4:38 PM, Markus Trippelsdorf wrote:

On 2015.11.17 at 15:08 +0100, John Regehr wrote:

Thanks Markus!  If you have time to run 2,3,5 I'd be curious to see those
too.


creduce -n 1 --backup ./check.sh bug244.cc  2576.49s user 300.02s system 100% 
cpu 47:47.16 total

creduce -n 2 --backup ./check.sh bug244.cc  2950.81s user 374.22s system 171% 
cpu 32:17.60 total

creduce -n 3 --backup ./check.sh bug244.cc  3383.26s user 416.05s system 216% 
cpu 29:15.88 total

creduce -n 4 --backup ./check.sh bug244.cc  3714.57s user 480.69s system 243% 
cpu 28:46.14 total

creduce -n 5 --backup ./check.sh bug244.cc  4199.84s user 523.13s system 258% 
cpu 30:25.24 total

creduce -n 6 --backup ./check.sh bug244.cc  4759.06s user 578.17s system 270% 
cpu 32:54.00 total
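For reference, the wall-clock speedups implied by these timings (total time relative to -n 1) are easy to compute; the numbers below are just the ones quoted above:

```python
# Wall-clock totals from the runs above, as (minutes, seconds) per -n value.
totals = {1: (47, 47.16), 2: (32, 17.60), 3: (29, 15.88),
          4: (28, 46.14), 5: (30, 25.24), 6: (32, 54.00)}
seconds = {n: m * 60 + s for n, (m, s) in totals.items()}
speedup = {n: seconds[1] / t for n, t in seconds.items()}
best = max(speedup, key=speedup.get)  # peaks at -n 4, about 1.66x
```

So on this 6-core machine the sweet spot is -n 4, consistent with the proposed default.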



Re: [creduce-dev] parallel tuning

2015-11-17 Thread John Regehr

Eric I would love to see updated numbers.

But also, I think it's most friendly to provide a good default for the 
common case of a single-socket Core-i7, not some quad-socket monster 
Xeon thing.


John


On 11/17/15 4:20 PM, Eric Eide wrote:

John Regehr  writes:


- parallelism 2 on a dual core
- 3 on a 4-core
- 4 on a >4 core
How does this match with your experience?


I haven't really experimented with this.

I do think, though, that limiting parallelism to 4 is probably not a good
idea.

I used to run C-Reduce on Emulab's d820's, which have 32 physical cores and
128GB RAM.  With 32 tests in parallel, C-Reduce was plenty snappy.  I can't
imagine it would run as fast with only 4 tests in parallel, but I admit I did
not test this.

(Of course, I *could* test this, for some given set of reductions, but not
today.  Today I need to focus on something else.)

Eric.



Re: [creduce-dev] pass_ints.pm: Nit?

2015-11-17 Thread John Regehr

Hexadecimal number > 0xffffffff non-portable at 
/disk2/randtest/obj/creduce/pass_ints.pm line 56.


I wouldn't go quite so far as to say this is "fixed" but we won't be 
seeing the warning any longer.


John



Re: [creduce-dev] parallel tuning

2015-11-17 Thread John Regehr

I've flipped the default back to making a backup, sorry about that!

On 11/17/15 3:15 PM, Markus Trippelsdorf wrote:

On 2015.11.17 at 15:09 +0100, John Regehr wrote:

creduce -n 1 --backup ./check.sh bug244.cc  2576.49s user 300.02s system 100% 
cpu 47:47.16 total


Do we want --backup turned on by default?  Eric and I were debating this.


Well, I certainly do, because it saves time when one tweaks the
interestingness test (no need to run --save-temps again).



Re: [creduce-dev] parallel tuning

2015-11-17 Thread John Regehr

creduce -n 1 --backup ./check.sh bug244.cc  2576.49s user 300.02s system 100% 
cpu 47:47.16 total


Do we want --backup turned on by default?  Eric and I were debating this.

John



Re: [creduce-dev] parallel tuning

2015-11-17 Thread John Regehr
Thanks Markus!  If you have time to run 2,3,5 I'd be curious to see 
those too.


Earlier we had observed speedup peaking at around 8 cores, but that was 
on a burly multi-socket Xeon.  I expect most people will do better with 
a smaller degree of parallelism.


John


On 11/17/15 3:01 PM, Markus Trippelsdorf wrote:

On 2015.11.17 at 11:00 +0100, John Regehr wrote:

C-Reduce's strategy of querying the number of CPUs and running that many
parallel reduction attempts is bad in some cases, such as on my Macbook
where it runs with concurrency 8, where 3 would be a better choice.

We did a bunch of benchmarking of this a few years ago but I'm afraid that
the results are very specific to not only the platforms but also the
interestingness tests.  Some of those have very light cache footprints
whereas others (for example those that invoke static analyzers) tend to blow
out the shared cache.

My current idea is that first we need to detect real cores instead of
hyperthreaded cores, which is sort of a pain but we can special-case Mac OS
and Linux I guess.  Then maybe something like:

- parallelism 2 on a dual core
- 3 on a 4-core
- 4 on a >4 core

How does this match with your experience?


I've tested creduce on a real 6 core machine without hyperthreading with
a 2MB C++ testcase:

creduce -n 1 --backup ./check.sh bug244.cc  2576.49s user 300.02s system 100% 
cpu 47:47.16 total

creduce -n 4 --backup ./check.sh bug244.cc  3714.57s user 480.69s system 243% 
cpu 28:46.14 total

creduce -n 6 --backup ./check.sh bug244.cc  4759.06s user 578.17s system 270% 
cpu 32:54.00 total

So your idea looks good to me.



[creduce-dev] parallel tuning

2015-11-17 Thread John Regehr
C-Reduce's strategy of querying the number of CPUs and running that many 
parallel reduction attempts is bad in some cases, such as on my Macbook 
where it runs with concurrency 8, where 3 would be a better choice.


We did a bunch of benchmarking of this a few years ago but I'm afraid 
that the results are very specific to not only the platforms but also 
the interestingness tests.  Some of those have very light cache 
footprints whereas others (for example those that invoke static 
analyzers) tend to blow out the shared cache.


My current idea is that first we need to detect real cores instead of 
hyperthreaded cores, which is sort of a pain but we can special-case Mac 
OS and Linux I guess.  Then maybe something like:


- parallelism 2 on a dual core
- 3 on a 4-core
- 4 on a >4 core

How does this match with your experience?
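Encoded as a function, the heuristic above would look something like this; the cutoffs are just the numbers proposed here, not a tuned result:

```python
def default_parallelism(physical_cores):
    # Heuristic proposed in this thread: 2 on a dual core,
    # 3 on a 4-core, 4 on anything larger.
    if physical_cores <= 2:
        return max(1, physical_cores)
    if physical_cores <= 4:
        return 3
    return 4
```

A -n flag would still let big-iron users override the cap.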

John


Re: [creduce-dev] pass_ints.pm: Nit?

2015-11-16 Thread John Regehr
Thanks for fixing those scripts!  I just pushed a tiny additional change 
running the tests with --die-on-pass-bug.



Hexadecimal number > 0xffffffff non-portable at 
/disk2/randtest/obj/creduce/pass_ints.pm line 56.


I'll look into it.  I think this problem has been there for a while but 
I had forgotten about it.  Thanks,


John



Re: [creduce-dev] puzzling clang_delta crash

2015-11-16 Thread John Regehr
I can't speak for Yang but if we have any brave users who want to use 
--die-on-pass-bug and report any remaining bugs in my part of C-Reduce 
(everything except clang_delta, basically) I'll be happy to fix.  I know 
some bugs remain but I can't trigger them offhand.


John


On 11/16/15 8:37 AM, John Regehr wrote:

Great, C-Reduce now reduces this largish pile of non-preprocessed C code
(across ~180 files) without any pass failures!  I think that is a first.

John


On 11/16/15 5:03 AM, Yang Chen wrote:

Fixed. Thanks.

- Yang

On 11/15/2015 01:51 PM, John Regehr wrote:

Thanks Yang!

Recent patches have made clang_delta much more stable and a difficult
C++ reduction that I have sitting around now almost completes in
C-Reduce's new --die-on-pass-bug mode, but at the very end it trips
over this:

$ "/Users/regehr/creduce-install/libexec/clang_delta"
--transformation=rename-class --counter=1 hello.cpp
Segmentation fault: 11

John






Re: [creduce-dev] puzzling clang_delta crash

2015-11-15 Thread John Regehr
Great, C-Reduce now reduces this largish pile of non-preprocessed C code 
(across ~180 files) without any pass failures!  I think that is a first.


John


On 11/16/15 5:03 AM, Yang Chen wrote:

Fixed. Thanks.

- Yang

On 11/15/2015 01:51 PM, John Regehr wrote:

Thanks Yang!

Recent patches have made clang_delta much more stable and a difficult
C++ reduction that I have sitting around now almost completes in
C-Reduce's new --die-on-pass-bug mode, but at the very end it trips
over this:

$ "/Users/regehr/creduce-install/libexec/clang_delta"
--transformation=rename-class --counter=1 hello.cpp
Segmentation fault: 11

John




Re: [creduce-dev] puzzling clang_delta crash

2015-11-15 Thread John Regehr

Thanks Yang!

Recent patches have made clang_delta much more stable and a difficult 
C++ reduction that I have sitting around now almost completes in 
C-Reduce's new --die-on-pass-bug mode, but at the very end it trips over 
this:


$ "/Users/regehr/creduce-install/libexec/clang_delta" 
--transformation=rename-class --counter=1 hello.cpp

Segmentation fault: 11

John


creduce_bug_001.tar.gz
Description: GNU Zip compressed data

