<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en" xmlns="http://www.w3.org/2005/Atom"><title>Recent changes to feature-requests</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/" rel="alternate"/><link href="https://sourceforge.net/p/dacapobench/feature-requests/feed.atom" rel="self"/><id>https://sourceforge.net/p/dacapobench/feature-requests/</id><updated>2012-08-30T12:00:15Z</updated><subtitle>Recent changes to feature-requests</subtitle><entry><title>Consolidate and document harness exit codes</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/14/" rel="alternate"/><published>2012-08-30T12:00:15Z</published><updated>2012-08-30T12:00:15Z</updated><author><name>Andreas Sewe</name><uri>https://sourceforge.net/u/sewe/</uri></author><id>https://sourceforge.net1f5ab891e9852f81a2c45b471e1fec8b46810db8</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Currently, there are numerous calls to System.exit with magic numbers strewn about the harness's code. There should be proper constants defined for those in a single place.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>patching/modification and incremental building</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/13/" rel="alternate"/><published>2010-09-27T12:45:48Z</published><updated>2010-09-27T12:45:48Z</updated><author><name>Sergey Vorobyev</name><uri>https://sourceforge.net/u/sergeyvorobyev/</uri></author><id>https://sourceforge.net95b9e0814ae3239bc486b6e9eb9a01c7c815aa04</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;We use the DaCapo benchmark in the development process of an open-source Java race detector. &lt;br /&gt;
I fixed your Ant build scripts, and now I can change the Apache Tomcat source without it being overwritten.&lt;br /&gt;
I often make minor changes and run 'ant tomcat'.&lt;br /&gt;
The build process takes about 10 minutes.&lt;/p&gt;
&lt;p&gt;I think allowing patching/modification and/or incremental building would be a good idea and useful for developers.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Distribute a policy file with the benchmark suite</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/12/" rel="alternate"/><published>2010-08-30T13:57:30Z</published><updated>2010-08-30T13:57:30Z</updated><author><name>Andreas Sewe</name><uri>https://sourceforge.net/u/sewe/</uri></author><id>https://sourceforge.net6d2e472d6b31cf0e07e215577581382fb5a882a6</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Such a policy file is useful when developing a benchmark (bugs &amp;lt;https://sourceforge.net/tracker/?func=detail&amp;amp;atid=861957&amp;amp;aid=3056006&amp;amp;group_id=172498&amp;gt; and &amp;lt;https://sourceforge.net/tracker/?func=detail&amp;amp;atid=861957&amp;amp;aid=3056019&amp;amp;group_id=172498&amp;gt; were found this way); if a benchmark reads files outside the scratch directory, for example, this manifests itself in a permission check.&lt;/p&gt;
&lt;p&gt;Attached is a policy file that grants the harness the necessary permissions. It does not yet grant any permissions to a benchmark, however, as it is only meant as a template. Running the benchmark with such a policy is straightforward:&lt;br /&gt;
java -Djava.security.manager -Djava.security.policy=dacapo.policy -Djava.security.debug=failed -jar dacapo-9.12-bach.jar&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>option to print supported sizes per benchmark</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/11/" rel="alternate"/><published>2010-03-11T17:06:53Z</published><updated>2010-03-11T17:06:53Z</updated><author><name>Eric Bodden</name><uri>https://sourceforge.net/u/ericbodden/</uri></author><id>https://sourceforge.netdd6e6176c35f33ffdc0210ef593a3a9593dfed20</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Right now, for a shell script, it is not easy to figure out which input sizes are valid for each benchmark. You only find out that the input size is invalid when executing the benchmark and seeing it fail. It would be nice to have an option that returns the list of valid sizes when given a benchmark. Something like...&lt;/p&gt;
&lt;p&gt;$java -jar dacapo.jar -sizes fop&lt;br /&gt;
small default&lt;br /&gt;
$&lt;/p&gt;
&lt;p&gt;That would make scripting much easier.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Move much of daytrader.patch into files</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/10/" rel="alternate"/><published>2010-02-20T05:18:24Z</published><updated>2010-02-20T05:18:24Z</updated><author><name>Steve Blackburn</name><uri>https://sourceforge.net/u/steveb-oss/</uri></author><id>https://sourceforge.neta479e5b647c920776124d2d0f6c89509676b695d</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;daytrader.patch is large and contains a lot of entirely new files. It would be nice if they could be moved out of the patch and into (editable) files. This would make it easier to understand and easier to edit.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>clojure and other JVM-targeting languages as workloads?</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/9/" rel="alternate"/><published>2010-01-09T03:37:06Z</published><updated>2010-01-09T03:37:06Z</updated><author><name>Steve Blackburn</name><uri>https://sourceforge.net/u/steveb-oss/</uri></author><id>https://sourceforge.netc68ff8c45e11ee2e4b02758ee82b522c14790dbf</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;We should seriously consider adding clojure and jruby (among others) to the suite, just as we currently have jython.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://clojure.org/" rel="nofollow"&gt;http://clojure.org/&lt;/a&gt;&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Provide better information upon validation failure</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/8/" rel="alternate"/><published>2009-12-04T04:53:51Z</published><updated>2009-12-04T04:53:51Z</updated><author><name>Steve Blackburn</name><uri>https://sourceforge.net/u/steveb-oss/</uri></author><id>https://sourceforge.netdcea5c405e80d1f1105116c56a58e5cce076596e</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Currently when a benchmark fails validation, the user just gets a message stating that validation failed, and the observed and expected checksums.&lt;/p&gt;
&lt;p&gt;Ideally we would capture the expected output and, when validation fails, output the diff between the expected and observed outputs.&lt;/p&gt;
&lt;p&gt;This feature was requested by Alexei Svitkine.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Command line option for system.gc() between iterations</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/7/" rel="alternate"/><published>2009-11-27T03:36:17Z</published><updated>2009-11-27T03:36:17Z</updated><author><name>Steve Blackburn</name><uri>https://sourceforge.net/u/steveb-oss/</uri></author><id>https://sourceforge.net89b233d4b31e1f035d1f2ef09b8f79c5be9e23ab</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Add a command line option that forces system.gc() between iterations. We can have this turned on by default.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Terse summary of what bm is doing</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/6/" rel="alternate"/><published>2009-11-07T01:05:27Z</published><updated>2009-11-07T01:05:27Z</updated><author><name>Steve Blackburn</name><uri>https://sourceforge.net/u/steveb-oss/</uri></author><id>https://sourceforge.net4315e53a481d673fabbd4418ffeaef8111b0a6ac</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;Users often want to know what the difference is between different "sizes" of each benchmark. This can be established via the .cnf file, but even then it is often obtuse.&lt;/p&gt;
&lt;p&gt;We should document the effect of the size choice in the .cnf file as intelligible prose, and allow this to be printed (probably via the -i command line option).&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Be explicit about threading model</title><link href="https://sourceforge.net/p/dacapobench/feature-requests/5/" rel="alternate"/><published>2009-10-20T02:54:32Z</published><updated>2009-10-20T02:54:32Z</updated><author><name>Steve Blackburn</name><uri>https://sourceforge.net/u/steveb-oss/</uri></author><id>https://sourceforge.nete58ffcb15631851cdcd759ec217fad4849b9e246</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;The adaptive threading model (N per h/w thread) may appear mysterious. We should probably output an explicit message at the start of each execution of each benchmark stating how many threads are running (and, in the case of N per h/w thread, state the reason why, i.e. that there are N h/w threads detected).&lt;/p&gt;&lt;/div&gt;</summary></entry></feed>