Hi, I have a performance problem with the following setup:

I use a Ruby script to run a Java stress-test tool (JMeter) via
Kernel.system(). The Ruby part builds the configuration files for
JMeter, parses the results of each test run, and builds a new test
configuration for JMeter before launching the test again.
Essentially:

1. Create Initial XML configuration for JMeter (~1sec)
2. Run JMeter using system() for this configuration (~2 mins)
3. system() returns when the JMeter run ends; the script reads the results (~20 secs)
4. A new XML configuration is created for JMeter, goto '2' (~1sec)

The JMeter task spawns 400 threads very quickly, and stresses a remote
server for about 2 minutes.
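For concreteness, the loop looks roughly like the sketch below. The file
names, the config-building, and the result-parsing are all placeholders I
made up for illustration; only the overall shape (build config, system(),
read results, repeat) matches my actual script:

```ruby
# Hypothetical sketch of the control loop; build_config and
# parse_results are stand-ins for the real XML builder and parser.
def build_config(params, path)
  # A real builder would emit a full JMeter test plan here.
  File.write(path, "<jmeterTestPlan><!-- #{params.inspect} --></jmeterTestPlan>")
end

def parse_results(path)
  # Placeholder: a real parser would read the JTL output properly.
  File.exist?(path) ? File.readlines(path).size : 0
end

params = { threads: 400, duration: 120 }
3.times do |run|
  build_config(params, "test_plan.jmx")                # step 1: build XML (~1s)
  system("jmeter", "-n", "-t", "test_plan.jmx",
         "-l", "results.jtl")                          # step 2: run JMeter (~2m)
  samples = parse_results("results.jtl")               # step 3: read results (~20s)
  warn "run #{run}: #{samples} samples"
  params = params.merge(run: run + 1)                  # step 4: next config, goto 2
end
```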

I am seeing performance and reproducibility problems.

For example, I set the script to generate the exact same configuration
for JMeter every time, and I get wildly different results back from the
test. Interestingly, the first test run gives me back the exact results
I expected, however, the second test always reports dramatically poorer
performance for exactly the same test. I have checked on the server side
(I am stressing an Apache setup) and the logs show no problems - the
problem is to do with my test setup of Ruby + JMeter.

Here are some theories about what could be producing the anomalous
results on the second reading (these anomalies are reproducible and
always happen on the second reading):

a. JMeter spawns 400 threads per test. The overhead of cleaning up after
these threads lasts into the second run, slowing it down. Or perhaps the
OS hasn't finished reclaiming the first 400 threads by the time the
second JMeter instance is launched.
b. Ruby is garbage collecting in the middle of the second run, causing a
dramatic slowdown on the client side.
c. JMeter, being Java, is garbage collecting for some reason (although
JMeter is started fresh with system() for each run, so it should begin
with a clean heap?)
d. Some strange interaction between Ruby and Java?

If anyone has any advice, particularly on preventing Ruby GC for a
certain window, that would be fantastic. I've run a lot of tests, and
added sleep() calls of ~20 seconds between runs to try to let things
settle with respect to threads and memory being cleaned up, and this
**appears** to have some effect, but the results are still far from
stable across runs when they should be.
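On the Ruby GC point, the one approach I know of is GC.disable /
GC.enable, sketched below. (Though since system() blocks and the Ruby
side is idle while JMeter runs in its own JVM, I'm not sure Ruby GC can
actually affect the measured run; this would at least rule it out. The
work inside the block is a placeholder.)

```ruby
# Keep Ruby's GC out of the timing window: disable it before the
# measured section, re-enable and collect explicitly afterwards.
GC.disable
begin
  # ... measured section would go here, e.g. system("jmeter", ...) ...
  result = 1 + 1 # placeholder work
ensure
  GC.enable
  GC.start # collect now, outside the timing window
end
```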

Thanks for reading this far, I appreciate this is a messy request with
no clear answer,
-- 
Posted via http://www.ruby-forum.com/.