Results 51 to 61 of 61

03-26-2011, 07:56 PM  #51  Guest Vendor
Join Date: Feb 2010 | Location: Boston | Posts: 877 | Rep Points: 891.7 | Mentioned: 63 Post(s) | Rep Power: 0
The problem spdu4ea still isn't seeing, after being proven wrong on many counts, is that:
1) none of those sample tunes were run on a Dyno Dynamics dyno.
2) none of those sample tunes show an OE v3 tune on another dyno.
Sometimes intelligent people who make nerd graphs in their free time to prove an agenda also overlook the simple things. Scientific method.
BG

03-26-2011, 08:21 PM  #52  Member
Join Date: Feb 2010 | Posts: 293 | Rep Points: 226.0 | Mentioned: 0 Post(s) | Rep Power: 3

03-28-2011, 02:27 AM  #53

03-28-2011, 09:17 AM  #54  Member
Sure. Each post-tune dyno graph has variations in the power gained at various rpm points over the baseline run. If a tuner had exactly the average amount of variation, they would be at 0 on the graph. GIAC has almost average variation; ESS has a little less than GIAC. OE on a Dynojet and AA both have slightly more variation than average. OE on Gintani's dyno had much, much less variation.
What a normal distribution looks like: 95.4% of the data should fall within ±2 standard deviations of the mean, 99.7% within ±3, 99.99% within ±4, and 99.9999% within ±5. Gintani fell outside of 5 standard deviations. (6 is the practical statistical limit; anything outside of that is essentially impossible rather than just really really really really $#@!ing unlikely.)
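For anyone who wants to check those coverage figures, they follow directly from the normal distribution's error function; a minimal sketch using only the Python standard library (no dyno data involved):

```python
import math

def coverage_within(k: float) -> float:
    """Fraction of a normal distribution within +/- k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2.0))

# Reproduces the percentages above: ~95.45% at 2 sd, ~99.73% at 3 sd,
# ~99.994% at 4 sd, ~99.99994% at 5 sd.
for k in range(1, 6):
    print(f"within +/-{k} sd: {coverage_within(k) * 100:.5f}%")
```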
Potential criticism:
1) Sample size is too small...
Response: The calculation of standard deviation factors in sample size (small samples will tend to produce a larger standard deviation estimate than larger samples from the same distribution).
2) This assumes a normal distribution
Response: It does assume a normal distribution... However, a normal distribution seems to hold when I add additional tuners to the existing graph:
3) There is a dynojet bias to the data
Response: Possible, but as you can see above, Evolve's Dyno Dynamics data fit the normal distribution, as did GIAC's load-bearing Mustang dyno. However, both of those graphs also included an aftermarket air filter, and Evolve's before/after was done on 2 separate cars/days, so they are imperfect comparisons. (I would of course welcome additional data.)
Last edited by spdu4ea; 03-28-2011 at 09:40 AM.
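A small numeric sketch of the response to criticism (1): the usual sample standard deviation divides by (n - 1) rather than n, so smaller samples get a wider, more conservative spread estimate. The data values below are hypothetical horsepower gains for illustration, not numbers from any dyno run in this thread:

```python
import math
import statistics

data = [4.1, 5.0, 3.8, 5.5]        # hypothetical gains at four rpm points, n = 4
pop_sd = statistics.pstdev(data)   # population formula: divides by n
samp_sd = statistics.stdev(data)   # sample formula: divides by n - 1 (Bessel's correction)

# The sample estimate is wider by exactly sqrt(n / (n - 1)), a factor
# that shrinks toward 1 as the sample grows.
print(samp_sd / pop_sd)            # sqrt(4/3), about 1.1547
print(math.sqrt(4 / 3))
```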

03-29-2011, 12:56 AM  #55
I think you have a few assumptions in your data, so I can't see this as complete proof.
Plus, are you basing this all on one graph? That is, have you collected all the OE graphs and checked whether they all hold true?
You stated it was unlikely to have uniform deviation, but however unlikely, I take it that it could still happen, correct? So this is really a theory, not proof.
However, if you wish to support it further, why not take a look at other OE graphs as well?

03-29-2011, 06:19 PM  #56  Member
I've looked at at least a dozen OE graphs, but only these two (OE vs stock @ Gintani, and OE vs PC @ Gintani) were statistically unbelievable.

03-29-2011, 06:34 PM  #57  Guest Vendor
Plot all the graphs (of the larger sample) and redo the bell curve. Then you can dissect and call out the dates of the ones you didn't like.

03-29-2011, 07:25 PM  #58  Member
That bell curve is only for a tune on a stock S65. When I recalculated using all of the remaining samples available from the dyno DB, the standard deviation got smaller, which decreased the probability of the OE/Gintani data being legitimate from 0.00003 to 0.00000000443.
From (n=4):
To (n=4):
To (n=7):
If you have more before/after tune graphs of stock S54s feel free to send them my way: (myusername)@hotmail.com
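For reference, probabilities like the ones quoted above come from converting a z-score (how many standard deviations an observation sits from the mean) into a two-sided tail probability. The z values below are illustrative assumptions, chosen only because their tails land near the quoted magnitudes; they are not taken from the thread's actual dyno data:

```python
import math

def two_sided_tail(z: float) -> float:
    """P(|X - mu| >= z * sigma) for a normally distributed X."""
    return math.erfc(z / math.sqrt(2.0))

print(two_sided_tail(4.2))  # about 2.7e-5, the order of the 0.00003 figure
print(two_sided_tail(5.9))  # about 3.6e-9, the order of the 4.43e-9 figure
```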

03-29-2011, 07:52 PM  #59
Let's assume you are right for a moment.
And I would like to say I respect your opinion and don't believe you engage in any pointless witch hunts.
Why would only these graphs have it and others would not?
I think a larger sample size is necessary. Additionally, calling this proof of OE fixing graphs, when a much larger sample of graphs that were clearly not fixed exists, does present tremendous evidence to the contrary.
So perhaps there is some other explanation for this, possibly also what Bren hinted at?

03-30-2011, 12:06 AM  #60  Member
Thanks. I do like to keep vendors honest, but in this case I didn't start out looking for any wrongdoing; I just noticed the remarkable similarities between the PC/OE graphs and went from there.
Quote: "Why would only these graphs have it and others would not? I think a larger sample size is necessary. Additionally, calling this proof of OE fixing graphs, when a much larger sample of graphs that were clearly not fixed exists, does present tremendous evidence to the contrary."
Back to your point: a large sample of data that passes this test doesn't invalidate a small sample that fails. That logic is like saying "but Barry Bonds only failed 2 drug tests." That appeal may carry some emotional influence, but it has no statistical merit.
Quote: "So perhaps there is some other explanation for this, possibly also what Bren hinted at?"

03-30-2011, 03:19 AM  #61
If a motive were established, it would strengthen your argument. As of right now it is inconsistent, because other graphs exist which seem to conform to your standard deviation, even if they are Powerchip before and after graphs.
You can check my dynos in the garage if you like. Those are with Gintani tuning on a dynojet and look good to me.
You are right; you should have titled it differently, as you stated. A large sample of data that passes this "test" doesn't invalidate a single sample. However, it does make it more of an anomaly than a consistency. It is possible, though extremely unlikely, that what is going on there is natural. So, you have evidence that tampering could be going on, yet we have proof that tampering in 99.9% of OE graphs is not. I would be more inclined to think this falls into the anomaly category.
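One way to put a number on the "anomaly vs. tampering" question: if a dozen honest OE graphs were examined (matching the "at least a dozen" mentioned earlier in the thread; the count of 12 is an assumption for illustration), how likely is it that at least one of them lands beyond 5 standard deviations purely by chance?

```python
import math

# Probability that a single honest graph lands beyond +/-5 sd by chance.
p_single = math.erfc(5 / math.sqrt(2.0))   # about 5.7e-7

# Probability that at least one of 12 independent honest graphs does so.
n_graphs = 12
p_any = 1 - (1 - p_single) ** n_graphs
print(p_any)                               # still only on the order of 7e-6
```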
The Barry Bonds example is not apples to apples, because there are factors like HGH which cannot be detected by standard drug tests. In this instance, your test works with all graphs, and the test itself is not inadequate as it was in the case of Bonds.
I would as well and it would be interesting to try to figure out what else could explain this.