
Image Stabilization Testing


41 replies to this topic

#1
Dave Etchells

  • Staff
  • 101 posts
  • Camera:Nikon D80
This discussion thread is a place to discuss the Image Stabilization testing methodology that we're just rolling out today (3 April, 2009).

I'll try to check in here regularly over the next week or so, to answer questions readers might have about how we test IS systems.

You can read the results of our first IS test here: Canon 70-200mm f/4L IS test

We've also prepared:
- A guide to interpreting our IS test results
- A White Paper on how we do IS testing

- Dave E.

#2
beols069

    Newbie

  • Members
  • 8 posts
  • Gender:Male
  • Location:The Hague NL
  • Camera:Canon EOS 350D
I'm a bit disappointed.
You use two test subjects, and surely you know that "human shake", or tremor, differs from hour to hour and from day to day for each individual.
It depends mainly on lifestyle and age.

To measure IS quality, I suggest using a motor-driven tilt head while shooting the test pictures:
one series with up/down movement and one with left/right movement.
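If someone did build such a rig, the drive signal could be sketched as simple sinusoids; the frequencies and amplitude below are illustrative placeholders, not measured tremor data (and real hand tremor is not a simple sinusoid):

```python
import math

def tilt_angle(t_sec, freq_hz=4.0, amplitude_deg=0.1):
    """Commanded tilt angle (degrees) at time t for one sinusoidal axis."""
    return amplitude_deg * math.sin(2 * math.pi * freq_hz * t_sec)

# One series of up/down (pitch) commands and one of left/right (yaw),
# sampled at 1 kHz for 250 ms:
pitch = [tilt_angle(i / 1000.0) for i in range(250)]
yaw = [tilt_angle(i / 1000.0, freq_hz=3.0) for i in range(250)]
```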

Good article: http://www.enginova....utter Speed.htm

Edited by beols069, 03 April 2009 - 05:11 PM.


#3
touristguy87

    Member

  • Members
  • 48 posts
  • Camera:Nikon d300/Sony a700
well, this is..."interesting"...

I won't comment on the motor-driven head; there are all sorts of issues with that. I see various "holes" in this, but perhaps the best comment would be to simply mention what I care about with regard to this topic. Feel free to fit this information into the overall discussion as you will.

First, I do a lot of low-light handheld shooting. A whole lot; I've shot a lot of IS lenses and IS bodies, and taken more than my fair share of frames. Thinking back over it all, it seems to me that what matters most is having a good idea of what shutter speed I can expect to get a good shot at, shooting handheld. Of course this depends a lot on technique, but if your goal is to get good shots handheld in low light, you're going to develop good technique. You're also willing to take a lot of shots in the hope of getting a few keepers. So what matters most to me is not "how much it improves performance over a lens without IS", but how slow I can go and still expect to get some keepers.

So what determines this, for me? Well, I do most of my viewing fullscreen on my laptop (17" diagonal, 16:10 format). I can pretty much ignore FL, because I find there is a huge drop in IS effectiveness below about 1/FL, yet even shooting 450mm effective at around 1/20s I can get keepers. I find that my practical limit is about 1/13s handheld, and sometimes I will get keepers down to 1/4s. This is not a function of FL; I'm shooting WAY under 1/FL. I do see a big dropoff once I get below about 1/200s regardless of FL, as long as I'm out beyond about 50mm.

So, for me, there's a gap. A sizable gap. Not a smooth, continuous power-law function, which in my experience is a characteristic of *non*-IS shooting, not of IS. And I don't really care about the percentages, and I don't really care about the blur: some shots are going to have more blur than others, and I'm going to look through them at 100% and throw out the ones that look blurry at that magnification. This almost always leaves a stack of good shots when viewed full-screen. Maybe they are a little soft, but I'm generally shooting wide open here, so "soft" is not a big problem.

What I *care* about is that I get a keeper. Ideally a few keepers, for other reasons.
And again, nine times out of ten I will get one as long as I keep the shutter fast enough, regardless of the focal length.
Say 1/13s out to 300mm handheld; below that I'm trading softness for exposure.
I also find that the bigger and heavier the camera and lens are, the higher this "corner speed" is.

I also always shoot brackets when I'm doing this, because the first shot is almost always going to be trash. I want to take 3 to 5 shots with the shutter held down, holding my breath the whole time. And I think it's important to throw out the obviously shaky shots, because you're just not going to use them. Why include them in the data?

What I would focus on are the shots that don't look bad when viewed at 100% at each shutter speed, because in practice those are the ones I'm going to keep. I will only keep a shot that looks soft or bad at 100% if I don't have anything else and the shot is impressive: impressive enough, and rare enough, to make me overlook the obvious softness when viewed at full-image size. But mostly I don't keep those shots (so I'm guessing that anything above, say, 3 blur units, maybe even 2, I'd toss). THEN I would look for the slowest shutter speed (the best exposure / lowest noise) that still produced keepers, and that's ALL I would care about in terms of the IS system.
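That keeper-threshold evaluation could be sketched as code; the shot list and the 3-blur-unit cutoff here are hypothetical illustrations, not measured data:

```python
# Hypothetical (shutter_speed_sec, blur_units) measurements for one lens:
shots = [
    (1/200, 0.8), (1/100, 1.2), (1/50, 1.9), (1/50, 4.0),
    (1/25, 2.5), (1/25, 6.1), (1/13, 2.9), (1/13, 8.3), (1/6, 7.7),
]

KEEPER_THRESHOLD = 3.0  # blur units; shots above this get tossed outright

def slowest_keeper_speed(shots, threshold=KEEPER_THRESHOLD):
    """Slowest shutter speed (longest exposure) that produced a keeper."""
    keepers = [speed for speed, blur in shots if blur <= threshold]
    return max(keepers) if keepers else None

print(slowest_keeper_speed(shots))  # the 1/13s shot at 2.9 blur units wins
```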

I wouldn't attempt to rate gear based on what poor shooters do with it. The only case that matters is the best case. If you can't get good shots out of a lens, that's as much your fault as the lens.

Last but not least, all of this data means nothing if the camera doesn't achieve sharp focus, or if the lens isn't sharp enough across the frame at that f-stop. You have to look at how bad the blur is relative to center, because the motion induces more blur away from center, and just looking at IS effectiveness means nothing if the image is only sharp near the center. I would be as concerned with how well the camera/lens combination focuses as with the IS "stability". And on top of that I would care about how accurate the ISO is, how much noise there is, and whether there are any ISO-related phenomena that reduce IQ.

So I see this as a nice "try", but not realistic. I'm looking at all that data above, say, 3 blur units and saying "I don't even care about that." Just for a start: get rid of it. It's like evaluating a session player by how many times he messes up the song so badly that you would never make a tape of that attempt, or a batter by how badly he strikes out. What you care about is how good he is when he hits.

And for me that's the big difference between body-IS and lens-IS: with body-IS you don't get that rock-solid stability you can get with lens-IS when everything is just right. Body-IS shots always look a little soft; it's just a question of how soft. I would pick a threshold beyond which I would dismiss the shot entirely, and evaluate the IS system from there. If you want to fit the data, at least use reasonable data. A good scientist never includes the results from experiments that he screwed up. Just holding up the camera and pressing the shutter isn't enough. And if you've got *really* good technique, IS doesn't help you at all, because you're using a tripod ;)

Edited by touristguy87, 03 April 2009 - 06:25 PM.


#4
touristguy87

    Member

  • Members
  • 48 posts
  • Camera:Nikon d300/Sony a700
...one other thing you should test is settling time. Swing the lens through some distance, wait X seconds, and repeat the test. The precise axis of oscillation also matters: some systems are 2-axis stabilized, some 3-axis. Maybe the orientation matters too.

But in any case, Motor Trend doesn't publish 1/4-mile times for cars based on what their secretary can do in them. They hire pro drivers. And even then, if the driver dumps the clutch and loses traction for so long that the run is a second slower than normal, they just throw that run out. Work with the best 3 or 5 results at each speed. Relative to 1/FL, your shooters should be seeing at *least* 3 stops of benefit at the longer FLs (50mm and up) from lens-based IS, and maybe 1 stop at most from body-IS, with the occasional 2- or 3-stop shot. Lens-IS should be a no-contest winner.

Edited by touristguy87, 03 April 2009 - 06:42 PM.


#5
Dave Etchells

  • Staff
  • 101 posts
  • Camera:Nikon D80

beols069, on Apr 3 2009, 06:00 PM, said:

You use two test subjects, and surely you know that "human shake", or tremor, differs from hour to hour and from day to day for each individual.
It depends mainly on lifestyle and age.
Yup, it varies a lot between people, and age is definitely a factor. There's some variation with the same person, too: amount of sleep appears to be a significant factor, and coffee probably is as well. Testing to see how consistent our shooters were, we found that they were *quite* consistent across multiple shooting sessions spread over several weeks. Amazingly so, in fact. It is key to make sure they're not in an unusual mental or physical state (e.g., extra tired, extra stressed) before a test run. But reasonable care has produced very consistent results each time we've checked. (We've run a couple of such tests and were actually surprised how close the numbers came out.)

Quote

To measure IS quality, I suggest using a motor-driven tilt head while shooting the test pictures:
one series with up/down movement and one with left/right movement.
Yes, we're looking at that. The key issue then would be whether you're really mimicking the combination of displacement and frequency that's characteristic of humans holding the cameras. - And which humans. It's a far from trivial problem to come up with something that really replicates what a human would be doing. (Not impossible, just not easy: Simple oscillatory systems really don't come close to modeling what's going on.)

Quote

Good article: http://www.enginova....utter Speed.htm

Thanks, I'll check that out!

#6
Mark Buxton

    Newbie

  • Members
  • 1 posts
  • Camera:pentax wpi
Great job guys, this is really impressive work.

I really appreciate this discipline; I think it's truly valuable. I'm not going to complain about the small number of testers (N=2), given the large shot-to-shot variation in the current technology (i.e., your 70-200mm f/4L review).

The mouse-over on the graphs is a nice feature.  But personally, I found the graphs with the raw data (and the actual curve fit) more interesting.  The offset (on vs. off) at high shutter speeds is very interesting! Is it statistically significant?  Overall, having an error measure incorporated into the presentation might be helpful.

Overall, this is the most beneficial innovation in camera reviews I've seen recently.

#7
beols069

    Newbie

  • Members
  • 8 posts
  • Gender:Male
  • Location:The Hague NL
  • Camera:Canon EOS 350D

Dave Etchells, on Apr 4 2009, 12:52 AM, said:

The key issue then would be whether you're really mimicking the combination of displacement and frequency that's characteristic of humans holding the cameras. - And which humans. It's a far from trivial problem to come up with something that really replicates what a human would be doing. (Not impossible, just not easy: Simple oscillatory systems really don't come close to modeling what's going on.)
The question is: are you testing humans, or IS? ;)
Testing both at the same time makes the outcome less significant than testing IS alone.
The latter gives you figures for what IS itself is capable of.

Just got this one: http://www.image-eng...stabilizers.pdf

Edited by beols069, 04 April 2009 - 02:04 AM.


#8
touristguy87

    Member

  • Members
  • 48 posts
  • Camera:Nikon d300/Sony a700
What I don't like about this is the emphasis on modeling the data (and validating or rebutting marketing claims) instead of focusing on the fact that you can get the shots in the lower-right corner.

The average and the variance are important, sure. But in the end what matters is success, not the rate of failure, as long as the failure rate is low enough for practical use. Especially when that failure rate depends as much on shooting technique as on the hardware itself.

So fine, focus on blur, since that can be measured. But all I really care about is the lower-right corner.

Edited by touristguy87, 04 April 2009 - 12:09 PM.


#9
Dave Etchells

  • Staff
  • 101 posts
  • Camera:Nikon D80

Mark Buxton, on Apr 3 2009, 11:45 PM, said:

I really appreciate this discipline; I think it's truly valuable. I'm not going to complain about the small number of testers (N=2), given the large shot-to-shot variation in the current technology (i.e., your 70-200mm f/4L review).
Yes, without an enormously greater number of shots, the statistics are such that it wouldn't make a lot of difference to have a wider range of testers. I do think we have the likely range of users covered fairly well: our "Steady" tester is indeed really, really good at holding the camera steady, while our "Shaky" tester is a gentleman in his early 70s, and is probably pretty representative of others in his general age bracket. So I think additional testers would most likely fall in between the two extremes represented by our two testers.

Quote

The mouse-over on the graphs is a nice feature.  But personally, I found the graphs with the raw data (and the actual curve fit) more interesting.  The offset (on vs. off) at high shutter speeds is very interesting! Is it statistically significant?  Overall, having an error measure incorporated into the presentation might be helpful.
Thanks for the feedback on the graphs. It's tough to figure out what the majority of people will want to see. I personally get a lot out of the graphs showing the data points for the individual shots: as another poster suggested, most of the time I'm using IS, I don't mind taking a few shots to make sure I got one that's sharp, so seeing how many dots fall into the "sharp" area is useful information for me.

Our decision about what to show directly on the page vs. in a separate window via a link followed the philosophy of showing the overview information by default, so average readers wouldn't be overwhelmed by a huge stack of graphs, knowing that the more advanced readers would be able to find the links and click through to see the more complete data.

I don't think the slight on/off offset is statistically significant: it's more a measure of the underlying jitter in the data. The numerical extrapolations of IS performance are probably only good to +/- 0.2 to 0.3 blur units, and the baseline offset just reflects that, at least in most cases.

In the course of developing these tests, we did encounter at least one lens that had a very odd "bump" in its blur data around 1/100 second with our Steady tester. My interpretation of that result was that the lens's IS system had a high-frequency limit that caused it to have a hard time dealing with the impulse from firing the shutter. (Just a theory; it might not be that at all. But that lens definitely had a "bump" in its data at that shutter speed.) We'll take care to note any such unusual characteristics, but for the most part, you'll see slight differences in the baseline values that don't really mean anything. Sometimes the IS-on baseline will be a bit higher, but sometimes it will be the other way around. All a long way of saying that no, I don't think those minor shifts correspond to anything other than noise in the data.
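For readers wondering how one might check whether such an on/off offset rises above the jitter: a two-sample (Welch's) t statistic is one quick test. A minimal sketch with made-up blur numbers, not the site's actual data:

```python
import statistics as st

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

# Hypothetical baseline blur (blur units) at fast shutter speeds:
is_off = [0.9, 1.1, 1.0, 1.2, 0.8, 1.0]
is_on = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0]

t = welch_t(is_on, is_off)  # |t| well under ~2 suggests noise, not a real offset
```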

Quote

Overall, this is the most beneficial innovation in camera reviews I've seen recently.
Thanks! I really appreciate the kind words; this was an enormous amount of work. It's certainly not the be-all, end-all for IS testing (we're continuing to work on that ;) ), but I do think it gives consumers quite a bit more information than they've had to date in this important area.

#10
arn

    Newbie

  • Members
  • 1 posts
  • Gender:Male
It's great that you are constantly trying to improve, develop, and standardize your testing methods. I very much like the way you publish the methods in detail; it certainly gives credibility to your tests. For example, the 'fallibility of focus' article lends a lot of trust to your lens tests.

It's certainly important to develop methods for testing IS systems, and I'm glad that you have gone to the trouble. As for the reliability of IS testing, I can see one obvious danger in the method you use: the testers will probably learn camera-holding techniques as you test more and more lenses and bodies. They will also learn to control other factors that affect the testing (body control, daily routines, concentration, etc.). No matter how accomplished the photographers are, the way they handle equipment in your test setup will probably improve over time, and this results in the following:

- They will make fewer "errors" as time goes by, and there will probably be less variation in the amount of shake between shots.

Thus, the first lenses tested will have different results compared to lenses tested in the future: the testing behaviour of the testers will change and evolve. I'm quite sure this is unavoidable, no matter how consciously the testers try to keep their behaviour unchanged.

- As has been pointed out previously in this thread, the amount of sleep, etc. will also be a factor, presenting the possibility of "surprise results". If some day a tester has shakier hands than usual, or doesn't concentrate as much as he usually does, the test will be less reliable for that lens / camera body.

So, I think the thing that should be resolved is how to keep the variables in the testing unchanged as time passes. I don't have an answer to that right now.

As for the test benches that have been suggested, it's obviously very hard to build a mechanical tester that actually behaves like a human being. It will behave too predictably, and the movement patterns will not fully resemble those of people (for one thing, they will usually be too simple and regular). Therefore, I don't think a bench is the answer, at least not an easy one. Maybe a test bench could be a "third party" in the testing, along with the "shaky" and the "steady" testers.

#11
touristguy87

    Member

  • Members
  • 48 posts
  • Camera:Nikon d300/Sony a700
Quote

Overall, this is the most beneficial innovation in camera reviews I've seen recently.

Thanks! I really appreciate the kind words, this was an enormous amount of work. It's certainly not the be-all, end-all for IS testing (we're continuing to work on that ;) ), but I do think it gives consumers quite a bit more information than they've had to date in this important area.

...

...that just goes to show how sad camera reviews have been.

When I see a consistent review of true ISO in these reviews, then I'll be impressed.
People are paying almost $3k for cameras that are 50%-70% "ISO-optimistic".

Next, the images at high ISO might not have a lot of *chroma* noise, but they can sure have a lot of *streak* noise and luminance noise, which are much easier to see and much uglier than chroma noise.

That's as significant to me as anything else. That's why IS is such a big factor, and why it's important for lenses to be reasonably sharp and fast near wide open. That's why people buy DSLRs in the first place. It's all tied together.

#12
touristguy87

    Member

  • Members
  • 48 posts
  • Camera:Nikon d300/Sony a700
One other thing that seems odd...

http://www.slrgear.c...00mm_f4L_IS.htm

...that there are red dots along the lower right edge.
You got handheld shots like that with the IS off? Really? :)

It's also interesting that the fit implies a blur-error resulting from the use of IS in and of itself...just how significant is this? Is this perhaps an artifact of the analysis? How much of it is due to the analysis, and how much to the hardware?

But there's one good thing about this graph (ignoring the absence of labels ;) ): you didn't make the mistake of marking the x-axis in stops relative to 1/FL. I see that you guys learned something from all those tests :) I'd keep it that way. It should become clear with more testing that, at least with lens-IS, there's only a weak correlation between 1/FL and the "money shots" you get in the bottom-right corner, what we'd call the corner-point of the fit. But those handheld shots with no blur and no IS at 1/13s with a 70mm lens are not possible. I would be lucky to get a decent handheld shot at 1/FL without IS; normally, without IS, I wouldn't want to shoot handheld under 2/FL.

Edited by touristguy87, 04 April 2009 - 04:07 PM.


#13
Dave Etchells

  • Staff
  • 101 posts
  • Camera:Nikon D80

beols069, on Apr 4 2009, 03:02 AM, said:

The question is: are you testing humans, or IS? ;)
Testing both at the same time makes the outcome less significant than testing IS alone.
The latter gives you figures for what IS itself is capable of.
Well, the issue is how IS systems respond to humans, so you can't take the humans entirely out of the equation. You could certainly characterize the IS systems' performance in some sort of physical space, say frequency vs. amplitude vs. the amount of stabilization achieved, but I don't know that that would tell you much about how the systems would actually perform for the people using them.

Clearly, the ideal would be to study the characteristics of human-induced camera shake as produced by a wide range of subjects, boil that down to characteristic patterns, and then test the systems against a range of those patterns, always feeding exactly the same vibratory patterns to the systems every time. - That's where I'd like to head with this, but it's quite a ways down the road. (I'd love it if I could spend, say 6 months full time doing nothing but developing and constructing such a system, but in any likely reality, it'll have to be spread out over a number of years, given the sites to run, etc.) - But I think any testing has to be tied back to the sorts of shaking that humans produce.

Quote

Just got this one: http://www.image-eng...stabilizers.pdf

Wow! That's a great reference, thanks for calling it to my attention! It's too bad the author gives no details of his "Steve" device; that might save me some time going down a similar road. It looks like his primary focus was building that device, as he just relied on data published in patents and other sources for information on what human-generated shaking looks like. Still, an interesting article, and a good source of other references. Thanks again!

#14
Dave Etchells

  • Staff
  • 101 posts
  • Camera:Nikon D80

arn, on Apr 4 2009, 03:14 PM, said:

It's great that you are constantly trying to improve, develop, and standardize your testing methods. I very much like the way you publish the methods in detail; it certainly gives credibility to your tests. For example, the 'fallibility of focus' article lends a lot of trust to your lens tests.
Thanks! I'm glad to hear the positive feedback on our publishing of our test methods. I agree entirely: I think tests without this sort of information available are really of limited value.

Quote

I can see one obvious danger in the method you use: the testers will probably learn camera-holding techniques as you test more and more lenses and bodies. They will also learn to control other factors that affect the testing (body control, daily routines, concentration, etc.). No matter how accomplished the photographers are, the way they handle equipment in your test setup will probably improve over time, and this results in the following:

- They will make fewer "errors" as time goes by, and there will probably be less variation in the amount of shake between shots.
(snip...)
So, I think the thing that should be resolved is how to keep the variables in the testing unchanged as time passes. I don't have an answer to that right now.
You're absolutely right, and we've indeed encountered that. Our Shaky tester has actually learned quite a bit about how to improve his holding of the camera, and if he employs all that he's learned, his results do get better than they were. It's interesting though, that some of the things he's learned are pretty specific, in terms of how he holds the camera, how tightly he grips it, how he holds his hands and arms, how he aims it during the shot sequence, etc. He's been able to identify several specific things that he's done to get better, and so can choose to not employ those techniques for the test runs. Doing this, he's able to duplicate his earlier results surprisingly well.

(One obvious thing that could come from all our experimentation is an article on what works best for holding a camera steady. That'll be an article for another day, but I'll give you one hint now: The old advice about "making a tripod" of your arms, bracing them on your chest, is dead wrong: That couples vibrations from your heartbeat very strongly into the camera/lens system. Holding your arms away from your body works much better.)

There's still some variation from test to test, of course, but that's generally been of a smaller magnitude. Beyond all that, though, I've observed that some camera/lens combinations are easier to hold steady than others. For instance (at least until fatigue sets in), a heavier camera/body system will tend to give better results than a much lighter one, because the mass and rotational inertia of a big, heavy lens tends to reduce the amplitude of the vibratory motion. We don't try to correct for that, since we are indeed testing how individual lenses perform. These variations are part of why we show the 1/FL lines on the plots, so readers can judge just how steady or shaky each tester was with that particular lens system.

Quote

As for the test benches that have been suggested, it's obviously very hard to build a mechanical tester that actually behaves like a human being. It will behave too predictably, and the movement patterns will not fully resemble those of people (for one thing, they will usually be too simple and regular). Therefore, I don't think a bench is the answer, at least not an easy one. Maybe a test bench could be a "third party" in the testing, along with the "shaky" and the "steady" testers.
Well, I think the *right* test apparatus would be the answer, but "right" means one that could pretty precisely mimic the behavior of a human. I have some ideas on that, I just don't know if I'll be able to find the time to investigate and implement them.  ;)

#15
Dave Etchells

  • Staff
  • 101 posts
  • Camera:Nikon D80

touristguy87, on Apr 4 2009, 04:55 PM, said:

one other thing it seems odd...

http://www.slrgear.c...00mm_f4L_IS.htm

...that there are red dots along the lower right edge.
You got handheld shots like that with the IS off? Really? :)
Oops! Thanks for that catch! Those were shots taken after Rob had passed out, hence the camera was perfectly stationary. ;) Actually, those were just some zero values in the spreadsheet that got plotted by accident, and I didn't notice them. Thanks for pointing them out; they're a flat-out error! I'll correct them and update the graph come Monday. (Gotta run now; time for dinner, and tomorrow I'm off.)

#16
touristguy87

    Member

  • Members
  • 48 posts
  • Camera:Nikon d300/Sony a700
"Oops! Thanks for that catch! Those were shots taken after Rob had passed out, hence the camera was perfectly stationary. wink.gif - Actually, those were just some zero values in the spreadsheet that got plotted by accident, and I didn't notice them."

Ah so, so Mr. Steady gets his steadiness from PEDs! :)

Just replace those dots with asterisks, then ;)

I guess I should also point out that at 200mm they both beat the tripod?
Either that, or there's significant variation in focus quality...

Anyway, you could plot the fits for the steady shooter with solid lines and the shaky shooter with dashed lines.

One *other* problem: on all the graphs, the mouseovers seem to make the fits shift to the northwest.

Hm. One more thing: it doesn't seem that you guys are pushing the lenses hard enough. There's not a lot of data at the lowest shutter speeds. I think one hit out of 3 at 1/6s is pretty good; I'd take it down further, until the hit rate is near zero, with at least 10 shots at each speed. That might mean you have to change the fit, though. The trick is that in the lower-right corner you're getting the most out of the lens shooting handheld, whereas you can always just push the faster shots. But if you're seeing one hit out of *two*, there's still some speed left in this thing, meaning you're missing some potentially excellent exposures simply for lack of trying.

Also, it would be interesting to see just how far to the right the tripod helps. There should be *some* oscillation in the tripod, and that should affect the results at the really slow speeds: the longer the exposure, the more movement is integrated over the shot. So you should see a linear increase in the blur floor, or a quadratic floor on a log chart. If you're taking the blur relative to the tripod results, that would flatten the curve artificially. Probably a small error, but still: it means you probably want to compute a baseline blur from the faster times and use that instead of the measured tripod results at the slower times.
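That baseline-from-faster-times idea could be sketched like this; the cutoff speed and shot data are hypothetical:

```python
import statistics as st

def baseline_blur(shots, fast_cutoff=1 / 200):
    """Blur floor estimated from shots at or faster than the cutoff speed."""
    fast = [blur for speed, blur in shots if speed <= fast_cutoff]
    return st.mean(fast)

# Hypothetical (shutter_speed_sec, blur_units) measurements:
shots = [(1 / 500, 0.6), (1 / 250, 0.7), (1 / 100, 1.1), (1 / 25, 2.4)]
floor = baseline_blur(shots)                   # mean of the 0.6 and 0.7 shots
relative = [(s, b - floor) for s, b in shots]  # blur relative to that floor
```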

Edited by touristguy87, 04 April 2009 - 08:48 PM.


#17
touristguy87

    Member

  • Members
  • 48 posts
  • Camera:Nikon d300/Sony a700
And what about the variation in blur with focal length? Some lenses are simply going to be dull at the longer focal lengths. So if a lens is naturally dull at 200mm but sharp at 70mm, how does this affect the results?

#18
Greg Copeland

    Newbie

  • Members
  • 1 posts
  • Camera:KonicaMinolta Maxxum 5D

Dave Etchells, on Apr 4 2009, 06:18 PM, said:

(One obvious thing that could come from all our experimentation is an article on what works best for holding a camera steady. That'll be an article for another day, but I'll give you one hint now: The old advice about "making a tripod" of your arms, bracing them on your chest, is dead wrong: That couples vibrations from your heartbeat very strongly into the camera/lens system. Holding your arms away from your body works much better.)

Good to know -- this bears out my own experience as well, even though I always felt somehow that I was "doing it wrong".  (^_*)

Dave Etchells, on Apr 4 2009, 06:18 PM, said:

I've observed that some camera/lens combinations are easier to hold steady than others. For instance (at least until fatigue sets in), a heavier camera/body system will tend to give better results than a much lighter one, because the mass and rotational inertia of a big, heavy lens tends to reduce the amplitude of the vibratory motion.

Are you sure you didn't say "amplitude" when you really meant "frequency"?  For a camera holding system with a given set of stiffness and damping characteristics (a particular person, in this case, but it wouldn't necessarily have to be), a larger/heavier camera body/lens system actually would have the tendency to increase the amplitude of the vibrations, while at the same time reducing the frequency of those vibrations.  As an analogy, imagine a small weight -- a paperweight or something -- hanging from a screen door spring.  Then imagine a somewhat larger weight (a DSLR with kit lens, perhaps?) hanging from this same spring.  The second case will result in oscillations with higher amplitude/lower frequency, vis-a-vis the first case.  But I agree nonetheless that the heavier camera will tend to give better test results than a much lighter one, as the IS system should be able to cope much more easily with the lower-frequency vibrations.

And finally, one small quibble regarding the results table at the top of the 70-200mm f/4L IS test article:
In the 70mm table, the "Improvement (Stops)" column shows an improvement of 2.3 stops for the "Shaky" tester, with shutter speeds of 1/24 sec vs. 1/112 sec. This result should have been presented as 2.2 stops (a typo, maybe, since all the other results appear to be rounded correctly?). In other words, the base-2 logarithm of 112/24 is approximately 2.2224, which rounds to 2.2.
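Greg's stops arithmetic, for anyone who wants to check it:

```python
import math

def stops_improvement(exposure_with_is, exposure_without_is):
    """Stops gained from IS: log2 of the ratio of usable exposure times."""
    return math.log2(exposure_with_is / exposure_without_is)

# Shaky tester at 70mm: 1/24 s with IS vs. 1/112 s without.
print(round(stops_improvement(1 / 24, 1 / 112), 4))  # → 2.2224
```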

Minor niggles aside, I believe these stabilization tests have the promise of being another great service to your readers, as the other aspects of your lens tests have been over the last few years.  Thanks for providing them!

#19
CharlesH

CharlesH

    Newbie

  • Members
  • Pip
  • 3 posts
Dave & Co.,

First off, thanks for all of the work that you and your team have been doing.  I've used Imaging Resource to select digital cameras for myself, friends and family since December 2008, and really appreciate the information that can't be found anywhere else.  

I wanted to point out what I think is an error in your white paper (http://www.slrgear.c..._1iswp/iswp.htm).

In this paragraph:

"For reference, we've also drawn-in a line showing the shutter speed corresponding to the inverse of the effective focal length. In the case above (with the lens attached to a Canon body with a 1.6x crop factor), this corresponds to 1/(1.6 x 270) = 1/432 second."

I think you meant to say:

"this corresponds to 1/(1.6 x 70) = 1/112 second."

The case above was for 70mm, or 112mm effective.
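For anyone following along, the corrected arithmetic is just the familiar 1/(effective focal length) rule:

```python
def handhold_limit_denominator(focal_length_mm, crop_factor=1.6):
    # 1/(effective FL) rule of thumb: the slowest "safe" handheld shutter
    # speed is roughly 1/(crop_factor * focal_length) seconds.
    return crop_factor * focal_length_mm

print(handhold_limit_denominator(70))   # 112.0 -> 1/112 s
print(handhold_limit_denominator(270))  # ~432  -> 1/432 s
```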

#20
CharlesH

CharlesH

    Newbie

  • Members
  • Pip
  • 3 posts

View Posttouristguy87, on Apr 3 2009, 05:09 PM, said:

What I don't like about this is the emphasis on modeling the data (and validating or rebutting marketing claims) instead of focusing on the fact that you can get the shots at the lower right corner.

The average, and the variance, are important, sure. But in the end what matters is success. Not the rate of failure. As long as the failure-rate is low-enough for practical use. Especially when that failure-rate depends as much on shooting-technique as the hardware itself.

You have described your particular use of your camera system and therefore what matters to you (e.g. getting one good shot out of 10). However, other people use their cameras differently and have different needs. How about someone shooting sports? Are they going to say to the athletes, "Hey, could you throw that pass or do that ski jump 9 more times so I can make sure I get a good shot???" Sometimes you want as high a percentage of good shots as possible, and not just one good one in ten. Sometimes you want almost every shot to come out as good as the equipment can give you. And some of those situations don't allow for "good technique": photojournalism in a war zone, riding in a bumpy or vibrating vehicle, sports photography where you have to move (or run!) to follow the action, on-location shooting involving children in any way--any situation where you have to move around and quickly take a shot or it is lost.

View Posttouristguy87, on Apr 3 2009, 05:09 PM, said:

If you can't get good shots out of a lens, that's as much your fault as the lens.

Maybe so, but does that mean that people who are not perfect shooters should be banned from buying cameras and taking advantage of image stabilization??  Only catering to the needs of photographers with the best technique and plenty of time to retake a shot is not realistic and cuts out a majority of users.  Imaging Resource and SLRgear try to provide information useful to a wide range of users.  Perhaps they will find a good way of addressing your special case needs as well.

#21
CharlesH

CharlesH

    Newbie

  • Members
  • Pip
  • 3 posts

View PostGreg Copeland, on Apr 5 2009, 12:08 PM, said:

Are you sure you didn't say "amplitude" when you really meant "frequency"?  For a camera holding system with a given set of stiffness and damping characteristics

Greg, a human is not a holding system with a given set of stiffness and damping characteristics. Rather, the characteristics are constantly changing while the person is attempting to hold the camera steady. And the person is the SOURCE of the motion, i.e. a driver, rather than a damping system that will dampen outside stimulation. A human is a dynamic system with feedback that permits it to respond to low-frequency motion very well, but which introduces some higher-frequency stimulation. A heavier camera will move less in response to high-frequency stimulation, and because that movement will be lower in frequency, the human system can more easily compensate for it.

Also, when it comes to blurring an image, we don't care about the amplitude of oscillations.  We care about the amplitude of the camera movement during the time that the shutter speed is open.  So as long as the frequency of movement is less than half of 1/shutter-speed, the lower the frequency of movement, the lower the amplitude of movement during the open shutter.  That's the amplitude that matters.
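CharlesH's point about within-exposure amplitude is easy to illustrate with a toy sinusoidal-shake model. The amplitudes and frequencies below are arbitrary illustrative numbers, not tremor measurements:

```python
import math

def excursion(freq_hz, exposure_s, amplitude=1.0, steps=2000):
    # Peak-to-peak displacement of sinusoidal shake during one exposure,
    # starting at a zero crossing (the steepest part of the cycle).
    xs = [amplitude * math.sin(2 * math.pi * freq_hz * exposure_s * i / steps)
          for i in range(steps + 1)]
    return max(xs) - min(xs)

# Same shake amplitude, same 1/100 s exposure: a slow 2 Hz tremor moves the
# camera far less during the exposure than a fast 20 Hz tremor does.
print(excursion(2, 1 / 100))   # ~0.13
print(excursion(20, 1 / 100))  # ~0.95
```

Both frequencies are below half of 1/shutter-speed here, so the lower-frequency tremor produces much less movement while the shutter is open, exactly as described above.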

#22
touristguy87

touristguy87

    Member

  • Members
  • PipPip
  • 48 posts
  • Camera:Nikon d300/Sony a700

View PostCharlesH, on Apr 5 2009, 07:35 PM, said:

However, other people use their cameras differently and have different needs. How about someone shooting sports? Are they going to say to the athletes, "Hey, could you throw that pass or do that ski jump 9 more times so I can make sure I get a good shot???" Sometimes you want as high a percentage of good shots as possible, and not just one good one in ten. Sometimes you want almost every shot to come out as good as the equipment can give you. And some of those situations don't allow for "good technique": photojournalism in a war zone, riding in a bumpy or vibrating vehicle, sports photography where you have to move (or run!) to follow the action, on-location shooting involving children in any way--any situation where you have to move around and quickly take a shot or it is lost.

And how much do you think that IS is going to help you in those situations?

All situations allow for "good technique". There's absolutely nothing to stop you from using "good technique"...in fact, that's what you *should* use. Always. The issue is what is "good technique" for the situation.

View PostCharlesH, on Apr 5 2009, 07:35 PM, said:

Maybe so, but does that mean that people who are not perfect shooters should be banned from buying cameras and taking advantage of image stabilization??  Only catering to the needs of photographers with the best technique and plenty of time to retake a shot is not realistic and cuts out a majority of users.  Imaging Resource and SLRgear try to provide information useful to a wide range of users.  Perhaps they will find a good way of addressing your special case needs as well.


...of course not. No one is saying that they shouldn't be allowed to buy cameras and/or lenses with IS. No one is saying that we should cut-out users. There's not a thing that I said that means that, insinuates that, results in that, anything like that.

Edited by touristguy87, 05 April 2009 - 09:10 PM.


#23
Dave Etchells

Dave Etchells

    Staff

  • Staff
  • PipPipPipPip
  • 101 posts
  • Camera:Nikon D80

View PostCharlesH, on Apr 5 2009, 07:04 PM, said:

In this paragraph:

"For reference, we've also drawn-in a line showing the shutter speed corresponding to the inverse of the effective focal length. In the case above (with the lens attached to a Canon body with a 1.6x crop factor), this corresponds to 1/(1.6 x 270) = 1/432 second."

I think you meant to say:

"this corresponds to 1/(1.6 x 70) = 1/112 second."

The case above was for 70mm, or 112mm effective.
Hi Charles -

You're absolutely right! That was a carry-over from an earlier version of the article: We were going to publish data from the Tamron 18-270mm VC first, but decided we wanted to get that lens back in and run a few more data points for it before we published our results for it. - The error you refer to occurred because I didn't update that sentence to reflect using the 70-200mm for the examples. I'll get that fixed right away, thanks(!) for calling it to my attention!

- Dave E.

#24
touristguy87

touristguy87

    Member

  • Members
  • PipPip
  • 48 posts
  • Camera:Nikon d300/Sony a700
"Are you sure you didn't say "amplitude" when you really meant "frequency"? For a camera holding system with a given set of stiffness and damping characteristics (a particular person, in this case, but it wouldn't necessarily have to be), a larger/heavier camera body/lens system actually would have the tendency to increase the amplitude of the vibrations, while at the same time reducing the frequency of those vibrations. As an analogy, imagine a small weight -- a paperweight or something -- hanging from a screen door spring. Then imagine a somewhat larger weight (a DSLR with kit lens, perhaps?) hanging from this same spring. The second case will result in oscillations with higher amplitude/lower frequency, vis-a-vis the first case. But I agree nonetheless that the heavier camera will tend to give better test results than a much lighter one, as the IS system should be able to cope much more easily with the lower-frequency vibrations."

...I think it's a mistake to try to make too much of this with a model; all it takes is one bad assumption to ruin the model.
In the end, all that matters are the results. And that's what they should focus on, while keeping the experiment as simple as possible.

You want to make models, then you have to do experiments to validate the model...and then you raise the potential for errors in the analysis as well as the experimental method. The good thing about not using a machine mount is that erroneous assumptions aren't designed into the machine-mount. Simple statistics will resolve the problem with the shooters. Just take enough shots, and the statistics will become clear. The one problem I have beyond that is that the same # of samples should be taken at each shutter-speed and the shutter-speeds should be taken low enough to make it clear that no good shots can be taken at the right edge. The problem is that the fit is driving the experiment, not the data.
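The "just take enough shots and the statistics will become clear" argument can be made concrete with the binomial standard error of a measured keeper rate. This is a generic statistics sketch, not part of the published methodology:

```python
def keeper_rate(keepers, shots):
    # Observed keeper rate plus its binomial standard error; the error bar
    # shrinks as 1/sqrt(shots), so a handful of shots per speed says little.
    p = keepers / shots
    std_err = (p * (1 - p) / shots) ** 0.5
    return p, std_err

print(keeper_rate(1, 3))    # rate 0.33, std error ~0.27 -> nearly meaningless
print(keeper_rate(10, 30))  # rate 0.33, std error ~0.09 -> a usable estimate
```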

#25
Dave Etchells

Dave Etchells

    Staff

  • Staff
  • PipPipPipPip
  • 101 posts
  • Camera:Nikon D80

View Posttouristguy87, on Apr 4 2009, 08:53 PM, said:

Hm. One more thing...it doesn't seem that you guys are pushing the lenses hard enough. There's not a lot of data at the lowest shutter-speed. I think that one hit out of 3 at 1/6s is pretty good, I'd take it down farther until the hit rate is near zero, with at least 10 shots at each speed. That might mean that you have to change the fit, though. The trick is that at the lower right corner you're getting the most out of the lens, shooting handheld, but sure you can just push the faster shots. But if you're seeing 1 hit out of *two* there's still some speed left in this thing, meaning that you're missing some potentially-excellent exposures simply for lack of trying.
Yes, from the discussion here, it's clear that it'd be good to include our data from slower shutter speeds on the graphs as well. We do have data extending a good bit further than what's currently plotted. It's in what we call the "chaos" region, so it isn't included in the power-law fit for the transition region, but I can see that it'd be useful for people to be able to see that data as well. Generally, there aren't any shots in that region that are photographically useful (e.g., they're all badly blurred; there are no "money" shots there), but it'd be worthwhile for people to be able to see it, if for nothing else than to see that there weren't any usable shots there.

Quote

Also it would be interesting to see just how far to the right the tripod helps. There should be *some* oscillation in the tripod and that should affect the results at the really-slow speeds, the longer the delay, the more movement is integrated over the shot. So you should see a linear increase in the blur-floor, or a quadratic floor in a log chart. If you're taking the blur relative to the tripod results, that would flatten the curve artificially. Probably a small error but still...meaning that you probably want to compute a baseline blur from the faster times and use that instead of the measured tripod results at slower times.
Because we're measuring blur rather than purely camera motion, other factors can affect the blur numbers, including aperture setting and ISO. As we step through the shutter speed ranges, we're having to adjust both aperture and ISO to get the shutter speeds we need, which causes the "tripod" blur value to vary as well - so we need to subtract it out as we go along, rather than just taking whatever blur value the tripod gave at the highest shutter speed.

We could perhaps avoid the need for this with a more powerful/flexible lighting system, but we would need quite a bit more wattage than we currently have to be able to get to the very high shutter speeds we need for the left side of the graph - and we're already pushing about 4 KW of lighting on the target at maximum brightness.

That's on the project list for upgrading the system; perhaps switching to some sort of large array of HF fluorescent bulbs would be workable. I've been avoiding it thus far because it'll require a good bit of construction to put it together (meaning a big chunk of my personal time, of which there's precious little to go around for everything that demands it), and several thousand dollars' worth of bulbs to run it. - And upgrading the camera test platforms for our standard lens testing is a higher priority overall. (I need in fairly short order to buy a 50D, a 5D Mark II, and am waiting for the "D700x," whenever it comes out, to take care of the Nikon full-frame platform. It seems there's always something else calling for spending a couple $K of capital, and the tough economic times have meant a good bit less revenue to fund it all with.)
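The baseline correction Dave describes amounts to subtracting, at each shutter speed, the tripod blur measured at the same aperture/ISO. Here's a minimal sketch of that idea (my own simplification, not Imaging Resource's actual formula):

```python
def shake_blur(handheld_blur, tripod_blur_same_settings):
    # The tripod shot at the SAME aperture/ISO captures all non-motion blur
    # (diffraction, noise-related softness, etc.); subtracting it leaves only
    # the camera-shake contribution. Clamp at zero for noisy measurements.
    return max(handheld_blur - tripod_blur_same_settings, 0.0)

print(shake_blur(3.0, 1.2))  # ~1.8 units of motion-induced blur
print(shake_blur(1.0, 1.5))  # 0.0 (handheld no worse than tripod)
```

Re-measuring the tripod baseline at each aperture/ISO combination is what keeps the subtraction valid as the settings step through the shutter-speed range.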

#26
Dave Etchells

Dave Etchells

    Staff

  • Staff
  • PipPipPipPip
  • 101 posts
  • Camera:Nikon D80

View PostGreg Copeland, on Apr 5 2009, 02:08 PM, said:

Are you sure you didn't say "amplitude" when you really meant "frequency"?  For a camera holding system with a given set of stiffness and damping characteristics (a particular person, in this case, but it wouldn't necessarily have to be), a larger/heavier camera body/lens system actually would have the tendency to increase the amplitude of the vibrations, while at the same time reducing the frequency of those vibrations.  As an analogy, imagine a small weight -- a paperweight or something -- hanging from a screen door spring.  Then imagine a somewhat larger weight (a DSLR with kit lens, perhaps?) hanging from this same spring.  The second case will result in oscillations with higher amplitude/lower frequency, vis-a-vis the first case.  But I agree nonetheless that the heavier camera will tend to give better test results than a much lighter one, as the IS system should be able to cope much more easily with the lower-frequency vibrations.
Actually, it's both amplitude and frequency that are reduced. Charles H summed it up pretty well in his earlier reply to this post: The human's hands/eyes/brain constitute a feedback-guided support system, with excellent response in the very low frequencies, but with a fair bit of noise at the higher frequencies. A given impulse will produce less motion with more mass and rotational inertia, so it results in lower amplitude of movement. For the same reasons, higher-frequency components will be damped more as well, so you'll see a shift towards lower frequencies. (The higher-frequency components will still be there, of course, they'll just be proportionately smaller.)

Quote

And finally, one small quibble regarding the results table at the top of the 70-200mm f/4L IS test article:
In the 70mm table, the "Improvement (Stops)" column shows an improvement of 2.3 stops for the "Shaky" tester, with shutter speeds of 1/24 sec vs. 1/112 sec.  This result should have been presented as 2.2 stops (a typo, maybe, since all the other results appeared to be rounded correctly?)  In other words, the Base 2 logarithm of 112/24 is equal to approximately 2.2224, which would round to 2.2.
I think you're right, unless there was some other rounding at work in the derivation of the characteristic shutter speeds. Senior lens tech Jim and I will look at the spreadsheet tomorrow, and see if an adjustment is required. - Jim developed the spreadsheet, so I'd want him to go over it with me. I suspect the issue may be that the shutter speeds were rounded slightly, so the log of the exact ratio might be something like 2.26, which got rounded to 2.3 instead of 2.2. Thanks for the note, though, we'll take a look at it tomorrow.

Quote

Minor niggles aside, I believe these stabilization tests have the promise of being another great service to your readers, as the other aspects of your lens tests have been over the last few years.  Thanks for providing them!
Thanks! As noted, this was an enormous amount of work to get things to this point, so it's gratifying to hear that you find it useful. All the feedback and discussion has been great too; it's already helped refine some of our presentation going forward.

#27
Simen1

Simen1

    Newbie

  • Members
  • Pip
  • 7 posts
  • Gender:Male
  • Location:Norway
  • Interests:Photo
  • Camera:Pentax K200D
Great testing methodology. I agree that human movement is far more relevant than a mechanical movement rig. I would like to read more about Mr. Steady's and Mr. Shaky's ages, daily coffee intake, and how much experience they have with using SLRs. I.e., if Mr. Shaky is very new to SLRs, he might be improving rapidly.


I hope you test the stabilisation in some popular cameras soon. These will represent a huge number of lenses, and I hope that will make it a priority. I also hope SLR Gear can bust some myths (make an FAQ) about stabilisation effectiveness:
- lens stabilisation vs sensor stabilisation
- effectiveness vs focal length (from extreme wide angle to extreme tele)
- effectiveness vs aperture setting
- effectiveness vs focus distance
- side effects like increased corner shading and corner CA
- in what axis/angle a human is more likely to be shaky
- will stabilisation do more harm than good if it's totally unneeded? (i.e. use of a tripod or 1/1000s)
- monopod effectiveness

Is it possible to upload my own photos somewhere to get an automatic analysis of which I am? (a Mr. Shaky or a Mr. Steady)

Edited by Simen1, 06 April 2009 - 03:39 AM.


#28
Jan

Jan

    Newbie

  • Members
  • Pip
  • 1 posts
  • Camera:EOS 450D
I really liked the whitepaper because you stated your assumptions (which people may or may not agree with, as seen in some posts), and then drew conclusions on how best to measure IS. There is not much I can poke a hole at.

There is one section where the assumption->conclusion step is not completely explained. When a trend is non-linear (as it obviously is in the case of blur-log shutter speed), this simply means that it can be any of dozens of other trends. I am familiar with mathematical functions, and the first thing that came to mind was an exponential (a + b*c^x). I am curious how you decided to use a power law.

Without looking at any graphs, for faster shutter speeds, I would assume that blur was directly proportional to exposure time (i.e. equivalent to the camera moving at constant speed in one direction). This would give an exponential relationship when plotted against the log of exposure time. For slower shutter speeds (i.e. still faster than the human feedback control system), however, blur is likely better described by Brownian motion. Is it this Brownian motion that the power law relationship (a + b * x^c) tries to model?
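Jan's Brownian-motion intuition does in fact predict a power law: the RMS displacement of a random walk grows like t^0.5, i.e. a + b*x^c with c = 0.5. A quick simulation illustrates this (a generic random-walk sketch, not a model of the testers' measured tremor):

```python
import math
import random

def rms_displacement(n_steps, trials=2000, seed=1):
    # RMS end-to-end displacement of a 1-D unit-step random walk.
    # For Brownian motion this grows like sqrt(n_steps).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pos = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += pos * pos
    return math.sqrt(total / trials)

# Doubling the "exposure" multiplies the RMS blur by roughly sqrt(2):
print(rms_displacement(100) / rms_displacement(50))  # ~1.4
```

Whether real hand tremor behaves like a pure random walk over these time scales is exactly the open question; the simulation only shows that Brownian motion is consistent with a power-law blur curve.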

#29
Dave Etchells

Dave Etchells

    Staff

  • Staff
  • PipPipPipPip
  • 101 posts
  • Camera:Nikon D80

View PostSimen1, on Apr 6 2009, 04:11 AM, said:

Great testing methodology. I agree that human movement is far more relevant than a mechanical movement rig. I would like to read more about Mr. Steady's and Mr. Shaky's ages, daily coffee intake, and how much experience they have with using SLRs. I.e., if Mr. Shaky is very new to SLRs, he might be improving rapidly.
Actually, Mr. Shaky has been around cameras a *long* time. He did discover some things about how he was holding the camera that were working against him. Also, he's found that he generally needs to do his IS testing in the mornings, rather than the afternoons (when he gets *really* shaky). But he's also been able to "unlearn" some of the improvements in his technique, to mimic his earlier performance and help keep his performance more consistent over time. I'll talk to both Shaky and Steady, see if they'd be comfortable with a more personal profile posted for each of them.

FWIW, I think that we'll ultimately be able to create a mechanical system that will be able to mimic human responses very closely; "play them back," as it were. That's the ultimate goal, but as I've shared, given the amount of time everything else about running the sites and the business takes, it's likely to be years before I can develop such a system. That's the ultimate, "someday" goal, though.

Quote

I hope you test the stabilisation in some popular cameras soon. These will represent a huge number of lenses, and I hope that will make it a priority.
We'll certainly continue testing, as much as we can. It's *enormously* time-consuming though, many hundreds of shots and a lot of data crunching required for each system. I don't want to promise more than we can deliver, but think we might be able to manage one new system every other week or so.

Quote

I also hope SLR Gear can bust some myths (make an FAQ) about stabilisation effectiveness:
- lens stabilisation vs sensor stabilisation (we expected lens-based to kill sensor-based, but that doesn't appear to be the case. Some sensor-based systems seem to be quite effective.)

- effectiveness vs focal length (from extreme wide angle to extreme tele) (definitely more benefit at telephoto than wide angle, but then wide angle needs less help than telephoto shots anyway)

- effectiveness vs aperture setting (not really a factor, it's a function of shutter speed, which aperture obviously affects. The systems themselves don't care what aperture you're shooting at)

- effectiveness vs focus distance (not really an independent issue, except as relates to tele vs wide angle)

- side effects like increased corner shading and corner CA (Interesting - I hadn't thought of that. Not sure how we could test reliably, but it would make sense that with IS enabled, the optical path through the lens might be non-optimal for some other characteristics.)

- in what axis/angle a human is more likely to be shaky (That's going to require some additional technology: a 6-axis accelerometer/gyro sensor. I just ordered one, plus an eval board for it, the other day.)

- will stabilisation do more harm than good if it's totally unneeded? (i.e. use of a tripod or 1/1000s) (So far, we haven't seen this, but that doesn't mean that it might not be there.)

- monopod effectiveness (On the "someday" list of things to check out.)

Is it possible to upload my own photos somewhere to get an automatic analysis of which I am? (a Mr. Shaky or a Mr. Steady) (No, but you can get a pretty good sense of that by looking at your average performance at the 1/FL shutter speed with a similar system, and comparing that to how Shaky and Steady did. Note, though, that you'll want to compare both similar focal lengths and similar lens masses: You'll likely do worse with a lightweight 300mm lens than a heavy 300mm f/2.8 monster. Increased mass does seem to help.)

#30
Dave Etchells

Dave Etchells

    Staff

  • Staff
  • PipPipPipPip
  • 101 posts
  • Camera:Nikon D80

View PostJan, on Apr 6 2009, 11:08 AM, said:

I really liked the whitepaper because you stated your assumptions (which people may or may not agree with, as seen in some posts), and then drew conclusions on how best to measure IS. There is not much I can poke a hole at.
Thanks, Jan - That was the intent. Any tests involve assumptions, especially (!) ones involving as much statistical analysis as these. I think it's important to be very forthright about our methods and assumptions; that's the only way people will be able to decide how to weigh the results.

Quote

There is one section where the assumption->conclusion step is not completely explained. When a trend is non-linear (as it obviously is in the case of blur-log shutter speed), this simply means that it can be any of dozens of other trends. I am familiar with mathematical functions, and the first thing that came to mind was an exponential (a + b*c^x). I am curious how you decided to use a power law.

Without looking at any graphs, for faster shutter speeds, I would assume that blur was directly proportional to exposure time (i.e. equivalent to the camera moving at constant speed in one direction). This would give an exponential relationship when plotted against the log of exposure time. For slower shutter speeds (i.e. still faster than the human feedback control system), however, blur is likely better described by Brownian motion. Is it this Brownian motion that the power law relationship (a + b * x^c) tries to model?
Much though we might have wished it to be otherwise, we realized fairly early on that we weren't going to be able to divine the correct mathematical model from a priori analysis of the human/camera system: It's just way too complex to model (or at least for us to model). We did consider an exponential relationship as a possibility, but it just didn't fit the data as well in the critical "transition region," where the results are photographically interesting. I suspect that someone with a lot more physiological and mathematical insight than we had could come up with an analysis proving that one model or another should be the best fit, but we just had to go with what produced the best fits over the region we were most interested in. Power law seemed to do that. (I always had this nagging feeling that someone who knew more math and had deeper insights than we did would be able to look at this and say "oh, of course; that would best be modeled by a two-term exponential series, and here's why..." I suspect there may be some polymath out there who could do just that. For our part, just trying a few common equations led us to settle on the power law relationship as the one that fit the data the best. A key realization, though, was that we didn't need to (and indeed *shouldn't try to*) model the behavior outside the regime that was photographically relevant. Once we decided to ignore the "chaos region" and came up with a good heuristic for identifying it, the fits tightened up very nicely, and repeatability improved noticeably as well.)