Hugh Sasse wrote:
> I think the following may be a badly formed question, but if you'd
> bear with me....
> 
> I have a large application (which is actually a Rails app) which is
> behaving oddly (I can change items in a DB twice, but 4 times
> fails), and using all the conventional approaches I have learned for
> debugging (printing things out, logging to files, ...) it is taking
> me an age to track the problem down.  I have no good reason to assert
> that the database or Rails is at fault, it is more likely to be my
> code, but the interactions with the other code make debugging more
> difficult.

A couple of questions:

1. How large is "large"? Is there some kind of "code size metric" that
the Ruby community uses, and a tool or tools to measure it?

2. You say "your code". How much of this application have you personally
written, how much is "Rails and Ruby and the rest of the
infrastructure", and how much is "the rest of it"?
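On the size question: Rails itself ships a `rake stats` task that reports rough line counts, and a crude first-pass measure is easy to script by hand. Here's a sketch (the glob pattern and the idea of counting non-blank, non-comment lines are my own assumptions, not any standard metric):

```ruby
# Crude "code size" measure: count non-blank, non-comment lines
# matching a glob pattern -- a sketch, not a real complexity metric.
def loc(pattern)
  Dir.glob(pattern).sum do |file|
    File.readlines(file).count { |line| line !~ /\A\s*(?:#|\z)/ }
  end
end

# Run from the Rails application root:
puts loc("{app,lib}/**/*.rb")
```

For anything beyond raw line counts, tools like flog (which scores method complexity) are worth a look.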

> 
> So, my question is this: Given that since I started working in
> computing there have been major strides in software development,
> such as Object Oriented programming becoming mainstream, development
> of concepts like refactoring, development of practices such as the
> Agile methodologies, not to mention developments in networking and
> databases, what are the parallel developments in debugging large
> systems?  By large, I mean sufficiently large to cause problems in
> the mental modelling of the dynamic nature of the process, and
> involving considerable quantities of other people's code.

The "traditional CASE tools" -- IDEs, software configuration and project
management tool sets, the waterfall model, the CMM levels, and of course
prayer, threats, outsourcing and pizza. :)

> The experience I have gained seems to be insufficient to meet the
> kinds of demands that cannot be unique to my situation, so there
> must be better approaches out there already if others are meeting
> such demands.

Again, without knowing either the scope of your specific project or the
size of the team that built/is building it, it's difficult to answer.
There are bazillions of failed silver bullets to choose from. My
personal opinion is that you're being too hard on yourself and that
anybody who claims to have a tool or a process or a programming language
that is *significantly* better than today's common practices is either
deceived, deceiving or both.

> Given the prevalence of metaprogramming in Ruby, I'll phrase this
> another way, as a meta-question: what are good questions to ask to
> progress along the road of improving one's ability to debug large
> systems?

I think first of all, you have to *want* to debug large chunks of other
people's code. It's an acquired taste. I acquired it at one point in my
career but found it unsatisfying. If you *don't* want to debug large
chunks of other people's code, there are ways you can structure your
team and processes to minimize how much of it you have to do.

And I would caution you that, although testing and test-driven
development are certainly important and worthwhile, testing can only
show the *presence* of defects, not the absence of defects.
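To make that point concrete with a toy (and entirely hypothetical) example: a green test pins down the inputs it exercises and says nothing about the ones it doesn't.

```ruby
# Hypothetical illustration: the assertion below passes,
# yet the method is still wrong for century years (1900, 2100, ...).
def leap_year?(year)
  year % 4 == 0
end

raise "test failed" unless leap_year?(2004)  # green -- defect still present
```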

At the point in my career when I was at the peak of my ability to debug
other people's code, I came up with a simple rule. You'll probably need
to adjust the time scales to suit your situation, but in my case, the
rule was: If I don't find the problem in one day, it's not my mistake,
but a mistake in someone else's work. And if it takes more than a week,
it's not a software problem, it's a hardware problem. :)

Good luck ... may the source be with you. :)