I think the following may be a badly formed question, but please
bear with me....

I have a large application (actually a Rails app) which is behaving
oddly: I can change items in the database twice, but changing them
four times fails.  Using all the conventional approaches I have
learned for debugging (printing things out, logging to files, ...),
it is taking me an age to track the problem down.  I have no good
reason to assert that the database or Rails is at fault; it is more
likely to be my code, but the interactions with the other code make
debugging more difficult.
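For context, the sort of ad-hoc tracing I mean is along these lines
(a minimal sketch; the names and values are illustrative, not from my
actual application):

```ruby
require 'logger'

# The kind of ad-hoc tracing I currently rely on: a shared file
# logger plus a helper that records a labelled value and returns it,
# so it can be wrapped around the middle of a suspect expression.
DEBUG_LOG = Logger.new('debug.log')

def trace(label, value)
  DEBUG_LOG.debug("#{label}: #{value.inspect}")
  value
end

# Usage: wrap an expression without otherwise disturbing the code.
total = trace('subtotal', 10 + 5) * 2
```

This works after a fashion, but it scales poorly once many pieces of
other people's code are involved, which is what prompts the question
below.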

So, my question is this: since I started working in computing there
have been major strides in software development, such as
object-oriented programming becoming mainstream, the development of
concepts like refactoring and of practices such as the Agile
methodologies, not to mention advances in networking and databases.
What are the parallel developments in debugging large systems?  By
large, I mean large enough to defeat a mental model of the dynamic
behaviour of the running process, and involving considerable
quantities of other people's code.

The experience I have gained seems insufficient to meet demands that
cannot be unique to my situation, so if others are meeting such
demands, better approaches must already exist.

Given the prevalence of metaprogramming in Ruby, I'll phrase this
another way, as a meta-question: what are good questions to ask in
order to improve one's ability to debug large systems?

        Thank you,
        Hugh