Here's some code I wrote for fun.  I'd be interested to hear
whether it actually IS faster than stat-ing 3300 files ...

Of course, I'd also love any comments on how to make it
cleaner; I find I tend to write very literal code that
can often be made into better Ruby!!
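
For reference, the stat-based version I have in mind is roughly the
following (just a sketch, with the cutoff hard-coded the same way as
in the script below; File.mtime does one stat per file):

cutoff = Time.local(2001, 12, 1)   # December 1st, 2001
newer  = Dir["*"].find_all { |f| File.mtime(f) > cutoff }

puts "Newer files are ..."
newer.each { |f| puts "   #{f}" }

Anyway, here's the ls-based version: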


def parse_ls_line(text)

    # Array for conversion of month names to numbers ...

    months = %w(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec)

    # Pull out the timestamp and the file name ...

    time  = text[43, 24]
    name  = text[68..-1].chomp   # drop the trailing newline

    # Split the timestamp into its components ...

    month = months.index(time[4, 3]) + 1
    day   = time[8, 2].to_i
    hour  = time[11, 2].to_i
    min   = time[14, 2].to_i
    sec   = time[17, 2].to_i
    year  = time[20, 4].to_i

    [year, month, day, hour, min, sec, name]

end
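
# For reference, the hard-coded offsets above assume an ls layout in
# which a 24-character timestamp like "Sat Dec 01 10:20:30 2001"
# starts at column 43 and the file name starts at column 68.  For a
# line shaped that way, parse_ls_line would hand back
#
#   [2001, 12, 1, 10, 20, 30, "notes.txt"]
#
# (Some versions of ls format --full-time differently, e.g. with an
# ISO-style date, so the offsets would need adjusting there.)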

def compare_times(t1, t2)
    comparison = 0

    # We'll only compare up to the number of elements in t2,
    # so we can use this to compare both dates and times, by
    # passing different sized arrays ...

    t2.each_index do |i|
        comparison = t1[i] - t2[i]

        if comparison != 0
            break
        end
    end

    comparison
end
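
# Two made-up calls, just to illustrate the different-sized-array
# trick (only the sign of the result matters to the caller):
#
#   compare_times([2001, 12,  5, 10,  0,  0, "foo"], [2001, 12, 1])   # =>  4  (newer)
#   compare_times([2001, 11, 30, 23, 59, 59, "bar"], [2001, 12, 1])   # => -1  (older)
#
# Passing a full six-element t2 would compare right down to the seconds.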

# Grab the output from ls ...

ls = IO.popen("ls -lt --full-time") { |pipe| pipe.readlines }

# Get rid of the total file count line ...

ls.delete_at(0)

# Convert the text times to something we can
# compare more easily ...

times = ls.collect { |line| parse_ls_line line }

# Search through the array for the last file that
# is newer than a specific date/time ...

cutoff = [2001, 12, 1] # December 1st, 2001

# Default to the last index so that we keep everything if every
# file turns out to be newer than the cutoff ...

oldest = times.length - 1

times.each_index do |i|
    if compare_times(times[i], cutoff) < 0
        oldest = i - 1
        break
    end
end

#puts "Oldest acceptable file is #{times[oldest][6]}"

puts "Newer files are ...\n"

(0 .. oldest).each do |i|
    puts "   #{times[i][6]}"
end