On 12/2/06, William James <w_a_x_man / yahoo.com> wrote:

> It's easy to handle all cases since CSV is a simple format;
> no pompous prolixity is needed:
>
> puts ['x',' y ','He said, "No!"'].map{|x| x=x.to_s
>   x =~ /["\n]|^\s|\s$/ ? '"' + x.gsub(/"/,'""') + '"' : x }.join(',')
> x," y ","He said, ""No!"""
>
> If that won't handle 100k rows, then fasterCsv probably won't either.

It is indeed faster by a long shot, but it doesn't conform to the CSV
spec (see JEG2's response).
Also, even in these trivial examples, I still think the FasterCSV
version is the prettier code, even though I'm using what is probably
the slowest way the library offers to generate rows...
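
To make the spec point concrete, the edge case that bites is a field
with an embedded comma but no quotes, newlines, or leading/trailing
whitespace: it never trips the one-liner's regex, so it goes out
unquoted.  A quick sketch (assumes the fastercsv gem is installed):

require "rubygems"
require "fastercsv"

row = ["a,b", "c"]   # "a,b" is a single field with an embedded comma

puts row.map{|x| x=x.to_s
  x =~ /["\n]|^\s|\s$/ ? '"' + x.gsub(/"/,'""') + '"' : x }.join(',')
# => a,b,c     -- now indistinguishable from three fields

puts row.to_csv
# => "a,b",c   -- FasterCSV quotes the field, per RFC 4180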

I'd be interested in seeing a pure Ruby CSV implementation that
conforms to the spec and outperforms FasterCSV, though I think James
has it pretty finely tuned, given the edge cases he considers and the
strictness of the library.
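
As an aside on the "slowest form" comment: I believe Array#to_csv ends
up calling FasterCSV.generate_line for every single row, so for bulk
output a single FasterCSV.generate block should have less per-row
overhead -- just a sketch, I haven't timed it here:

require "rubygems"
require "fastercsv"

rows = Array.new(100_000) { %w[some row data] }

# one writer for the whole batch instead of a fresh call per row
output = FasterCSV.generate do |csv|
  rows.each { |row| csv << row }
end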

seltzer:~ sandal$ time ruby -rubygems fcsv.rb

real    0m11.111s
user    0m10.970s
sys     0m0.078s
seltzer:~ sandal$ cat fcsv.rb
require "fastercsv"
a = %w[some row data]
100000.times { a.to_csv }
seltzer:~ sandal$ time ruby william.rb

real    0m0.525s
user    0m0.515s
sys     0m0.007s
seltzer:~ sandal$ cat william.rb

a = %w[some row data]

100000.times {
 a.map{|x| x=x.to_s
 x =~ /["\n]|^\s|\s$/ ? '"' + x.gsub(/"/,'""') + '"' : x }.join(',')
}
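
For anyone who wants to rerun the comparison in a single process,
something along these lines with the stdlib Benchmark module should do
it (quick_join is just my name for William's block):

require "rubygems"
require "benchmark"
require "fastercsv"

row = %w[some row data]

quick_join = lambda do |fields|
  fields.map{|x| x=x.to_s
    x =~ /["\n]|^\s|\s$/ ? '"' + x.gsub(/"/,'""') + '"' : x }.join(',')
end

Benchmark.bmbm do |bench|
  bench.report("Array#to_csv") { 100_000.times { row.to_csv } }
  bench.report("one-liner")    { 100_000.times { quick_join.call(row) } }
end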