James Edward Gray II wrote:
> On Aug 16, 2007, at 2:35 PM, William James wrote:
>
> > This is the best I've come up with so far.  It should handle any CSV
> > record (i.e., fields may contain commas, double quotes, and newlines).
> >
> > class String
> >   def csv
> >     if include? '"'
> >       ary =
> >         "#{chomp},".scan( /\G"([^"]*(?:""[^"]*)*)",|\G([^,"]*),/ )
> >       raise "Bad csv record:\n#{self}"  if $' != ""
> >       ary.map{|a| a[1] || a[0].gsub(/""/,'"') }
> >     else
> >       ary = chomp.split( /,/, -1)
> >       ##   "".csv ought to be [""], not [], just as
> >       ##   ",".csv is ["",""].
> >       if [] == ary
> >         [""]
> >       else
> >         ary
> >       end
> >     end
> >   end
> > end
>
> You are pretty much rewriting FasterCSV here.  Why do that when we
> could just use it instead?


That is a dishonest comment.

What if someone had said to you when you released "FasterCSV":
"You are pretty much rewriting CSV here.  Why do that when we
could just use it instead?"

Parsing CSV isn't very difficult.  "FasterCSV" is too slow and far
too large.  People don't need to install it on their systems when a
few lines of code will do the job.
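
For example, with the String#csv method quoted above loaded, a record
containing embedded commas and escaped double quotes should come out
like this (an untested sketch, but it is what the regexp is written
to do):

  '"a,b","He said ""hi""",c'.csv
  #=> ["a,b", "He said \"hi\"", "c"]

And if anyone doubts the speed claim, a benchmark is only a few more
lines.  This is just a sketch: it assumes the FasterCSV gem is
installed and loads with require "faster_csv", and it prints timings
rather than proving anything by itself.

  require "benchmark"
  require "rubygems"
  require "faster_csv"   # assumption: gem installed under this name

  line = '"a,b","He said ""hi""",c'

  Benchmark.bm(10) do |bm|
    bm.report("String#csv") { 50_000.times { line.csv } }   # method above
    bm.report("FasterCSV")  { 50_000.times { FasterCSV.parse_line(line) } }
  end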

Why do you want people to be dependent on your slow, bloated
code?  Perhaps you think that if there is an alternative,
you won't be paid any more money.