Hmmm... not a general solution, but depending on the specific
requirements there may be a higher-performance method.

IFF:
- records are fixed size
- record order doesn't need to be preserved

Swap each record to be deleted with a valid record from the end of
the file (you don't even need a full swap, just overwrite the bad
record) - repeat until all records to be deleted sit at the end of
the file, then truncate the file just before them.

With proper seeking, this needs only minimal reading and writing.
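A minimal sketch of that idea in Ruby. The function name, the index-based record addressing, and the record size parameter are all assumptions for illustration, not an established API:

```ruby
# Delete the records at the given indices from a file of fixed-size
# records by overwriting each doomed record with a surviving record
# taken from the end of the file, then truncating the tail.
# Note: the order of the surviving records is NOT preserved.
def delete_records(path, indices, record_size)
  File.open(path, 'r+b') do |f|
    count  = f.size / record_size
    doomed = indices.uniq.sort
    last   = count - 1
    doomed.each do |i|
      # Skip over tail records that are themselves marked for deletion.
      last -= 1 while doomed.include?(last) && last > i
      break if last <= i  # all remaining doomed records are already at the tail

      # Overwrite the doomed record with the last surviving record.
      f.seek(last * record_size)
      rec = f.read(record_size)
      f.seek(i * record_size)
      f.write(rec)
      last -= 1
    end
    # Chop off the now-redundant tail in one go.
    f.truncate((count - doomed.size) * record_size)
  end
end
```

Each deletion costs one read and one write of a single record plus a final truncate, so the work is proportional to the number of deletions, not the file size.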

(If all deletions were block-aligned, I'd start looking into direct
filesystem manipulation for pure performance, but I don't know how
that would work -- and I don't think it would have nice effects on
fragmentation.)

-Charlie

On 12/18/05, Thomas Dutch <rubyforum / ikwisthet.net> wrote:
> Harpo wrote:
> > Thomas Dutch wrote:
> >
> >> Hello,
> >>
> >> I'm relatively new to Ruby and I have a question:
> >>
> >> Is it possible to remove one or more lines from a file, without
> >> reading the whole file and writing it all back again? This because
> >> I'll have to do this with files of 1 gigabyte and larger... Is there
> >> a high-performance solution for this?
> >>
> >> Thank you!
> >
> > Said like this, I don't think it is possible, as it is not related to
> > the language but to the file structure.
> > It depends on the programs which read the file: can they be fixed to
> > accept lines which begin with something that says 'skip me', such as a
> > '#'?
>
> No, not really... It's a large amount of text, all one part after
> another. Another file contains an index of where each part of the file
> starts and where it ends. The code should remove a part that starts at
> the position specified in the second file and ends at the other
> position specified.
>
> --
> Posted via http://www.ruby-forum.com/.
>
>