I've noticed what appears to be an inconsistency in the behaviour of
Net::Telnet's waitfor() method.

If a timeout occurs while waiting for a match, a TimeoutError is raised (and
any data received so far is lost).

However, if the remote end disconnects while waiting for a match, no error
is generated, and the data received so far is returned. It explicitly
rescues EOFError to do this.

This makes waitfor() awkward to use, because if you do

    @telnet.waitfor(/prompt/)

and it returns, it could mean one of two things: either the string you were
waiting for was matched, or it wasn't matched and the far end disconnected.
The first is what you expect, and the second is likely to be an error
condition.

So I find that every waitfor call has to be wrapped, e.g.

    res = @telnet.waitfor(/prompt/)
    unless /prompt/ =~ res
      raise EOFError  # or handle this situation some other way
    end

Now, perhaps the current behaviour is useful in some situations - you telnet
to a device, issue a command, and it returns some data and disconnects. But
I'd like to be able to say explicitly this is what should happen, e.g.

    @telnet.waitfor(nil)
or
    @telnet.waitfor(:eof)

However, to maintain backwards compatibility, this behaviour would have to
be enabled by an option to Net::Telnet.new.
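To make the proposal concrete, here's one way the opt-in behaviour could be sketched as a wrapper around an existing session (the class name is mine, and `/(?!)/` is just a regexp that can never match, so under the current semantics waitfor reads until EOF and returns everything received):

```ruby
# Hypothetical wrapper: waitfor(:eof) explicitly means "read until the
# remote end disconnects"; any other pattern must actually match, or
# EOFError is raised instead of silently returning partial data.
# `session` is anything with Net::Telnet's current waitfor semantics.
class StrictSession
  def initialize(session)
    @session = session
  end

  def waitfor(pattern)
    if pattern == :eof
      @session.waitfor(/(?!)/)  # never matches, so reads until EOF
    else
      res = @session.waitfor(pattern)
      unless pattern =~ res
        raise EOFError, "disconnected before #{pattern.inspect} matched"
      end
      res
    end
  end
end
```

That keeps the "issue a command, collect output until disconnect" use case available, but only when the caller asks for it.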

Does anyone have any thoughts on this?

Thanks,

Brian.