Commit 81de916f authored by Linus Torvalds

tty_buffer: get rid of 'seen_tail' logic in flush_to_ldisc

The flush_to_ldisc() work entry has special logic to notice when it has
seen the original tail of the data queue, and it avoids continuing the
flush if it sees that _original_ tail rather than the current tail.
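
The check being removed amounts to, roughly, the following (a simplified
sketch of the old flush_to_ldisc() loop, not the verbatim code):

	struct tty_buffer *head, *tail = tty->buf.tail;	/* snapshot on entry */
	int seen_tail = 0;

	while ((head = tty->buf.head) != NULL) {
		if (head == tail)
			seen_tail = 1;	/* reached the tail we saw on entry */

		/* ... feed head->read..head->commit to the line discipline ... */

		if (!tty->receive_room || seen_tail)
			break;	/* stop even if more buffers were queued since */
	}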

This logic can trigger in case somebody is constantly adding new data to
the tty while the flushing is active - and the intent is to avoid
excessive CPU usage while flushing the tty, especially as we used to do
this from a softirq context which made it non-preemptible.

However, since we no longer re-arm the work-queue from within itself
(because that causes other trouble: see commit a5660b41 "tty: fix
endless work loop when the buffer fills up"), this just leads to
possible hung ttys (most easily seen on SMP and with a test program
that floods a pty with data - nobody seems to have reported this for any
real-life situation yet).
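
Such a flood test can be as simple as the following (a hypothetical
userspace sketch, not the reporter's actual program; build with
-lpthread). One thread writes into the pty master as fast as it can
while the main thread drains the slave; on an affected kernel the
read() side can stall indefinitely:

	/* Hypothetical pty flooder: the writer thread hammers the master
	 * while the main thread drains the slave. */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <termios.h>
	#include <unistd.h>

	static int master;

	static void *flood(void *arg)
	{
		char buf[4096];

		memset(buf, 'x', sizeof(buf));
		for (;;)
			if (write(master, buf, sizeof(buf)) < 0)
				break;
		return NULL;
	}

	int main(void)
	{
		struct termios tio;
		char buf[4096];
		pthread_t t;
		int slave;

		master = posix_openpt(O_RDWR | O_NOCTTY);
		if (master < 0 || grantpt(master) || unlockpt(master)) {
			perror("pty setup");
			return 1;
		}
		slave = open(ptsname(master), O_RDWR | O_NOCTTY);
		if (slave < 0) {
			perror("open slave");
			return 1;
		}
		tcgetattr(slave, &tio);
		cfmakeraw(&tio);	/* raw mode: no echo, no line buffering */
		tcsetattr(slave, TCSANOW, &tio);

		pthread_create(&t, NULL, flood, NULL);
		for (;;)	/* a hung tty shows up as this read() stalling */
			if (read(slave, buf, sizeof(buf)) < 0)
				break;
		return 0;
	}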

And since the work isn't run from timers and softirqs any more, it's
doubtful whether the CPU usage issue is really relevant any more.
So just remove the logic entirely, and see if anybody ever notices.

Alternatively, we might want to re-introduce the "re-arm the work" for
just this case, but then we'd have to re-introduce the delayed work
model or some explicit timer, which really doesn't seem worth it for
this.
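
That re-arm would amount to something like this at the end of the work
function (a hypothetical sketch, assuming tty->buf.work were turned back
into a delayed_work; more_to_flush() stands in for whatever "data still
queued" test would be used):

	if (more_to_flush(tty))		/* hypothetical helper */
		schedule_delayed_work(&tty->buf.work, 1);	/* retry a jiffy later */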

Reported-and-tested-by: Guillaume Chazarain <guichaz@gmail.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Felipe Balbi <balbi@ti.com>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent cb0a02ec