Bernie Cosell <bernie @
> The last two are OK, but I'm not convinced about the first two: in
> 'normal' delivery, there's just *one* copy of the message and it gets
> routed all over hell and gone... is the CPU to make and manage all those
> extra copies trivial?
Basically, yes. :) Compared to the amount of CPU that it takes to
analyze bounces, figure out what address bounced, etc., it's pretty
trivial. And that analysis doesn't even always work.
> even if trivial, how does it end up being _reduced_ utilization.
Because your bounce handling suddenly becomes trivial, and as a result
your mailing lists get cleaned of bad addresses *much* faster and more
thoroughly than any process that requires human intervention (as bounce
handling without VERP does with depressing frequency).
> And doesn't there have to be CPU activity and server activity behind
> handling those extra 40% of messages? And -one- down server will now
> not result in _one_ message in your queue, but bunches (implies more
On the other hand, those can be scheduled and retried with more
flexibility, which may even out your load (good in general). This could
go either way.
> And similarly for 'server efficiency' --- how does making 600 SMTP
> connections to mail.aol.com instead of one result in 'efficiency'?
VERP doesn't have to be implemented that way; you can open a single
connection and send all the messages in serial. You can even use
pipelining if the remote server supports it, which would regain part of
the message transmission delay.
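As a sketch of that single-connection variant (addresses and names made up for illustration): the connection setup and teardown happen once, and only the per-message transaction (MAIL FROM with the per-recipient VERP sender, RCPT TO, DATA) repeats for each recipient at that server:

```python
def smtp_transcript(recipients, list_domain="lists.example.com"):
    """Return the client command sequence for delivering one message to
    several recipients over a single SMTP connection, with one VERP
    transaction (distinct envelope sender) per recipient."""
    cmds = ["EHLO " + list_domain]      # connection established once
    for rcpt in recipients:
        local, _, host = rcpt.partition("@")
        sender = f"list-bounces+{local}={host}@{list_domain}"
        cmds += [f"MAIL FROM:<{sender}>",
                 f"RCPT TO:<{rcpt}>",
                 "DATA",
                 "."]                   # "." ends the message body
    cmds.append("QUIT")                 # single teardown as well
    return cmds
```

With PIPELINING (RFC 2920) the client can batch several of these commands before waiting for replies, which is the "regain part of the transmission delay" point above.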
> That is, it strikes me as the difference, in usenet terms, between
> multiple-newsgroup-posting and crossposting,
Well, if the SMTP protocol supported return paths that varied by
recipient, one wouldn't have to do it that way. The difference between
this and Usenet multiposting is that one actually gains a very significant
feature from varying return paths, whereas there's really nothing gained
from Usenet multiposting.
Russ Allbery (rra @