On 13 Mar 2000, at 4:05, Tim Pierce wrote:
Pardon the dumb question [I don't even know what the acronym 'VERP'
stands for, although I understood what you were talking about so perhaps
it didn't matter]:
> Because VERP delivery requires delivering a separate message body
> for each recipient, ...
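[If I'm guessing right about the mechanism, the reason each recipient needs a separate body is that each copy goes out with a unique envelope sender that encodes the recipient, so a bounce can be matched to a failing address mechanically. A minimal sketch of that qmail-style encoding, with hypothetical list and recipient names:]

```python
def verp_sender(list_name: str, list_domain: str, recipient: str) -> str:
    """Build a qmail-style VERP envelope sender for one recipient.

    The recipient's '@' is rewritten to '=' and embedded in the local
    part, so a bounce returned to this address identifies who failed.
    """
    local, _, domain = recipient.partition("@")
    return f"{list_name}-bounces-{local}={domain}@{list_domain}"

# Each recipient gets a distinct MAIL FROM, hence a distinct copy:
print(verp_sender("mylist", "lists.example.org", "jdoe@aol.com"))
# mylist-bounces-jdoe=aol.com@lists.example.org
```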
> So I calculated this percentage for each of the thousand lists, then
> added them all together and took the average percentage. The result
> was an average volume increase of 40%.
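[As I read that, the 40% is an unweighted average of each list's own volume increase, not a total-bytes comparison. With made-up per-list numbers, the calculation would look like:]

```python
# Hypothetical per-list volume increases: the fraction of extra bytes
# each list would send under VERP versus one shared message body.
increases = [0.10, 0.55, 0.35, 0.60]

# Unweighted average across lists, as the quoted text describes:
avg = sum(increases) / len(increases)
print(f"{avg:.0%}")  # 40%
```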
> I dream about the benefits that we could get from VERP delivery --
> reduced CPU utilization, increased server efficiency, less listowner
> confusion, better word-of-mouth, and so on ....
The last two are OK, but I'm not convinced about the first two: in
'normal' delivery, there's just *one* copy of the message and it gets
routed all over hell and gone... is the CPU to make and manage all those
extra copies trivial? Even if trivial, how does it end up being
_reduced_ utilization? And doesn't there have to be CPU activity and
server activity behind handling those extra 40% of messages? And -one-
down server will now result not in _one_ message in your queue, but
bunches (implying more overhead/load). And similarly for 'server
efficiency' --- how does making 600 SMTP connections to mail.aol.com
instead of one result in 'efficiency'? All of the protocol, ident,
lookups, etc., have to be done iteratively, once per copy, instead
of just once.
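[To put that worry in concrete terms: an MTA doing 'normal' delivery can batch all recipients at one destination host into a single SMTP transaction with multiple RCPT TOs, while per-recipient bodies force a transaction apiece. A rough count, with made-up recipient addresses:]

```python
from collections import defaultdict

recipients = ["a@aol.com", "b@aol.com", "c@aol.com", "d@example.net"]

# 'Normal' delivery: one SMTP transaction per destination domain,
# listing every local recipient with its own RCPT TO command.
by_domain = defaultdict(list)
for addr in recipients:
    by_domain[addr.split("@")[1]].append(addr)
normal_transactions = len(by_domain)   # 2: aol.com, example.net

# Per-recipient delivery: a distinct body (and MAIL FROM) for each
# recipient means a separate SMTP transaction for each one.
verp_transactions = len(recipients)    # 4

print(normal_transactions, verp_transactions)  # 2 4
```

[Strictly, the per-recipient copies to one host could still share a TCP connection as back-to-back transactions; the point is the per-copy protocol overhead either way.]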
That is, it strikes me as the difference, in usenet terms, between
multiple-newsgroup-posting and crossposting, and few folks have good
things to say about the 'efficiency' of multiple-posts. So unless I'm
really misunderstanding what VERP is [which is possible, since I'm
kind-of guessing], it may well be better for the -users-, but it isn't
clear right off that it'll be a win for the _server_ (even beyond the
extra net bandwidth eaten by the extra copies).
Bernie Cosell Fantasy Farm Fibers
com Pearisburg, VA
--> Too many people, too few sheep <--