At 02:33 PM 10/11/99 -0700, Jeremy Blackman wrote:
>As I said, it's just one of my personal quirks that I don't like running
>large-volume mailing lists as interpreted scripts. I watched someone run
>a Majordomo list with 1900 users that got 80 posts a day, and it flattened
>his machine... that was one of the reasons I decided to write Listar in C.
Let's think about this for a second. How does Majordomo work? The
delivery path is: the MTA gets the message and processes it through a
Perl script, resend. The Perl script runs for (let's be really generous
here) at most five or six seconds of CPU, and then hands the message back
to the MTA at a different alias. At that point, the Perl script is
*completely out of the process*. The MTA then expands the 1900-user list
and does all of the delivery.
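That two-hop path is the standard Majordomo alias arrangement. A sketch of
what the aliases file looks like (the list name and install paths here are
illustrative, not taken from any particular machine):

```
# Hypothetical /etc/aliases entries for a list called "turkey".
# Hop 1: the MTA pipes the incoming post into the resend Perl script.
turkey:          "|/usr/lib/majordomo/wrapper resend -l turkey turkey-outgoing"
# Hop 2: resend hands the approved message back to the MTA at this alias;
# the MTA, not Perl, expands the membership file and does the delivery.
turkey-outgoing: :include:/usr/lib/majordomo/lists/turkey
```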
glock /root]# time /usr/lib/majordomo/wrapper resend -l turkey
2.77user 0.08system 0:03.49elapsed 81%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (496major+384minor)pagefaults 0swaps
I actually tried this several times: resend, on the pitiful P-100 that I do
all of my Majordomo work from, pretty consistently runs in about the times
above -- under three seconds of CPU, on a small test list.
Would the time taken by resend be much larger on a list with 1600 users?
The only thing that would grow is the time taken to verify that the poster
is a member of the list. Frankly, that is just not that much of Majordomo's
admittedly bloated processing, though the loop is bad and complex. My
measurement shows that the total processing time (for a name that is not
found in the first list -- the worst case) is extended to about 6 seconds
on a 1600-member list. Generally the poster would be found early and it
would not take six seconds, but let's play the game with worst-case
numbers.
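The worst case is worst because the membership check is just a linear scan
of the subscriber list, so its cost grows with list size. A sketch of that
behavior (the addresses and the scan itself are invented for illustration,
not Majordomo's actual code):

```python
# Sketch: a resend-style membership check as a naive linear scan.
# All addresses here are made up.
subscribers = ["user%d@example.com" % i for i in range(1600)]

def is_member(addr, members):
    """Compare the poster's address against each subscriber in turn."""
    addr = addr.lower()
    return any(addr == m.lower() for m in members)

# Worst case: the address is not on the list, so all 1600 entries
# are examined before the check fails.
print(is_member("stranger@example.net", subscribers))  # False
# Typical case: a hit partway through stops the scan early.
print(is_member("user10@example.com", subscribers))    # True
```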
So, let's extend this: if Majordomo took 90*6 seconds of CPU to process
those messages (rounding the 80 posts a day up to 90), that was 9 minutes
out of the 1440 minutes in the day. Assuming the day was prime shift only
-- 8 hours -- that was 1.9% of the time. Flattened? Yes, probably, but not
by Majordomo. Let me guess: they were running sendmail, right? That is
where, oh, probably the other 98.1% of the overhead was, and the extended
times to deliver each message ensured that this poor user had lots of
copies of sendmail running, each working through an individual message. In
other words, the fact that the list manager was written in an interpreted
language had pretty much nothing to do with the flattening.
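The back-of-envelope arithmetic above can be checked directly (90 posts a
day and the 6-second worst case are the thread's own numbers):

```python
# Worst-case CPU for the list manager's Perl piece.
posts_per_day = 90            # 80 posts/day, rounded up generously
cpu_seconds = posts_per_day * 6
cpu_minutes = cpu_seconds / 60
print(cpu_minutes)            # 9.0 minutes

# As a share of an 8-hour (480-minute) prime shift:
share_percent = cpu_minutes * 100 / (8 * 60)
print(share_percent)          # 1.875 -- the ~1.9% figure above
```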
If you decided not to write Listar in an interpreted language to save
processing overhead, you can save no more than what the interpreted pieces
use -- which would seem to be about 2% of total prime-shift time in your
example.
Listar happens to fix the real bloat problem, Sendmail, by doing its own
delivery. Using Postfix with Majordomo fixes that as well, by using a
better delivery methodology. Frankly, I'd rather only have one delivery
agent on my machine.
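The real load is the delivery fan-out, not the list manager. With the
numbers from this thread:

```python
# 80 posts a day to 1900 subscribers means the MTA must attempt this
# many individual deliveries per day -- and a per-message sendmail
# that forks for each one is what buries a small machine.
posts, members = 80, 1900
deliveries_per_day = posts * members
print(deliveries_per_day)     # 152000
```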
But my point is that the machine was flattened by C code, not by Perl
scripts. You can write bad code in many languages. You can write good
code in many languages. And, of course, the excessive use of objects makes
code written in any language bloated and bad. :-) In this case, the use of
interpreted languages had nothing to do with the "flattening" of the machine.
Last time I took a look at delivery statistics, Postfix had delivered
110,000 messages in a week, from that same P-100. And rc5des still gets
more than 90% of the CPU.
We will fight for bovine freedom, and hold our large heads high.
We will run free, with the buffalo, or die!
        -- Dana Lyons, "Cows With Guns"
Nick Simicich  mailto:njs @ com, or (last choice)
http://scifi.squawk.com/njs.html -- Stop by and Light Up The World!