On Sun, 07 Jul 2002 23:57:58 -0700, Chuq Von Rospach <chuqui @ ...> wrote:
> On 7/7/02 10:46 PM, "J C Lawrence" <claw @
>> Without full biometric and willing consent verification yada yada
>> across the transmission you can't guarantee full veracity, and even
>> then there are cheesy little holes along with the big one that really
>> can't ever be verified: intent.
> The big one is this. It's easy to deal with e-mail from people you
> know.
> The tough part is this -- how do you deal with email of people you
> don't know? If you don't know them, all the verified ID in the world
> means nothing. I can show you my driver's license, my passport, and
> three credit cards, and none of that does a thing toward solving the
> question of "is this guy going to hit me over the head and take my
> wallet?"
Precisely. That's the question of intent I mention.
> Nick is running up the strawman that if we can't do everything, all
> the time, then don't do anything. That obviously fails, but it's a
> wonderful rhetoric.
Which is most of my points in previous posts:

  - I believe I have to do something.

  - I know it won't be perfect.

  - I suspect I can make it "good enough" that I mostly don't care any
    more, and even better, can make it good enough that most attempts
    to work around it can be trivially and automatically detected.

  - I believe that if I can get even close to that "good enough" point:

      - I can claim "due diligence" with a clear conscience (even tho
        that may not map to lack of legal liability).

      - I can do better than the standard 90/10 ratio. I wouldn't be
        surprised if I could even get it close to a 99.xxx% solution.

  - I'll be prepared to settle for a "good enough" approximation.
    Abortive SPAM handling has already inured us to accepting "good
    enough" solutions.
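To put the 90/10-vs-99.xxx% point in concrete terms, here's a toy
back-of-the-envelope sketch (the volume figure is purely illustrative,
not a number from this thread):

```python
# Toy illustration: how much junk still leaks through at various
# catch rates.  daily_attempts is a made-up number for illustration.
daily_attempts = 1000  # hypothetical forged/abusive posts per day

for catch_rate in (0.90, 0.99, 0.999):
    leaked = daily_attempts * (1 - catch_rate)
    print(f"{catch_rate:.1%} caught -> about {leaked:.0f} slip through per day")
```

The gap between a 90% and a 99.9% solution is the gap between a daily
flood and an occasional straggler, which is why "good enough" can
genuinely mean "I mostly don't care any more."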
> My counter-argument is that we have a responsibility to do what we can
> safely and reasonably, help users understand the risks where we can't
> provide that safe harbor, but at the same time, we have to be very
> careful about what things we choose to put into our purview of
Yeah, especially if the lawyers start to get involved.
> But when you start talking about HTML and web bug issues, it gets a
> lot less clearcut. YOU may feel strongly about privacy issues, but
> does running a mail list give you the right to force your privacy
> views on your users? With viruses, there's a clear "protection of the
> commons" need here. You can't have someone with mumps running around
> the pregnant women. But that is far from clear on privacy. If the user
> doesn't care about web bugs, what gives you the right to force your
> view of that on them? Where does that privacy issue become one of the
> commons, where failing to protect users causes damage to that commons?
Oooooh, ewwww. Good point. Many of those answers come down to how you
subjectively define "the commons" in terms of social contracts and
their relevance to personal privacy. While there has been a fair bit
of dialog in this sort of area in the West, it hasn't been common, or,
per the little research I've done (mostly into the legal dialog),
terribly thorough or insightful (eg camera monitoring of public
places, phone taps).
> I just don't believe it's there. I do believe list admins can
> evangelize their views, but where virus fighting is an attempt to
> mitigate damage caused to the commons we all use, this privacy stuff
> is instead an attempt to force a personal agenda on the users of the
> list, where you effectively are telling the users what they have to
> believe -- and that coercion doesn't come with any justification of
> common need like the virus hacks do.
That's actually a political statement that presupposes a comparatively
deconstructionist definition of the commons and its principles.
Depending on your political agenda and views on social contracts and the
rest, it's not difficult to argue that privacy definitions and strictures
are integral to the base definition and operation of the commons. Start
for instance with a David Brin-esque model of ubiquitous public
surveillance as a comparative prime assumption of the "public commons"
and it's fairly easy to work backward from the negative reaction to that
to conclude that more restrictive privacy expectations and social
contracts are implicit in our current constructions of the commons.
> So in one case you're taking action for common good and protecting
> users who may be incapable of that action themselves. But in another,
> it's effectively saying "you have to do it my way", but without the
> damage to the commons that comes from inaction. One is the health
> department locking up people with active TB so others don't get
> it. The other is Greenpeace blockading an Esso station because they
> feel you shouldn't be buying gas there.
A public webcam viewing a socially/politically sensitive location (eg
the door to a rape-victim recovery center or an abortion clinic).
Do the people going to a rape victim support group have a reasonable
right at a social level (not legal at this point) to privacy and
protection from monitoring and public identification in their partaking
of those services?
Assuming a "Yes" answer: as you move to less socially sensitive areas
than rape victim support, at what point does such monitoring become
acceptable?
Basic point: There is a scale. There are no clearly defined values on
the scale outside of the end points, and even those are debatable.
> Do you, as list admin, have the right to act as Greenpeace? I don't
> believe so.
I'm not clear that it is a question of rights, mostly due to the old
free speech and printing press arguments.
> JC and Nick, I'm sure, disagree. And wombat is probably ready to kill
> me.... (grin)
Think of it as dialectic tension.
J C Lawrence
---------(*) Satan, oscillate my metallic sonatas.
nu He lived as a devil, eh?
http://www.kanga.nu/~claw/ Evil is a name of a foeman, as I live.