SED data model v0.92
Gilles DUVERT
Gilles.Duvert at obs.ujf-grenoble.fr
Mon Nov 22 07:38:59 PST 2004
Ed Shaya wrote:
>
>> 1) Ed Shaya wrote:
>>
>>> Hopefully, it is rare that one only has upper limit info. The
>>> observers certainly should provide the measured value even if it is
>>> below the noise. There is non-zero information in that value.
>>
>> Well, although having only upper limits may seem pitiful, a number of
>> sound scientific results have been drawn in the past from
>> undetectability results. Upper limits are really a result for SEDs,
>> and used as such by astronomers. Since they are used, they are
>> published and they will go in the VO...
>
> Let me rephrase my first sentence:
> Hopefully, it is rare that one only has the upper limit but no info on
> the actual readout or measured value of the upper limit point.
> So, we agree that upper limits are important scientifically.
Yes for the last sentence "we agree that upper limits are important
scientifically."
>> I just wanted to comment that a value *measured* cannot be *below*
>> the noise. Only the noise level is meaningful in this case.
>
> No! Even a negative measurement on a property that is non-negative is
> scientifically meaningful. See below.
>
Well, but... (see below).
>> (the text notes, and this is customary indeed, that authors usually
>> "choose to render measurements as upper limits if the flux value is
>> less than some multiple (e.g. 3) of the lower error" (note:
>> shouldn't it be the *upper* error?)).
>
> You want the lower error bar to reach from the upper limit to the x-axis.
>
In view of your objection, I think I would prefer upper limits to have
a different status than measurements. Besides, I'm ill at ease thinking
of a measurement in geometrical terms (or rather in "plot+axis" terms);
this looks too closely tied to our day-to-day external representation
of data (a mathematician would perhaps read "measure" as a "norm" in
topology).
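As an aside, the convention quoted above ("render as an upper limit if
the flux is below some multiple, e.g. 3, of the error") can be sketched
as follows; the helper name and the tuple encoding are my own
hypothetical choices, not part of the SED data model:

```python
def render_flux(flux, err, k=3.0):
    """Customary convention: quote a k-sigma upper limit when the
    measured flux is below k times its error, else the value itself.
    (Hypothetical helper, not part of the SED data model.)"""
    if flux < k * err:
        return ("upper_limit", k * err)   # only the noise level is quoted
    return ("measurement", flux, err)     # normal value+error scheme

print(render_flux(0.9, 1.0))  # a 0.9-sigma "detection": 3-sigma upper limit
print(render_flux(5.0, 1.0))  # a clear detection: value+error
```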
>> The risk would be, if any *measured* value is set, that it is taken
>> at face value, when only the noise level makes sense. Could you use
>> some kind of blanking value for the measured value in this case? (or
>> is there a general concept of upper limit that would go in the
>> Quantity::Accuracy data model?)
>>
> All I need say here is that if 4 independent experiments come up
> with 0.9 sigma detections of some measurement, it would be
> awful if they each published only the upper limits of the value.
Here you suppose that the value "detected at 0.9 sigma" exists, or,
rather, that I can pinpoint it in the graph and put nice big error bars
on it for each of the 4 measurements. What I say is that this value does
not exist until it is measured, and it is measured only when it is not
an upper limit.
Of the 4 groups coming up with this "0.9 sigma detection", the last 3
in time are morons: they were not able to devise an experiment with
less uncertainty than the 1st, pioneering group. Shall we continue to
support them financially? ;^). Besides, since the experiments are not
the same, and you want to get a measurement at the end by averaging
values+errors, you have to prove that the errors and "measures" of
those different experimental setups can be averaged. Unless the 4
experiments are just 4 realizations of the 1st measurement, and, bingo,
upper limits are still upper limits in this case...
Fortunately, people bold enough to claim (and they are numerous!) that
their 0.9 sigma measurement (really an upper limit) _is_ a measurement
can use the normal value+error scheme.
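Incidentally, the averaging Ed has in mind (assuming, and that is
exactly the point in dispute, that the 4 experiments measure the same
quantity with independent Gaussian errors) would be the standard
inverse-variance weighted mean: four independent 0.9-sigma values
combine into a single result at about 1.8 sigma. A minimal sketch, with
made-up numbers:

```python
import math

# Made-up numbers: four independent 0.9-sigma "detections" of the
# same quantity, each with unit 1-sigma uncertainty.
values = [0.9, 0.9, 0.9, 0.9]
sigmas = [1.0, 1.0, 1.0, 1.0]

# Inverse-variance weighted mean and its combined uncertainty.
weights = [1.0 / s ** 2 for s in sigmas]
mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
err = 1.0 / math.sqrt(sum(weights))

print(mean, err)          # weighted mean 0.9, combined error 0.5
print(mean / err)         # significance about 1.8 sigma, up from 0.9
```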
> The possibility that some moron may not look at the quoted noise
> levels before coming to some silly conclusion on a measurement does not
> compensate for missing out on potentially important real discoveries
> that properly archived data makes possible.
a) The possibility of "some moron..." is huge.
b) I would not place too much faith in discoveries (real, important)
based on a sum of invalid measurements...
Best,
Gilles
More information about the dm mailing list