From: Wolfram Conen <conen@gmx.de>

Date: Sat, 27 Oct 2001 23:06:02 +0200

Message-ID: <3BDB21BA.84D1C4EA@gmx.de>

To: Vassilis Christophides <christop@ics.forth.gr>, www-rdf-interest@w3.org

Vassilis Christophides wrote:
>
> Dear Wolfram
>
> Thanks for your detailed comments. It helps us a lot to understand the
> subtleties between the current RDF MT and the previous RDF
> semantics. However, I would like to point out some of my objections.
>
> >If this treatment of range/domain is really useful, is a different
> >(but already closed) issue.
>
> First of all, I am wondering to what extent we can firmly close
> semantics issues of RDF without giving a sufficient justification of
> the adopted choices w.r.t. the modeling requirements of Semantic Web
> applications, and the processing requirements of RDF systems (query
> languages, inference engines, etc.). I will try to mention a few examples.

Dear Vassilis,

thanks for your last, very interesting email. As I am not involved with
the RDF Core WG, and as I also think that seeing ranges/domains as
constraints rather than as another way to add types is quite reasonable
(even in the light of possible non-monotonicity), I'm not the right one
to comment on your objections (though, let me say that we were used to
the interpretation of ranges/domains that you suggest in your first
example, and that we will miss this possibility too). However, let me
add some clarifications to the 3rd point of your email below, because it
is closely related to the previous emails. While I share most of your
remarks on the other examples, the example there does not seem to be
completely precise. I nevertheless think that now, with a draft of the
MT available, it is a good time to discuss what should be in the MT in
the end, because it is my feeling that it will be the MT that defines
what the core of RDF really is/will be, and the draft MT already gives
discussions the necessary precision (for example, returning to the
constraint interpretation would simply require removing the closure
rules 5 and 6; this would, however, have some impact on s-entailment,
the schema lemma, etc.).
> 3) Consider finally the following schema:
>
>    C1 p1 C2
>    C3 p2 C4
>    C3 < C1, p2 < p1 and C2 < C4
>
> From the following RDF MT interpretation rules:
>
> 1) <x,y> is in IEXT(I(rdfs:subClassOf)) iff ICEXT(x) is a subset of ICEXT(y)
> 2) <x,y> is in IEXT(I(rdfs:subPropertyOf)) iff IEXT(x) is a subset of IEXT(y)
> 3) If <x,y> is in IEXT(I(rdfs:range)) and <u,v> is in IEXT(x) then v is in ICEXT(y)
> 4) If <x,y> is in IEXT(I(rdfs:domain)) and <u,v> is in IEXT(x) then u is in ICEXT(y)
>
> we have:
>
> i)   I(p2) is a subset of I(p1);    (from 2)
> ii)  I(domain(p2)) is in I(C3);     (from 4)
> iii) I(range(p2)) is in I(C4);      (from 3)
> iv)  I(domain(p1)) is in I(C1);     (from 4)
> v)   I(range(p1)) is in I(C2);      (from 3)
> vi)  I(C3) is a subset of I(C1);    (from 2)
> vii) I(C2) is a subset of I(C4);    (from 2)
>
> I am wondering how i), iii), and vii) are all together valid?
> I believe that equality of I(C4) and I(C2) should be considered here.

Let us assume that we have a graph E that contains the triples from your
example (p1 domain C1, ..., p2 subProperty p1, etc.). Now we produce a
graph C that is the schema closure of E. (The schema lemma claims that
we can forget about s-entailment now, but let us put this aside for a
moment.) Assume that we have an interpretation I that satisfies C. For
the sake of simplicity, we also need a projection: for a set of pairs S,
let P2 be the projection onto the second pair members, that is,
P2(S) = { y | <x,y> in S }.
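To make the set-theoretic argument concrete, here is a small Python
sketch (the triple encoding and the rule set are my own simplification
for illustration, not the WG's formulation) that computes a naive schema
closure of the example graph and checks that ICEXT(C2) ends up a proper
subset of ICEXT(C4), not equal to it:

```python
# Toy fixpoint computation of a simplified schema closure for the
# example graph C1 p1 C2, C3 p2 C4, C3 < C1, p2 < p1, C2 < C4.
# Illustrative only; rule names are informal, not the WG's wording.

def closure(triples):
    """Apply simplified RDFS closure rules until nothing new is derived."""
    triples = set(triples)
    while True:
        new = set()
        for (s, p, o) in triples:
            for (a, rel, b) in triples:
                # subPropertyOf: [s p o] and [p subPropertyOf b] => [s b o]
                if rel == "subPropertyOf" and a == p:
                    new.add((s, b, o))
                # subClassOf: [s type o] and [o subClassOf b] => [s type b]
                if rel == "subClassOf" and p == "type" and a == o:
                    new.add((s, "type", b))
                # domain: [s p o] and [p domain b] => [s type b]
                if rel == "domain" and a == p:
                    new.add((s, "type", b))
                # range:  [s p o] and [p range b]  => [o type b]
                if rel == "range" and a == p:
                    new.add((o, "type", b))
        if new <= triples:
            return triples
        triples |= new

schema = {
    ("p1", "domain", "C1"), ("p1", "range", "C2"),
    ("p2", "domain", "C3"), ("p2", "range", "C4"),
    ("C3", "subClassOf", "C1"),
    ("p2", "subPropertyOf", "p1"),
    ("C2", "subClassOf", "C4"),
}
# One p2-fact, plus an extra typing that enlarges C4 only:
facts = {("a", "p2", "b"), ("z", "type", "C4")}

closed = closure(schema | facts)
def icext(c):
    return {s for (s, p, o) in closed if p == "type" and o == c}

print(sorted(icext("C2")))   # ['b']
print(sorted(icext("C4")))   # ['b', 'z']
assert icext("C2") < icext("C4")   # proper subset, not equality
```

The closure derives [a p1 b], hence [b type C2] and, via C2 < C4, also
[b type C4]; the extra assertion [z type C4] then makes ICEXT(C4)
strictly bigger without any conflict with the interpretation rules.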
Now, it follows for I from the interpretation rules:

(A) P2(IEXT(I(p2))) subset P2(IEXT(I(p1))) subset ICEXT(I(C2))
    [subProperty, range] (compare your (i) and (v) from above)
(B) P2(IEXT(I(p2))) subset ICEXT(I(C4))   [from range] (iii above)
(C) ICEXT(I(C2)) subset ICEXT(I(C4))      [subClass] (vii above)

As you see, it does not follow that the class extensions of the
interpretations of C2 and C4 must be equal -- it follows from (A) and
(B) that all elements of the projection of the second argument of the
extension of the interpretation of p2 must be in both class extensions
(that is, ICEXT(I(C4)) and ICEXT(I(C2)) share some elements if
IEXT(I(p2)) is not empty). Now (C) gives us the additional information
that all elements of ICEXT(I(C2)) are in fact also elements of
ICEXT(I(C4)) (but ICEXT(I(C4)) can be bigger, see below, which is not
really counterintuitive).

> However, the RDF MS entailment rules produce different extensions for
> I(C4) and I(C2):
>
> x p2 y => x rdf:type C3
>           x rdf:type C1
>           y rdf:type C4

It also produces [x p1 y] and, therefore, also [y rdf:type C2]. However,
to make ICEXT(I(C4)) larger than ICEXT(I(C2)), you could simply say
[z rdf:type C4], which would not give you a problem with respect to the
interpretation rules.

> z p1 w => z rdf:type C1
>           w rdf:type C2
>           w rdf:type C4
>
> How consistent looks all the above?

Looks good to me. In the end, the question is whether the schema lemma
is true (I perceive the schema lemma as saying that if you have an RDF
graph where RDFS vocabulary is used, and you compute the schema closure
and look for interpretations that satisfy the RDF constraints, then you
can forget about the RDFS constraints -- the interpretation rules, as
has been said above). As I see it, you tried to construct an example
where the schema lemma would be violated, but, I think, there is no such
example.

Best regards
Wolfram

Received on Saturday, 27 October 2001 17:02:29 UTC
