On Wed, 10 Apr 2002 15:23:06 -0400 "Thirumale Niranjan" wrote:
> I don't understand implicit subscriptions. It sounds like the
> administrative step means that each fetch registers a subscription
> for the corresponding (individual) object, not a content group.
> Subscriptions could be initiated in many ways: (1) server decides that
> a client that requests a certain object will be notified about
> invalidations to that object -- if we need to keep this in sync with
> the content group notion, we could just say that the server will
> subscribe the client to all content groups to which the object
> belongs. (2) client can initiate the subscription itself -- 2(a)
> clients can run heuristics on access patterns, popularity etc., and
> come up with subscription requests dynamically, or 2(b) administrators
> specify content groups that a client should subscribe to, through a
> (remote) administration API.
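The server-initiated option (1) could be sketched as follows. This is an illustrative toy, not the WCDP spec: the class and method names (`ContentServer`, `handle_fetch`, `subscribers_of`) are my assumptions, but it shows each fetch implicitly subscribing the client to every content group the requested object belongs to.

```python
# Hypothetical sketch of server-side implicit subscription on fetch.
# All names here are illustrative, not taken from the spec.

class ContentServer:
    def __init__(self, content_groups):
        # content_groups: maps group name -> set of object URLs
        self.content_groups = content_groups
        # subscriptions: maps client id -> set of subscribed group names
        self.subscriptions = {}

    def groups_for(self, url):
        return {g for g, objs in self.content_groups.items() if url in objs}

    def handle_fetch(self, client_id, url):
        # Option (1): each fetch implicitly subscribes the client to
        # every content group the requested object belongs to.
        subs = self.subscriptions.setdefault(client_id, set())
        subs |= self.groups_for(url)

    def subscribers_of(self, group):
        return {c for c, subs in self.subscriptions.items() if group in subs}
```

Options 2(a) and 2(b) would differ only in who calls the subscription API, so the bookkeeping above stays the same.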
Okay, this clears up the implicit subscription concept. I guess the
clients are, as you said above WRT scalability, pretty passive in this
protocol. So if a client starts getting invalidation messages out of
the blue, it will just go ahead and do the right thing. Is there any
problem with the client and server having a different idea of the
consistency mode? The client might be using explicit consistency until
the server starts sending invalidations. Does the client need to switch
to weak consistency mode when it receives the first invalidation? Maybe
it doesn't matter what the client's "mode" is. Can the server put the
client in strong consistency mode via an invalidation message? What if
the client doesn't support it?
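One way to picture the passive-client behavior I'm describing, as a toy state machine (the mode names and methods are my assumptions, not from the spec): the client starts out in explicit mode and, on the first out-of-the-blue invalidation, just does the right thing and treats itself as weakly consistent.

```python
# Illustrative client-side sketch; names are assumptions, not spec.

class CacheClient:
    def __init__(self):
        self.mode = "explicit"   # no server-driven invalidations seen yet
        self.cache = {}          # url -> content

    def store(self, url, content):
        self.cache[url] = content

    def on_invalidation(self, url):
        # Receiving any invalidation implies the server considers this
        # client subscribed, so switch to (at least) weak consistency.
        if self.mode == "explicit":
            self.mode = "weak"
        self.cache.pop(url, None)   # drop the stale entry
```

Whether the server can push the client all the way into strong mode this way, and what happens if the client doesn't support it, is exactly the open question above.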
> In node-level strong consistency, why do servers have to wait for
> clients to retrieve the new content before making it live? This
> seems sure to delay the process (especially if the delayed refresh
> directive is used) but doesn't improve the consistency. Also in
> this case, the clients need a special way to get the not-yet-live
> content from the origin server. How do they do this?
> Let's say there is a large object that needs to be updated at two
> clients, in a strongly consistent manner. That is, we expect the
> clients to expose the object to external requests at approximately the
> same time. However, since the object is large, and the speed of the
> network between the clients and the origin could be very different,
> one client could end up retrieving the object much before the
> other. Therefore, all clients should be synchronized as much as
> possible. Doing it in a two-phase manner achieves this. The same
> argument holds for why a server cannot make it live before that -- the
> end user experience will be inconsistent.
The user experience may be different from a performance perspective if
one of the clients is connected over a slow link. But from a
correctness perspective I don't see that it matters. Also the clients
can immediately, but asynchronously, start fetching the content if they
choose to.

I see, the key point is that the time between the start of the
invalidation and the receipt of the commit message is when the special
staging name is valid. After the commit, the clients should use the
normal external URL. It would be good to clarify this.
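To make sure I have the two-phase flow straight, here is how I read it, as a toy coordinator (all names are mine, not from the spec): phase one distributes the new content via the staging name while the old content stays live, and only after every client has acknowledged does the commit make the new content live everywhere at roughly the same time.

```python
# Toy two-phase update; illustrative only, names are not from the spec.

def two_phase_update(clients, obj_id, staging_url):
    # Phase 1: every client fetches the new content from staging.
    # Nothing goes live yet, however fast or slow each client's link is.
    for client in clients:
        client.prefetch(obj_id, staging_url)

    # Wait until every client has acknowledged the fetch.
    if not all(client.has_fetched(obj_id) for client in clients):
        raise RuntimeError("some client failed to fetch; abort the update")

    # Phase 2: commit. All clients expose the new content at roughly
    # the same time, so the end-user experience stays consistent.
    for client in clients:
        client.commit(obj_id)

class EdgeClient:
    def __init__(self):
        self.staged, self.live = {}, {}
    def prefetch(self, obj_id, staging_url):
        self.staged[obj_id] = staging_url   # fetch new bits, keep old live
    def has_fetched(self, obj_id):
        return obj_id in self.staged
    def commit(self, obj_id):
        self.live[obj_id] = self.staged.pop(obj_id)
```

The window between `prefetch` and `commit` is exactly when the staging name is valid.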
> To get at the data, clients can go to a special staging server that
> has updated content but is not exposed to public clients.
> Alternatively, they can go to the origin, but identify the request as
> a WCDP request, via a special header, so that the origin can serve out
> the not-yet-live content. Our product takes the first approach; the
> latter approach needs to be fleshed out in more detail.
Okay, so the clients "know" about the special staging area on the server
and have a way of mapping obj_invalidation_id to both its external URL
(for normal fetching from the Origin server) and its staging URL (for
fetching the not-yet-live content).
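Both approaches could look something like this sketch. The staging host name, the mapping function, and the `X-WCDP-Request` header name are all my assumptions; the second approach in particular is the one that still needs fleshing out.

```python
# Sketch of the two ways a client might locate not-yet-live content.
# STAGING_HOST, urls_for, and the header name are assumptions.

STAGING_HOST = "staging.example.com"   # assumed dedicated staging server

def urls_for(obj_invalidation_id, external_url):
    """Map an invalidation id + external URL to (external, staging) URLs."""
    path = external_url.split("://", 1)[1].split("/", 1)[1]
    staging_url = f"http://{STAGING_HOST}/{path}"
    return external_url, staging_url

def wcdp_request_headers():
    # Alternative: go to the origin itself, but flag the request as a
    # WCDP request so the origin serves the not-yet-live content.
    return {"X-WCDP-Request": "1"}
```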
> The force option in the invalidate request seems dubious. What
> happens if a client declines to perform the invalidation in
> response to a force, or an update, perhaps due to a network
> outage. What does it "mean" for the server to specify force? How
> does force impact the semantics of the consistency guarantees?
> I guess we haven't *clearly* defined the meaning of the "force"
> option. I don't understand what you mean by a "client declines" -- if
> we say that a force is not declinable, then the client cannot
> decline. For an update, we could say that if a client is not able to
> update the content, perhaps due to a network outage, an invalidate
> would preserve correctness. In any case, we need to specify the exact
> response codes and such.
Srikanth mentioned that force means that the client should delete, not
just invalidate local content. Thus force would mean, not that the old
content is invalid, but that attempts to revalidate it by fetching would
fail because the object is gone. Presumably a user wouldn't request it
(because ideally the pointer to the object is now gone too), but a
client prefetching objects would benefit from this extra hint.
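The distinction Srikanth described might look like this on the client side (class and method names are mine): a plain invalidation marks the local copy stale but revalidatable, while force deletes it outright, since revalidation would fail anyway because the object is gone.

```python
# Sketch of force (delete) vs. plain invalidation; names are assumptions.

class ForceAwareCache:
    def __init__(self):
        self.entries = {}   # url -> {"content": ..., "stale": bool}

    def store(self, url, content):
        self.entries[url] = {"content": content, "stale": False}

    def invalidate(self, url, force=False):
        if force:
            # The object is gone at the origin: delete it rather than
            # keep it around for a revalidation that would fail, and
            # don't prefetch it again.
            self.entries.pop(url, None)
        elif url in self.entries:
            # Plain invalidation: stale, but revalidatable by fetching.
            self.entries[url]["stale"] = True
```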
> Explicit consistency is typically much weaker than strong
> consistency, unless the HTTP cache directives are all set
> appropriately (i.e. "do not cache"). Thus it would seem that
> falling from strong back to explicit consistency rules during a
> server outage would be unsafe. Maybe reverting to uncached (as
> while waiting for a commit) would be safer for content groups
> that desire strong consistency?
> Since there will always be caches that are not RUP-compliant,
> administrators MUST use HTTP headers in a responsible way, so that
> unsafe behavior is not seen at caches. Therefore, it is not a stretch
> to assume that administrators will continue to use these
> headers. Since a WCDP cache will overrule the cache-control headers if
> the object is being subscribed to, administrators can set a
> conservative (small) value to the TTL, so that safe behavior is
> exhibited by other caches, browsers etc. Therefore, falling back to
> explicit consistency is not unsafe.
I guess I don't buy the suggestion that strong consistency and a small
TTL are compatible. But if the admin is serious about strong
consistency he can set the cache-control directives to "no-cache", right?
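The policy being argued over reduces to something like this (the function and the specific TTL value are mine): since a WCDP cache overrules Cache-Control for subscribed objects, the headers only need to keep non-compliant caches safe, and a small TTL versus no-cache is the weak-versus-strong choice.

```python
# Sketch of the header policy; function name and TTL are assumptions.

def cache_headers(strong_consistency, small_ttl=5):
    if strong_consistency:
        # Non-compliant caches must revalidate on every request.
        return {"Cache-Control": "no-cache"}
    # Conservative (small) TTL bounds staleness at other caches and
    # browsers, so falling back to explicit consistency stays safe-ish.
    return {"Cache-Control": f"max-age={small_ttl}"}
```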
> Servers do detect failed clients when a message (including heartbeats)
> does not make it to the clients, or an acknowledgement is not received
> within a timeout period.
Good. Perhaps the spec should say this explicitly.
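If the spec does spell it out, the server-side bookkeeping is presumably something like this (the timeout value and names are my assumptions): a client is declared failed when no acknowledgement or heartbeat has arrived within the timeout window.

```python
# Sketch of timeout-based failure detection; names and the default
# timeout are assumptions, not from the spec.

import time

class FailureDetector:
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_ack = {}   # client id -> time of last ack/heartbeat

    def on_ack(self, client_id, now=None):
        self.last_ack[client_id] = time.monotonic() if now is None else now

    def failed_clients(self, now=None):
        now = time.monotonic() if now is None else now
        return {c for c, t in self.last_ack.items()
                if now - t > self.timeout}
```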
> The refresh directive is indeed a performance optimization for a
> cache. But since we intend this to be an update protocol that is
> used to notify web servers (by that, I mean an authoritative source
> of content) in rehosting sense, or mirrored servers, an "update"
> can have a mandatory feel to it.
> Also, it would help to think of multiple sources of content, not
> just one "origin". If one fails, the client could go to any of the
> other nodes which has the updated content.
Okay. It would help if the refresh directive stuff were explained a bit
better. The rationale at the beginning of section 3.4 seems misleading.
It implies that the server will send the object instead of or in
addition to the invalidation message. Then it goes on to comment on the
messages required for invalidating an object. But the protocol does not
send the new (small) object with its invalidation, nor reduce the
number of messages required.
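The multiple-sources point above is easier to see in a sketch (names are mine): a refresh tells a mirror to re-fetch now rather than lazily, and if one source fails the client simply falls back to any other node that has the updated content.

```python
# Sketch of a refresh with failover across content sources; the
# function shape and source names are assumptions, not from the spec.

def refresh(obj_id, sources, fetch):
    """Try each content source in turn until one serves the object."""
    for source in sources:
        try:
            return fetch(source, obj_id)
        except ConnectionError:
            continue   # this node failed; fall back to another source
    raise RuntimeError(f"no source could serve {obj_id}")
```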
-- Ted Anderson -- <firstname.lastname@example.org> (nee email@example.com) PGP KeyID: 5EFF0C81, Fingerprint: A9FF AD11 8485 A933 ...