On Thu, Apr 26, 2012 at 9:28 AM, J. Bruce Fields <bfields@fieldses.org> wrote:
On Thu, Apr 26, 2012 at 02:45:54PM +0100, David Howells wrote:
Steve French <smfrench@gmail.com> wrote:
I also would prefer that we simply treat the time granularity as part of the superblock (mounted volume), i.e. returned by statfs rather than on every stat of a file in the filesystem. For cifs mounts we could conceivably have a coarser time granularity (1 or 2 seconds) on mounts to old servers, rather than 100 nanoseconds.
The question is whether you want to have to do a statfs in addition to a stat. I suppose you can potentially cache the statfs result based on the device number.
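A minimal userspace sketch of that caching idea, keying a cached statfs result on st_dev so each filesystem is only queried once (today's struct statfs carries no timestamp-granularity field, so the cache below just holds whatever statfs returns):

/*
 * Sketch only: cache per-filesystem info keyed on st_dev so that one
 * statfs per filesystem suffices, rather than one per stat'd file.
 */
#include <sys/stat.h>
#include <sys/vfs.h>
#include <stdio.h>
#include <stdlib.h>

struct fs_info {
	dev_t dev;
	struct statfs sfs;	/* cached filesystem-level attributes */
	struct fs_info *next;
};

static struct fs_info *fs_cache;

/* Look up (or fetch and cache) filesystem info for the fs containing path. */
static struct fs_info *get_fs_info(const char *path, const struct stat *st)
{
	struct fs_info *fi;

	for (fi = fs_cache; fi; fi = fi->next)
		if (fi->dev == st->st_dev)
			return fi;	/* already cached for this device */

	fi = calloc(1, sizeof(*fi));
	if (!fi || statfs(path, &fi->sfs) == -1) {
		free(fi);
		return NULL;
	}
	fi->dev = st->st_dev;
	fi->next = fs_cache;
	fs_cache = fi;
	return fi;
}

int main(int argc, char **argv)
{
	for (int i = 1; i < argc; i++) {
		struct stat st;
		struct fs_info *fi;

		if (stat(argv[i], &st) == -1)
			continue;
		fi = get_fs_info(argv[i], &st);
		if (fi)
			printf("%s: dev %lu, fs type 0x%lx\n", argv[i],
			       (unsigned long)st.st_dev,
			       (unsigned long)fi->sfs.f_type);
	}
	return 0;
}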
That said, there are cases where caching filesystem-level info based on i_dev doesn't work. OpenAFS springs to mind, as it only has one superblock and thus one set of device numbers, but keeps all the inodes for all the different volumes it may have mounted there.
I don't know whether this would be a problem for CIFS too - say, on a Windows server, you fabricate P: by joining together several filesystems (with junctions?). How does this appear on a Linux client when it steps from one filesystem to another within a mounted share?
In the NFS case we do try to preserve filesystem boundaries as well as we can--the protocol has an fsid field and the client creates a new mount each time it sees it change. And the protocol defines time_delta as a per-filesystem attribute (though, somewhat hilariously, there's also a per-filesystem "homogeneous" attribute that a server can clear to indicate the per-filesystem attributes might actually vary within the filesystem.)
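A rough illustration of that fsid-based boundary handling (made-up structure and field names, not the actual NFS client code): when the attributes returned for a child carry a different fsid than its parent, the client treats that lookup as a filesystem boundary and can track per-filesystem attributes such as time_delta per fsid.

#include <stdint.h>
#include <time.h>

/* Made-up attribute structures, for illustration only. */
struct example_fsid {
	uint64_t major;
	uint64_t minor;
};

struct example_fattr {
	struct example_fsid fsid;	/* per-filesystem identifier from the server */
	struct timespec time_delta;	/* per-filesystem timestamp granularity */
};

/*
 * Return nonzero if looking up 'child' under 'parent' crosses into a
 * different server-side filesystem, i.e. a point where the client would
 * create a new (sub)mount and track per-filesystem attributes separately.
 */
int crosses_fs_boundary(const struct example_fattr *parent,
			const struct example_fattr *child)
{
	return parent->fsid.major != child->fsid.major ||
	       parent->fsid.minor != child->fsid.minor;
}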
Thank you for reminding me; I need to look at this case more ... although cifs creates implicit submounts (as we traverse DFS referrals), there are probably cases where we need to do the same thing as NFS and look at the fsid, so we don't run into a Windows server exporting something with a "junction" (e.g. a directory redirected to a DVD drive) and thus cross file system volume boundaries.
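A quick userspace check along those lines - assuming the client fills in f_fsid per server-side volume, which is not something this thread establishes for cifs - would be to compare the f_fsid that statfs reports for two directories in the same share; differing values would suggest a junction/volume boundary between them even though st_dev is the same.

#include <sys/vfs.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	struct statfs a, b;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <dir1> <dir2>\n", argv[0]);
		return 2;
	}
	if (statfs(argv[1], &a) == -1 || statfs(argv[2], &b) == -1) {
		perror("statfs");
		return 1;
	}
	/* Same f_fsid: probably the same server-side volume. */
	if (memcmp(&a.f_fsid, &b.f_fsid, sizeof(a.f_fsid)) == 0)
		printf("same fsid: likely the same volume\n");
	else
		printf("different fsid: likely a junction/volume boundary between them\n");
	return 0;
}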