From arun at sdsc.edu Thu Mar 8 09:54:16 2007 From: arun at sdsc.edu (Arun Jagatheesan) Date: Thu, 8 Mar 2007 09:54:16 -0800 Subject: [Federated-fs] Test mail - please ignore Message-ID: <016c01c761aa$cd179d80$1f12fea9@sanjaslpmbp> Let there be light! (Test mail to initiate mail archiving of federated-fs list). Arun ~~~~~~~~~ Luck is what happens when preparation meets opportunity. Arun Jagatheesan http://www.sdsc.edu/~arun/ San Diego Supercomputer Center. (858)822.5452 From ellard at netapp.com Mon Mar 19 12:51:13 2007 From: ellard at netapp.com (Ellard, Daniel) Date: Mon, 19 Mar 2007 15:51:13 -0400 Subject: [Federated-fs] Draft of requirements for a federated filesystems Message-ID: The following draft is submitted for review. Our goal is to jump-start discussion of federated file system protocols by articulating what we believe are the functional requirements of such a system. We welcome input and discussion from everyone. Questions or comments intended solely for the authors should be sent to: xdl-glamour at netapp.com General discussion should be sent to this list (federated-fs at sdsc.edu). If and when appropriate, updated drafts will be mailed to this list. We would like the review period to end on Friday, April 6. We will host conference calls for discussion of the draft, starting next week (schedule TBD). If you wish to participate in the conference call(s), please send email directly to me (ellard at netapp.com). After April 6, our intent is to prepare a final draft of the requirements document and begin drafting proposals for protocols and other mechanisms that satisfy the requirements. Please let me know if you would like to help prepare these drafts. Thanks, -Dan 1 DRAFT DATE 2007-03-19 15:23:48 (-0400) 2 3 Title: REQUIREMENTS FOR FEDERATED FILESYSTEMS 4 5 Contributors: 6 7 Daniel Ellard, Network Appliance; 8 Craig Everhart, Network Appliance; 9 Manoj Naik, IBM Research; 10 Renu Tewari, IBM Research 11 12 PURPOSE 13 14 This draft describes and lists the functional requirements of a 15 federated file system and defines related terms. Our intent is to use 16 this draft as a starting point and refine it, with input and feedback 17 from the file system community and other interested parties, until we 18 reach general agreement. We will then begin, again with the help of 19 any interested parties, to define standard, open federated file system 20 protocols that satisfy these requirements and are suitable for 21 implementation and deployment. 22 23 We do not describe the mechanisms that might be used to implement this 24 functionality except in cases where specific mechanisms, in our 25 opinion, follow inevitably from the requirements. Our focus is on the 26 interfaces between the entities of the system, not on the protocols or 27 their implementations. 28 29 For the first version of this document, we are focused on the 30 following questions: 31 32 - Are any "MUST" requirements missing? 33 34 - Are there any "MUST" requirements that should be "SHOULD" or "MAY"? 35 36 - Are there any "SHOULD" requirements that should be "MAY"? 37 38 - Are there better ways to articulate the requirements? 39 40 OVERVIEW 41 42 Today, there are collections of fileservers that inter-operate to 43 provide a single namespace composed of filesystem resources provided 44 by different members of the collection, joined together with 45 inter-filesystem junctions.
The namespace can be assembled at 46 the fileservers, at the clients, or by an external namespace service -- 47 the mechanisms used to assemble the namespace may vary depending on 48 the filesystem access protocol used by the client. 49 50 These fileserver collections are, in general, administered by a single 51 entity. This administrator builds the namespace out of the filesystem 52 resources and junctions. There are also singleton servers that export 53 some or all of their filesystem resources, but which do not contain 54 junctions to other filesystems. 55 56 Current server collections that provide a shared namespace usually do 57 so by means of a service that maps filesystem names (FSNs) to 58 filesystem locations (FSLs). We refer to this service as a namespace 59 database (NSDB). In some systems, this service is referred to as a 60 volume location database (VLDB), and may be implemented by LDAP, NIS, 61 or any number of other mechanisms. 62 63 The primary purpose of the NSDB is to provide a level of indirection 64 between the filesystem names and the filesystem locations. If the 65 NSDB permits updates to the set of mappings, then the filesystem 66 locations may be changed (e.g., moved or replicated) in a manner that 67 is transparent to the referring filesystem and its server. 68 69 Our objective is to specify a set of interfaces (and corresponding 70 protocols) by which such fileservers and collections of fileservers, 71 with different administrators, can form a federation of fileservers 72 that provides a namespace composed of the filesets hosted on the 73 different fileservers and fileserver collections. 74 75 It should be possible, using a system that implements the interfaces, 76 to share a common namespace across all the fileservers in the 77 federation. It should also be possible for different fileservers in 78 the federation to project different namespaces and enable clients to 79 traverse them. 80 81 Such a federation may contain an arbitrary number of NSDBs, each 82 belonging to a different administrative entity, and each providing the 83 mappings that define a part of a namespace. Such a federation may 84 also have an arbitrary number of administrative entities, each 85 responsible for administering a subset of the servers and NSDBs. 86 Acting in concert, the administrators should be able to build and 87 administer this multi-fileserver, multi-collection namespace. 88 89 Each singleton server can be presumed to provide its own NSDB 90 service, for example with a trivial mapping to local FSLs. 91 92 GLOSSARY 93 94 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 95 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 96 document are to be interpreted as described in RFC 2119. 97 98 The phrase "USING THE FEDERATION INTERFACES" implies that the 99 subsequent requirement must be satisfied, in its entirety, via the 100 federation interfaces. 101 102 Administrator: A user with the necessary authority to initiate 103 administrative tasks on one or more servers. 104 105 Admin entity: A server or agent that administers a collection of 106 fileservers and persistently stores the namespace information. 107 108 Client: Any client that accesses the fileserver data using a 109 supported filesystem access protocol. 110 111 Federation: A set of server collections and singleton servers that 112 use a common set of interfaces and protocols in order to 113 provide to their clients a common namespace.
114 115 Fileserver: A server exporting a filesystem via a network filesystem 116 access protocol. 117 118 Fileset: The abstraction of a set of files and their containing 119 directory tree. A fileset is the fundamental unit of data 120 management in the federation. 121 122 Filesystem: A self-contained unit of export for a fileserver, and the 123 mechanism used to implement filesets. The fileset does not 124 need to be rooted at the root of the filesystem, nor at the 125 export point for the filesystem. 126 127 A single filesystem MAY implement more than one fileset, if 128 the client protocol and the fileserver permit this. 129 130 Filesystem access protocol: A network filesystem access protocol such 131 as NFSv2, NFSv3, NFSv4, or CIFS. 132 133 FSL: The location of the implementation of a fileset at a particular 134 moment in time. A FSL MUST be something that can be 135 translated into a protocol-specific description of a resource 136 that a client can access directly, such as a fs_location (for 137 NFSv4), or share name (for CIFS). Note that not all FSLs need 138 to be explicitly exported as long as they are contained within 139 an exported path on the fileserver. 140 141 FSN: A platform-independent and globally unique name for a fileset. 142 Two FSLs that implement replicas of the same fileset MUST have 143 the same FSN, and if a fileset is migrated from one location 144 to another, the FSN of that fileset MUST remain the same. 145 146 Junction: A filesystem object used to link a directory name in the 147 current fileset with an object within another fileset. The 148 server-side "link" from a leaf node in one fileset to the root 149 of another fileset. 150 151 Junction key: The key to lookup a junction within an NSDB or a local 152 table of information about junctions. 153 154 Namespace: A filename/directory tree that a sufficiently-authorized 155 client can observe. 156 157 NSDB: A namespace database; a service that maps FSNs to FSLs. The 158 NSDB may also be used to store other information, such as 159 annotations for these mappings and their components. 160 161 Referral: A server response to a client access that directs the 162 client to evaluate the current object as a reference to an 163 object at a different location (specified by an FSL) in 164 another fileset, and possibly hosted on another fileserver. 165 The client reattempts the access to the object at the new 166 location. 167 168 Replica: A replica is a redundant implementation of a fileset. Each 169 replica shares the same FSN, but has a different FSL. 170 171 Replicas may be used to increase availability or performance. 172 Updates to replicas of the same fileset MUST appear to occur 173 in the same order, and therefore each replica is 174 self-consistent at any moment. We do not assume that updates 175 to each replica occur simultaneously -- if a replica is 176 offline or unreachable, the other replicas may be updated. 177 178 Server Collection: A set of fileservers administered as a unit. A 179 server collection may be administered with vendor-specific 180 software. 181 182 Singleton Server: A server collection containing only one server; a 183 stand-alone fileserver. 184 185 PROPOSED REQUIREMENTS 186 187 Note that the requirements are described in terms of correct behavior 188 by all entities. We do not address the requirements of the system in 189 the presence of faults. 
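To make the relationships among these terms concrete, the following is a minimal sketch, in Python, of one way an NSDB, FSNs, and FSLs might fit together; the class names, fields, and example values are assumptions made for illustration only, not definitions or interfaces proposed by this draft. An FSN pairs the location of an NSDB with a junction key, the NSDB maps that key to the FSLs that currently implement the fileset, and registering a replica or resolving a junction amounts to an update or a lookup against that mapping.

    # Illustrative sketch only; names and types are assumptions, not part
    # of the draft.  An FSN pairs an NSDB location with a junction key;
    # the NSDB maps the key to the FSLs that currently implement the fileset.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass(frozen=True)
    class FSN:
        nsdb: str           # e.g., a DNS name for the NSDB service
        junction_key: str   # e.g., a UUID chosen when the FSN is created

    @dataclass
    class FSL:
        fileserver: str     # host exporting the filesystem
        export_path: str    # path under which the fileset is reachable

    @dataclass
    class NSDB:
        name: str
        mappings: Dict[str, List[FSL]] = field(default_factory=dict)

        def register(self, fsn: FSN, fsl: FSL) -> None:
            # The initial location and any later replica are recorded the
            # same way: another FSL under the FSN's junction key.
            self.mappings.setdefault(fsn.junction_key, []).append(fsl)

        def resolve(self, fsn: FSN) -> List[FSL]:
            # A fileserver hosting a junction asks the NSDB named in the
            # FSN for the current locations, then refers the client.
            return self.mappings.get(fsn.junction_key, [])

    nsdb = NSDB(name="nsdb.example.com")
    fsn = FSN(nsdb=nsdb.name, junction_key="6fa459ea-ee8a-3ca4-894e-db77e160355e")
    nsdb.register(fsn, FSL("fs1.example.com", "/exports/home/joe"))
    nsdb.register(fsn, FSL("fs2.example.com", "/backup/home/joe"))
    assert len(nsdb.resolve(fsn)) == 2

In this model, migration and replication change only the NSDB entry; the junction key stored at the referring server stays the same.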
190 191 BASIC ASSUMPTIONS 192 193 Several of the requirements are so fundamental that we treat them as 194 basic assumptions; if any of these assumptions are violated, the rest 195 of the requirements must be reviewed in their entirety. 196 197 A1. The federation protocols do not require any changes to existing 198 client-facing protocols, and MAY be extended to incorporate new 199 client-facing protocols. 200 201 A2. The client SHOULD be oblivious to the federation composition. 202 203 With the possible exception of knowing the location of a root 204 fileset, clients can traverse the namespace and use the federation 205 protocols without any prior knowledge of how the namespace is 206 mapped onto filesets or FSLs. 207 208 A3. All requirements MUST be satisfiable in a platform-oblivious 209 manner. 210 211 If a federation operation requires an interaction, USING THE 212 FEDERATION INTERFACES, between two (or more) entities that are 213 members of a federation, then this interaction MUST NOT require 214 any interfaces other than the federation interfaces and the 215 underlying standard protocols used by the fileservers (i.e., NFS, 216 CIFS, DNS, etc). 217 218 A4. All fileservers in the federation MUST operate within the same 219 authentication/authorization domain. 220 221 All principals (clients, users, administrator of a singleton or 222 server collection, hosts, NSDBs, etc) that can assume a role 223 defined by the federation protocol can identify themselves to each 224 other via a shared authentication mechanism. This mechanism is 225 not defined or further described in this document. 226 227 The authority of a principal to request that a second principal 228 perform a specific operation is ultimately determined by the 229 second. For example, if a user has administrative privileges on 230 one server in the federation, this does not imply that they have 231 administrative privileges (or, for that matter, any privileges 232 whatsoever) on any other server in the federation. 233 234 In order to access the functionality provided by the federation 235 interfaces, it may be necessary to have elevated privileges or 236 authorization. The authority required by different operations may 237 be different. An operation attempted by an unauthorized entity 238 must fail. This document does not enumerate the authorization 239 necessary for any function. 240 241 Authorization may be partitioned by server collection or set of 242 servers as well as by operation. For example, a particular 243 administrator may have complete authority over a single server 244 collection and, at the same time, no authority to perform any 245 operations whatsoever on any other servers in the federation. 246 247 A5. In a federated system, we assume that a FSN MUST express, or can 248 be used to discover, the following two pieces of information: 249 250 1. The location of the NSDB that is responsible for knowing the 251 filesystem location(s) of the named fileset. 252 253 The NSDB must be specified because there may be many NSDBs 254 in a federation. We do not assume that any single entity 255 knows the location of all of the NSDBs, and therefore 256 exhaustive search is not an option. 257 258 2. The junction key. 259 260 The junction key is the index used by the NSDB to identify the 261 FSN of the target fileset. 262 263 REQUIREMENTS 264 265 R1.
USING THE FEDERATION INTERFACES, and given an FSL, it MUST be 266 possible for an entity to discover, from the server specified in 267 the FSL, the globally unique and platform-independent name (FSN) 268 of the fileset, if any, associated with that FSL at that time. 269 270 R1a. Each FSN MUST be globally unique. 271 272 R1b. The FSN MUST be sufficiently descriptive to locate an 273 instance of the fileset it names within the federation at any 274 time. 275 276 R1c. The FSN is the name of the fileset, not the FSLs. 277 278 - If a FSL is moved to a new location, it will have the same 279 FSN in the new location. 280 281 - If an instance of a different fileset is placed at the old 282 location, and that fileset has an FSN, then the FSL will 283 be associated with a different FSN from the previous. 284 285 R1d. If a FSL is migrated to another server using the federation 286 interfaces, the FSN remains the same in the new location. 287 288 R1e. If the fileset is replicated using the federation 289 interfaces, then all of the replicas have the same FSN. 290 291 Not all filesets in the federation are required to have a FSN or 292 be reachable by some FSL. Only those filesets that are the target 293 of a junction (as described in R3) are required to have an FSN. 294 295 R2. USING THE FEDERATION INTERFACES, it MUST be possible to "promote" 296 a directory hierarchy exported by a federation server to become an 297 FSL and bind that FSL to a FSN. 298 299 It is the responsibility of the entity performing the promotion to 300 ensure that the directory hierarchy can, indeed, be used as an FSL 301 (and remove the mapping from the FSN to this FSL if this is no 302 longer true). 303 304 R2a. USING THE FEDERATION INTERFACES, the administrator MUST 305 specify the identity of the NSDB responsible for managing the 306 mappings between the FSN and the FSL before the FSL can be 307 bound to a FSN. 308 309 R2b. An administrator may specify the entire FSN (including both 310 the NSDB location and the junction key) of the newly-created 311 FSL, or the administrator may specify only the NSDB and have 312 the system choose the junction key. 313 314 The admin may choose to specify an FSN explicitly in order to 315 recreate a lost fileset with a given FSN (for example, as part 316 of disaster recovery). It is an error to assign an FSN that 317 is already in use by an active fileset. 318 319 Note that creating a replica of an existing filesystem is NOT 320 accomplished by assigning the FSN of the filesystem you wish 321 to replicate to a new filesystem. 322 323 R2c. USING THE FEDERATION INTERFACES, it MUST be possible to 324 create a federation FSL by specifying a specific local volume, 325 path, export path, and export options. 326 327 R3. USING THE FEDERATION INTERFACES, and given the FSN of a target 328 fileset, it MUST be possible to create a junction to that fileset 329 at a named place in another fileset. 330 331 After a junction has been created, clients that access the 332 junction transparently interpret it as a reference to the FSL(s) 333 that implement the FSN associated with the junction. 334 335 R3a. It SHOULD be possible to have more than one junction whose 336 target is a given fileset. In other words, it SHOULD be 337 possible to mount a fileset at multiple named places. 338 339 R3b. If the fileset in which the junction is created is 340 replicated, then the junction MUST appear in all of its 341 replicas. 342 343 R4. 
USING THE FEDERATION INTERFACES, it MUST be possible to delete a 344 specific junction from a fileset. 345 346 If a junction is deleted, clients who are already viewing the 347 fileset referred to by the junction after traversing the junction 348 MAY continue to view the old namespace. They might not discover 349 that the junction no longer exists (or has been deleted and 350 replaced with a new junction, possibly referring to a different 351 FSN). 352 353 After a junction is deleted, another object with the same name 354 (another junction, or an ordinary filesystem object) may be 355 created. 356 357 R5. USING THE FEDERATION INTERFACES, it MUST be possible to 358 invalidate an FSN. 359 360 R5a. If a junction refers to an FSN that is invalid, attempting 361 to traverse the junction MUST fail. 362 363 An FSN that has been invalidated MAY become valid again if the FSN 364 is recreated (e.g., as part of a disaster recovery process). 365 366 R6. USING THE FEDERATION INTERFACES, it MUST be possible to 367 invalidate an FSL. 368 369 R6a. An invalid FSL MUST NOT be returned as the result of 370 resolving a junction. 371 372 An FSL that has been invalidated MAY become valid again if the FSL 373 is recreated (e.g., as part of a disaster recovery process). 374 375 R7. It MUST be possible for the federation of servers to provide 376 multiple namespaces. Each fileset MUST NOT appear in more than 377 one namespace. 378 379 R8. USING THE FEDERATION INTERFACES, it MUST be possible to perform 380 queries about the state of objects relevant to the implementation 381 of the federation namespace: 382 383 R8a. It SHOULD be possible to query a fileserver to get a list of 384 exported filesystems and the export paths. 385 386 This information is necessary to bootstrap the namespace 387 construction. 388 389 R8b. It MUST be possible to query the fileserver named in an FSL 390 to get attributes, such as the appropriate mount options, for 391 the underlying filesystem for that FSL. 392 393 This information is necessary for the client to properly 394 access the FSL. 395 396 R8c. It MUST be possible to query the fileserver named in an FSL 397 to discover whether a junction exists at a given path within 398 that FSL. 399 400 R9. The projected namespace MUST be accessible to clients via at 401 least one standard filesystem access protocol. 402 403 R9a. The namespace SHOULD be accessible to clients via the CIFS 404 protocol. 405 406 R9b. The namespace SHOULD be accessible to clients via the NFSv4 407 protocol. 408 409 R9c. The namespace SHOULD be accessible to clients via the NFSv3 410 protocol. 411 412 R9d. The namespace SHOULD be accessible to clients via the NFSv2 413 protocol. 414 415 R10. USING THE FEDERATION INTERFACES, it MUST be possible to modify 416 the NSDB mapping from an FSN to a set of FSLs to reflect the 417 migration from one FSL to another. 418 419 R11. FSL migration SHOULD have little or no impact on the clients, 420 but this is not guaranteed across all federation members. 421 422 Whether FSL migration is performed transparently depends on 423 whether the source and destination servers are able to do so. It 424 is the responsibility of the administrator to recognize whether or 425 not the migration will be transparent, and advise the system 426 accordingly. The federation, in turn, MUST advise the servers to 427 notify their clients, if necessary.
428 429 For example, on some systems, it may be possible to migrate a 430 fileset from one system to another with minimal client impact 431 because all client-visible metadata (inode numbers, etc) are 432 preserved during migration. On other systems, migration might be 433 quite disruptive. 434 435 R12. USING THE FEDERATION INTERFACES, it MUST be possible to modify 436 the NSDB mapping from an FSN to a set of FSLs to reflect the 437 addition/removal of a replica at a given FSL. 438 439 R13. Replication SHOULD have little or no negative impact on the 440 clients. 441 442 Whether FSL replication is performed transparently depends on 443 whether the source and destination servers are able to do so. It 444 is the responsibility of the administrator initiating the 445 replication to recognize whether or not the replication will be 446 transparent, and advise the federation accordingly. The 447 federation MUST advise the servers to notify their clients, if 448 necessary. 449 450 For example, on some systems, it may be possible to mount any FSL 451 of an FSN read/write, while on other systems, there may be any 452 number of read-only replicas but only one FSL that can be mounted 453 read-write. 454 455 R14. USING THE FEDERATION INTERFACES, it SHOULD be possible to 456 annotate the objects and relations managed by the federation 457 protocol with arbitrary name/value pairs. 458 459 These annotations are not used by the federation protocols -- they 460 are intended for use by higher-level protocols. For example, an 461 annotation that might be useful for a system administrator 462 browsing the federation would be the "owner" of each FSN (e.g., 463 "this FSN is for the home directory of Joe Smith."). As another 464 example, the annotations may express hints used by the clients 465 (such as priority information for NFSv4.1). 466 467 Example objects and relationships to annotate: 468 469 - FSN properties (e.g., "Joe Smith's home directory.") 470 471 - FSL properties (e.g., "Is at the remote backup site.") 472 473 R14a. USING THE FEDERATION INTERFACES, it MUST be possible to 474 query the system to find the annotations for a junction. 475 476 R14b. USING THE FEDERATION INTERFACES, it MUST be possible to 477 query the system to find the annotations for an FSN. 478 479 R14c. USING THE FEDERATION INTERFACES, it MUST be possible to 480 query the system to find the annotations for an FSL. 481 482 NON-REQUIREMENTS 483 484 N1. It is not necessary for the namespace to be shadowed within a 485 fileserver. 486 487 The projected namespace can exist without individual fileservers 488 knowing the entire organizational structure, or, indeed, without 489 knowing exactly where in the projected namespace the filesets they 490 host exist. 491 492 Fileservers do need to be able to handle referrals from other 493 fileservers, but they do not need to know what path the client was 494 accessing when the referral was generated. 495 496 N2. It is not necessary for updates and accesses to occur in 497 transaction or transaction-like contexts. 498 499 One possible requirement that is omitted from our current list is 500 that updates and accesses to the state of the system be made 501 within a transaction context. We were not able to agree whether 502 the benefits of transactions are worth the complexity they add 503 (both to the specification and its eventual implementation) but 504 this topic is open for discussion.
505 506 Below is the draft of a proposed requirement that provides 507 transactional semantics: 508 509 There MUST be a way to ensure that sequences of operations, 510 including observations of the namespace (including finding the 511 locations corresponding to a set of FSNs) and changes to the 512 namespace or related data stored in the system (including the 513 creation, renaming, or deletion of junctions, and the 514 creation, altering, or deletion of mappings between FSN and 515 filesystem locations), can be performed in a manner that 516 provides predictable semantics for the relationship between 517 the observed values and the effect of the changes. 518 519 It MUST be possible to protect sequences of operations by 520 transactions with NSDB or server-wide ACID semantics. 521 522 EXAMPLES AND DISCUSSION 523 524 CREATE A FILESET AND ITS FSL(s): 525 526 Export a given fileset (and its replicas) to become FSL(s). 527 528 There are many possible variations to this procedure, 529 depending on how the FSN that binds the FSL is created, and 530 whether other replicas of the fileset exist, are known to the 531 federation, and need to be bound to the same FSN. 532 533 It is easiest to describe this in terms of how to create the 534 initial implementation of the fileset, and then describe how 535 to add replicas. 536 537 CREATING A FILESET AND AN FSN 538 539 1. Choose an NSDB that will keep track of the FSL(s) and 540 related information for the fileset. 541 542 2. Request that the NSDB register a new FSN for the 543 fileset. 544 545 The FSN may either be chosen by the NSDB or by the 546 server. The latter case is used if the fileset is 547 being restored, perhaps as part of disaster recovery, 548 and the server wishes to specify the FSN in order to 549 permit existing junctions that reference that FSN to 550 work again. 551 552 At this point, the FSN exists, but its location is 553 unspecified. 554 555 3. Send the FSN, the local volume path, the export path, 556 and the export options for the local implementation of 557 the fileset to the NSDB. Annotations about the FSN or 558 the location may also be sent. 559 560 The NSDB records this info and creates the initial FSL 561 for the fileset. 562 563 ADDING A REPLICA OF A FILESET 564 565 Adding a replica is straightforward: the NSDB and the FSN 566 are already known. The only remaining step is to add 567 another FSL. 568 569 Note that the federation interfaces do not include methods 570 for creating or managing replicas: this is assumed to be 571 a platform-dependent operation (at least at this time). 572 The only interface required is the ability to register or 573 remove the registration of replicas for a fileset. 574 575 JUNCTION RESOLUTION: 576 577 Given a junction, find the location(s) of the object to which 578 the junction refers. 579 580 There are many possible variations to this procedure, 581 depending on how the junctions are represented and how the 582 information necessary to perform resolution is represented by 583 the server. In this example, we assume that the only thing 584 directly expressed by the junction is the junction key; its 585 mapping to FSN can be kept local to the server hosting the 586 junction. 587 588 Step 5 is the only step that interacts directly with the 589 federation interfaces. The rest of the steps may use 590 platform-specific interfaces. 591 592 1. The server identifies the object being accessed as a 593 junction. 594 595 2. The server finds the junction key for the junction. 596 597 3.
Using the junction key, the server does a local lookup to 598 find the FSN of the target fileset. 599 600 4. Using the junction key, the server finds the NSDB 601 responsible for the target object. 602 603 5. The server contacts the NSDB and asks for the set of FSLs 604 that implement the target FSN. The NSDB responds with a 605 set of FSLs. 606 607 6. The server converts the FSL to the location type used by 608 the client (e.g., fs_location for NFSv4). 609 610 7. The server redirects (in whatever manner is appropriate 611 for the client) the client to the location(s). 612 613 JUNCTION CREATION: 614 615 Given a local path, a remote export and a path relative to 616 that export, create a junction from the local path to the path 617 within the remote export. 618 619 There are many possible variations to this procedure, 620 depending on how the junctions are represented and how the 621 information necessary to perform resolution is represented by 622 the server. In this example, we assume that the only thing 623 directly expressed by the junction is the junction key; its 624 mapping to FSN can be kept local to the server hosting the 625 junction. 626 627 Step 1 is the only step that uses the federation interfaces. 628 The rest of the steps may use platform-specific interfaces. 629 630 1. Contact the server named by the export and ask for the FSN 631 for the fileset, given its path relative to that export. 632 633 2. Create a new local junction key. 634 635 3. Insert, in the local junction info table, a mapping from 636 the local junction key to the FSN. 637 638 4. Insert the junction, at the given path, into the local 639 filesystem. 640 From ellard at netapp.com Mon Mar 19 19:33:52 2007 From: ellard at netapp.com (Daniel Ellard) Date: Mon, 19 Mar 2007 22:33:52 -0400 Subject: [Federated-fs] Draft of requirements for a federated filesystems In-Reply-To: Message-ID: Some mailers apparently mangle the text I sent out, so I'm re-sending as an attachment. Hopefully between the two formats, everyone will be able to read it properly. In the worst case, save the attachment, rename it to something that ends in .txt instead of .out, and then open it in your favorite browser. -Dan On 3/19/07 3:51 PM, "Ellard, Daniel" wrote: > > The following draft is submitted for review. Our goal is to jump-start > discussion of federated file system protocols by articulating what we > believe are the functional requirements of such a system. We welcome > input and discussion from everyone. > > ... -------------- next part -------------- A non-text attachment was scrubbed... Name: DistFSReqts.out Type: application/octet-stream Size: 29492 bytes Desc: not available Url : https://lists.sdsc.edu/pipermail/federated-fs/attachments/20070319/a8766506/DistFSReqts.out From Black_David at emc.com Thu Mar 22 12:22:09 2007 From: Black_David at emc.com (Black_David at emc.com) Date: Thu, 22 Mar 2007 15:22:09 -0400 Subject: [Federated-fs] Draft of requirements for a federated filesystems In-Reply-To: References: Message-ID: Dan, It's a good start, but (IMHO) needs significant attention ... I went looking for a crisp statement of the problem that is being solved here, and it wasn't easy to find. I think the paragraph starting at line 69 is trying to do this, but it's not clearly obvious what's broken, and why existing technology doesn't fix it. 
I think the rationale is roughly that between multiple administrative domains, and the desire to federate existing systems without ripping out and replacing the existing NSDBs, a single NSDB is inadequate, hence the primary goal of a federated FS is to support one or more federated namespaces, each of which can devolve namespace resolution to multiple NSDBs (that may be heterogeneous) for different areas of the namespace. The definition of federation needs to be tightened up accordingly - there's a lot of existing technology that satisfies the current definition (e.g., automounter), which was probably not intended. Jumping from the definitions straight into requirements leaves the reader's head spinning. An architectural/structural overview of what a federation is, how the pieces fit together, and how it functions to provide file access to clients over existing protocols is needed. Moving the examples and discussion section to before the requirements would be a good start on this. Overall, replication, migration and annotation appear to be additions to the basic functionality of federation. There ought to be some rationale for why they make sense as part of the basic specification of federation functionality. --- More detailed comments: Second sentence of filesystem definition looks like it escaped from the fileset definition ;-). Definitions of acronyms (e.g., FSL) should *always* include the expansion of the acronym (e.g., FileSystem Location - note that the "S" is potentially ambiguous without this expansion). Junction key definition needs to be expanded - why is the lookup being done? Requirement A2 should not be stated as "oblivious" - the client is definitely not "oblivious" to what's in the federation. This is really a "dynamic discovery" requirement - the client must be able to discover the composition of the federation on the fly without a priori knowledge of the structure of the federation. Requirement A3 needs to be rewritten - this is *not* platform-oblivious. I think this is about completeness of specified protocols/interfaces, namely that the federation functionality is completely specified by the protocols/interfaces to be developed as part of this effort, and has no dependence on other protocols/interfaces beyond the "underlying standard protocols used by the fileservers (i.e., NFS, CIFS, DNS, etc)." > A4. All fileservers in the federation MUST operate within > the same authentication/authorization domain. I'm not sure, but I need a crisp definition of "authentication/authorization domain" to understand what's going on here. Also, this text: ... a shared authentication mechanism. This mechanism is not defined or further described in this document. may be in conflict with A3, depending on how "authentication/authorization domain" is defined. The discussion of junction key in A5 2. is incomplete, as is the definition of junction key in the glossary. I think a paragraph or two is needed after the glossary that explains how a client deals with a junction in a federation when the NSDBs for the source and target of the junction are different. The requirement in A5 2. is probably correct, but I can't check it due to this lack of explanation of how a junction key is used. R1d and R1e send me back to A5. 1., specifically the text saying that "a FSN MUST express, or can be used to discover ... the location of the NSDB ...".
The word "express" is dangerous here, as it appears to envision a location independent name being bound to the a specific location of a name resolution service (i.e., the NSDB). I think it would be a good idea to pop up a level and talk about FSN to FSL mapping as a "name resolution service" in the junction discussion that's already needed. This should explain how a name resolution service for a geographically distributed federation is envisioned to work and place requirements on it. As a hint to get started, consider why DNS names do *not* express the location of a name resolution service. R2 is written in terms of "directory hierarchy", not "fileset" - why?? This has implications on fileset behavior/functionality *as viewed from the federation* that need to be explained. Absent this explanation, the second paragraph of R2 ("It is the responsibility ...") may be in conflict with the first. R2a has the "name resolution service" issue - see above discussion of A5. 1. In general R2a-R2c appear to dive into rather low level details, and will need to be re-evaluated once the "name resolution service" architecture and requirements are nailed down. R3b - when MUST the junction appear in all the replicas? The definition of replica is vague about update timing (e.g., "unreachable" can hide a lot of bad behavior). R4 has a *very* important implication - in the presence of junction changes, namespace consistency across client views is *not* guaranteed *because* a client could be wandering around a stale area of a namespace courtesy of a junction change above it. I understand why this is being done, but this discussion of possible dynamic staleness needs to be *much* earlier in the document - somewhere like the overview that describes what a federation is/does and what it isn't/doesn't. R5 needs to deal with the client consequence of invalidation of an FSN that the client is accessing. R6 needs to deal with the NSDB and client consequences of FSL invalidation. R7: "Each fileset MUST NOT appear in more than one namespace." Why is this a requirement?? Unless I've missed something, this is very easy to violate. R8a and R8b talk about filesystems as opposed to filesets. That does not appear to be consistent with the use of the term filesets elsewhere, e.g., in the definitions of fileset and filesystem in the glossary. What's going on here? R9 - is it the namespace that needs to be accessible, or files in that namespace or both? R9a-d say that all fileservers SHOULD implement CIFS, NFSv4, NFSv3, and NFSv2. I predict lively discussion on this one ... Given the escape language provided in R11 and R13, I suspect they should be lower-case "should" requirements, not upper-case, as the "may be possible" language is probably inconsistent with a "SHOULD". The "MUST"s in R14a-c do not appear to be consistent with the overall "SHOULD" in R14. I think a distinction between "the federation interface specification MUST specify" and "implementations SHOULD support" is in order. The whole document needs to be gone over to be specific about who is the target of each requirement (federation specification, federation implementer, or even federation administrator). Also on R14, annotations are potentially dangerous to interoperability if a client looks for an annotation and only traverses the junction if that annotation is present. There should be a requirement prohibiting this sort of bad behavior, particularly on vendor- specific annotations. N1 - Define "shadowed" - I can't parse the non-requirement as currently stated. 
N2 - Specify what the "updates and access" are to. I agree with the underlying concern that distributed transactions is asking a lot from implementations (e.g., multi-phase commit). The examples and discussion should probably be much earlier in the document - much of the missing explanation of how junctions work is here. Thanks, --David > -----Original Message----- > From: federated-fs-bounces at sdsc.edu > [mailto:federated-fs-bounces at sdsc.edu] On Behalf Of Daniel Ellard > Sent: Monday, March 19, 2007 10:34 PM > To: federated-fs at sdsc.edu > Subject: Re: [Federated-fs] Draft of requirements for a > federated filesystems > > > Some mailers apparently mangle the text I sent out, so I'm > re-sending as an > attachment. Hopefully between the two formats, everyone will > be able to > read it properly. > > In the worst case, save the attachment, rename it to > something that ends in > .txt instead of .out, and then open it in your favorite browser. > > -Dan > > > On 3/19/07 3:51 PM, "Ellard, Daniel" wrote: > > > > > The following draft is submitted for review. Our goal is > to jump-start > > discussion of federated file system protocols by > articulating what we > > believe are the functional requirements of such a system. > We welcome > > input and discussion from everyone. > > > > ... > > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: DistFSReqts.out > Type: application/octet-stream > Size: 29492 bytes > Desc: not available > Url : > https://lists.sdsc.edu/pipermail/federated-fs/attachments/2007 0319/a8766506/DistFSReqts.out > > From tewarir at us.ibm.com Mon Mar 26 15:34:46 2007 From: tewarir at us.ibm.com (Renu Tewari) Date: Mon, 26 Mar 2007 15:34:46 -0700 Subject: [Federated-fs] Requirements Doc Discussion: Conference Call on 3/28 Message-ID: To jump start the discussions on the requirements document for federated filesystems we will have a conference call on Date: Wednesday March 28th Time: 2PM (ET) Conference Call Details: Please send a mail to ellard at netapp.com for the call-in number. Based on the feedback from all the participants we plan to revise the original draft and create a final draft by the end of next week. If you are unable to attend please send an email with your comments. regards Renu -------------- next part -------------- An HTML attachment was scrubbed... URL: https://lists.sdsc.edu/pipermail/federated-fs/attachments/20070326/ec0d6ea3/attachment.html From ellard at netapp.com Mon Mar 26 19:30:49 2007 From: ellard at netapp.com (Daniel Ellard) Date: Mon, 26 Mar 2007 22:30:49 -0400 Subject: [Federated-fs] Draft of requirements for a federated filesystems In-Reply-To: Message-ID: My comments interspersed below. A lot of the feedback was concerned with the style and organization of the document; we'll attempt to address those in the next draft. What I'm going to try to address right now is some of the problems that you identified that make it difficult to understand the points we were trying to make, so that people who are reading the first draft won't run into the same problems. One meta-comment: this really is a draft -- a work in progress. Feedback and suggestions for change (as well as suggestions to improve the exposition) are more than welcome. > Overall, replication, migration and annotation appear to be > additions to the basic functionality of federation. There ought > to be some rationale for why they make sense as part of the > basic specification of federation functionality. 
Replication and migration are fairly commonplace in cluster protocols, and users quickly become addicted to them. We assume that clusters are going to be federation members, so we think it would be a fatal flaw if the federation protocol prohibited them, because this would reduce the federation-visible functionality of these members. At the same time, however, note that neither replication nor migration is required by the federation protocols -- you don't have to support either one of these in order to be a federation member! The only requirement is that if you support migration/replication (AND you want to be able to tell other federation members that filesets you host have moved or are replicated) then you need to implement the protocols for sharing this information with other members. Annotations are a separate issue, and perhaps they will be omitted. Their justification is that when we worked through some use cases, it was very useful to have a way for admins to annotate FSNs, FSLs, etc., and having this info managed by the NSDB makes it easy to access the annotations and keep them in sync with the objects they describe. There are other ways to do this, of course, and we're open to suggestions, but this seems to work well. > Definitions of acronyms (e.g., FSL) should *always* include the > expansion of the acronym (e.g., FileSystem Location - note that > the "S" is potentially ambiguous without this expansion). Yes, it's clearly ambiguous... It's supposed to be Fileset Location! > Junction key definition needs to be expanded - why is the lookup > being done? This is a bit of our preconceptions about the implementation showing through... There are a number of ways this might be accomplished, and the details are not essential to the requirements. The reason we treat this as a key is that this gives us a layer of indirection that allows us to lift a lot of the potential complexity out of the server and into the NSDB. The server just needs to keep track of an opaque key for each junction. All the info about the destination of the junction is kept in the NSDB. So if the fileset is migrated, the junction key stays the same but the entry in the NSDB is updated. Similarly if the fileset is replicated: one junction key might match several entries. The source server doesn't need to know that the destination has changed; it just needs to know to talk to the NSDB that has the info about that junction key. >> A4. All fileservers in the federation MUST operate within >> the same authentication/authorization domain. > > I'm not sure, but I need a crisp definition of "authentication/authorization domain" to understand what's going on here. Also, > this text: > > ... a shared authentication mechanism. This mechanism is > not defined or further described in this document. > > may be in conflict with A3, depending on how "authentication/authorization domain" is defined. We're postponing defining this scheme -- but we are requiring that such a scheme be specified in the protocol (or some specified set, if one isn't enough). > R2 is written in terms of "directory hierarchy", not "fileset" - > why?? This has implications on fileset behavior/functionality > *as viewed from the federation* that need to be explained. > Absent this explanation, the second paragraph of R2 ("It is > the responsibility ...") may be in conflict with the first. We have a bootstrapping problem -- right now we have a world built out of file systems and directories, but what we want is filesets.
So we need a way to get the system started by "promoting" hierarchies into filesets. > R8a and R8b talk about filesystems as opposed to filesets. > That does not appear to be consistent with the use of the term > filesets elsewhere, e.g., in the definitions of fileset and > filesystem in the glossary. What's going on here? Again, this is part of the bootstrapping problem. R8a helps the admin figure out what raw materials are available (filesystems) for him or her to create filesets from. This could be done in other ways, so it's a SHOULD. R8b helps the client (or server to help the client) to find out information that might be necessary (or merely useful) to access the filesystem underlying a fileset location. > Also on R14, annotations are potentially dangerous to > interoperability if a client looks for an annotation and only > traverses the junction if that annotation is present. There > should be a requirement prohibiting this sort of bad behavior, > particularly on vendor-specific annotations. I can't think of a way to do this short of prohibiting access to the annotations for non-admin clients... But maybe that's not a bad idea, although it would make it impossible to use annotations to express ideas like priority (from NFSv4.1). Another idea, not explored in the requirements, is to come up with a list of standard annotations that have specific forms and standard interpretations, and make this part of the spec. I don't want to open that discussion until we figure out whether we really want annotations in the first place, however. Thanks, -Dan From manoj at almaden.ibm.com Wed Mar 28 17:39:43 2007 From: manoj at almaden.ibm.com (Manoj Naik) Date: Wed, 28 Mar 2007 17:39:43 -0700 Subject: [Federated-fs] Meeting Minutes: Conference Call on 3/28 Message-ID: <460B0ACF.4090900@almaden.ibm.com> Please respond if there are errors or if I missed anything. Manoj Naik IBM Almaden Research Center. Minutes of Conference Call to discuss Federated Filesystem Requirements Attendees: Dan Ellard, Craig Everhart (NetApp) Renu Tewari, Manoj Naik (IBM) Andy Adamson, Peter Honeyman (CITI) Rob Thurlow, Spencer Shepler (Sun) Arun Jagatheesan (SDSC) Peter asked what the scope is of a common authentication domain in assumption A4. The (limited) scope we're starting with is to assume a single organization (somebody called it "regional") instead of a global world-wide domain, where users have a "common" identity across the federation. Dan mentioned that this is typical in enterprise environments. Peter wondered if we were limiting the scope by not considering user mapping or translations across file servers. Arun said this was an issue faced by grid folks where different admin domains within a university (for example) don't talk to each other well. Dan said that the current proposal, while limited in scope by not addressing cross-domain user authentication, could be expanded if participants understood the issues of user translations well. Arun said A1 limited most of the requirements of the grid folks by not allowing client-facing protocols to change. Currently, the proposal does not permit ordinary users to perform replication, etc., which means we would need add-ons to make that happen. Everybody seemed to agree that not requiring client protocols to change is a good thing. Rob asked if we should consider extending current standards (like fs_locations_info in NFSv4.1) to allow write operations (maybe in a later version) to address some of these issues.
Rob agreed with David Black's comment to the mailing list that the document should define the problem more crisply. We need to elaborate on how clients traverse the namespace. W.r.t. A4, Andy proposed that all that is needed is an agreement between the "neighboring" admins that are involved in setting up junctions. Nothing special needs to be done for users who continue to traverse the namespace using their security attributes - which means they may only see parts of the namespace (that they have access to), which is fine. Servers can control access to the junctions just like they do today. Somebody asked if we need special privileges from the target servers to query junctions that point to them. Arun asked how the FSN is different from the local "name" (presumably path or fsid). Dan explained that FSN was just a federation concept. So why is there a separate junction key? Because FSN is essentially a tuple of NSDB and junction key. What's the format of the FSN? Is it a URI or a string name? While the exact format of the FSN is not important for the requirements doc, the current thought is that the junction key could be a UUID (so there are no collisions), whereas the NSDB could just be a DNS name. David Black had commented earlier that DNS names do not express location of NSDB well, but Dan countered that this is the same problem regular clients face as well. Andy asked if the communication between the servers and the NSDB is secure. The answer is yes and should be part of the requirements. Do the queries need to be secure also? Rob said maybe - mandatory to implement, but not mandatory to use. Use RPCSEC_GSS? Rob asked if the single admin entity on line 50 could be expanded to include delegated administration, which most current environments use. Noted. Arun asked if the proposal could address the requirements of users being able to build a logical namespace with individual files pointing to different servers. This would be difficult to do with current protocols. A discussion of the examples section followed. Dan explained the bootstrapping problem of creating filesets before junction creation. Also, note that the FSN can be either specified by the server (when reused after recovery) or obtained from the NSDB. Arun mentioned that the grid folks have some requirements for federation but most of them clash with basic assumption A1 (not changing client protocols). Nevertheless, he'll post them to the list. There will be a revision of the draft based on the comments/discussion and another call will follow in about 1.5 weeks. From ellard at netapp.com Wed Mar 28 18:45:30 2007 From: ellard at netapp.com (Daniel Ellard) Date: Wed, 28 Mar 2007 21:45:30 -0400 Subject: [Federated-fs] Meeting Minutes: Conference Call on 3/28 In-Reply-To: <460B0ACF.4090900@almaden.ibm.com> Message-ID: On 3/28/07 8:39 PM, "Manoj Naik" wrote: > Please respond if there are errors or if I missed anything. Manoj -- Thanks for taking these very comprehensive notes. I want to expand on one of the points I tried to make during the call (because I don't think I made it very well, but it's an important thing that has colored our thinking to the extent that we should have stated it as a fundamental assumption): > Arun asked how the FSN is different from the local "name" (presumably > path or fsid). Dan explained that FSN was just a federation concept. So > why is there a separate junction key? Because FSN is essentially a tuple > of NSDB and junction key. What's the format of the FSN? Is it a URI or a > string name?
While the exact format of the FSN is not important for the > requirements doc, the current thought is that the junction key could be > a UUID (so there are no collisions), whereas the NSDB could just be a > DNS name. David Black had commented earlier that DNS names do not > express location of NSDB well, but Dan countered that this is the same > problem regular clients face as well. Assumption: DNS names are an acceptable way to identify hosts and locate services within the federation. There may be some extra plumbing required (see below for possible examples) but DNS is Good Enough. Of course, there are many problems with DNS (not to mention IP) that can violate this assumption. The same DNS name can resolve to different IP addresses -- or fail to resolve at all -- depending on which DNS servers are queried (and this is actually a feature, not a bug...). Even if the same DNS name resolves to the same IP number everywhere, that same IP number can identify different hosts -- or fail to identify any hosts at all -- if the hosts trying to route to the given IP number use different gateways with different routing tables, etc (and again, this is a feature...). Despite these potential problems, I believe that we can assume that DNS and IP are good enough for the following two reasons: 1. We've got decades' worth of procedures and mechanisms for avoiding these pitfalls. The horrible things that can happen usually don't. 2. The client-facing protocols all use DNS (or something very similar), either directly or indirectly, to name things like shares, fs_locations, exports, etc. Therefore, if we can't make DNS work, then we can't make the clients work, and the whole exercise is doomed unless we change all the client-facing protocols -- which would probably just spell a different doom. One way to satisfy this assumption is to say that all of the federation members need to have the same view of the DNS namespace (at least for the names used by entities in the federation). A more realistic approach is to assume that all of the NSDBs have globally-resolvable names and to let the NSDBs manage translation of FSLs based on the clients on whose behalf they are doing FSN resolution. When a server asks the NSDB to resolve an FSN, it can also tell the NSDB what client is going to be using the result -- and the NSDB might answer differently depending on where the client is. -Dan From arun at sdsc.edu Thu Mar 29 18:19:36 2007 From: arun at sdsc.edu (Arun Jagatheesan) Date: Thu, 29 Mar 2007 18:19:36 -0700 Subject: [Federated-fs] Perspectives from data grids (or) Isn't it time the FS clients evolved for the future? In-Reply-To: <460B0ACF.4090900@almaden.ibm.com> References: <460B0ACF.4090900@almaden.ibm.com> Message-ID: <008f01c77269$7e637bd0$1f12fea9@sanjaslpmbp> Just to introduce my background to the FS community, I've been promoting data grid concepts for the last few years. Data Grids allow a collaborative namespace of data storage resources (mostly files and file servers) to be shared amongst autonomous administrative domains. (Well, there is a lot of hype, and depending on the marketing person or vendor you talk to, you will hear different definitions). In the academic world, data grid concepts continue to solve a lot of problems and manage petabyte(s) of data - so they are real and not all hype anymore. I agreed/volunteered to post some requirements that are being addressed using data grids. These might be useful for standardization communities w.r.t. FS (file systems/servers) such as the Federated-FS.
We are trying to standardize these data grid concepts using OGF (Open Grid Forum). However, data grid standards are at a higher layer on the protocol stack (XML, SOAP, etc.). It might be useful to consider these requirements (or concepts) at a lower level (byte level) too. The following are my "opinion", especially the non-technical ones. "End-user" could be any person or application using the file system. Major non-technical requirements (change in design perspective): NT-1) File systems are for end-users, not administrators NT-2) File systems of the 80's need not be the ones for the 20's - The above are philosophical, so I elaborate on them again below. Technical requirements (change or design new functionalities): T1) Each administrative domain is an autonomous entity. No cross-registration of user-ids or global administrative policies is possible in production (even if the admins are very friendly and are from the academic world) T2) A logical namespace of data is a MUST for a collaborative or federated filesystem. In a logical namespace, the "human readable names and order" of the files might be different from the physical locations or FSNs on the file servers. T3) End users SHOULD be able to replicate data T4) End users SHOULD be able to add metadata about data (files) T5) End users SHOULD be able to discover or query data (files) just by knowing attributes about the data (When multiple organizations work together, the hierarchical human-readable namespace does not solve the data organization or discovery problem). T6) Mount points of several file systems SHOULD be avoided T7) Users SHOULD be able to see the distribution of physical resources - The concept of logical resources MUST be used along with a logical namespace (this enables T3) I have tried to make sure the above are just functional or user-oriented requirements and not implementation requirements. Now to elaborate on the first two non-technical or philosophical requirements (this is just my opinion, and I know I am stating the obvious that everyone knows).... NT-1) File systems are for end-users, not administrators Most of the file-system protocols or improvements on them seem to have the data storage administrator as the target user (e.g., replication). The end-users will not see any advantage directly. If the end user does not see any advantage, the administrator will not be asked (or pushed) to upgrade unless they decide to voluntarily, and the new products or standards, no matter how technically useful they are, will not be required or appreciated. If the standards or products are to make a difference, they must be designed for end-users and end-user applications (not for administrators). NT-2) File systems of the 80's need not be the ones for the 20's End-users have to be given more functionality which they can use themselves. When multiple organizations or teams work together, they know that data is not a single disk or sector. Everyone knows about the internet and distributed computing except the file system. The new filesystem client protocols must allow data distribution. RPC-style remote execution of user-defined programs on file systems makes remote data more usable (the FS will have to provide a suitable standard interface to add web services at runtime; without these, there is not much use for the distributed data). The file system in these cases becomes more than just a file system. End-users (not admins alone) can define data-management policies or rules to manage their data.
In short, all these can be accomplished if standardization folks focus on the end-user rather than the admin. A newer client-server protocol WILL be required (that might or might not interact/interoperate with existing client-server FS protocols). All these can be done easily as a customized or single-vendor only solution, but users (even the academic community) will prefer a standardized solution for their long-term requirements. Hence, I am sharing my opinion with this community. (OGF): https://forge.gridforum.org/sf/go/doc8271?nav=1 shows a high-level perspective from the grid world. Cheers, Arun ~~~~~~~~~ Luck is what happens when preparation meets opportunity. Arun swaran Jagatheesan http://www.sdsc.edu/~arun/ San Diego Supercomputer Center. (858)822.5452 > -----Original Message----- > From: federated-fs-bounces at sdsc.edu > [mailto:federated-fs-bounces at sdsc.edu] On Behalf Of Manoj Naik > Sent: Wednesday, March 28, 2007 5:40 PM > To: federated-fs at sdsc.edu > Subject: [Federated-fs] Meeting Minutes: Conference Call on 3/28 > > Please respond if there are errors or if I missed anything. > > Manoj Naik > IBM Almaden Research Center. > > Minutes of Conference Call to discuss Federated Filesystem > Requirements > > Attendees: > Dan Ellard, Craig Everhart (NetApp) > Renu Tewari, Manoj Naik (IBM) > Andy Adamson, Peter Honeyman (CITI) > Rob Thurlow, Spencer Shepler (Sun) > Arun Jagatheesan (SDSU) Arun Jagatheesan (SDSC) > <<>> > > Arun mentioned that the grid folks have some requirements for > federation but most of the them clash with basic assumption > A1 (not changing client protocols). Nevertheless, he'll post > them to the list. >