[Federated-fs] Draft of requirements for a federated filesystems
Ellard, Daniel
ellard at netapp.com
Mon Mar 19 12:51:13 PDT 2007
The following draft is submitted for review. Our goal is to jump-start
discussion of federated file system protocols by articulating what we
believe are the functional requirements of such a system. We welcome
input and discussion from everyone.
Questions or comments intended solely for the authors should be sent to:
xdl-glamour at netapp.com
General discussion should be sent to this list (federated-fs at sdsc.edu).
If and when appropriate, updated drafts will be mailed to this list.
We would like for the review period to end on Friday, April 6. We will
host conference calls for discussion of the draft, starting next week
(schedule TBD). If you wish to participate in the conference call(s),
please send email directly to me (ellard at netapp.com).
After April 6, our intent is to prepare a final draft of the
requirements document and begin drafting proposals for protocols and
other mechanisms that satisfy the requirements. Please let me know if
you would like to help prepare these drafts.
Thanks,
-Dan
DRAFT DATE 2007-03-19 15:23:48 (-0400)

Title: REQUIREMENTS FOR FEDERATED FILESYSTEMS

Contributors:

Daniel Ellard, Network Appliance;
Craig Everhart, Network Appliance;
Manoj Naik, IBM Research;
Renu Tewari, IBM Research

PURPOSE

This draft describes and lists the functional requirements of a
federated file system and defines related terms. Our intent is to use
this draft as a starting point and refine it, with input and feedback
from the file system community and other interested parties, until we
reach general agreement. We will then begin, again with the help of
any interested parties, to define standard, open federated file system
protocols that satisfy these requirements and are suitable for
implementation and deployment.

We do not describe the mechanisms that might be used to implement this
functionality except in cases where specific mechanisms, in our
opinion, follow inevitably from the requirements. Our focus is on the
interfaces between the entities of the system, not on the protocols or
their implementations.

28
29 For the first version of this document, we are focused on the
30 following questions:
31
32 - Are any "MUST" requirements missing?
33
34 - Are there any "MUST" requirements that should be "SHOULD" or
"MAY"?
35
36 - Are there any "SHOULD" requirements that should be "MAY"?
37
38 - Are there better ways to articulate the requirements?
39
OVERVIEW

Today, there are collections of fileservers that inter-operate to
provide a single namespace composed of filesystem resources provided
by different members of the collection, joined together with
inter-filesystem junctions. The namespace can be assembled at the
fileservers, at the clients, or by an external namespace service; the
mechanisms used to assemble the namespace may vary depending on the
filesystem access protocol used by the client.

These fileserver collections are, in general, administered by a single
entity. This administrator builds the namespace out of the filesystem
resources and junctions. There are also singleton servers that export
some or all of their filesystem resources but do not contain junctions
to other filesystems.

Current server collections that provide a shared namespace usually do
so by means of a service that maps filesystem names (FSNs) to
filesystem locations (FSLs). We refer to this service as a namespace
database (NSDB). In some systems, this service is referred to as a
volume location database (VLDB) and may be implemented by LDAP, NIS,
or any number of other mechanisms.

The primary purpose of the NSDB is to provide a level of indirection
between the filesystem names and the filesystem locations. If the
NSDB permits updates to the set of mappings, then the filesystem
locations may be changed (e.g., moved or replicated) in a manner that
is transparent to the referring filesystem and its server.

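The indirection described above can be sketched in a few lines. This
is purely illustrative: the class and method names (NSDB, register,
lookup, migrate) are assumptions of this sketch, not part of any
defined protocol.

```python
# Toy model of the NSDB's core indirection: FSN -> list of FSLs.
class NSDB:
    """Maps filesystem names (FSNs) to filesystem locations (FSLs)."""

    def __init__(self):
        self._mappings = {}  # fsn -> list of FSL strings

    def register(self, fsn, fsl):
        self._mappings.setdefault(fsn, []).append(fsl)

    def lookup(self, fsn):
        return list(self._mappings.get(fsn, []))

    def migrate(self, fsn, old_fsl, new_fsl):
        # The referring filesystem keeps using the same FSN; only the
        # FSN -> FSL mapping changes, so the move is transparent to it.
        fsls = self._mappings[fsn]
        fsls[fsls.index(old_fsl)] = new_fsl

nsdb = NSDB()
nsdb.register("fsn-1234", "serverA:/export/home")
nsdb.migrate("fsn-1234", "serverA:/export/home", "serverB:/vol/home")
locations = nsdb.lookup("fsn-1234")  # the FSN is unchanged; only the FSL moved
```

The point of the sketch is that a junction referring to "fsn-1234"
never needs to change when the fileset moves; only the NSDB entry does.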
Our objective is to specify a set of interfaces (and corresponding
protocols) by which such fileservers and collections of fileservers,
with different administrators, can form a federation of fileservers
that provides a namespace composed of the filesets hosted on the
different fileservers and fileserver collections.

It should be possible, using a system that implements the interfaces,
to share a common namespace across all the fileservers in the
federation. It should also be possible for different fileservers in
the federation to project different namespaces and enable clients to
traverse them.

Such a federation may contain an arbitrary number of NSDBs, each
belonging to a different administrative entity and each providing the
mappings that define a part of a namespace. Such a federation may
also have an arbitrary number of administrative entities, each
responsible for administering a subset of the servers and NSDBs.
Acting in concert, the administrators should be able to build and
administer this multi-fileserver, multi-collection namespace.

Each singleton server can be presumed to provide its own NSDB
service, for example with a trivial mapping to local FSLs.

GLOSSARY

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

The phrase "USING THE FEDERATION INTERFACES" implies that the
subsequent requirement must be satisfied, in its entirety, via the
federation interfaces.

Administrator: A user with the necessary authority to initiate
    administrative tasks on one or more servers.

Admin entity: A server or agent that administers a collection of
    fileservers and persistently stores the namespace information.

Client: Any client that accesses the fileserver data using a
    supported filesystem access protocol.

Federation: A set of server collections and singleton servers that
    use a common set of interfaces and protocols in order to provide
    their clients with a common namespace.

Fileserver: A server exporting a filesystem via a network filesystem
    access protocol.

Fileset: The abstraction of a set of files and their containing
    directory tree. A fileset is the fundamental unit of data
    management in the federation.

Filesystem: A self-contained unit of export for a fileserver, and the
    mechanism used to implement filesets. The fileset does not need
    to be rooted at the root of the filesystem, nor at the export
    point for the filesystem.

    A single filesystem MAY implement more than one fileset, if the
    client protocol and the fileserver permit this.

Filesystem access protocol: A network filesystem access protocol such
    as NFSv2, NFSv3, NFSv4, or CIFS.

FSL: The location of the implementation of a fileset at a particular
    moment in time. An FSL MUST be something that can be translated
    into a protocol-specific description of a resource that a client
    can access directly, such as an fs_location (for NFSv4) or a
    share name (for CIFS). Note that not all FSLs need to be
    explicitly exported, as long as they are contained within an
    exported path on the fileserver.

FSN: A platform-independent and globally unique name for a fileset.
    Two FSLs that implement replicas of the same fileset MUST have
    the same FSN, and if a fileset is migrated from one location to
    another, the FSN of that fileset MUST remain the same.

Junction: A filesystem object used to link a directory name in the
    current fileset with an object within another fileset; the
    server-side "link" from a leaf node in one fileset to the root of
    another fileset.

Junction key: The key used to look up a junction within an NSDB or a
    local table of information about junctions.

Namespace: A filename/directory tree that a sufficiently-authorized
    client can observe.

NSDB: A namespace database; a service that maps FSNs to FSLs. The
    NSDB may also be used to store other information, such as
    annotations for these mappings and their components.

Referral: A server response to a client access that directs the
    client to evaluate the current object as a reference to an object
    at a different location (specified by an FSL) in another fileset,
    possibly hosted on another fileserver. The client reattempts the
    access to the object at the new location.

Replica: A redundant implementation of a fileset. Each replica
    shares the same FSN but has a different FSL.

    Replicas may be used to increase availability or performance.
    Updates to replicas of the same fileset MUST appear to occur in
    the same order, and therefore each replica is self-consistent at
    any moment. We do not assume that updates to each replica occur
    simultaneously; if a replica is offline or unreachable, the other
    replicas may still be updated.

Server Collection: A set of fileservers administered as a unit. A
    server collection may be administered with vendor-specific
    software.

Singleton Server: A server collection containing only one server; a
    stand-alone fileserver.

PROPOSED REQUIREMENTS

Note that the requirements are described in terms of correct behavior
by all entities. We do not address the requirements of the system in
the presence of faults.

BASIC ASSUMPTIONS

Several of the requirements are so fundamental that we treat them as
basic assumptions; if any of these assumptions are violated, the rest
of the requirements must be reviewed in their entirety.

A1. The federation protocols do not require any changes to existing
    client-facing protocols, and MAY be extended to incorporate new
    client-facing protocols.

A2. The client SHOULD be oblivious to the composition of the
    federation.

    With the possible exception of knowing the location of a root
    fileset, clients can traverse the namespace and use the federation
    protocols without any prior knowledge of how the namespace is
    mapped onto filesets or FSLs.

A3. All requirements MUST be satisfiable in a platform-oblivious
    manner.

    If a federation operation requires an interaction, USING THE
    FEDERATION INTERFACES, between two (or more) entities that are
    members of a federation, then this interaction MUST NOT require
    any interfaces other than the federation interfaces and the
    underlying standard protocols used by the fileservers (i.e., NFS,
    CIFS, DNS, etc.).

A4. All fileservers in the federation MUST operate within the same
    authentication/authorization domain.

    All principals (clients, users, administrators of a singleton or
    server collection, hosts, NSDBs, etc.) that can assume a role
    defined by the federation protocol can identify themselves to each
    other via a shared authentication mechanism. This mechanism is
    not defined or further described in this document.

    The authority of a principal to request that a second principal
    perform a specific operation is ultimately determined by the
    second. For example, if a user has administrative privileges on
    one server in the federation, this does not imply that they have
    administrative privileges (or, for that matter, any privileges
    whatsoever) on any other server in the federation.

    In order to access the functionality provided by the federation
    interfaces, it may be necessary to have elevated privileges or
    authorization. The authority required may differ from operation
    to operation. An operation attempted by an unauthorized entity
    must fail. This document does not enumerate the authorization
    necessary for any function.

    Authorization may be partitioned by server collection or set of
    servers as well as by operation. For example, a particular
    administrator may have complete authority over a single server
    collection and, at the same time, no authority to perform any
    operations whatsoever on any other servers in the federation.

A5. In a federated system, we assume that an FSN MUST express, or can
    be used to discover, the following two pieces of information:

    1. The location of the NSDB that is responsible for knowing the
       filesystem location(s) of the named fileset.

       The NSDB must be specified because there may be many NSDBs in
       a federation. We do not assume that any single entity knows
       the locations of all of the NSDBs, and therefore exhaustive
       search is not an option.

    2. The junction key.

       The junction key is the index used by the NSDB to identify the
       FSN of the target fileset.

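One way to picture A5 is an FSN that carries (or can be parsed into)
its NSDB location and junction key. The "nsdb:junction-key" string
form below is an assumption of this sketch, not a defined format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FSN:
    nsdb_location: str  # where to ask for this fileset's FSLs (A5, item 1)
    junction_key: str   # index the NSDB uses for this fileset (A5, item 2)

    @classmethod
    def parse(cls, text):
        # Hypothetical wire form: "<nsdb-location>:<junction-key>".
        nsdb_location, junction_key = text.split(":", 1)
        return cls(nsdb_location, junction_key)

fsn = FSN.parse("nsdb.example.com:7a2f-11dc")
```

Because the FSN itself names the responsible NSDB, no entity ever
needs a global directory of NSDBs to resolve it.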
REQUIREMENTS

R1. USING THE FEDERATION INTERFACES, and given an FSL, it MUST be
    possible for an entity to discover, from the server specified in
    the FSL, the globally unique and platform-independent name (FSN)
    of the fileset, if any, associated with that FSL at that time.

    R1a. Each FSN MUST be globally unique.

    R1b. The FSN MUST be sufficiently descriptive to locate an
         instance of the fileset it names within the federation at
         any time.

    R1c. The FSN is the name of the fileset, not of its FSLs.

         - If an FSL is moved to a new location, it will have the
           same FSN in the new location.

         - If an instance of a different fileset is placed at the old
           location, and that fileset has an FSN, then the FSL will
           be associated with a different FSN from the previous one.

    R1d. If an FSL is migrated to another server using the federation
         interfaces, the FSN remains the same in the new location.

    R1e. If the fileset is replicated using the federation
         interfaces, then all of the replicas have the same FSN.

    Not all filesets in the federation are required to have an FSN or
    be reachable by some FSL. Only those filesets that are the target
    of a junction (as described in R3) are required to have an FSN.

R2. USING THE FEDERATION INTERFACES, it MUST be possible to "promote"
    a directory hierarchy exported by a federation server to become
    an FSL and to bind that FSL to an FSN.

    It is the responsibility of the entity performing the promotion
    to ensure that the directory hierarchy can, indeed, be used as an
    FSL (and to remove the mapping from the FSN to this FSL if this
    is no longer true).

    R2a. USING THE FEDERATION INTERFACES, the administrator MUST
         specify the identity of the NSDB responsible for managing
         the mappings between the FSN and the FSL before the FSL can
         be bound to an FSN.

    R2b. An administrator may specify the entire FSN (including both
         the NSDB location and the junction key) of the newly-created
         FSL, or may specify only the NSDB and have the system choose
         the junction key.

         The admin may choose to specify an FSN explicitly in order
         to recreate a lost fileset with a given FSN (for example, as
         part of disaster recovery). It is an error to assign an FSN
         that is already in use by an active fileset.

         Note that creating a replica of an existing filesystem is
         NOT accomplished by assigning the FSN of the filesystem you
         wish to replicate to a new filesystem.

    R2c. USING THE FEDERATION INTERFACES, it MUST be possible to
         create a federation FSL by specifying a specific local
         volume, path, export path, and export options.

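A minimal sketch of R2's promotion step, under assumed names
(promote_to_fsl is hypothetical, not a defined interface): the
administrator names the NSDB, may supply a junction key, and otherwise
the system chooses one (R2b); reusing an active FSN is an error.

```python
import uuid

def promote_to_fsl(nsdb_mappings, nsdb_location, fsl, junction_key=None):
    """Bind an exported directory hierarchy (fsl) to an FSN."""
    if junction_key is None:
        junction_key = uuid.uuid4().hex  # system-chosen key (R2b)
    fsn = (nsdb_location, junction_key)
    if fsn in nsdb_mappings:
        # R2b: assigning an FSN already in use by an active fileset fails.
        raise ValueError("FSN already in use by an active fileset")
    nsdb_mappings[fsn] = [fsl]
    return fsn

mappings = {}
fsn = promote_to_fsl(mappings, "nsdb.example.com", "serverA:/export/projects")
```

An administrator restoring a lost fileset would pass the old junction
key explicitly, so existing junctions referencing the FSN work again.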
R3. USING THE FEDERATION INTERFACES, and given the FSN of a target
    fileset, it MUST be possible to create a junction to that fileset
    at a named place in another fileset.

    After a junction has been created, clients that access the
    junction transparently interpret it as a reference to the FSL(s)
    that implement the FSN associated with the junction.

    R3a. It SHOULD be possible to have more than one junction whose
         target is a given fileset. In other words, it SHOULD be
         possible to mount a fileset at multiple named places.

    R3b. If the fileset in which the junction is created is
         replicated, then the junction MUST appear in all of its
         replicas.

R4. USING THE FEDERATION INTERFACES, it MUST be possible to delete a
    specific junction from a fileset.

    If a junction is deleted, clients who are already viewing the
    fileset referred to by the junction (having traversed it) MAY
    continue to view the old namespace. They might not discover that
    the junction no longer exists (or has been deleted and replaced
    with a new junction, possibly referring to a different FSN).

    After a junction is deleted, another object with the same name
    (another junction, or an ordinary filesystem object) may be
    created.

R5. USING THE FEDERATION INTERFACES, it MUST be possible to
    invalidate an FSN.

    R5a. If a junction refers to an FSN that is invalid, attempting
         to traverse the junction MUST fail.

    An FSN that has been invalidated MAY become valid again if the
    FSN is recreated (e.g., as part of a disaster recovery process).

R6. USING THE FEDERATION INTERFACES, it MUST be possible to
    invalidate an FSL.

    R6a. An invalid FSL MUST NOT be returned as the result of
         resolving a junction.

    An FSL that has been invalidated MAY become valid again if the
    FSL is recreated (e.g., as part of a disaster recovery process).

R7. It MUST be possible for the federation of servers to provide
    multiple namespaces. Each fileset MUST NOT appear in more than
    one namespace.

R8. USING THE FEDERATION INTERFACES, it MUST be possible to perform
    queries about the state of objects relevant to the implementation
    of the federation namespace:

    R8a. It SHOULD be possible to query a fileserver to get a list of
         exported filesystems and their export paths.

         This information is necessary to bootstrap the construction
         of the namespace.

    R8b. It MUST be possible to query the fileserver named in an FSL
         to get attributes, such as the appropriate mount options, of
         the underlying filesystem for that FSL.

         This information is necessary for the client to properly
         access the FSL.

    R8c. It MUST be possible to query the fileserver named in an FSL
         to discover whether a junction exists at a given path within
         that FSL.

R9. The projected namespace MUST be accessible to clients via at
    least one standard filesystem access protocol.

    R9a. The namespace SHOULD be accessible to clients via the CIFS
         protocol.

    R9b. The namespace SHOULD be accessible to clients via the NFSv4
         protocol.

    R9c. The namespace SHOULD be accessible to clients via the NFSv3
         protocol.

    R9d. The namespace SHOULD be accessible to clients via the NFSv2
         protocol.

R10. USING THE FEDERATION INTERFACES, it MUST be possible to modify
     the NSDB mapping from an FSN to a set of FSLs to reflect the
     migration from one FSL to another.

R11. FSL migration SHOULD have little or no impact on the clients,
     but this is not guaranteed across all federation members.

     Whether FSL migration is performed transparently depends on
     whether the source and destination servers are able to do so.
     It is the responsibility of the administrator to recognize
     whether or not the migration will be transparent, and to advise
     the system accordingly. The federation, in turn, MUST advise
     the servers to notify their clients, if necessary.

     For example, on some systems it may be possible to migrate a
     fileset from one system to another with minimal client impact
     because all client-visible metadata (inode numbers, etc.) are
     preserved during migration. On other systems, migration might
     be quite disruptive.

R12. USING THE FEDERATION INTERFACES, it MUST be possible to modify
     the NSDB mapping from an FSN to a set of FSLs to reflect the
     addition or removal of a replica at a given FSL.

R13. Replication SHOULD have little or no negative impact on the
     clients.

     Whether FSL replication is performed transparently depends on
     whether the source and destination servers are able to do so.
     It is the responsibility of the administrator initiating the
     replication to recognize whether or not the replication will be
     transparent, and to advise the federation accordingly. The
     federation MUST advise the servers to notify their clients, if
     necessary.

     For example, on some systems it may be possible to mount any FSL
     of an FSN read/write, while on other systems there may be any
     number of read-only replicas but only one FSL that can be
     mounted read-write.

R14. USING THE FEDERATION INTERFACES, it SHOULD be possible to
     annotate the objects and relations managed by the federation
     protocol with arbitrary name/value pairs.

     These annotations are not used by the federation protocols; they
     are intended for use by higher-level protocols. For example, an
     annotation that might be useful for a system administrator
     browsing the federation would be the "owner" of each FSN (e.g.,
     "this FSN is for the home directory of Joe Smith"). As another
     example, the annotations may express hints used by the clients
     (such as priority information for NFSv4.1).

     Example objects and relationships to annotate:

     - FSN properties (e.g., "Joe Smith's home directory.")

     - FSL properties (e.g., "Is at the remote backup site.")

     R14a. USING THE FEDERATION INTERFACES, it MUST be possible to
           query the system to find the annotations for a junction.

     R14b. USING THE FEDERATION INTERFACES, it MUST be possible to
           query the system to find the annotations for an FSN.

     R14c. USING THE FEDERATION INTERFACES, it MUST be possible to
           query the system to find the annotations for an FSL.

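A hedged sketch of R14's annotations: arbitrary name/value pairs
attached to federation objects (FSNs, FSLs, junctions) and queryable.
The storage shape and function names here are assumptions; the
federation protocols themselves never interpret the values.

```python
annotations = {}  # (object_kind, object_id) -> {name: value}

def annotate(kind, obj_id, name, value):
    # Attach an opaque name/value pair to a federation object.
    annotations.setdefault((kind, obj_id), {})[name] = value

def query_annotations(kind, obj_id):
    # R14a-R14c: query the annotations for a junction, FSN, or FSL.
    return dict(annotations.get((kind, obj_id), {}))

annotate("FSN", "fsn-1234", "owner", "home directory of Joe Smith")
annotate("FSL", "serverB:/vol/home", "site", "remote backup site")
owner_info = query_annotations("FSN", "fsn-1234")
```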
NON-REQUIREMENTS

N1. It is not necessary for the namespace to be shadowed within a
    fileserver.

    The projected namespace can exist without individual fileservers
    knowing the entire organizational structure, or, indeed, without
    knowing exactly where in the projected namespace the filesets
    they host exist.

    Fileservers do need to be able to handle referrals from other
    fileservers, but they do not need to know what path the client
    was accessing when the referral was generated.

N2. It is not necessary for updates and accesses to occur in
    transaction or transaction-like contexts.

    One possible requirement omitted from our current list is that
    updates of and accesses to the state of the system be made within
    a transaction context. We were not able to agree on whether the
    benefits of transactions are worth the complexity they add (both
    to the specification and to its eventual implementation), but
    this topic is open for discussion.

    Below is a draft of a proposed requirement that provides
    transactional semantics:

        There MUST be a way to ensure that sequences of operations,
        including observations of the namespace (including finding
        the locations corresponding to a set of FSNs) and changes to
        the namespace or related data stored in the system (including
        the creation, renaming, or deletion of junctions, and the
        creation, altering, or deletion of mappings between FSNs and
        filesystem locations), can be performed in a manner that
        provides predictable semantics for the relationship between
        the observed values and the effects of the changes.

        It MUST be possible to protect sequences of operations by
        transactions with NSDB- or server-wide ACID semantics.

EXAMPLES AND DISCUSSION

CREATE A FILESET AND ITS FSL(s):

    Export a given fileset (and its replicas) to become FSL(s).

    There are many possible variations to this procedure, depending
    on how the FSN that binds the FSLs is created, and whether other
    replicas of the fileset exist, are known to the federation, and
    need to be bound to the same FSN.

    It is easiest to describe this in terms of how to create the
    initial implementation of the fileset, and then describe how to
    add replicas.

    CREATING A FILESET AND AN FSN

    1. Choose an NSDB that will keep track of the FSL(s) and related
       information for the fileset.

    2. Request that the NSDB register a new FSN for the fileset.

       The FSN may be chosen either by the NSDB or by the server.
       The latter case is used if the fileset is being restored,
       perhaps as part of disaster recovery, and the server wishes to
       specify the FSN in order to permit existing junctions that
       reference that FSN to work again.

       At this point, the FSN exists, but its location is
       unspecified.

    3. Send the FSN, the local volume path, the export path, and the
       export options for the local implementation of the fileset to
       the NSDB. Annotations about the FSN or the location may also
       be sent.

       The NSDB records this information and creates the initial FSL
       for the fileset.

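The numbered steps above can be sketched against a toy in-memory NSDB.
All function and field names (register_fsn, create_fsl, "fsns",
"fsls") are assumptions of this sketch, not defined interfaces.

```python
import uuid

nsdb = {"fsns": set(), "fsls": {}}  # toy NSDB state

def register_fsn(nsdb, fsn=None):
    # Step 2: register a new FSN. Here the NSDB chooses it; a server
    # restoring a fileset could instead pass in a known FSN so that
    # existing junctions referencing it work again.
    fsn = fsn or uuid.uuid4().hex
    nsdb["fsns"].add(fsn)
    return fsn  # the FSN now exists, but its location is unspecified

def create_fsl(nsdb, fsn, volume_path, export_path, options):
    # Step 3: send the volume path, export path, and export options;
    # the NSDB records them as the fileset's initial FSL.
    nsdb["fsls"].setdefault(fsn, []).append(
        {"volume": volume_path, "export": export_path, "options": options})

fsn = register_fsn(nsdb)                                  # step 2
create_fsl(nsdb, fsn, "/vol/vol7", "/export/home", "rw")  # step 3
```

Step 1 (choosing the NSDB) is implicit here, since the sketch has only
one NSDB.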
    ADDING A REPLICA OF A FILESET

    Adding a replica is straightforward: the NSDB and the FSN are
    already known. The only remaining step is to add another FSL.

    Note that the federation interfaces do not include methods for
    creating or managing replicas: this is assumed to be a
    platform-dependent operation (at least at this time). The only
    interface required is the ability to register or remove the
    registration of replicas for a fileset.

JUNCTION RESOLUTION:

    Given a junction, find the location(s) of the object to which the
    junction refers.

    There are many possible variations to this procedure, depending
    on how the junctions are represented and how the information
    necessary to perform resolution is represented by the server. In
    this example, we assume that the only thing directly expressed by
    the junction is the junction key; its mapping to an FSN can be
    kept local to the server hosting the junction.

    Step 5 is the only step that interacts directly with the
    federation interfaces. The rest of the steps may use
    platform-specific interfaces.

    1. The server identifies the object being accessed as a junction.

    2. The server finds the junction key for the junction.

    3. Using the junction key, the server does a local lookup to find
       the FSN of the target fileset.

    4. Using the FSN, the server finds the NSDB responsible for the
       target object.

    5. The server contacts the NSDB and asks for the set of FSLs that
       implement the target FSN. The NSDB responds with a set of
       FSLs.

    6. The server converts the FSLs to the location type used by the
       client (e.g., fs_location for NFSv4).

    7. The server redirects (in whatever manner is appropriate for
       the client) the client to the location(s).

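The resolution steps can be condensed into a sketch. The two tables
and the location conversion are placeholders; only the NSDB query
(step 5) would go through the federation interfaces.

```python
# Local table on the server hosting the junction: key -> (NSDB, FSN).
junction_table = {"jkey-42": ("nsdb.example.com", "fsn-1234")}
# Contents of the (single, toy) NSDB: FSN -> FSLs.
nsdb_contents = {"fsn-1234": ["serverB:/vol/home"]}

def resolve_junction(junction_key):
    nsdb_location, fsn = junction_table[junction_key]  # steps 2-4
    fsls = nsdb_contents[fsn]                          # step 5: NSDB query
    # Step 6: convert each FSL to the client's location type (e.g.,
    # fs_location for NFSv4); here we just prefix the string form.
    return [f"fs_location:{fsl}" for fsl in fsls]      # step 7: referral

locations = resolve_junction("jkey-42")
```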
JUNCTION CREATION:

    Given a local path, a remote export, and a path relative to that
    export, create a junction from the local path to the path within
    the remote export.

    There are many possible variations to this procedure, depending
    on how the junctions are represented and how the information
    necessary to perform resolution is represented by the server. In
    this example, we assume that the only thing directly expressed by
    the junction is the junction key; its mapping to an FSN can be
    kept local to the server hosting the junction.

    Step 1 is the only step that uses the federation interfaces. The
    rest of the steps may use platform-specific interfaces.

    1. Contact the server named by the export and ask for the FSN of
       the fileset, given its path relative to that export.

    2. Create a new local junction key.

    3. Insert, in the local junction info table, a mapping from the
       local junction key to the FSN.

    4. Insert the junction, at the given path, into the local
       filesystem.

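The four creation steps above can be sketched as follows.
get_fsn_for_path stands in for the single federation-interface call
(step 1) and is entirely hypothetical; everything else is local and
platform-specific.

```python
import uuid

def get_fsn_for_path(remote_export, path):
    # Step 1 (assumed call): ask the server named by the export for the
    # FSN of the fileset at `path` relative to that export. A real
    # implementation would make a network request; this stub returns a
    # fixed FSN for illustration.
    return "fsn-1234"

def create_junction(local_fs, junction_info, local_path, remote_export, path):
    fsn = get_fsn_for_path(remote_export, path)         # step 1
    junction_key = uuid.uuid4().hex                     # step 2
    junction_info[junction_key] = fsn                   # step 3
    local_fs[local_path] = ("junction", junction_key)   # step 4

local_fs, junction_info = {}, {}
create_junction(local_fs, junction_info,
                "/home/joe", "serverB:/export", "/joe")
```

Note that the junction object itself stores only the junction key
(step 4); the key-to-FSN mapping stays in the local table, matching
the assumption stated above.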