Filesystems
Status
- Research
Roles
- Hold data
- GmailFS
- DavFS
- hard drives, ext3
- ram, tmpfs
- Serve data to other machines
- NFS
- Samba
- Apache (webdav)
- GlusterFS
- Encrypt data
- EncFS
- TrueCrypt
- raid
- software raid
- GlusterFS
Things mentioned above
- GmailFS
- Performance depends on your internet upload and download speeds
- Uses Fuse
- DavFS
- Can use Fuse, or native
- box.net gives away davfs space for free
- Each account is limited to 1GB total, with a 20MB per-file cap
- Can have ACLs
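As a sketch, mounting such a share with davfs2 might look like the following (the mount point and the box.net DAV URL are assumptions; check their docs for the current one):

    # mount a WebDAV share as a local filesystem; prompts for username/password
    mount -t davfs https://www.box.net/dav /mnt/box
    # credentials can also go in /etc/davfs2/secrets to make the mount non-interactive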
- hard drives
- Cheap and getting cheaper
- The tried and true way of storing data
- Not extremely fast, but we've been living with them just fine so far
- ram
- Expensive, but getting cheaper
- Extremely fast, fastest storage out there
- NFS
- Old; it works, but has its quirks
- Mainly only Linux
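A minimal sketch of serving and mounting an NFS export (hostname, paths, and subnet are made up for illustration):

    # server side: one line in /etc/exports, then reload the export table
    #   /srv/share 192.168.1.0/24(rw,sync,no_subtree_check)
    exportfs -ra
    # client side
    mount -t nfs fileserver:/srv/share /mnt/share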
- Samba
- Windows, Linux, Mac, everyone can talk and serve this
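A minimal smb.conf share definition as a sketch (share name and path are placeholders):

    [storage]
        path = /srv/share
        read only = no
        guest ok = yes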
- GlusterFS
- Mainly used for raid-0-style striping or raid-1-style mirroring (AFR) of hard drives over a network
- Linux only, as far as I've seen
- Clients use Fuse
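The client side is just a Fuse mount driven by a volume spec file; with the old spec-file syntax this looked roughly like (paths are assumptions):

    glusterfs -f /etc/glusterfs/client.vol /mnt/gluster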
- EncFS
- Encrypts at the file level
- Uses Fuse
- GmailFS has this as an integrated option
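Typical EncFS usage as a sketch (directory names are arbitrary):

    # ciphertext lives in ~/.crypt, the decrypted view shows up at ~/clear
    encfs ~/.crypt ~/clear
    # detach the clear view when done
    fusermount -u ~/clear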
- TrueCrypt
- Encrypts at the disk level
- Only Linux and Windows
- software raid
- Only Linux
- Can do any raid level
- Uses disks/partitions only
- loopback drives
- Can make a file into a disk
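A sketch of both pieces, mdadm over real partitions and a file turned into a disk via loopback (device names and sizes are made up):

    # raid 5 across three real partitions
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    mkfs.ext3 /dev/md0
    # loopback: make a 1GB file look like a block device
    dd if=/dev/zero of=/srv/disk.img bs=1M count=1024
    losetup /dev/loop0 /srv/disk.img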
Combination tricks
- GmailFS + Samba
- Now your Windows machines can access your files stored on Google
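As a sketch, assuming GmailFS is already mounted somewhere like /mnt/gmail, re-exporting it is just another smb.conf share:

    [gmail]
        path = /mnt/gmail
        read only = no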
- Gluster with disks
- Setup
- X machines, each with Y (probably 2) partitions of equal size Z
- First layer
- AFR (mirror) each machine's partition 1 with the next machine's partition 2, wrapping around at the last machine
- You should now have X mirrored volumes, each Z in size
- Second Layer
- Stripe across all of the AFR volumes
- The result should be X * Z in size
- Example
- 3 machines with 2 50GB partitions each yields one 150GB volume that can survive any 1 machine going down (a sketch of the volume spec for this layout follows after the Cons)
- Pros
- Any client with the glusterfs config can access the storage nodes
- Cons
- Not the most efficient data usage
- Doesn't scale: with Y = 2 you only get half the raw space, X * Z usable out of 2 * X * Z raw, and can lose at most floor(X / 2) non-adjacent nodes
- Let me know if you find a way to make this work with Y = 3+ partitions per machine
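A rough sketch of the client-side volume spec for the 3-machine example above, in the old translator-style .vol syntax (host names, brick names, and the exact options are assumptions):

    # one protocol/client volume per exported brick (6 bricks total, 2 per machine)
    volume m1-p1
      type protocol/client
      option transport-type tcp
      option remote-host machine1
      option remote-subvolume brick1
    end-volume
    # ... repeat for m1-p2, m2-p1, m2-p2, m3-p1, m3-p2 ...

    # first layer: mirror each machine's partition 1 with the next machine's partition 2
    volume afr1
      type cluster/afr
      subvolumes m1-p1 m2-p2
    end-volume
    # ... afr2 = m2-p1 + m3-p2, afr3 = m3-p1 + m1-p2 ...

    # second layer: stripe across the three mirrored pairs (~150GB usable)
    volume stripe0
      type cluster/stripe
      subvolumes afr1 afr2 afr3
    end-volume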
- Gluster with files
- Setup
- I think it's possible, but I've got to try it first
- Network Raid 5
- X machines, each with one Z-sized partition
- One machine (the master) mounts all of these and creates a Z-sized loopback file on each
- Add all of these loopback files as disks to a software raid (see the command sketch at the end of this section)
- Has the same limitations as raid 5 or 6 with real disks, except the failure units are whole machines
- X machines will give you an (X-1) * Z sized disk
- Example
- 3 machines with one 100GB partition each in raid 5 yields one 200GB disk that can survive any one machine other than the master going down
- Pros
- Most efficient with data
- Cons
- Only one machine can access the storage directly; it then has to re-share it for the others
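A command sketch of the 3-machine network raid 5 example, run on the master node (hostnames, mount points, and sizes are assumptions; the other machines' partitions are assumed to already be mounted, e.g. over NFS, at /mnt/node1..3):

    # make a 100GB file on each remote partition and attach it as a loop device
    for n in 1 2 3; do
        dd if=/dev/zero of=/mnt/node$n/raid.img bs=1M count=102400
        losetup /dev/loop$n /mnt/node$n/raid.img
    done
    # assemble the three loop devices into a raid 5 array and put a filesystem on it
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/loop1 /dev/loop2 /dev/loop3
    mkfs.ext3 /dev/md0
    mount /dev/md0 /srv/bigdisk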