Guerilla filesystem
From Makers Local 256
Revision as of 12:06, 27 January 2007
Overview
A script or app that auto-searches for writeable space of any type, mounts it using FUSE or something similar, and then RAIDs it all together, potentially creating vast amounts of storage seemingly out of the ether.
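As a rough illustration of the first step, the scan for writable space could start with locally mounted paths. The candidate list and helper below are hypothetical; a real scanner would also probe network shares.

```python
import os
import shutil

def find_writable_space(candidates):
    """Return (path, free_bytes) for each candidate directory we can write to."""
    found = []
    for path in candidates:
        # Only keep directories that exist and that we have write access to.
        if os.path.isdir(path) and os.access(path, os.W_OK):
            free = shutil.disk_usage(path).free
            found.append((path, free))
    return found

# Example: scan a few common world-writable locations (hypothetical list).
for path, free in find_writable_space(["/tmp", "/var/tmp", "/mnt/shared"]):
    print(f"{path}: {free // (1024 * 1024)} MiB free")
```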
Notes
- JBOD with some sort of external parity would be ideal, but RAID 5+0 would work as well.
- Parity is imperative: since you don't own the space, you never know when a part might disappear and need to be replaced.
- It will definitely need to group found space based on size for optimal use in different RAID arrays.
- Might need to keep a list of backup space for quick and easy replacement.
- I have no idea how long it would take to calculate parity across all of that, but that would need to be determined to weigh feasibility.
- Could sniff network traffic to find available space rather than actively scanning for it.
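The size-grouping note above could be sketched like this. The bucket width and helper name are made up for illustration; the idea is just that a RAID stripe is limited by its smallest member, so similar sizes should go together.

```python
from collections import defaultdict

def group_by_size(spaces, bucket_mib=64):
    """Group (path, free_bytes) entries into buckets of similar size.

    Devices in a RAID stripe are limited by the smallest member,
    so grouping similar sizes together wastes the least space.
    Bucket width (64 MiB) is an arbitrary illustrative choice.
    """
    buckets = defaultdict(list)
    for path, free in spaces:
        # Bucket index: how many whole bucket_mib units fit in the free space.
        buckets[free // (bucket_mib * 1024 * 1024)].append((path, free))
    return dict(buckets)

groups = group_by_size([
    ("/tmp/a", 70 * 1024**2),      # lands in the same bucket as /mnt/x
    ("/mnt/x", 65 * 1024**2),
    ("/var/tmp/b", 300 * 1024**2), # much larger, gets its own bucket
])
```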
Pros
- Potentially limitless storage that you didn't even know existed at your disposal.
- It's free!
- Plausible deniability for questionable content.
  - "It's not my hardware sir, I don't know what it was doing mounted..."
  - "No sir, I don't know where it's located..."
- Christmas every time you df -k!
  - You never know how much it'll find from day to day.
Cons
- While there is parity, it could take a long time to rebuild.
- May or may not support multiple volume failures.
- Might have to be limited to unimportant/temporary data.
- Potentially not very mobile.
Brimstone's Notes
Tests
Create a sizable null file on disk (note: dd's count is in 512-byte blocks by default, so set bs=1 to count in bytes)
$ dd if=/dev/zero of=/path/to/file bs=1 count=(bytes)
Set up a file as a loopback device; -f picks the first free /dev/loop* entry automatically
# losetup -f /path/to/file
Create Raid
# mdadm --create /dev/md0 -a yes --level=raid0 --raid-devices=2 /dev/loop0 /dev/loop1
Append a device to the RAID (on parity levels --add attaches it as a spare; growing a raid0 needs mdadm --grow)
# mdadm --add /dev/md0 /dev/loop2
While this sounds practical in theory, Linux RAID support just isn't ready for an array that loses devices up and down at random. The best way to do this is going to be a FUSE app that runs something like RAID 6+1 across all the little devices on the victim filesystems.
FUSE Notes
- The files will have some sort of unique name so we can tell what's part of guerillafs and what's not.
- 4K "block" size; however, more than one block can, and probably will, be stored in a single file.
  - First byte will be the version of the block, in case it ever changes.
  - Next X bytes will be a unique hash identifying where the data the block carries sits in the total filesystem.
  - Next X bytes will be the checksum for the remainder of the block.
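The block layout above could be serialized as follows. The notes leave both "X bytes" widths open, so this sketch assumes a 16-byte MD5 placement hash and a 4-byte CRC32 checksum purely for illustration:

```python
import hashlib
import struct
import zlib

BLOCK_SIZE = 4096                 # 4K payload, per the notes
VERSION = 1                       # first byte: block format version
HEADER = struct.Struct(">B16sI")  # version, placement hash, CRC32 checksum

def pack_block(fs_offset, payload):
    """Serialize one guerillafs block: version, placement hash, checksum, data.

    Hash width (MD5, 16 bytes) and checksum (CRC32, 4 bytes) are
    illustrative choices; the notes leave both widths open.
    """
    payload = payload.ljust(BLOCK_SIZE, b"\0")
    # Hash identifies where this block's data sits in the total filesystem.
    placement = hashlib.md5(str(fs_offset).encode()).digest()
    return HEADER.pack(VERSION, placement, zlib.crc32(payload)) + payload

def unpack_block(raw):
    """Validate and split a serialized block; raises on checksum mismatch."""
    version, placement, crc = HEADER.unpack(raw[:HEADER.size])
    payload = raw[HEADER.size:]
    if zlib.crc32(payload) != crc:
        raise ValueError("block checksum mismatch")
    return version, placement, payload
```

A FUSE read path would locate a block by its placement hash, verify the checksum, and fall back to parity reconstruction on mismatch.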