Guerilla filesystem

From Makers Local 256


A script or app that automatically searches for writable space of any type, mounts each piece using FUSE or something similar, and then RAIDs them all together, potentially creating vast amounts of storage seemingly out of the ether.


Planning, with, as you can see, a little work already done by Brimstone.


  • JBOD with some sort of external parity would be ideal, but RAID 5+0 would work as well.
    • Parity is imperative: since you don't own the space, you never know when a part might disappear and need to be replaced.
  • It will definitely need to group found space based on size for optimal use in different RAID arrays.
    • Might need to keep a list of backup space for quick and easy replacement.
  • I have no idea how long calculating parity across all of that would take; that would need to be determined to weigh feasibility.
  • Could sniff network traffic to find available space rather than actively scanning for it.
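As a first pass at the discovery step, the same idea can be sketched locally: walk the mount table and keep anything the current user can write to. The filename below is illustrative; a real version would probe network shares instead.

```shell
# List mounted filesystems the current user can write to -- a local
# stand-in for the network discovery described above.
awk '{print $2}' /proc/mounts | sort -u |
while IFS= read -r mp; do
  # keep only mount points that exist and are writable by us
  [ -d "$mp" ] && [ -w "$mp" ] && printf '%s\n' "$mp"
done > writable_mounts.txt
cat writable_mounts.txt
```

Each surviving mount point would then be checked for free space (df -k) and grouped by size, per the note above.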


Pros

  • Potentially limitless storage that you didn't even know existed at your disposal.
  • It's free!
  • Plausible deniability for questionable content.
    • "It's not my hardware sir, I don't know what it was doing mounted..."
      • "No sir, I don't know where it's located..."
  • Christmas every time you df -k!
    • You never know how much it'll find from day to day.


Cons

  • While there is parity, it could take a long time to rebuild.
    • May or may not support multiple volume failures.
    • Might have to be limited to unimportant/temporary data.
  • Potentially not very mobile.

Brimstone's Notes


Create a sizable null file on disk (note that dd's count is in blocks, not bytes; pick a block size that makes the math easy)

$ dd if=/dev/zero of=/path/to/file bs=1M count=<size_in_megabytes>

Set up the file as a loopback device without caring which /dev/loop* entry it lands on (-f picks the first free device; --show prints which one it chose)

# losetup -f --show /path/to/file

Create Raid

# mdadm --create /dev/md0 -a yes --level=raid0 --raid-devices=2 /dev/loop0 /dev/loop1

Append to Raid (on a redundant level such as RAID 5 the new device joins as a spare until the array is grown to use it; a plain RAID 0 array can't be extended with --add alone)

# mdadm --add /dev/md0 /dev/loop2
# mdadm --grow /dev/md0 --raid-devices=3

While this sounds practical in theory, Linux RAID support just isn't ready for an array that loses devices up and down at random. The best way to do this is going to be a FUSE app that runs something like RAID 6+1 across all the little devices on the victim filesystems.

FUSE Notes

  • The files will have some sort of unique name so we can tell what's part of guerillafs and what's not.
  • 4K "block" size; however, more than one block can, and probably will, be stored in a file, which is doable.
    • First byte will be the version of the block, in case the format ever changes.
    • Next X bytes will be locking information
      • Identifier of who is locking that segment
      • When it was locked, so we can determine when to unlock it
    • Next X bytes will be a unique hash identifying where the data the block carries sits in the total filesystem
    • Next X bytes will be the checksum of the remainder of the block.
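The block layout above can be sketched with ordinary shell tools. Every field width here (16-byte lock owner, 10-digit timestamp, SHA-256 for both the placement hash and the checksum) is an assumption for illustration, not something the notes have settled on:

```shell
# Pack one hypothetical 4K guerillafs block: version, lock info,
# placement hash, checksum, then the payload, zero-padded to 4096 bytes.
VERSION=1
LOCK_OWNER="host1234"                      # assumed fixed-width lock identifier
LOCK_TIME=$(date +%s)                      # lock timestamp, used for expiry
PLACE_HASH=$(printf 'gfs:/some/path:0' | sha256sum | cut -d' ' -f1)
PAYLOAD="hello, guerillafs"
CHECKSUM=$(printf '%s' "$PAYLOAD" | sha256sum | cut -d' ' -f1)
{
  printf '%02x'  "$VERSION"                # version byte, hex encoded
  printf '%-16s' "$LOCK_OWNER"             # fixed-width lock owner
  printf '%010d' "$LOCK_TIME"              # fixed-width lock time
  printf '%s'    "$PLACE_HASH"             # where the data sits in the total fs
  printf '%s'    "$CHECKSUM"               # checksum of the payload
  printf '%s'    "$PAYLOAD"
} | dd iflag=fullblock bs=4096 count=1 conv=sync of=block.bin 2>/dev/null
wc -c < block.bin
```

Reading a block back would reverse this: slice off the fixed-width header fields, verify the checksum, and look the placement hash up in the filesystem's index.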

Filesystem sources

  • SMB shares
  • NFS shares
  • Gmail accounts
  • FTP servers
  • HTTP upload directories
  • DNS records
    • 20K/server = ~80 records @ 256 bytes
    • ~30 day lifespan on server
  • TinyURL
    • and its ilk.
  • Alternate Data Streams (forks)
    • NTFS, HFS, HFS+, etc
      • not FAT based filesystems
    • Multiple alternate data streams per file are permitted, on NTFS at least.
      • There is no size limit that I'm aware of.
    • Files containing alternate data streams can be transported over SMB so long as the source and destination filesystems both support them.
    • I don't know if it's possible to write an alternate data stream over SMB.
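For the DNS source, a payload has to be chopped into record-sized pieces before it can be published. A minimal sketch, assuming base64 encoding and the 255-byte limit on a single TXT character string (the filenames are illustrative):

```shell
# Encode a payload and split it into 255-byte strings, one per TXT record.
printf 'some payload to smuggle into TXT records' > payload.bin
base64 payload.bin | tr -d '\n' | fold -w 255 > txt_chunks.txt
grep -c . txt_chunks.txt    # number of TXT records needed
```

Publishing the chunks (e.g. with nsupdate against a server that allows dynamic updates) and reassembling them on read are left out; decoding is just the reverse pipeline through base64 -d.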


  • Discovered a couple of papers regarding RAIF, which looks very promising. --strages 14:56, 31 January 2011 (CST)


OFFSystem --strages (talk) 15:09, 17 December 2012 (CST)