• 0 Posts
  • 34 Comments
Joined 4 years ago
Cake day: September 1st, 2021




  • everett@lemmy.ml to Memes@sopuli.xyz · Yes · 1 point · 4 days ago

    Hey, how’s it going? I managed to do about 40 disks this week (still a fraction of my hoard), though most of them had errors, and I’m not sure my method is the best way to image corrupted disks in a form that allows for future error correction.






  • everett@lemmy.ml to Memes@sopuli.xyz · Yes · 4 points · 9 days ago

    Great, thanks! Full disclosure: this is how long mine has been on my to-do list.

    Created 2014-01-03 22:43

    Modified 2020-06-24 11:45

    I’m finally setting myself a due-date: later this week. Check in with me, bud, I’ll check in with you!


  • everett@lemmy.ml to Memes@sopuli.xyz · Yes · 2 points · 9 days ago

    By your own admission, 10 MB of data could be a shit-ton of stuff that sounds important to you. Just get it done.

    And to not be a hypocrite, I’ll get going on my own similar project I’ve been putting off for years, haha. Do we have a deal?








  • I ain’t about to play headgames on what I have and haven’t salvaged already; I must keep track of what device stores what, what filename is what, and what dates are what.

    This is precisely the headache I’m trying to save you from: micromanaging what you store for the purpose of saving storage space. Store it all. Store every version of every file on the same filesystem, or throw it into the same backup system (one that supports block-level deduplication), and you won’t be wasting any space, and you get to keep your organized file structure.

    Ultimately, what we’re talking about is storing files, right? And your goal is to now keep files from these old systems in some kind of unified modern system, right? Okay, then. All disks store files as blocks, and with block-level dedup, a common block of data that appears in multiple files only gets stored once, and if you have more than one copy of the file, the difference between the versions (if there is any) gets stored as a diff. The stuff you said about filenames, modified dates and what ancient filesystem it was originally stored on… sorry, none of that is relevant.

    When you browse your new, consolidated collection, you’ll see all the original folders and files. If two copies of a file happen to contain all the same data, the incremental storage needed to store the second copy is ~0. If you have two copies of the same file, but one was stored by your friend and 10% of it got corrupted before they sent it back to you, storing that second copy only costs you ~10% in extra storage. If you have historical versions of a file that was modified in 1986, 1992, and 2005 and that lived on a different OS each time, what it costs to store each copy is just the difference.

    I must reiterate that block-level deduplication doesn’t care which files the common data resides in: if it’s on the same filesystem, it gets deduplicated. This means you can store all the files you have, keep them all in their original contexts (folder structure), without wasting space storing any common parts of any files more than once.
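    To make the idea concrete, here’s a toy sketch of how block-level dedup works conceptually. It assumes a hypothetical fixed 4 KiB block size and a simple hash-keyed block store; real systems (ZFS, Btrfs, borg, etc.) are far more sophisticated (variable-size chunking, on-disk formats, collision handling), so treat this as an illustration only:

    ```python
    import hashlib

    BLOCK_SIZE = 4096  # hypothetical fixed block size; real systems vary


    def store(path, block_store, index):
        """Toy block-level dedup: each unique block is kept exactly once;
        a file becomes an ordered list of references to those blocks."""
        refs = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                # setdefault is a no-op if this block's data is already stored,
                # no matter which file (or filesystem) it originally came from
                block_store.setdefault(digest, chunk)
                refs.append(digest)
        # the filename, path, and dates live in the index; the data lives once
        index[path] = refs
    ```

    Two identical files cost one set of blocks, and a partially corrupted copy only adds the blocks that actually differ, which is why keeping every version doesn’t multiply your storage cost.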