• 0 Posts
  • 15 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible, or they aren’t. How are you getting at the hidden cells with blkdiscard?

    The idea is that blkdiscard will tell the SSD’s own controller to zero out everything. The controller can actually access all blocks regardless of what it exposes to your OS. But will it do it? Who knows?

    I feel that, unless you know the SSD supports secure trim, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and a plain TRIM adds no assurances about wiping those hidden cells.

    After reading all of this I would just do both… Each method fails in different ways so their sum might be better than either in isolation.

    But the actual solution is to always encrypt all of your storage. Then you don’t have to worry about this mess.
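
    If you do combine the two methods, the device-side half is just blkdiscard (ideally with --secure, falling back to -z). The host-side "overwrite and verify" half can be sketched with plain coreutils; the demo below runs against a throwaway file-backed image so it's safe to execute, and /dev/sdX in the comments is a placeholder, not a real target:

```shell
# Safe, self-contained demo of the overwrite half on a file-backed image.
# A real wipe would target the raw device node (e.g. /dev/sdX, a
# placeholder here) and would also run blkdiscard against it.
IMG=$(mktemp)
dd if=/dev/urandom of="$IMG" bs=1M count=4 status=none   # fake "used" drive

# Overwrite every addressable byte with zeros, as dd if=/dev/zero does
# against the visible space of a real device.
dd if=/dev/zero of="$IMG" bs=1M count=4 conv=notrunc status=none

# Verify: deleting every NUL byte should leave nothing behind.
RESULT=$(tr -d '\000' < "$IMG")
if [ -z "$RESULT" ]; then echo wiped; fi
rm -f "$IMG"
```

    On an actual drive you would point dd and blkdiscard at the device node itself, which is destructive and needs root — and, per the discussion above, still says nothing about the over-provisioned cells the controller hides from you.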


  • I don’t see how attempting to overwrite would help. The additional blocks are not addressable on the OS side. dd will exit once it reaches the end of the visible device space, but those blocks will remain untouched internally.

    The Arch wiki says blkdiscard -z is equivalent to running dd if=/dev/zero.

    Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked “in most cases”, but the results are unreliable. On one drive they found 1GB of data to have survived 20 passes.



  • From the article:

    Those joining from unsupported platforms will be automatically placed in audio-only mode to protect shared content.

    and

    “This feature will be available on Teams desktop applications (both Windows and Mac) and Teams mobile applications (both iOS and Android).”

    So this is actually worse than just blocking screen capture. This will break video calls for some setups for no reason at all, since all it takes to defeat the feature is a phone camera - one of the most common objects in the world.


  • The only thing I’ve been claiming is that AI training is not copyright violation

    What’s the point? Are you talking specifically about some model that was trained and then put on the shelf to never be used again? Cause that’s not what people are talking about when they say that AI has a copyright issue. I’m not sure if you missed the point or this is a failed “well, actually” attempt.



  • You don’t have to trust Drew, though. Vaxry is pretty clear on his stance on the subject.

    if I run a discord server around cultivating tomatoes, I should not exclude people based on their political beliefs, unless they use my discord server to spread those views.

    which means even if they are literally adolf hitler, I shouldn’t care, as long as they don’t post about gassing people on my server

    that is inclusivity

    Source: https://blog.vaxry.net/articles/2023-inclusiveActivists

    Note how this article is not where he first stated the above. This article is where he doubles down on that statement in the face of criticism. In the rest of the article he presents nazism as just another opinion people might hold that you disagree with. He argues that his silent acceptance of nazis is the morally correct stance, while inclusive communities are the ones that are actually toxic.

    This means that it’s not just Drew or the FDO who are arguing that Vaxry’s complete lack of political stance is creating safe spaces for fascists. It’s Vaxry himself that explicitly states this is happening and that it’s intentional on his part.


  • C is pretty much the standard for FFI, you can use C libraries with Rust and Redox even has their own C standard library implementation.

    Right, but I’m talking specifically about a kernel which supports building parts of it in C. Rust as a language supports this, but you also have to set up all your processes (building, testing, doc generation) to work with a mixed code base. To be clear, I don’t imagine that this part is that hard. When I called this a “more ambitious” approach, I was mostly referring to the effort of maintaining forks of Linux drivers and API compatibility.

    Linux does not have a stable kernel API as far as I know, only userspace API & ABI compatibility is guaranteed.

    Ugh, I forgot about that. I wonder how much effort it would be to keep up with the Linux API changes. I guess it depends on how many Linux drivers you would use, since you don’t need 100% API compatibility. You only need whatever is used by the drivers you care about.




  • Learning what a character looks like is not a copyright violation

    And nobody claimed it was. But you’re claiming that this knowledge cannot possibly be used to make a work that infringes on the original. This analogy about whether brains are copyright violations makes no sense and is not equivalent to your initial claim.

    Just find the case law where AI training has been ruled a copyright violation.

    But that’s not what I claimed is happening. It’s also not the opposite of what you claimed. You claimed that AI training is not even in the domain of copyright, which is different from something that is possibly in that domain, but is ruled to not be infringing. Also, this all started by you responding to another user saying the copyright situation “should be fixed”. As in they (and I) don’t agree that the current situation is fair. A current court ruling cannot prove that things should change. That makes no sense.

    Honestly, none of your responses have actually supported your initial position. You’re constantly moving to something else that sounds vaguely similar but is neither equivalent to what you said nor a direct response to my objections.


  • The NYT was just one example. The Mario examples didn’t require any such techniques. Not that it matters. Whether it’s easy or hard to reproduce such an example, it is definitive proof that the information can in fact be encoded in some way inside of the model, contradicting your claim that it is not.

    If it was actually storing the images it was being trained on then it would be compressing them to under 1 byte of data.

    Storing a copy of the entire dataset is not a prerequisite to reproducing copyright-protected elements of someone’s work. Mario’s likeness itself is a protected work of art even if you don’t exactly reproduce any (let alone every) image that contained him in the training data. The possibility of fitting the entirety of the dataset inside a model is completely irrelevant to the discussion.

    This is simply incorrect.

    Yet evidence supports it, while you have presented none to support your claims.


  • When an AI trains on data it isn’t copying the data, the model doesn’t “contain” the training data in any meaningful sense.

    And what’s your evidence for this claim? It seems to be false given the times people have tricked LLMs into spitting out verbatim or near-verbatim copies of training data. See this article as one of many examples out there.

    People who insist that AI training is violating copyright are advocating for ideas and styles to be covered by copyright.

    Again, what’s the evidence for this? Why do you think that of all the observable patterns, the AI will specifically copy “ideas” and “styles” but never copyrighted works of art? The examples from the above article contradict this as well. AIs don’t seem to be able to distinguish between abstract ideas like “plumbers fix pipes” and specific copyright-protected works of art. They’ll happily reproduce either one.