• 0 Posts
  • 26 Comments
Joined 2 months ago
Cake day: March 17th, 2025


  • Also, most people who have only used Windows bought their computers with Windows pre-installed, with a manufacturer-built Windows image that already has all of their drivers installed and configured. So it’s not just that they’ve never used Linux before; they’ve often never installed any operating system from scratch on any computer or had to deal with the setup process.

    Not too long ago I was messaging with someone who kept complaining that Linux was taking HoUrS to get drivers configured and how it clearly wasn’t for them because Windows “just works”. Meanwhile, I’m sitting there thinking about how the last time I installed a Linux distro on a machine, it took a few minutes to install the proprietary Nvidia drivers and I was done. The last time I installed Windows on a machine, it took ~4 hours to get all of the drivers loaded properly, including blacklisting the f*****g Windows Update utility so it would stop trying to replace my network driver with a broken version that kept taking down the machine’s network connection, plus the insanity of update, reboot, update, reboot, over and over again for half a day until all the updates were finally installed and running.
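    For reference, the “few minutes” Nvidia step is usually a single package install. A rough sketch, assuming an Ubuntu-family distro (other distros have their own equivalent tooling):

    ```sh
    # Detect and install the recommended proprietary Nvidia driver, then reboot
    # to load it (hypothetical example; package names differ on other distros).
    sudo ubuntu-drivers autoinstall
    sudo reboot
    ```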



  • Fun fact: Edge still has this stupid behavior even on Linux, so highlight-and-middle-click paste doesn’t work properly, since as soon as you highlight text it pops up that stupid menu. You have to go into the menu and disable it before highlighting works correctly again.

    Signed - someone who is fortunate enough to be able to use Linux on my work machine (yay!) but is still forced to use Edge on it (boo!)



  • suicidaleggroll@lemm.ee to Technology@lemmy.world · *Permanently Deleted* · 21 days ago

    I agree option 1 is the correct choice, though it does appear they are slowly going that direction…

    Really? Because every new Windows version is even worse than the one before it. There are now 3? 4? different places to change network settings, but only one of them actually works correctly. If you modify the wrong one, it will act like it worked but silently break all networking on the machine instead.



  • suicidaleggroll@lemm.ee to Selfhosted@lemmy.world · Version Dashboard · 22 days ago

    Just FYI - you’re going to spend far, FAR more time and effort reading release notes and manually upgrading containers than you will by letting them run :latest, auto-updating, and fixing the occasional thing when it breaks. Like, it’s not even remotely close.

    Pinning major versions makes sense for certain containers that need specific versions, for containers that regularly have breaking changes requiring manual upgrade steps, or for absolute mission-critical services that can’t handle a little downtime from a failed update a couple of times a decade. For everything else, it’s a waste of time.
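    To make the contrast concrete, here’s a minimal sketch of that kind of setup using plain docker run commands and Watchtower for the auto-updates; the image names, paths, and schedule are placeholder assumptions, not a specific recommendation:

    ```sh
    # Most services: track :latest and let Watchtower pull updates automatically.
    docker run -d --name freshrss --restart unless-stopped freshrss/freshrss:latest

    # Watchtower checks for new images once a day and recreates containers on them.
    docker run -d --name watchtower --restart unless-stopped \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --cleanup --interval 86400

    # Exceptions: pin a major version for services with frequent breaking changes
    # (databases are the classic case) and upgrade those deliberately.
    docker run -d --name postgres --restart unless-stopped \
      -v /srv/postgres:/var/lib/postgresql/data \
      -e POSTGRES_PASSWORD=changeme \
      postgres:16
    ```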




  • suicidaleggroll@lemm.ee to Linux@lemmy.ml · Firefox Finally Did It (Tab Groups) · 25 days ago

    I’ve never understood this. You guys know you can have multiple Firefox windows, right? What’s the point of tab groups when you can just group related tabs in a different window? Between multiple workspaces, multiple monitors, and multiple browser windows, I never feel the need to have more than 5-10 tabs open on any one of them at a time. More than that and I’m clearly doing something wrong and need to clean up anyway.




  • They likely streamed from some other Plex server in the past, and that’s why they’re getting the email. The email specifically states that if the server owner has a Plex Pass, you don’t need one.

    I got the email earlier today and it couldn’t be clearer:

    As a server owner, if you elect to upgrade to a Plex Pass, anyone with access to your server can continue streaming your server content remotely as part of your subscription benefits.


  • I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers each one runs). Each VM is given about double the RAM needed for everything it runs, and enough cores that it never (or very, very rarely) maxes them out. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself (cAdvisor + node_exporter + Prometheus + Grafana + Alertmanager). If I find that a VM is creeping up on its load or memory limits, I’ll investigate which container is driving the usage and then either bump the VM’s allocation up or address the service itself and modify its settings to bring the usage back down.
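    As a concrete illustration of the “investigate which container is driving the usage” step, Docker’s own stats output is the quickest check; the container name and limit values below are hypothetical:

    ```sh
    # One-shot snapshot of per-container CPU and memory usage inside the VM.
    docker stats --no-stream

    # If one container really needed a cap, limits can be applied in place
    # (values are made up; as noted below, this usually isn't necessary).
    docker update --memory 2g --memory-swap 2g --cpus 2 some-container
    ```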

    Theoretically I could implement per-container resource limits, but I’ve never found the need. I have heard some people complain about certain containers leaking memory and creeping up over time, but I have an automated backup script which stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers stays running for longer than 24 continuous hours anyway.
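    A rough sketch of what that nightly stop/rsync/start cycle might look like; the paths, backup host, and hard-link retention scheme are assumptions, not the actual script:

    ```sh
    #!/bin/sh
    # Sketch of a nightly "stop everything, rsync the volumes, start everything" backup.
    set -e

    VOLUMES=/srv/docker                           # parent dir of the mapped volumes (assumed)
    DEST=backup-host:/backups/docker/$(date +%F)  # dated dir on the backup box (assumed)

    RUNNING=$(docker ps -q)      # remember which containers were running
    docker stop $RUNNING         # quiesce them so the copy is consistent
    # --link-dest assumes the backup host keeps a "latest" symlink pointing at the
    # previous night, so unchanged files are hard-linked instead of re-copied.
    rsync -aH --delete --link-dest=../latest "$VOLUMES"/ "$DEST"/
    docker start $RUNNING        # bring everything back up
    ```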


  • People always say to let the system manage memory and don’t interfere with it as it’ll always make the best decisions, but personally, on my systems, whenever it starts to move significant data into swap the system starts getting laggy, jittery, and slow to respond. Every time I try to use a system that’s been sitting idle for a bit and it feels sluggish, I go check the stats and find that, sure enough, it’s decided to move some of its memory into swap, and responsiveness doesn’t pick up until I manually empty the swap so it’s operating fully out of RAM again.

    So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I will inevitably find the system is running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again. Other people will probably recommend differently, but that’s been my experience after ~25 years of using Linux daily.
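    For anyone wanting to replicate that, a minimal sketch of the commands involved (run as root); the sysctl.d filename is just a placeholder:

    ```sh
    # Tell the kernel to avoid swapping unless it has no other choice.
    sysctl vm.swappiness=0
    echo 'vm.swappiness = 0' > /etc/sysctl.d/99-swappiness.conf  # persist across reboots

    # Push anything already sitting in swap back into RAM (needs enough free RAM).
    swapoff -a && swapon -a
    ```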



  • Market self regulation assumes informed consumers that are smart enough to know what things mean

    Not just smart enough, but informed enough. That would mean every person spending hundreds or thousands of hours researching every single aspect of every purchase they make: investigating supply chains, performing chemical analysis on their food and clothing, etc. It’s not even remotely realistic.

    So instead, we outsource and consolidate that research and testing by paying taxes to a central authority that verifies manufacturers keep things safe, so we don’t have to worry about accidentally buying Cheerios laced with lead. AKA: the government and regulations.



  • I self-host Bitwarden, hidden behind my firewall and only accessible through a VPN. It’s perfect for me. If you’re going to expose your password manager to the internet, you might as well just use the official cloud version IMO since they’ll likely be better at monitoring logs than you will. But if you hide it behind a VPN, self-hosting can add an additional layer of security that you don’t get with the official cloud-hosted version.

    Downtime isn’t an issue as clients will just cache the database. Unless your server goes down for days at a time you’ll never even notice, and even then it’ll only be an issue if you try to create or modify an entry while the server is down. Just make sure you make and maintain good backups.

    Every night I stop and rsync all containers (including Bitwarden) to a daily incremental backup server, as well as making nightly snapshots of the VM it lives in. I also periodically make encrypted exports of my Bitwarden vault which are synced to all devices - those are useful because they can be natively imported into KeePassXC, allowing you to access your password vault from any machine even if your entire infrastructure goes down. Note that even if you go with the cloud-hosted version, you should still be making these encrypted exports to protect against vault corruption, deletion, etc.
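    As a rough sketch, those periodic encrypted exports can be scripted with the official Bitwarden CLI; the output path is a placeholder and the exact prompts vary by CLI version:

    ```sh
    # Unlock the vault and grab a session token (prompts for the master password).
    export BW_SESSION=$(bw unlock --raw)

    # Write an encrypted JSON export of the vault (the CLI may prompt again to
    # confirm the master password before exporting).
    bw export --format encrypted_json \
      --output /backups/bitwarden/vault-$(date +%F).json

    bw lock
    ```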