  • The bug was fixed, but it still adds itself as co-author by default if you so much as use code completion powered by Copilot.

    Combined with the fact that this doesn’t show up in your commit message dialog, and that it’s nothing but blatant advertising, this is just unacceptable.

    I don’t necessarily mind crediting Copilot if it did a substantial amount of the work, but it also seems redundant nowadays, when AI has become as ubiquitous as using an IDE. Having used it for code completion just doesn’t seem to warrant co-author credit. In other words, if I had been able to edit that part of the commit message, I’d probably be a lot less annoyed by this.

    As it is, it’s just blatant overreach by Microsoft. Microsoft doing Microsoft things. Nothing has changed since the 90s.

  • I don’t see anyone addressing the question from the post: whether it is a problem that Docker Desktop on Linux runs in a separate VM.

    The page says:

    Docker Desktop on Linux runs a Virtual Machine (VM) which creates and uses a custom docker context, desktop-linux, on startup. This means images and containers deployed on the Linux Docker Engine (before installation) are not available in Docker Desktop for Linux.

    To expand on what that means: if you install Docker the usual way on Linux (the engine and CLI), it runs as a daemon process (as root). That process isolates the container processes from the rest of the system using Linux kernel features (namespaces, cgroups, etc.), but you’re really just running processes on your host kernel that have limited access to the file system, network, and so on.
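
    A quick way to see that (a hedged illustration; the container name demo-sleep is made up): with the native engine, a container’s process shows up directly in the host’s process list, whereas under Docker Desktop it lives inside the VM and won’t.

    docker run --rm -d --name demo-sleep alpine sleep 300
    ps aux | grep 'sleep 300'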

    When you run it in a separate VM, which is how Docker Desktop also runs on Windows and macOS, you are running it in a separate Linux instance (the VM) that cannot communicate with the outside by default. So if you’re running Docker on the host computer and inside a VM, those are separate Docker installs and can’t talk to each other. That is what the warning is about.

    You can absolutely expose the VM to the outside, the same as if you ran it on Windows. Docker will let you expose those ports and it handles the messy bits of the networking for you. You just have to tell Docker when you run the container (on the command line or in a docker compose file) which ports to expose. By default, nothing is exposed. To do that you can use the -p option. For example:

    docker run --rm -it -p 8080:80 httpd

    This will run an instance of Apache httpd and expose it on port 8080. The container itself listens on port 80, but on the outside it’s 8080. If you then hit http://localhost:8080/ you should see “It works!”.
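
    The docker compose equivalent of that mapping would look something like this (a minimal sketch; the file layout and the service name web are my own):

    # docker-compose.yml
    services:
      web:
        image: httpd
        ports:
          - "8080:80"

    Then docker compose up starts the same container with the same 8080:80 mapping.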

    A note on Docker networking: from within the container, localhost refers to the container itself, not the host. So if you try e.g. curl http://localhost:8080/ inside the container, the connection will be refused.
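
    If you do need to reach the host from inside a container, Docker Desktop provides the special hostname host.docker.internal; on the plain Linux engine you can opt in to the same name with a flag. A hedged example (curlimages/curl is just a convenient image that runs curl as its entrypoint):

    docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl http://host.docker.internal:8080/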

    Docker Desktop is often frowned upon because you have to pay to use it in a commercial setting (there was some backlash when it stopped being free), it’s quite expensive, and enterprise licenses come with a minimum seat count (I know because we bought one at work). So, I suggest exploring free alternatives like Podman Desktop. However, note that they do not always have feature parity with Docker Desktop.

    I like Docker Desktop because it gives me a nice dashboard to see all my containers, resource usage, etc. I would not have requested it for work, though, if it weren’t for my IDE (Visual Studio) requiring it at the time (they have added Podman support since).

    Final note: I recommend just diving into Docker from the command line and learning that. Docker complicates networking a little because it adds more layers, but understanding Docker is very useful if you’re into self-hosting or software development.

  • They have many ways to do this if they want:

    • Go after US entities, like the Linux Foundation. They only need to go after enough of them to make the rest fall in line
    • Make it illegal to distribute non-compliant systems, including downloads
    • Make it illegal to operate or even possess a non-compliant system

    Even if they don’t go after individuals, they can do a lot of damage by restricting trade and business use of something. The mere threat of legal action is enough to make business owners nope out.

    It’s already risky to draw attention to yourself by using privacy-focused phones when traveling. It’s the ultimate “if you have nothing to hide, why are you worried” situation.

    They’re forcing legitimate users to either give up or go underground and risk being seen as criminals.


  • I’m not a cyber security expert, but I think about it this way:

    First, consider your threat model. What could possibly go wrong? What do I do if the worst thing happens? What information do I need to protect? If everything is already public (like blog posts), maybe there isn’t much of a threat of information loss. If you keep your tax documents on there, maybe rethink that.

    Second: think defense in depth. None of these measures will make you totally safe, but every barrier is another thing that can make a hacker’s life more difficult. You move the ssh port and it’s not as easily found by someone who’s just literally scanning the entire Internet for open ssh ports. It’s trivial to find, sure, but at least you dodged one bullet.
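
    Moving the port is a one-line change in sshd_config (2222 is just an arbitrary example; pick your own and restart the ssh service afterwards):

    # /etc/ssh/sshd_config
    Port 2222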

    OK, they found your ssh port. Now they’re gonna start scanning for common username/password combinations. Fail2ban will stop this by blocking access after a few failures. If your credentials have leaked somewhere, the hackers may have a good guess at them, though. But you’re OK because you’re using a key pair, not your usual password (please don’t have a “usual password”).
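
    A minimal sketch of that combination (the values are illustrative; exact defaults vary by distro):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    bantime  = 1h

    # and in /etc/ssh/sshd_config, to enforce keys only:
    PasswordAuthentication no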

    Bad luck: they guessed your password. Or maybe they exploited a bug in your web server software (must have been a zero-day because you kept things up to date). Their exploit needs to open a server port for them to talk to, though. You blocked it on your firewall so that didn’t work. They try a reverse shell, but you blocked outgoing connections, too. Well done.
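
    With ufw, for example, that kind of lockdown might look like this (a sketch; the specific ports are assumptions, adapt them to your services):

    sudo ufw default deny incoming
    sudo ufw default deny outgoing
    sudo ufw allow in 2222/tcp   # ssh on the moved port
    sudo ufw allow in 8080/tcp   # your web app
    sudo ufw allow out 53        # DNS
    sudo ufw allow out 443/tcp   # package updates etc.
    sudo ufw enable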

    And on it goes.

    If they keep trying, they will eventually succeed, but they have to try a lot harder when you lock things down, and the longer they are at it, the more opportunity you have to notice.


  • It really depends on what you want out of your computer, how much you like to tinker, and how comfortable you are getting your hands dirty. I got back onto a daily-driver Linux desktop a little under two years ago, but I’ve been running Linux on servers since, um… the mid-90s? I’ve had Linux desktops mostly on secondary computers, but didn’t go back fully until more recently.

    I don’t run Arch, but I feel like that community is probably closest to the feeling Linux had back in the day, when we recompiled the kernel with just the specific drivers we needed to save memory. I knew every process running and every program I installed, and I compiled most of my own programs from source. Or maybe Gentoo is the current version of that. If that’s your jam, go that route.

    For a while in the early aughts I ran a ton of servers with RedHat and developed an aversion to rpm and its mess of dependencies. Debian felt so much more stable, and I’ve been picking Debian for servers ever since. If you want boring and stable, you can’t go wrong with Debian. I have many times just set up Debian with automatic updates and reboots, and those things just keep going for years. I can’t remember a Debian update ever breaking my system, which I definitely can’t say for every OS.

    Then I started wanting to game on Linux. The flip side of boring and stable is outdated. So when I planned my new Linux desktop build, I went distro shopping a bit. I tried out a few live distros at first. I knew I wanted up-to-date drivers (for new hardware) but not a lot of tinkering, because I’ve gotten a lot older and less patient at this point.

    I ended up on Fedora this time. My choice was driven by the balance of being up to date enough for my (simple) gaming needs, yet mainstream enough (read: boring) that if anything broke, there would be forums available and I could get back to just enjoying my computer. I prefer KDE Plasma over Gnome, so that’s what I ended up with.

    I’m happy with it and not planning to change. But I do get that sinking feeling of not really knowing what my computer is doing, because, just like on Windows, there are a hundred processes running in the background and I don’t know what half of them do. It’s just that at this point I’m not curious enough anymore to go digging into the man pages and the wikis and peruse the source code to find out. I just want it to work and let me get to my doom scrolling.

    So for mainstream and boring, I recommend Debian or Fedora, maybe one of the Arch derivatives like CachyOS. If you want to customize and tinker, probably plain Arch or one of the smaller distros that are well documented and less opinionated. I didn’t mention Mint, because I think it’s a bit too simplified for someone with some Linux experience. I would install it for my parents, though.