• 0 Posts
  • 105 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Depends on the vendor for the specifics. In general, they don’t protect against an attacker who has gained persistent privileged access to the machine, only against theft.
    Since the key either can’t leave the TPM or is useless without it, the attacker needs to remain undetected on the server for as long as they want to use it, which is difficult for anyone less sophisticated than an advanced persistent threat. (Some TPMs have a single internal key that can never leave the chip; when you ask for a new key, they generate one and return it encrypted with that internal key. You get the protection without the chip needing storage for every key.)
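    A minimal sketch of that wrap-and-return pattern, with a toy `TpmSim` class standing in for the chip (hypothetical, not any real TPM API): the internal key never crosses the object boundary, and callers only ever hold wrapped blobs that are useless without it.

    ```python
    # Toy illustration of a TPM-style non-exportable key (hypothetical, not
    # a real TPM interface). Requires the 'cryptography' package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class TpmSim:
        def __init__(self):
            # The one key that never leaves the "chip".
            self._internal_key = AESGCM.generate_key(bit_length=256)

        def create_wrapped_key(self) -> bytes:
            """Generate a fresh key, returning it ONLY encrypted (wrapped)."""
            new_key = AESGCM.generate_key(bit_length=256)
            nonce = os.urandom(12)
            wrapped = AESGCM(self._internal_key).encrypt(nonce, new_key, None)
            return nonce + wrapped  # safe to store on disk; host never sees new_key

        def use_wrapped_key(self, blob: bytes, message: bytes) -> bytes:
            """Unwrap inside the 'chip' and use the key without ever exporting it."""
            nonce, wrapped = blob[:12], blob[12:]
            key = AESGCM(self._internal_key).decrypt(nonce, wrapped, None)
            # Stand-in for signing: authenticate the message under the unwrapped key.
            out_nonce = os.urandom(12)
            return out_nonce + AESGCM(key).encrypt(out_nonce, message, None)

    tpm = TpmSim()
    blob = tpm.create_wrapped_key()   # an attacker who steals this blob gets nothing
    token = tpm.use_wrapped_key(blob, b"data to protect")
    ```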

    The Apple system, to its credit, does a degree of user and application validation before the keys can be used. Generally good for security, but it means that if you want to share a key between users, you probably won’t be using the secure enclave.

    Most of the trust checks end up being the TPM proving itself to the remote service that’s doing the checking. For example, when you use your phone’s biometrics to log into a website, part of that handshake is the TPM on the phone proving that it was made by a company to a spec the standards body has validated as secure in the way it claims.
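    At its core, that “proving itself” step is a challenge-response signature check. A simplified sketch (real protocols like WebAuthn verify a manufacturer certificate chain rather than a single pre-shared public key, which is assumed here):

    ```python
    # Simplified challenge-response attestation. Hypothetical setup: the
    # service already trusts the device's attestation public key via the vendor.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Device side: the private half lives in the secure element.
    device_key = Ed25519PrivateKey.generate()
    attestation_pubkey = device_key.public_key()

    # Service side: a fresh random challenge prevents replaying old proofs.
    challenge = os.urandom(32)

    # Device side: prove possession of the certified key by signing the challenge.
    signature = device_key.sign(challenge)

    # Service side: raises InvalidSignature if the proof is bogus.
    attestation_pubkey.verify(signature, challenge)
    print("device attested")
    ```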


  • Package signing is used to make sure you only get packages from sources you trust.
    Every Linux distro does it, and it’s why, if you add a new source for packages, you get asked to accept a key signature.
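    Mechanically it boils down to something like this (a minimal sketch using the `cryptography` package; real distros use GPG keys and signed repository metadata):

    ```python
    # The distro signs each package's digest with a private key; your package
    # manager ships the matching public key and rejects anything that doesn't
    # verify. Minimal sketch, not a real distro toolchain.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # held by the distro's build server
    public_key = signing_key.public_key()        # shipped to every user

    package = b"...package contents..."
    signature = signing_key.sign(hashlib.sha256(package).digest())

    # Client side: verify() raises InvalidSignature on a tampered or forged package.
    public_key.verify(signature, hashlib.sha256(package).digest())
    print("package signature ok")
    ```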

    For a long time, the keys used for signing were just files on disk, and you protected them by protecting the server they were on; but they could technically be stolen and used to sign malicious packages.

    Some advances in chip design and cost reductions later, we now have what is often called a “secure enclave”, “trusted platform module”, or, generically, a provider for a non-exportable key.
    It’s a little chip that holds or manages a cryptographic key such that getting the signing key off the chip is impossible (or exceptionally difficult). That makes it nearly impossible to steal the key without physically stealing the server, which is much easier to prevent (put it in a room with doors) and impossible to do without detection, making a forged package vastly less likely.

    There are services that provide the infrastructure needed to do this, but they cost money, and it takes time and money to build them into your system in a way that’s reliable and doesn’t lock you to a vendor if you ever need to switch for whatever reason.

    So I believe this is Valve picking up the bill to move Arch’s package infrastructure security up to the top tier.
    It was fine before, but that upgrade is expensive for a volunteer- and donation-based project and cheap for a high-profile company that might legitimately be worried about their use of Arch on physical hardware increasing the threat interest.






  • So, you’re correct that active emergencies take priority.

    That being said, in essentially every place that has 911, both numbers (911 and the non-emergency line) connect to the same place, and the only real difference is pick-up order and default response.
    It’s the emergency number not simply because it’s only for emergencies, but because it’s the one number, the same everywhere, that you need to know in the event of an emergency.

    It should be used in any situation that needs to be dealt with by someone now, where that someone isn’t you. Finding that a serious crime has occurred is an emergency, even if the perpetrator is gone and the situation is stable.
    A dead person, particularly a potential murder, generally needs to be handled quickly.

    It’s also usually better to err on the side of 911, just in case it is an emergency that really needs the fancy features 911 often gives, like location lookups.


  • In the sense that they have a manager? Sure. In the sense that there’s one individual dictating the design of the software? I’ve never even been on a team with that dynamic, to say nothing of the entire codebase.

    Modern software teams tend to eschew design by decree.

    What’s the dynamic that you’re thinking is typically what teams use?


  • I’m not sure I’d construe a manual you can find, or a variety of guides, as a negative. :) Most days my usage of git consists of “pull, commit, push, merge” in different orders. You might be overestimating how much effort goes into managing the tool.

    Most of my professional experience has been on projects involving anywhere from 5 to 40 teams of 4-6 developers each. I’m not entirely sure what you mean about git not mirroring the development patterns of most “real life” projects.
    “Real” projects are frequently developed by groups of people working on the same goal adjacent to other groups working on related but distinct goals.


  • We very clearly work in different professional environments. :)

    In no particular order: Administrating a git server is similarly trivial. A repository is a folder (easy to back up, easy to repair, easy to host), and setting up a new server is usually a matter of ssh key management. You don’t even need to install sqlite or anything beyond the git package. Or, because the tool has wide support, you can install a wide selection of tools that manage it for you, or use a free hosting service, or a paid one.
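    To illustrate how little that involves, a rough sketch (hypothetical paths; assumes git is installed and ssh access is already set up):

    ```python
    # A "git server" is just a bare repository in a folder reachable over ssh.
    import shutil
    import subprocess

    # Setting up the server: create the bare repository directory.
    subprocess.run(["git", "init", "--bare", "/srv/git/project.git"], check=True)

    # Backing it up: archive the folder; that archive IS the whole repository.
    shutil.make_archive("/backups/project-repo", "gztar", "/srv/git/project.git")

    # Developers clone over ssh once their key is in authorized_keys:
    #   git clone user@host:/srv/git/project.git
    ```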

    I’m startled that you would say you can’t think of anyone who would care. In my entire professional experience, developer stories about bad jobs have often included details about using old or esoteric VCS systems, usually met with “ew” or “wtf” comments. Sets the flavor of the story.
    Personally, in a business environment, I would take using anything except git for the org as a red flag. It’s a sign that someone in leadership at the company values doing things unrelated to the core mission “their way” above doing it the easy or “paved path” way.

    The standard tool is indeed not constant. Before git existed, using CVS would have been the better choice, as it remained for years afterwards until it had clearly been usurped. Most projects aren’t in the position Linux was in when it made the switch to git.

    You joke that no one really “knows” git, but… this is literally the first time I’ve ever seen a fossil command. I just searched for “fossil manual” and I get analog watches. It’s not even available in any of my systems’ package managers.
    Developer familiarity is a big advantage that I think you’re downplaying in comparison to “there are metadata files in .git”, which I don’t know has ever been relevant to me in any significant way.
    (Also, I thought the different systems all work basically the same? 😛)

    I’d handily agree people should be using the best tool for the job. Familiarity and ease of use are significant factors in what makes a tool better.
    Ability to integrate with other tools is also a major factor. Setting up continuous integration or code review tools with git is trivial with any number of different systems.

    What are any of the tools you’re using doing better than git? The biggest selling point you’ve shared for fossil is that it’s functionally similar to git, and that it has better merging. I can’t find anything related to merge conflicts outside of years-old forum posts, and barely anything relating to merges at all, so I’m not entirely certain what makes it “better”.

    If its biggest advantage is that it’s similar enough to git that you can pick it up fast, why wouldn’t I just use git?


  • Like I said, there are always factors.

    For a company starting from scratch, though, the user-base factor becomes vastly more significant.
    Using a tool that radically limits your integration capabilities is a poor choice, to say nothing of most likely needing to onboard every new employee to an entirely new VCS.

    In recent memory I don’t know that I’ve encountered anyone using svn who wasn’t interested in moving, so “developer experience” would be a reason to move.




  • File1, file2, file_3.new, etc. would be bizarrely stupid. A home-rolled solution involving rsync, tar, gzip, and cron jobs or inotify would also be bizarrely stupid.

    As a more serious answer: anything on https://en.wikipedia.org/wiki/List_of_version-control_software that’s marked as anything other than “active”. So DCVS, Visual SourceSafe, or BitKeeper. Anything that’s not getting bug fixes or maintenance.

    Anything that doesn’t have significant enough usage to give confidence that bugs or glitches are being caught by common usage would be risky, since you don’t want to be the person to find that edge case.

    There are things other than git that aren’t wrong, but I see little compelling reason not to use the most ubiquitous tool.


  • There’s a difference between “can’t code” and “can’t work”.

    A lot of people use git for version control: super good idea, basically anything else is at best unorthodox, at worst bizarrely stupid.
    A lot of people also use GitHub for repository hosting, continuous integration, code review, deployment, packaging, etc. This is more of an opinion thing than a standard-practice thing, and there are plenty of other ways to get the same tools, either all in one package or from a variety of different ones, self-hosted, in the cloud, or some hybrid in between.

    If GitHub goes down, you can make code changes and everything else to your heart’s content. But you might not be able to run your full integration testing pipeline, get a code review, or package your software.

    If your local build process pulls packages from GitHub or refreshes a remote repository automatically, an outage can also thoroughly mess that up, but that’s nothing to do with git. You can use “ctrl-c/v” backups and still have a build process that tips over when GitHub goes down.
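    For example (a hypothetical Python project with a made-up pin): a dependency resolved straight from GitHub means the build dies with GitHub, even though git itself and your local repository are fine.

    ```
    # requirements.txt: this install fails during a GitHub outage, regardless
    # of which VCS the project itself uses.
    requests @ git+https://github.com/psf/requests@v2.31.0
    ```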



  • Google Analytics is loaded by JavaScript. There are other things like Google Analytics that are also loaded by JavaScript.

    Updating a website can take time, and usually involves someone with at least a passing knowledge of development.

    Google Tag Manager is a service that lets you embed one JavaScript snippet in your page, which then handles loading the others. This lets marketing or analytics people add and manage such things without needing a full code deployment.
    It also lets you make choices about when and how different tracking events for different services are triggered.

    Its intended usage is garbage tracking metrics and advertising. Some sites are built more by marketing than by developers, and they’ll jam functional stuff in there, which causes breakage if you block it. These sites are usually garbage though, so nothing of value was lost.