
  • Right now when updates get applied to the NAS, if it gets powered off during the update window, that would be really bad and inconvenient, requiring manual intervention.

    You sure? I mean, sure, it’s possible; there are devices out there that can’t deal with power loss during an update. But others can: they’ll typically have space for two firmware versions, write the new version into the inactive slot, and only once the new version is fully committed to persistent storage, atomically activate it.

    Last device I worked on functioned that way.
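
    Roughly, the commit pattern looks like this. This is a toy Python sketch of the A/B-slot idea with made-up paths; a real updater writes to raw partitions and flips a bootloader flag instead of a marker file, but the ordering is the same:

    ```python
    import os

    # Hypothetical layout, for illustration only.
    SLOTS = {"A": "/firmware/slot_a.img", "B": "/firmware/slot_b.img"}
    ACTIVE_MARKER = "/firmware/active"  # contains "A" or "B"

    def active_slot() -> str:
        with open(ACTIVE_MARKER) as f:
            return f.read().strip()

    def apply_update(image: bytes) -> None:
        target = "B" if active_slot() == "A" else "A"

        # 1. Write the new image into the *inactive* slot.
        with open(SLOTS[target], "wb") as f:
            f.write(image)
            f.flush()
            os.fsync(f.fileno())  # image fully on persistent storage first

        # 2. Only then switch, atomically: os.replace() is a rename(), so
        #    a power cut leaves either the old marker or the new one,
        #    never a half-written one.
        tmp = ACTIVE_MARKER + ".tmp"
        with open(tmp, "w") as f:
            f.write(target)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, ACTIVE_MARKER)

        # 3. fsync the directory so the rename itself is durable.
        dirfd = os.open(os.path.dirname(ACTIVE_MARKER), os.O_DIRECTORY)
        try:
            os.fsync(dirfd)
        finally:
            os.close(dirfd)
    ```

    If power dies anywhere before step 2 completes, the device just boots the old firmware.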

    you might lose data in flight if you’re not careful.

    That’s the application’s responsibility if it relies on the data being persistent at some point: it needs to be written to deal with the fact that in-flight data may never make it to the disk, and if it intends to take other actions that depend on the data being durable, it needs to call fsync() (or whatever its OS provides) first.

    Normally, there will always be a period where some data being written out is partial: a write() can complete after merely handing the data off to the OS’s buffer cache. The local drive can report completion because the data’s in its cache. The app can perform multiple write() calls, and the first can complete without the second. With a NAS, the window might be a little longer than it otherwise would be, but something like a DBMS will do the fsync(); at any point, it’d be hypothetically possible for the OS to crash or for power loss or something to happen.
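
    To make the fsync() point concrete, here’s a minimal Python sketch (file name made up); everything before the fsync() call can be lost on power failure without the application ever knowing:

    ```python
    import os

    # write() returning does NOT mean the bytes are on the disk; they may
    # sit in the OS page cache, and then in the drive's own cache.
    with open("journal.dat", "ab") as f:
        f.write(b"record-1\n")  # handed to the OS; power loss can still eat it
        f.flush()               # userspace buffer -> kernel
        os.fsync(f.fileno())    # kernel (and drive) -> persistent storage
    # Only after fsync() returns is it reasonable to take actions that
    # depend on record-1 being durable, like acknowledging it to a client.
    ```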

    The real problem that I need a NAS for is not the loss of some data; it’s that when storms hit and there’s flooding, the power can go up and down and cycle quite rapidly. And that’s really bad for sensitive hardware like hard disks. So I want the NAS to shut off when the power starts getting bad, and not turn on for a really long time, but still turn on automatically when things stabilize.

    Like I said in the above comment, you’ll get that even without a clean shutdown; you’ll actually get a bit more time if you don’t do a clean shutdown.

    Because this device runs a bunch of VMs and containers

    Ah, okay, it’s not just a file server? Fair enough – then that brings case #2 back up again, which I didn’t expect to apply to the NAS itself.



  • I’m assuming that your goal here is automatic shutdown when the UPS battery gets low so you don’t actually have the NAS see unexpected power loss.

    This isn’t an answer to your question, but stepping back and getting a big-picture view: do you actually need a clean, automatic shutdown on your Synology server if the power goes out?

    I’d assume that the filesystems the thing is set up to use are power-loss safe.

    I’d also assume that there isn’t server-side state that needs to be cleanly flushed prior to power loss.

    Historically, UPSes providing a clean shutdown were important on personal computers for two reasons:

    • Some filesystems couldn’t deal with power loss and could end up corrupted. FAT, for example, or HFS on the Mac. That’s not much of an issue today, and I can’t imagine that a Synology NAS would be doing that unless you’re explicitly choosing to use an old filesystem.

    • Some applications maintain state and, when told to shut down, will dump it to disk. So maybe someone’s writing a document in Microsoft Word and hasn’t saved for a long time; a few minutes of battery provides them time to save it (or the application time to do an auto-save). Auto-save usually partially mitigates this anyway. I don’t have a Synology system, but AFAIK, they don’t run anything like that.

    Like, I’d think that the NAS could probably survive a power loss just fine, even with an unclean shutdown.

    If you have an attached desktop machine, maybe case #2 would apply, but I’d think that hooking the desktop up to the UPS and having it do a clean shutdown would address the issue – I mean, the NAS can’t force apps on computers using the NAS to dump state out to the NAS, so hooking the NAS up that way won’t solve case #2 for any attached computers.

    If all you want is more time before the NAS goes down uncleanly, you can just leave the USB and RS-232 connection out of the picture and let the UPS run until the battery is exhausted and then have the NAS go down uncleanly. Hell, that’d be preferable to an automated shutdown, as you’d get a bit more runtime before the thing goes down.


  • Your age, 30, is fine. Age is always used as an excuse, but it’s mostly not true.

    It’s fine for single-player shooters, which are less demanding, but speaking as someone who has packed on some decades, your reaction time definitely becomes a noticeable factor over the years for competitive multiplayer games. I definitely can’t play competitive twitch shooters nearly as well as when I was 18, which is about when your reaction time is at its best.

    That being said, there are shooters where twitch time is less-critical, and roles or play-styles that focus less on it.

    And I don’t see how someone couldn’t learn to play with a dual-stick or trackpad (or trackball, for that matter), which is what I think OP is talking about. I haven’t had any problems picking up new input methods…that just takes time. Took time to learn when I was 18, too.


  • I mean, a twin-stick gamepad or, to a lesser extent, a touchpad just isn’t going to be as good as a mouse for an FPS. A good mouse player will beat a good touchpad or gamepad player.

    And the problem with the Deck is that it has a PC game library, and a lot of those games are designed with a mouse in mind. Console FPSes usually adjust the game difficulty so that playing with twin sticks is practical. Enemies give you more time to slowly turn around without inflicting enormous amounts of damage. Auto-aim assist is common. Ranges are shorter. Stuff like that.

    If this is a single-player game – which it sounds like you’re playing – you can reduce the difficulty to compensate for the input mechanism.

    There’s an input mechanism that some people developed for twin-stick gyro controllers called Flick Stick, which someone else mentioned; Steam Input supports this. The mouse is still going to win, but it’s an improvement over traditional pure-stick input.
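
    As I understand it, the stick sets your facing direction rather than a turn rate. Here’s a toy Python sketch of the general idea – not any particular implementation; real versions smooth the snap over a few frames and pair it with gyro for fine aim:

    ```python
    import math

    FLICK_THRESHOLD = 0.9  # stick deflection (0..1) that triggers a flick

    class FlickStick:
        def __init__(self):
            self.engaged = False
            self.last_angle = 0.0

        def update(self, x: float, y: float) -> float:
            """Return the yaw delta (radians) to apply to the camera.
            x, y are the right-stick axes; +y is 'forward'."""
            if math.hypot(x, y) < FLICK_THRESHOLD:
                self.engaged = False
                return 0.0
            angle = math.atan2(x, y)  # 0 = forward, +/-pi = directly behind
            if not self.engaged:
                # Initial flick: snap the camera by the full stick angle.
                self.engaged = True
                self.last_angle = angle
                return angle
            # While held, rotating the stick rotates the camera 1:1.
            delta = angle - self.last_angle
            delta = (delta + math.pi) % (2 * math.pi) - math.pi  # wrap
            self.last_angle = angle
            return delta
    ```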

    There’s also some input mechanism which I think was different from the “Flick Stick” approach – though maybe I’m wrong and misremembering, didn’t have an interest in exploring it – that IIRC someone put together using Steam Input. The way it worked, as I recall, was that one could tap the thumbstick in a direction and it’d immediately do a 90 degree turn. The idea was to provide for a rapid turn while keeping sensitivity low enough to still permit for accurate aiming. But I’m not able to find the thing with Kagi in a few searches, and it’s not impossible that I’m misremembering…this was only a single video that I’m thinking of.

    I don’t think that there’s any trick to learning this, just playing games and picking it up over time. I mean, I was atrocious at using a keyboard+mouse when I first started doing it, and ditto with twin-stick FPSes.

    You could also attach a keyboard and mouse, though I think that that kind of eliminates the point of the Deck, at least as long as one also has a PC to play on – it might make sense for someone who just uses a Deck and a phone.

    is there an easy FPS game where I don’t have to move or shoot too fast

    Play games that are designed for consoles or which have a gamepad mode, rather than keyboard+mouse PC games. They’ll be tuned for controller limitations. Like, can you play Halo comfortably with the Deck? That was designed for a gamepad originally, and it’s available on Steam (though I’d note that it requires a Microsoft account, which you may-or-may-not be willing to set up).

    https://old.reddit.com/r/truegaming/comments/8f7oyr/the_core_reasons_thumbsticks_are_inaccurate/

    This also talks about some limitations of thumbstick aiming (if you’re using thumbsticks and not trackpads). It might be possible to tweak some of these, like sensitivity or dead zone, but I’d assume that for a given game, the developers have already chosen pretty reasonable defaults.
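
    If you do want to fiddle with them, the usual knobs look something like this – a generic sketch, not any specific game’s code:

    ```python
    import math

    def stick_to_aim(x: float, y: float, dead_zone: float = 0.15,
                     sensitivity: float = 3.0, exponent: float = 2.0):
        """Map raw stick axes (-1..1) to an aim velocity. A radial dead
        zone ignores small wobble; the exponent makes the response gentle
        near the center (fine aim) while still reaching full speed at
        full deflection."""
        mag = math.hypot(x, y)
        if mag < dead_zone:
            return 0.0, 0.0
        # Rescale so the curve starts from zero at the dead zone edge.
        t = (min(mag, 1.0) - dead_zone) / (1.0 - dead_zone)
        speed = sensitivity * (t ** exponent)
        return speed * x / mag, speed * y / mag
    ```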


  • For those who haven’t played the series, VATS is an alternate aiming mode where one can pause (or in later games in the 3d series, greatly slow) the game, select a certain number of targets depending upon available action points, and then have all those shots taken in rapid succession, with the game aiming.

    I’d say that VATS is kind of a “path” rather than purely an alternate input method in those games; you need to make a VATS-oriented build, though it’s true that it makes it possible to play the game with minimal FPS elements. Like, in Fallout: New Vegas, VATS provides major benefits close-up. While VATS is active, there’s enormous damage reduction applied to your character – IIRC 90% – so for short periods of time, they have enormous damage output and little risk. They can also turn rapidly and target multiple enemies, probably better than a player aiming manually could. At close ranges, VATS is just superior.

    But VATS suffers severe accuracy penalties at range. Whether-or-not a target is moving doesn’t affect VATS accuracy, but range does a lot, whereas with manual aiming, whether-or-not a target is moving makes a big difference and range doesn’t matter much. As a result, VATS isn’t great for sniping, which is also an aspect of the game. You can do it (especially, oddly enough, with pistols in Fallout 4, where the Concentrated Fire perk lets later shots in a flurry of pistol shots at range be very accurate).

    In Fallout 76, VATS provides such dramatic damage benefits that I’d say that it’s impractical to play a non-VATS build – VATS is required to get damage up to a reasonable level later in the game.


  • Yes. I wouldn’t be preemptively worried about it, though.

    Your scan is going to try to read and maybe write each sector and see if the drive returns an error for that operation. In theory, the adapter could report a read or write error even though the operation actually worked, or even return some kind of bogus data instead of an error.
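
    Conceptually the scan is something like this – a read-only sketch in Python with an illustrative device path; a real tool retries failed chunks sector-by-sector to narrow down the bad spots:

    ```python
    import os
    import sys

    CHUNK = 1024 * 1024  # 1 MiB per read

    def surface_scan(device: str) -> None:
        """Read every block of a device (needs root for e.g. /dev/sdX)
        and report any I/O errors the drive returns."""
        fd = os.open(device, os.O_RDONLY)
        offset = 0
        try:
            while True:
                try:
                    data = os.pread(fd, CHUNK, offset)
                except OSError as e:
                    print(f"I/O error at byte {offset}: {e}", file=sys.stderr)
                    offset += CHUNK  # skip ahead; a real tool would refine
                    continue
                if not data:
                    break  # end of device
                offset += len(data)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        surface_scan(sys.argv[1])
    ```

    And the adapter sits in the middle of every one of those reads, which is where the “could lie to you” concern comes in.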

    But I wouldn’t expect this to actually arise, and I wouldn’t be particularly worried about the prospect. It’s sort of a “could my grocery store checkout counter person murder me” thing. Theoretically yes, but I wouldn’t worry about it unless I had some reason to believe that that was the case.


  • I haven’t used it recently, but last time I did, I used MO2 with vanilla WINE, just setting my WINE prefix to the Skyrim Proton prefix. WINE and Proton would convert the registry in the WINE prefix back and forth each time one launched. I haven’t used SteamTinkerLaunch.
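
    The setup amounts to something like this – a sketch with illustrative paths and app ID (Proton keeps each game’s prefix under steamapps/compatdata/<appid>/pfx; adjust for your install and MO2 location):

    ```python
    import os
    import subprocess

    STEAM = os.path.expanduser("~/.local/share/Steam")
    PREFIX = f"{STEAM}/steamapps/compatdata/489830/pfx"  # e.g. Skyrim SE
    MO2 = os.path.expanduser("~/mo2/ModOrganizer.exe")   # wherever MO2 lives

    # Plain WINE pointed at the Proton prefix; WINE and Proton migrate
    # the registry format back and forth on each launch.
    subprocess.run(["wine", MO2],
                   env=dict(os.environ, WINEPREFIX=PREFIX),
                   check=True)
    ```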

    Prior to that, I used Wrye Bash, which was a mess to get working on Linux – but it could run natively, at least at one point, with some prodding. I’ve also run it under WINE. It took a lot of massaging. I don’t recommend that route unless you can program, know Python, and are willing to get your hands dirty.

    And I also had a stint where I wrote my own scripts to reconstruct the modded environment from scratch.

    My most-recent attempt at Bethesda modding was in Starfield, with a much-simpler CLI mod manager, this. I have gotten some mods working but not others; I don’t know if it’s a case-folding issue. Will need more experimentation. It doesn’t have the conflict-diagnosis tools that Wrye Bash does, or that I assume MO2 probably does (though I haven’t run into them). I don’t think it supports Skyrim, Fallout 4, or Fallout 76; that probably matters at least insofar as mod managers for those need to merge leveled lists. My (brief) impression is that the Starfield modding community is heading in the direction of not needing the mod manager to do that, instead having a mod that merges that stuff dynamically at game runtime.

    the performance is not great.

    Uh. The performance of MO2 or Skyrim?

    MO2…I don’t recall; it might not have been snappy, but I don’t recall it being anywhere near unusable. Certainly not at the level that I wouldn’t use the software. I was using a reasonably high-end system, but I don’t think that it’s particularly resource-intensive. I was running off an SSD, and maybe some of the stuff might have been I/O-intensive.

    Skyrim was fine from a performance standpoint. I mean, you can obviously kill performance with the right mods, but I assume that you mean “modding at all”.

    EDIT: If you put a lot of mods into Skyrim – like, hundreds – it can take a while to launch. IIRC, one problem there – not Linux-specific – is that loose files aggravate launch-time issues. My understanding is that, where possible, you should use mods that package their files into a .BSA rather than as loose files. A number of mods have multiple versions; pick the .BSA one.

    EDIT2: The Skyrim, Fallout 4, and Fallout 76 versions of Bethesda’s engine don’t really take much advantage of multiple cores the way the Starfield version does. I get buttery-smooth performance in Starfield; Fallout 76 is invariably a bit jerky when loading resources in a new cell, and I can’t hold a consistent 165 Hz framerate there the way I can in Starfield. But I don’t know if that’s what you’re running into, without specifics of the performance issues. And that’s not gonna be a Linux-specific issue or anything that can realistically be resolved short of forward-porting the Skyrim, Fallout 4, and Fallout 76 games to the Starfield engine.


  • IIRC Russia was talking about detaching their modules and using them to help bootstrap some new station. So I dunno if those will get brought down.

    That being said, that was also when that rather pugnacious guy was running Roscosmos, and I dunno if doing a new space station is the top of Russia’s priority list for their limited budget.

    kagis

    Dmitry Rogozin.

    kagis further

    It looks like they canceled the idea of reusing the Russian ISS modules back in 2021. So I guess those are destined for SpaceX’s deorbit too.

    https://en.wikipedia.org/wiki/Orbital_Piloted_Assembly_and_Experiment_Complex

    The Orbital Piloted Assembly and Experiment Complex (Russian: Орбитальный Пилотируемый Сборочно-Экспериментальный Комплекс, Orbital’nyj Pilotirujemyj Sborochno-Eksperimental’nyj Kompleks;[1][2] ОПСЭК, OPSEK) was a 2009–2017 proposed third-generation Russian modular space station for low Earth orbit. The concept was to use OPSEK to assemble components of crewed interplanetary spacecraft destined for the Moon, Mars, and possibly Saturn. The returning crew could also recover on the station before landing on Earth. Thus, OPSEK could form part of a future network of stations supporting crewed exploration of the Solar System.

    In early plans, the station was to consist initially of several modules from the Russian Orbital Segment (ROS) of the International Space Station (ISS). However, after studying the feasibility of this, the head of Roscosmos stated in September 2017 the intention to continue working together on the ISS.[3] In April 2021, Roscosmos officials announced plans to exit from the ISS programme after 2024, stating concerns about the condition of its aging modules. The OPSEK concept had by then evolved into plans for the Russian Orbital Service Station (ROSS), which would be built without modules from the ISS, and was anticipated to be launched starting in the mid-2020s.[4][5]

    https://en.wikipedia.org/wiki/Russian_Orbital_Service_Station

    The Russian Orbital Service Station (Russian: Российская орбитальная служебная станция, Rossiyskaya orbital’naya sluzhebnaya stantsiya) (ROSS, Russian: РОСС)[3] is a proposed Russian orbital space station scheduled to begin construction in 2027. Initially an evolution of the Orbital Piloted Assembly and Experiment Complex (OPSEK) concept, ROSS developed into plans for a new standalone Russian space station built from scratch without modules from the Russian Orbital Segment of the ISS.[4]

    I still dunno if they’re gonna get the money for a new space station. Like, deciding to have a war in Ukraine may have kind of killed off the viability of doing a new space station.


  • Yeah, I use Steam as a deb too.

    I haven’t done it, but as long as Steam itself is isolated – as I expect flatpak Steam is – anything it launches will be too, and you can add arbitrary binaries. AFAIK, that works with Windows binaries in Proton.

    https://help.steampowered.com/en/faqs/view/4B8B-9697-2338-40EC

    Referring to your response to dillekant, I’m not sure how much Steam buys you in terms of security, though, unless you’re buying from Valve. The flatpak might provide some isolation by virtue of being flatpak (though I dunno how many permissions the Steam flatpak is granted…I assume that at bare minimum, it has to grant games access to stuff like your microphone to let VoIP chat work).

    https://docs.flatpak.org/en/latest/sandbox-permissions.html

    Steam itself, as of today, doesn’t provide isolation at all.

    Adding a non-Steam game to Steam lets you launch it from Steam, which might be convenient, and lets you use Proton, which has a few compatibility patches.

    If I wanted to run an untrusted Windows binary game today on my Linux box and it needed 3d acceleration, I wouldn’t have a great answer. If it doesn’t need that, then running it in a Windows VM with qemu is probably what I’d do – I keep a “throwaway” VM for exactly that. It has read access to a shared directory, and write access to a “dropbox” directory. I wouldn’t bring Steam into the picture at all. I don’t want it near my Steam credentials (Steam credentials have been a target of malware in the past) or near a big software package like Steam that may-or-may-not have been well-hardened.

    It does get network access to my internal network – I haven’t set up an outbound firewall on the bridge, so a hostile binary could get whatever unauthenticated access it could get from my LAN. And it could use my Internet connection, maybe participate in a DDoS of someone or such. But it doesn’t otherwise have access to the system. It isn’t per-app isolation, but if the VM vanished today, it wouldn’t be a problem – there’s nothing sensitive on it. It doesn’t know my name. It can’t talk to my hardware, outside of what’s virtualized. It doesn’t have access to my data. There are no credentials that enter that VM. Unless qemu itself has security holes, software in the thing is limited to the VM.
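
    For the curious, the launch looks roughly like this – a sketch with illustrative names; my actual VM persists its disk, but -snapshot is another way to get throwaway behavior, and I’ve left out the read-only/dropbox share wiring (SMB or virtiofs) since that varies:

    ```python
    import subprocess

    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-m", "8G",
        "-smp", "4",
        # Guest needs virtio drivers for if=virtio; for Windows they
        # come on a separate driver ISO.
        "-drive", "file=throwaway.qcow2,format=qcow2,if=virtio",
        "-snapshot",  # discard all disk writes when the VM exits
        # Bridged NIC: LAN access, as described above.
        "-nic", "bridge,br=br0,model=virtio-net-pci",
    ]
    subprocess.run(cmd, check=True)
    ```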

    I have used firejail to sandbox some Linux-native apps, but while it’s a neat hack and has a lot of handy tools to isolate software, I have a hard time recommending it as a general solution for untrusted binaries. I don’t know how viable it is to use with WINE, which it sounds like is what you want. It has a lot of “default insecure” behavior, where you need to blacklist a program’s access to a resource rather than whitelisting it. From a security standpoint I’d much rather have something more like Android, where the sandbox starts a new app with no permissions, warns me if it’s trying to use a resource (network, graphical environment, certain directories), and asks me if I want to whitelist that access. It requires some technical and security familiarity to use. I think the most-useful thing I’ve used it for is isolating Ren’Py games – cutting network access and disk write access – and a number of games (though not all; arbitrary Python libraries can be bundled) work with a reasonably-generic restrictive firejail Ren’Py profile. It just requires too much fiddling and knowledge to be a general solution for all users, and “default insecure” is trouble, IMHO.
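
    For a sense of what that fiddling looks like, here’s one plausible restrictive launch for a Ren’Py game – flags and paths are illustrative, not a vetted profile:

    ```python
    import os
    import subprocess

    game_dir = os.path.expanduser("~/games/somegame")  # hypothetical
    subprocess.run([
        "firejail",
        "--net=none",                      # no network access at all
        f"--whitelist={game_dir}",         # hide the rest of $HOME
        "--whitelist=~/.renpy",            # keep the save dir writable
        f"--read-only={game_dir}",         # game files readable, not writable
        f"{game_dir}/somegame.sh",         # the game's launcher script
    ], check=True)
    ```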

    I do wish that there was some kind of reliable, no-fiddling, lighter-weight per-game isolation available out-of-box for both Windows binaries and Linux binaries. Like, something that Joe User can use and that I could recommend.

    I did see something the other day, when reading about an unrelated Proxmox issue, about Nvidia apparently having some kind of GPU virtualization support. And searching, it looks like AMD has some kind of “multiuser GPU” thing that they’re marketing. I don’t know how hardened either’s drivers are, but running VMs with 3d games may have become more practical since I last looked.

    EDIT: Hmm, yeah, sounds like QEMU does have some kind of GPU virtualization these days:

    https://ubuntu.com/server/docs/gpu-virtualization-with-qemu-kvm

    Need native performance, but multiple guests per card: Like with PCI passthrough, but using mediated devices to shard a card on the host into multiple devices, then passing those:

    -display gtk,gl=on -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/4dd511f6-ec08-11e8-b839-2f163ddee3b3,display=on,rombar=0

    You can read more about vGPU at kraxel and Ubuntu GPU mdev evaluation. The sharding of the cards is driver-specific and therefore will differ per manufacturer – Intel, Nvidia, or AMD.

    I haven’t looked into that before, though. Dunno what, if any, issues there are.

    EDIT2: Okay, I am sorry. I am apparently about four years out of date on Steam. Steam didn’t have any form of isolation before, but apparently in late 2020, Valve added Pressure Vessel, a container-based per-game isolation layer.

    I don’t know what it isolates, though. I may need to poke more at that. Pretty sure that it doesn’t block network access, and I dunno what state the container gets access to.



  • I was just wondering what would happen if I downloaded a game that was infected by a computer virus and ran it in Linux using Proton.

    Depends on the mechanism. Some viruses will target stuff that WINE doesn’t emulate – like, if it tries to fiddle with Windows system files, it’s just not going to work. But, sure, a Windows executable could look for and infect other Windows executables.

    Has this happened to anyone?

    I don’t know specifically about viruses or on Proton. But there has been Windows malware that works under WINE. Certainly it’s technically possible.

    How would the virus behave?

    Depends entirely on the virus in question. Can’t give a generic answer to that.

    What files, connections or devices would it have access to?

    WINE itself doesn’t isolate things (which is probably reasonable, given that it’s a huge, often-changing system and not the best place to enforce security restrictions). On a typical Linux box, it’d have access to anything that you, as a user, would, since Linux’s user-level restrictions would be the main place where security restrictions come into play.

    I do think that there’s a not-unreasonable argument that Valve should default to having games – not just Proton stuff – run in some kind of isolation by default. Basically, games generally are gonna need 3d access, and some are gonna need access to specialized input devices. But Steam games mostly don’t need general access to your system. But as things stand, Steam doesn’t do any kind of isolation either.

    You can isolate Steam as a whole – you can look at installing Steam via flatpak, for one popular option. I don’t use flatpaks, so I’m not terribly familiar with the system, but I understand that those isolate the filesystem that Steam and its games have access to. That being said, it doesn’t isolate games from each other, or from Steam (e.g. I can imagine a Steam-credentials-stealing piece of malware making it into the Steam Workshop). On the other hand, I’m not totally sure how much I’d trust Valve to do a solid job of having the Steam API be really hardened against a malicious game anyway – that’s not easy – so maybe isolating Steam too is a good idea.

    Could it be as damaging as running it in Windows?

    Sure. If it’s not Linux-aware, it probably isn’t going to do anything worse than deleting all the files that your user has access to, but in general, that’d be about as bad anyway. If it is Linux-aware, it could probably do something like intercept your password next time you invoke sudo, then make use of it to act as root and do anything.


  • Server for a boat

    What hardware and Linux distro would you use in this situation?

    The distro isn’t likely to be a factor here. Any (non-super-specialized) distro will be able to solve issues in about the same way.

    I mean, any recommendation is going to just be people mentioning their preferred distro.

    I don’t know whether saltwater exposure is a concern. If so, that may impose some constraints on heat generation (if you have to put it and the storage hardware in a waterproof case).



  • EU won’t commit to answering whether games are goods or services.

    I think I’d have a category for both.

    You can’t call an SNES cartridge a service, but similarly, you can’t call, oh, an online strip poker service a good.

    I suspect that most good-games have at least some characteristics of a service (like patches) and most service-games have at least some characteristics of a good (like software that could be frozen in place).

    I think that the actual problem is vendors unnecessarily converting good-games into service-games, as that gives them a route to get leverage relative to the consumer. Like, I can sell a game and then down the line start data-mining players or something. I think that whatever policy countries ultimately adopt should be aimed at discouraging that.


  • If there’s a better way to configure Docker, I’m open to it, as long as it doesn’t require rebuilding everything from scratch.

    You could try using lvmcache (block-device-level caching) or bcachefs (filesystem-level caching) or something like that: have rotational storage be the primary form of storage, but let the system use an SSD as a cache. Dunno what kind of performance improvements you might expect, though.
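
    With lvmcache, assuming both drives are already PVs in the same volume group, the setup is shaped something like this – names and sizes illustrative, run as root:

    ```python
    import subprocess

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    VG = "vg0"
    # Bulk LV on the rotational disk, small fast LV on the SSD.
    run("lvcreate", "-n", "docker", "-L", "500G", VG, "/dev/sda")
    run("lvcreate", "-n", "cache", "-L", "50G", VG, "/dev/nvme0n1")
    # Attach the SSD LV as a cache volume in front of the bulk LV.
    run("lvconvert", "--type", "cache", "--cachevol", "cache",
        f"{VG}/docker")
    ```

    Existing data on the origin LV stays put, and the cache can be detached again later with lvconvert --splitcache, so it’s relatively low-commitment to try.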