  • Hah. I almost wrote that I also think the two Ultima Undergrounds are better than Deus Ex despite being much older and having an objectively very clumsy interface. Then I thought that’d get us in the weeds and pull us too far back, so I took it out.

    Look, yeah, Deus Ex rolled in elements from CRPGs and had good production values for the time. But all those things were nothing new for an RPG, they were just new for a shooter. Baldur’s Gate and Fallout were a few years old. The entire Ultima franchise had been messing around with procedural, simulated worlds for almost a decade at that point, which in the 90s was a technological eon.

    And yeah, System Shock had already created a template for a shooter RPG, but it applied it to a lone-survivor, dungeon-crawly horror thing rather than trying to marry it to the narrative elements of NPC-focused CRPGs, which is admittedly a lot more complicated. And Deus Ex was fully voiced and had… well, a semblance of cutscenes. In context it’s hilariously naive compared to what Japanese devs were doing in Metal Gear or Final Fantasy, but it was a lot by western PC game standards.

    But it wasn’t… great to play? I don’t know what to tell you. Thief and Hitman had both nailed the clockwork living-stage thing, and at the time I was more than happy to give up the Matrix-at-home narrative and the DnD-style questing for that. The pitch was compelling, but it didn’t necessarily make for a great playable experience against its peers.

    I didn’t hate it or anything. I spent quite a bit of time messing with it. That corny main theme still pops into my head on demand, no effort required. I spent more time using it as a benchmark than Unreal, which I also thought wasn’t a great game.

    Also, while I’m here pissing people off, can we all agree that “immersive sim” is a terrible name for a genre? What exactly is “simulated”? Why is it immersive? Immersive as opposed to what? At the time we tended to lump them in with stealth games, so the name is just an attempt to reverse-engineer a genre name out of loose words that weren’t already taken, and I hate it. See also: character action game. Which action games do NOT have characters?

    Man, I am a grumpy old fart today.


  • The closest thing we had was the System Shock duology, since both predate Deus Ex. Deus Ex was basically accessible System Shock. Having dialogue trees and NPCs without losing the open-ended nature of System Shock’s more dungeon crawl-y approach was the real selling point. Well, that and the trenchcoats and shades. The Matrix was such a big deal.

    But even then, each of those elements was already present in different mixes in several late 90s games. Deus Ex by some counts was one of the early culminations of the genre-blending “everything game” we were all chasing during the 90s. The other was probably GTA 3. I think both of those are fine and they are certainly important games, but I never enjoyed playing them as much as less zeitgeist-y games that were around at the same time. I did spend a lot of time getting Deus Ex to look as pretty as possible, but I certainly didn’t finish it and, like a lot of people, I mostly ran around Liberty Island a bunch.

    I played more Thief 2 that year, honestly. I played WAY more Hitman than Deus Ex that year. I certainly thought System Shock 2 was better. Deus Ex is a big, ambitious, important game, for sure, but I never felt it quite stuck the landing when playing it, even at the time.



  • Kind of overrated? I mean, it was cool to see a bit more of a palatable cinematic presentation in real time to go along with the late 90s PC jank, and that theme did kick ass, but it’s less groundbreaking in context than I think people give it credit for. And it doesn’t hold up nearly as well as System Shock 2, in my book.




    I guess that depends on the use case and how frequently both machines are running simultaneously. Like I said, that reasoning makes a lot of sense if you have a bunch of users coming and going, but the OP is saying it’s two instances at most, so… I don’t know if the math makes virtualization more efficient. It’d probably be more efficient by the dollar, if the server is constantly rendering something in the background and you’re only sapping whatever performance you need to run games when you’re playing.

    But the physical space thing is debatable, I think. This sounds like a chonker of a setup either way, and nothing is keeping you from stacking or rack-mounting two PCs, either. Plus if that’s the concern you can go with very space-efficient alternatives, including gaming laptops. I’ve done that before for that reason.

    I suppose that’s why PC building as a hobby is fun: there are a lot of balance points, and you can tweak a lot of knobs to trade off price, performance, power consumption and whatever else matters to you.


  • OK, yeah, that makes sense. And it IS pretty unique, to have a multi-GPU system available at home but just idling when not at work. I think I’d still try to build a standalone second machine for that second user, though. You can then focus on making the big boy accessible from wherever you want to use it for gaming, which seems like a much more manageable, much less finicky challenge. That second computer would probably end up being relatively inexpensive to match the average use case for half of the big server thing. Definitely much less of a hassle. I’ve even had a gaming laptop serve that kind of purpose just because I needed a portable workstation with a GPU anyway, so it could double as a desktop replacement for gaming with someone else at home, but of course that depends on your needs.

    And in that scenario you could also just run all that LLM/SD stuff in the background and make it accessible across your network; that’s pretty trivial whether it lives inside a VM or runs directly alongside everything else as a background process (there’s a rough sketch of what I mean at the end of this comment). Trivial compared to a fully virtualized gaming computer sharing a pool of GPUs, anyway.

    Feel free to tell us where you land; it certainly seems like a fun, quirky setup either way.
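
    For the “accessible across your network” part, here’s a minimal sketch of the kind of thing I mean. It assumes you’re running something like Ollama on the server with its HTTP API on the default port; the hostname, port and model name below are just placeholders for whatever you actually run:

    ```python
    # Tiny client for a self-hosted LLM running on the home server.
    # Assumes an Ollama-style /api/generate endpoint; the URL and model
    # name are placeholders, not recommendations.
    import json
    import urllib.request

    SERVER = "http://homeserver.lan:11434"  # hypothetical hostname and port
    MODEL = "llama3"                        # whatever model you pulled

    def ask(prompt: str) -> str:
        payload = json.dumps({
            "model": MODEL,
            "prompt": prompt,
            "stream": False,  # one JSON blob back instead of a stream
        }).encode("utf-8")
        req = urllib.request.Request(
            f"{SERVER}/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask("Give me three sanity checks before sharing one GPU pool between two gamers."))
    ```

    The SD side is the same idea: run whichever image generation server you like as a background service on the box and point at it from any machine on the LAN.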


    Yeah, but if you’re this deep into the self-hosting rabbit hole, what circumstances lead to having an extra GPU lying around without an extra everything else, even if it’s relatively underpowered? You’ll probably be able to upgrade it later by recycling whatever is in your nice PC next time you upgrade something.

    At this point most of my household is running some frankenstein of phased out parts just to justify my main build. It’s a bit of a problem, actually.



    Alright, alright, just because I got myself excited. Top three gaming laptops, rated for sheer cool factor with no regard for practicality or value for money, in no particular order:

    1- MSI GS65. It could be the Razer Blade, which is the OG, but the GS65 was legitimately the best of that first batch of thin-and-light gaming laptops that looked classy without looking tacky. It had a 1070 in it, it could run every contemporary game just fine, and it made you look downright stylish working at a Starbucks. So cool.

    2- ASUS ROG Flow Z series. ASUS put a dedicated GPU. In a tablet. Like, you can get up to a 4070 in one of these. It’s fat, it’s clunky, it’s underpowered for the hardware, it’s heavy, it sounds like the speaker in your first smartphone… but guys, a 4070 in a tablet, are you kidding me? How cool is that?

    3- Framework Laptop 16. It’s a modular laptop with a dedicated GPU module and a bunch of random configuration options. Gaming laptop lego. Again, how cool is that?


  • I love both. And handhelds. And consoles.

    I just like videogames and things that can run videogames. Videogame tech is cool.

    I genuinely don’t get why people have such a grudge against gaming laptops. It’s like they got stuck regurgitating talking points from the mid 2000s. There have been so many super cool gaming laptops in the past couple of decades. Big, chonky powerhouses, sleek stealth workhorses, quirky nonsense builds… It’s awesome.


  • OK, but why?

    Well, for fun and as a cool hobby project, I get that. That is enough to justify it, like any other crazy hobbyist project. Don’t let me stop you.

    But in the spirit of practicality and speaking hypothetically: Why set it up that way?

    For self-hosting, why not build a few standalone machines and run off those instead? The reason to do this at large scale is optimizing resources, so you can assign a smaller pool of hardware to users as they need it, right? For a home setup of two or three users you’d probably notice the fluctuations in performance caused by sharing resources on the gaming VMs, and it would cost you the same or more than building a couple of reasonable gaming systems plus a home server/NAS for the rest. Way less, I bet, if you’re smart about upgrades and hand-me-downs.


    Yeah, on that I’m gonna say it’s unnecessary. I don’t know what “integration with the desktop” gets you that you can’t get from having a web app or a separate window open. If you need some multimodal goodness you can just take a screenshot and paste it in.

    I’d be more concerned about model performance and having a well-integrated multimodal assistant that can do image generation, image analysis and text all at once. We have individual models for each of those, but nothing that does it all and is open and free, that I know of.



    That is a stretch. If you try to download and host a local model, which is fairly easy to do these days, the text input and output may be semi-random, but you definitely have control over how to plug it into any other software (there’s a rough sketch of what I mean at the bottom of this comment).

    I, for one, think that fuzzy, imprecise outputs have lots of valid uses. I don’t use LLMs to search for factual data, but they’re great to remind you of names of things you know but have forgotten, or provide verifiable context to things you have heard but don’t fully understand. That type of stuff.

    I think the AI shills have done a great disservice by presenting this stuff as a search killer or a human replacement for tasks, which it is not, but there’s a difference between not being the next Google and being useless. So no, Apple and MS, I don’t want it monitoring everything I do at all times and becoming my primary interface… but I don’t mind a little search window where I can go “hey, what was that movie from the 50s about the two old ladies that were serial killers? Was that Cary Grant or Jimmy Stewart?”.
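
    As for “plugging it into any other software”, here’s the kind of thing I mean, sketched with the Hugging Face transformers pipeline. The model name is only an example; substitute any small instruction-tuned model that fits your hardware:

    ```python
    # Minimal example of wiring a locally hosted model into your own code.
    # The model name is just an example; swap in whatever you actually use.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-0.5B-Instruct",  # small example model
    )

    def fuzzy_lookup(question: str) -> str:
        """Ask the kind of 'what was that thing called?' question an LLM is
        decent at. Treat the answer as a lead to verify, not as a fact."""
        out = generator(question, max_new_tokens=100, do_sample=False)
        return out[0]["generated_text"]

    print(fuzzy_lookup(
        "What was that movie from the 50s about the two old ladies "
        "that were serial killers? Was that Cary Grant or Jimmy Stewart?"
    ))
    ```

    That’s the whole point: once the model is local, the output is just a string your own code can do whatever it wants with.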



    Yeah, for sure. If you just drop Ubuntu or Fedora or whatever on a machine where everything works for you out of the box, the experience is not hard to wrap your head around. Even if one thing requires typing something into a terminal while following a tutorial, that’s also common in Windows troubleshooting.

    The problem is that all those conversations about competing standards for desktop environments, display protocols, software distribution methods and whatnot are hard to grasp across the board. If and when you hit an issue that requires wrapping your head around those, that’s where familiarity with Windows’ messy-but-straightforward approach becomes relevant.

    In my experience the problem isn’t going through the motions while everything works, or using the system itself; it’s the first time you try to go off the guardrails or hit a technical issue. That’s when the hidden complexity becomes noticeable again. Not because the commands are text, but because the underlying concepts are complex and have deep interdependencies that don’t map well to other systems, and they’re full of caveats and little differences depending on what combination of desktop and distro you’re running.

    That’s the speed bump. It really, really isn’t the terminal.


  • Well, the good news is that of course you can use Linux with only as much command line interaction as you get in Windows.

    The bad news is that the command line REALLY isn’t what’s keeping people away from Linux.

    Hell, in that whole list, the most discouraging thing for a new user isn’t the actually fairly simple and straightforward terminal commands, it’s this:

    Here’s where it gets a little trickier: Scrolling on Firefox is rough, cause the preinstalled old version doesn’t have Wayland support enabled. So you either have to enable Wayland support or install the Flatpak version of Firefox.

    This is a completely inscrutable sentence. It’s a ridiculous notion that brings up so many questions and answers none of them. It relates to concepts that have no direct equivalent on other platforms, and even a new user who successfully follows this post and gets everything working would come out the other end without understanding why they had to do what they did or what the alternative was.

    I’ve been saying it for literal decades.

    It’s not the terminal, it’s not the UX not looking like Windows.


    Hm. So are we all the way to Win 11 not being installable on fully offline machines, or…? Because niche as that scenario is, it does sound like the start of a use case for a natively compatible Windows alternative from a third party (say, a FreeWin to go with FreeDOS). I know there are or have been some attempts, but… yeah, long term that seems like it would prompt more focus on something like that.

    I suppose it’s more likely that compatibility layers in other OSs will get there first and more practically, but still. Maybe it’s time for Windows applications to go from being an ecosystem to being a standard.