Yeah. Doesn’t take much optimising of disk writes to make things run much better on a Pi; they’re quite capable machines as long as disk i/o isn’t your limiting factor. Presumably the devs have been doing some tidying up.
My workplace is strictly a Bitbucket shop; I was interested in expanding my skillset a little and experimenting with different workflows. Was using it as a fancy ‘todo’ list - you can raise tickets in various categories - to remind myself what I was wanting to do next in the game I was writing. It’s a bit easier to compare diffs and things in a browser when you’ve been working on several machines in different libraries than it is in the CLI.
Short answer: bit of timesaving and nice-to-haves, but nothing that you can’t do with the command line and ssh. But it’s free, so there’s no downside.
Ah, nice. Had been experimenting with using my Raspberry Pi 3B as my home Git server for all my personal projects - easy sync between my laptop and desktop, and another backup for the stuff that I’d been working on.
Tried running Gitea on it to start with, but it’s a bit too heavy for a device like that. Forgejo runs perfectly, and has almost exactly the same ‘very GitHub-inspired’ interface. Time to run some updates…
Nah - Doom (DOS) and Doom Eternal are on there, as are Baldur’s Gate 2 and 3.
Most common example would be a bicycle, I think - your pedals tighten in the same direction the wheel turns as you look at them. So your left pedal has a left-hand thread, and goes on and comes off backwards.
The effect of precession also means that you can tighten the pedals finger-tight and a good long ride will make them absolutely solid - you’ll need to bounce up and down on a spanner to loosen them.
When I was still dual-booting Windows and Linux, I found that “raw disk” mode virtual machines worked wonders. I used VirtualBox, so you’d want a guide somewhat like this: https://superuser.com/questions/495025/use-physical-harddisk-in-virtual-box - other VM solutions are available, which don’t require you to accept an agreement with Oracle.
Essentially, rather than setting aside a file on disk as your VM’s disk, you can set aside a whole existing disk. That can be a disk that already has Windows installed on it; it doesn’t erase what you have. Then you can start Windows in a VM and let it do its updates - since it can’t see the bootloader from within the VM, it can’t fuck it up. You can run any software that doesn’t have particularly high graphics requirements, too.
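If you go the VirtualBox route, the raw-disk wrapper is created from the command line rather than the GUI. From memory it’s something like this (check the manual for your version; the disk and filename here are placeholders, and your user needs read/write access to the device node):

```
# map the whole of /dev/sdb (the Windows disk) into a VMDK wrapper
VBoxManage internalcommands createrawvmdk -filename ~/win.vmdk -rawdisk /dev/sdb
```

Then you attach that .vmdk to the VM as if it were an ordinary virtual disk.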
I was also able to just “restart in Windows” if I wanted full performance for a game or something like that, but since Linux has gotten very good indeed at running games, that became less and less necessary until one day I just erased my Windows partition to recover the space.
Yes, because it doesn’t do as much to protect you from data corruption.
If you have a use case where a barely-measurable increase in speed is essential, but not so essential that you wouldn’t just pay for more RAM to keep it in cache, and also it doesn’t matter if you get the wrong answer because you’ve not noticed the disk is failing, and you can afford to lose everything in the case of a power cut, then sure, use a legacy filesystem. Otherwise, use a modern one.
Got this installed on all my work machines - if you’re wanting to stick a screenshot on Jira or Slack with a couple of arrows, wavy lines, or a bit blurred out then it’s dead quick and has just the functionality that you need. Yes, it’s simple and lacks a lot of ‘power tools’. Sometimes that’s just what you need, tho.
emerges from a brand you’ve probably never heard of
Writing this on a Tuxedo Pulse 14 / gen 3 as we speak. Great little laptop. I’d wanted something with a few more pixels than my previous machine, and there’s a massive jump from bog-standard 1080p to extremely expensive 4K screens. Three megapixel screen at a premium-but-not-insane price, compiles code like a champion, makes an extremely competent job of 3D gaming, came with Linux and runs it all perfectly.
“Tuxedo Linux”, which is their in-house distro, is Ubuntu + KDE Plasma. Seemed absolutely fine, although I replaced it with Arch btw since that’s more my style. Presumably they’re using Debian for the ARM support on this new one? This one runs pretty cold most of the time, but you definitely know that you’ve got a 54W processor in a very thin mobile device when you try eg. playing simulation games - it gets a bit warm on the knees. “Not x64” would be a deal-breaker for my work, but for most uses the added battery life would be more valuable than the inconvenience.
Finest advice possible for any Linux sysadmin.
Yeah.
There’s a couple of ways of looking at it; general-purpose computers generally implement ‘soft’ real-time functionality. It’s usually a requirement for music and video production: if you want to keep to a steady 60 fps, then you need to update the screen and the audio buffer absolutely every 16 ms. To achieve that, the AV thread runs at a higher priority than any other thread. The real-time scheduler doesn’t let a lower-priority thread run until every higher-priority thread is finished. Normally that means worse performance overall, and in some cases can softlock the system - if the AV thread gets stuck in a loop, your computer won’t even respond to keyboard input.
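If you’ve not seen it, that’s just a couple of library calls on Linux. A minimal sketch (my own, assuming POSIX threads and the privilege to set real-time priorities; compile with -lpthread):

```c
/* Sketch: run an AV thread under the SCHED_FIFO real-time policy so it
   pre-empts every normal thread. Needs root or CAP_SYS_NICE - and note
   that an infinite loop in av_thread really can hang the machine. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *av_thread(void *arg)
{
    (void)arg;
    /* ... update the screen and audio buffer every 16 ms ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 80 }; /* 1 (low) .. 99 (high) */

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    int err = pthread_create(&tid, &attr, av_thread, NULL);
    if (err != 0)   /* EPERM here means you lack real-time privileges */
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
    else
        pthread_join(tid, NULL);
    return 0;
}
```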
Soft real-time is appropriate for when no-one will die if a timeslot is missed. A video stutter won’t kill you. Hard real-time is for things like industrial control. If the anti-lock brakes in your car are meant to evaluate your wheels one hundred times a second, then taking 11 ms to evaluate them is a complete system failure, even if the answer is correct. Note that it doesn’t matter if it gets the right answer in 1 ms or 9 ms, as long as it never ever takes more than 10. Hard real-time performance does not mean good performance, it means predictable performance.
When we program up PLCs in industrial settings, for our ‘critical sections’ we’ll use processor interrupts, so that we know our code will absolutely run in time. We use specialised languages as well - no loops, no recursion - that don’t let you do things that can’t be checked for an upper time bound. Lots of finite state machines! But when we’re done, we know that we’ve got code that won’t miss a time slot in the next twenty years of operation.
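Real PLC code is written in the IEC 61131-3 languages rather than C, but the shape of it translates to something like this made-up pump interlock (purely illustrative): one state machine, evaluated once per scan, where every path through it is a straight line, so the worst-case execution time is trivial to bound.

```c
/* Hypothetical illustration: one state machine stepped once per scan
   cycle (e.g. from a fixed-rate timer interrupt). No loops, no
   recursion - each call does a bounded amount of work and returns. */
enum pump_state { IDLE, STARTING, RUNNING, FAULTED };

struct pump {
    enum pump_state state;
    int start_cmd;      /* input: operator start request */
    int flow_ok;        /* input: flow switch made       */
    int run_output;     /* output: contactor drive       */
};

void step(struct pump *p)
{
    switch (p->state) {
    case IDLE:
        p->run_output = 0;
        if (p->start_cmd)
            p->state = STARTING;
        break;
    case STARTING:
        p->run_output = 1;
        if (p->flow_ok)
            p->state = RUNNING; /* a real one would time out to FAULTED */
        break;
    case RUNNING:
        p->run_output = 1;
        if (!p->flow_ok)
            p->state = FAULTED;
        break;
    case FAULTED:
        p->run_output = 0;      /* stay here until a reset input */
        break;
    }
}
```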
That does mean, ironically, that my old Amiga was a better music computer than my current desktop, despite being millions of times less powerful. OctaMED could take over the whole CPU whenever it liked. Whereas a modern desktop might always have to respond to a USB device or a hard drive, leading to a potential stutter at any time. Tiny probability, but not an acceptable one.
For some reason, I thought that was going to be a twenty-minute video, not a five-and-a-half hour sequence. Dang.
Been playing it on Arch all morning - runs beautifully straight out the box on a gaming desktop. Forgotten how (a) dark (b) bastard hard it is. Superb game, tho, and all the loading screens being essentially gone adds back a bit of pace it was missing.
And yeah, mapping the weird N64 controller to an xbox pad is always going to be strange - been wasting a lot of items when I’d been intending to look around.
That’s because Arch is the best, so any additional comparisons are just wasting everyone’s time ;-)
Ah, that sounds a bit unfortunate. I’ve run AMD CPUs on Linux desktops with Bulldozer / Piledriver / Ryzen 7, my current laptop is a Ryzen 7 as well, never run into that at all. Hopefully the Arch wiki will sort you out. If not that, the third option would be ‘install Linux on an M-series Mac’ - don’t know how feasible it is at the moment, and paying the ‘Mac premium for hardware and software integration and then overwriting the software’ doesn’t make a lot of sense to me.
Not all of the light would have been wasted on the wall. If your wall is painted green, then the ‘rest of the rainbow’ (red, orange, yellow, blue, violet wavelengths) would be absorbed and converted into heat. Paint is quite rough on a microscopic level, and the green light reflected would be scattered in every direction.
Things that have a colour do so because they reflect those frequencies. Mirrors reflect pretty much all frequencies of visible light with very little scattering - that’s the definition of the word, really.
If you had a black feature wall behind your lamp, such that very little was reflected off it into the rest of the room, then with a mirror there would be about twice the photons illuminating the room. If your wall was pure brilliant white, much less of a difference. Your eyes don’t perceive ‘twice the photons’ as ‘twice as bright’ - they scale from absorbing thousands a second when fully dark-adapted at night, to trillions per second at midday - but you might find it a bit easier to eg. read a book elsewhere in the room.
Light output from the lamp doesn’t change, but depending on the colours of things in your room, the light output that is useful for seeing might do.
Really? If it’s a big enough treatment works to warrant a SCADA, then I doubt an automation engineer with the experience to set it all up would be asking this question, but here goes. You’ve a couple of obstacles:
- every contract I’ve ever seen for industrial automation has either specified which control plane they want directly, or they’ll have a list of approved suppliers which you must use. Someone after you will have to maintain this. Those maintainers will only accept the things that they have been trained on. Those things are Windows PCs running Windows software. They will reject anything else. The people running network security on those machines will have a very short list of the acceptable operating systems for running SCADA systems. That list will be a couple of versions of Windows Server. They will also reject anything else.
- that’s not nearly enough information to make a recommendation. Which PLCs? Allen Bradley, Siemens, Mitsubishi, …? I can’t think of a job I’ve ever been on where the local HMI hasn’t matched the PLCs. The SCADA software almost invariably matches the PLCs used in the main motor control centre, with perhaps a couple of oddball PLCs for proprietary panels and such like. Could maybe ask the supplier if they’ve a Linux alternative? Siemens will laugh at you and Mitsi won’t understand the question, but AB just might.
Sorry - I’m a Linux evangelist, but I don’t think it’s a good fit for here. SCADA performance generally isn’t bad due to Windows Server - it’s fine, does what it’s intended to - but because eg. STEP 7 is an appallingly slow and bloated piece of software which would bring a mainframe to its knees. Which is bizarre - the over-the-wire protocol connecting the machines is generally a short binary blob described in the PLC configuration - these bits are the drive statuses, these bits are an int or a float for an instrument readout - and it shouldn’t be at all slow updating it all, but slow it is.
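For the non-PLC people: the blob is typically just a packed struct whose layout is agreed with the PLC program. Something like this sketch (a made-up layout - the offsets, types, and byte order all come from the PLC configuration; Siemens kit sends big-endian, for instance, so a real version would byte-swap):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical 8-byte telegram for one drive, as configured in the PLC. */
struct drive_telegram {
    uint16_t status_bits;   /* bit 0: running, bit 1: tripped, ...     */
    int16_t  speed_rpm;     /* signed integer readout                  */
    float    flow_m3h;      /* instrument value as a 32-bit IEEE float */
};

/* Unpack one telegram from the raw bytes off the wire
   (host byte order assumed here, for brevity). */
static struct drive_telegram unpack(const uint8_t *buf)
{
    struct drive_telegram t;
    memcpy(&t.status_bits, buf + 0, sizeof t.status_bits);
    memcpy(&t.speed_rpm,   buf + 2, sizeof t.speed_rpm);
    memcpy(&t.flow_m3h,    buf + 4, sizeof t.flow_m3h);
    return t;
}
```

Decoding a few hundred of those should take microseconds, which is rather the point.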
Nah, it’s repeating the installation process until you finally get enough stuff working to have internet, and then you can bootstrap installing every other bit of software that you need. Thank goodness for rolling release - I can’t imagine having to go through that again.
Writing in ASM is not too bad provided that there’s no operating system getting in the way. If you’re on some old 8-bit microcomputer where you’re free to read directly from the input buffers and write directly to the screen framebuffer, or if you’re doing embedded where it’s all memory-mapped IO anyway, then great. Very easy, makes a lot of sense. For games, that era basically ended with DOS, and VGA-compatible cards that you could just write bits to and have them appear on screen.
Now, you have to display things on the screen by telling the graphics driver to do it, and so a lot of your assembly is just going to be arranging all of your data according to your platform’s C calling convention and then making syscalls, plus other tedious-but-essential requirements like making sure the stack is aligned whenever you make a call. You might as well write macros to do that since you’ll be doing it a lot, and if you’ve written macros to do it then you might as well be using C instead, since most of C’s keywords and syntax map very closely to the ASM that would be generated by macros.
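To make that concrete, here’s roughly what ‘just print something’ costs once you’re talking to the kernel yourself - my sketch, assuming x86-64 Linux and GCC/Clang inline asm:

```c
/* Sketch: a hand-rolled write(2) on x86-64 Linux. All the work is
   shuffling arguments into the registers the kernel expects. */
long sys_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    register long          a0 asm("rdi") = fd;   /* arg 1 */
    register const void   *a1 asm("rsi") = buf;  /* arg 2 */
    register unsigned long a2 asm("rdx") = len;  /* arg 3 */

    asm volatile ("syscall"
                  : "=a"(ret)                         /* rax: return value */
                  : "a"(1), "r"(a0), "r"(a1), "r"(a2) /* rax: __NR_write   */
                  : "rcx", "r11", "memory");          /* trashed by syscall */
    return ret;
}

int main(void)
{
    sys_write(1, "hello\n", 6);
    return 0;
}
```

Every single call gets that ceremony, so you write macros for it - and once you’ve done that, you’ve reinvented a worse C compiler.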
A shame - you do learn a lot by having to tell the computer exactly what you want it to do - but I couldn’t recommend it for any non-trivial task any more. Maybe a wee bit of assembly here-and-there when you’ve some very specific data alignment or timing-sensitive requirement.