Does Mono have any purpose any longer? What is the point now that dotnet core is so well-established?
Holy shit, I’m so sorry to hear that. I’ve been going through a physical rehab of my own the past year and it is so awful. Recovering from such a major surgery as that must be horrible, I can’t imagine.
Last year I suffered a severe shoulder dislocation that lasted ~2.5 hrs, while I was in the middle of nowhere. It was unimaginably painful and I struggled to believe it would ever be over. The ambulance took so long to arrive.
But the real fear came in the following days, weeks, months. Knowing that it could happen again at any moment without warning, even from the simplest movement with no strain at all. When the severe one happened I wasn’t doing anything unusual, I just reached out to pick something up and pop. My damaged muscles were in such a state that it could pop out with no real cause.
Nowadays I’m doing much better. I’ve had surgery to fix it, with a 90% success rate. But that 10% risk still keeps me up at night sometimes.
Experiments in rats have found that once plastic is introduced to their environment, their ability to reproduce declines dramatically. Genitalia are smaller, sperm counts lower. And the effect compounds and grows generation after generation, getting worse and worse so long as plastic is consumed.
Studies have also shown that human fertility (regarding the actual physical ability to reproduce, not the choice of whether to do so) has dropped dramatically generation on generation since the rise of plastics.
And in that situation, the safest bet is to say no. See: the invisible dragon https://rationalwiki.org/wiki/The_Dragon_in_My_Garage
improving and integrating the technology is raising harder and more complex questions than first envisioned
Many people not only envisioned but predicted these problems as soon as the hype cycle began.
Interesting article. I’d have loved to see some stats on how LLM investment and LLM startups are doing.
You can go full crazy like I did and use a Windows keyboard with macOS mappings.
I was used to a Mac when I switched to Linux on my work desktop, which has a Windows keyboard. I didn’t want to re-learn my bindings, and I touch-type anyway, so the keycaps don’t confuse me. It mostly works great, except for the # key, which I have to type as AltGr+3.
I think you’re missing their point a bit
Stealing “the properties of women” is BS, since back then women had no right to ownership;
That is not quite right. Married women could not hold property of their own, but plenty of unmarried women had property, and witch-hunting was a for-profit, legalised industry of robbing these vulnerable women. But European history isn’t really the point, so let’s move on.
I don’t think your description of a tulip market is quite right. The point of the tulip market is just that the tulip itself has no inherent value other than the expectation that someone else will buy it for more. This has similar properties to an MLM in that eventually, the person at the end of the chain (holding the “hot potato”) is out of luck. And I wouldn’t say that is how all markets work, either. Healthy markets are built on products of value, not speculation.
Crypto is as profitable as USD
This is a minor point, but note that holding USD is not profitable, due to inflation. This is a key difference between crypto and regular currencies: crypto becomes more scarce over time and so will tend to become more valuable, while the dollar becomes less valuable. Becoming less valuable is actually a desirable trait, because it encourages people to spend money and get the economy moving, rather than hoarding and speculating.
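A rough back-of-envelope sketch of that erosion, with the 2% inflation rate and 10-year horizon assumed purely for illustration:

```python
# Purchasing power of cash held under steady inflation.
# The 2% rate and 10-year horizon are illustrative assumptions only.
inflation_rate = 0.02
years = 10

purchasing_power = 100 / (1 + inflation_rate) ** years
print(f"$100 held as cash buys about ${purchasing_power:.2f} worth of goods after {years} years")
# -> about $82: a slow, predictable erosion that nudges people to spend
#    or invest rather than hoard.
```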
You seem to me like you’ve drunk at least a little of the cyber-libertarian kool-aid. If so, we’ll probably not see eye-to-eye any time soon. But I’ll try to give you a gist of my perspective on the whole affair and you can make of it what you will. The block chain does not offer any substantial benefit as a virtual currency over a regular database, as we have been using for decades. Systems like Steam’s inventory, Neopets’ collectible pets and World of Warcraft were running virtual currencies that worked perfectly long before crypto came along, and without accelerating global warming by using a small country’s worth of power to do it.
You will doubtless respond that decentralisation is the difference the block chain brings, its value offering. But no-one actually runs their own wallet. If Steam Inventory were using a block chain, nothing would change in practice whatsoever, because I’d just have a wallet stored and managed by Steam, and access it through a web interface just as I do now with Steam Inventory, which is backed by a regular database. It would just be an implementation detail that makes no difference to the actual end-user, who is never going to run their own wallet.
Trust is brought about by regulation and insurance. I know my money is safe because my bank is backed by my country’s central bank, and by decades of regulations protecting my investments. With crypto the idea is that trust is guaranteed by the block chain, but again, no-one actually runs their own wallet because it is a pain in the butt to do so. So what happens in practice is that you store your wallet with some third-party provider that can later run off with your money with minimal consequences, or gets hacked, and in either case you don’t get your money back. Both of these scenarios have happened numerous times, as I’m sure you must be aware; I hope I don’t need to cite them here.
This isn’t difficult to figure out: everyone who seriously looks at crypto knows it doesn’t really have any practical use-case beyond money laundering and black markets. So why all the paragraphs and paragraphs of libertarian ideological banter about how Bitcoin will usher in a new age of peace and prosperity and fix all the world’s problems, etcetera etcetera? Because that is a great way of scamming people, and that is what crypto is at the end of the day. All the folks telling you to “hodl” and creating mountains of memes about how great it is to hold bitcoin and wait were only doing so in the hope that the people who ate that nonsense up would be daft enough to hold during the next crash, which prevents the price from falling too far and enables the more Machiavellian crypto holders to cash out.
That’s the really sad part of the whole affair. I think there are a number of people who genuinely drink the libertarian kool-aid and think that Bitcoin et al are somehow going to bring about a brave new world, and they’re ultimately just being played for fools by the scam artists who peddle that crap without believing any of it.
What the libertarian dream has achieved is the same thing it always does: it causes exactly the problems regulation is intended to prevent. Insider trading occurs in at least 25%, and possibly up to half, of all crypto listings. Billions of dollars are stolen by scammers. Banks scam their own customers (which, funnily enough, was resolved quite neatly by precisely the regulations that libertarians want to avoid).
Alright I’m tired. This is just a rant not an essay, you can take it or leave it. Nothing is proof-read. I don’t really have a conclusion other than to say once again that crypto is a scam perpetrated by nasty horrible bad selfish people that are harming society and the planet.
They are both different parts of the same problem. Prolog can solve logical problems using symbolism. ChatGPT cannot solve logical problems, but it can approximate human language to an astonishing degree. If we ever create an AI, or what we now call an AGI, it will include elements of both these approaches.
In “Computing Machinery and Intelligence”, Turing made some really interesting observations about AI (“thinking machines” and “learning machines” as they were called then). It demonstrates stunning foresight:
An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside… This is in clear contrast with normal procedure when using a machine to do computations: one’s object is then to have a clear mental picture of the state of the machine at each moment in the computation. This object can only be achieved with a struggle.
Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops.
You can view ChatGPT and Prolog as two ends of the spectrum Turing is describing here. Prolog is “thinking rationally”: it is predictable and logical. ChatGPT is “acting humanly”: it is an unpredictable, “undisciplined” model, but it does exhibit very human-like behaviours. We are “quite ignorant of what is going on inside”. Neither approach is enough to achieve AGI, but they are such fundamentally different approaches that it is difficult to conceive of them working together except by some intermediary like Subsumption Architecture.
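To make the “thinking rationally” end concrete, here is a toy sketch of Prolog-style symbolic inference, written in Python purely for illustration (the facts, the grandparent rule and the names are all made up):

```python
# Prolog equivalent of the rule below:
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive_grandparents(facts):
    """Forward-chain one rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)."""
    derived = set()
    for pred1, x, y1 in facts:
        for pred2, y2, z in facts:
            if pred1 == pred2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(derive_grandparents(facts))
# -> {('grandparent', 'alice', 'carol')}
# Every answer follows deterministically from the rules, in contrast to
# sampling the next token from a language model.
```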
I don’t think your characterisation of the Dartmouth Project and machine learning is quite correct. The project was extremely broad and covered numerous avenues of research; it was not solely related to machine learning, though that was certainly prominent.
The thing that bothers me is how reductive these recent narratives around AI can be. AI is a huge field including actionism, symbolism, and connectionism. So many people today think that neural nets are AI (“the proper term for the study of machine learning”), but neural nets are connectionism, i.e. just one of the three major approaches to AI.
Anyway, the debate as to whether “AI” exists today or not is endless. But I don’t agree with you. The term AGI has only come along recently, and is used to move the goalposts. What we originally meant by AI has always been an aspirational goal and one that we have not reached yet (and might never reach). Dartmouth categorised AI into various problems and hoped to make progress toward solving those problems, but as far as I’m aware did not expect to actually produce “an AI” as such.
That is not an accurate description of witch hunting and witch trials. It was a distinct business that emerged from capitalism and misinformation, in which the properties of women could be stolen for profit. Witch hunters would travel from town to town looking for vulnerable women to legally rob and kill, and those that testified they’d witnessed witchcraft got a cut of the profit too. As a result of this practice tens of thousands of women were killed. Carl Sagan dedicated an excellent chapter of his book The Demon-Haunted World to the topic, if you’re interested.
Anyway, that is a bit of a side bar to the point you’re missing. Just as it would once have been very profitable but deeply unethical to be a witch hunter, crypto was certainly profitable at a time, but only in deeply unethical ways. The late crypto fad was little more than an MLM scheme rife with fraud, and any profit extracted was at the expense of whoever is holding the hot potato when the worthless tulip market crashes (if you’ll excuse the mixed metaphor…).
if I had spent the time to mine just a few hundred BTC back when I first heard of them, I’d now be a millionaire 🤷
If I had lived in the 17th century it would have been very profitable to get involved in witch trials and witch hunting. But being profitable doesn’t make it any less wrong.
We know more than you might realize
The human brain is the most complex object in the known universe. We are only scratching the surface of it right now. Discussions of consciousness and sentience are more a domain of philosophy than anything else. The true innovations in AI will come from neurologists and biologists, not from computer scientists or mathematicians.
It’s nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness (“temperature”) to return the best results.
Quantum effects are not randomness. Emulating quantum effects is possible (they can be understood empirically), but it is very slow. If intelligence relies on quantum effects, then we will need to build whole new types of quantum computers to build AI.
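And for what it’s worth, the “temperature” in these models is just ordinary pseudo-random sampling over the output distribution, nothing quantum. A minimal sketch of what it means, assuming softmax sampling over logits (the numbers are made up):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, seed=None):
    """Sample a token index from temperature-scaled softmax probabilities.

    Higher temperature flattens the distribution (more varied output);
    lower temperature sharpens it (more deterministic output).
    """
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Three hypothetical candidate tokens with raw scores (logits):
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.7))
```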
the results speak for themselves.
Well, there we agree. In that the results are very limited I suppose that they do speak for themselves 😛
We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a radical change just like that from air flight to spaceflight.
This is what I mean by exaggeration. I’m an AI proponent, I want to see the field succeed. But this is nothing like the leap forward some people seem to think it is. It’s a neat trick with some interesting if limited applications. It is not an AI. This is no different than when Minsky believed that by the end of the 70s we would have “a machine with the general intelligence of an average human being”, which is exactly the sort of over-promising that led to the AI field having a terrible reputation and all the funding drying up.
I’ve seen a million such demos, but simulations like these are nothing like the real world. Moravec’s paradox will make neural nets look like toddlers for a long time to come yet.
I wouldn’t say $74k is consumer grade, but Spot is very cool. I doubt that it is purely a neural net though; there is probably a fair bit of actionism at work.
The difference is that calculators are deterministic and correct. If you get a wrong answer, it is you that made the mistake.
LLMs will frequently output nonsense answers. If you get a wrong answer, it is probably the machine that made the mistake.
We don’t even know what consciousness or sentience is, or how the brain really works. Our hundreds of millions spent on trying to accurately simulate a rat’s brain have not brought us much closer (Blue Brain), and there may yet be quantum effects in the brain that we are barely even beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).
I get that you are excited, but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks’ enlightening writing, like Elephants Don’t Play Chess, or the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).
Thanks!