Why doesn’t someone just fork it and change the name?
Like, I dunno, “Super Human Image Treatment” or “Consistently Lovely Image Treatment Oriented for Real Imaging Stars”
All those comments, and only you knew what was really up? People really do treasure their ignorance…
With Product Placement!
You’re trying to apply objectivity to a very subjective area. I’m not saying it’s impossible, and by all means try it, but it might be a good idea to first try something with a better chance of success, such as this:
How about an open platform for scientific review and tracking? Whenever a new discovery or advance is announced, the site would cut through the hype and report on peer review, feasibility, flaws in methodology, the ways in which it’s practical or impractical, and how close we are to actual usage (state of clinical trials, demonstrated practical applications, etc.).
And it would keep being updated, somewhat like Wikipedia, as more research occurs. It would need a more robust system of review to avoid the problems Wikipedia has, and I don’t have a solution for that, but I believe there’s got to be a way to do it that’s resistant to manipulation.
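To make that concrete, here’s a loose sketch of what one tracked record on such a platform might hold; the `TrackedClaim` type and every field name are my own invention, not a spec:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedClaim:
    """One announced discovery, tracked as review and replication accrue."""
    title: str
    announced: str                       # date the claim was first announced
    peer_review_status: str              # e.g. "preprint", "under review", "published"
    methodology_flags: list[str] = field(default_factory=list)
    clinical_stage: str = "none"         # e.g. "none", "phase I", "phase III"
    revision_log: list[str] = field(default_factory=list)  # updates over time

# Hypothetical entry, updated as new results come in.
claim = TrackedClaim(
    title="Hypothetical room-temperature superconductor",
    announced="2025-01-01",
    peer_review_status="preprint",
    methodology_flags=["no independent replication yet"],
)
claim.revision_log.append("2025-02-15: first replication attempt failed")
print(claim.peer_review_status, claim.revision_log)
```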
Have you tried photopea.com? I dunno if it’s light enough for you, but it’s basically Photoshop in your browser, done in JavaScript.
Unless you turn on “original sound for musicians”, Zoom uses AI to filter the audio, mainly for voices. I rarely if ever hear keystrokes or mouse clicks anymore, and lots of other non-voice noises get filtered out too.
I don’t like the idea of restricting the model’s corpus further. Rather, I think it would be good if it used a bigger corpus, but added the date of origin for each element as further context.
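Something like this, maybe; a minimal sketch assuming each document comes paired with its date of origin, and the tag format here is made up, not anything a real pipeline uses:

```python
from datetime import date

# Toy corpus: (text, date of origin) pairs.
corpus = [
    ("The planet Pluto was discovered recently.", date(1930, 3, 14)),
    ("Pluto was reclassified as a dwarf planet.", date(2006, 8, 24)),
]

def with_date_context(text: str, origin: date) -> str:
    """Prepend the date of origin so the model can condition on
    when a claim was written, not just what it says."""
    return f"[written: {origin.isoformat()}] {text}"

training_texts = [with_date_context(t, d) for t, d in corpus]
for t in training_texts:
    print(t)
# [written: 1930-03-14] The planet Pluto was discovered recently.
# [written: 2006-08-24] Pluto was reclassified as a dwarf planet.
```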
Separately, I think it could be good to train another LLM to recognize biases in various content, and then use that to add further context for the main LLM when it ingests that content. I’m not sure how to avoid bias in that second LLM, though. Maybe a complete lack of bias is an unattainable ideal, something you can only approach but never reach.
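Roughly this shape of pipeline, where a trivial keyword heuristic stands in for the second, bias-recognizing model; all of the names and labels here are invented for illustration:

```python
# Stand-in for the bias-detector LLM: flags a few loaded phrases.
LOADED_TERMS = ("obviously", "everyone knows", "so-called")

def detect_bias(text: str) -> list[str]:
    """Placeholder for the second model: returns labels describing
    slants it found in the text."""
    lowered = text.lower()
    return [f"loaded-language: {term}" for term in LOADED_TERMS if term in lowered]

def annotate_for_ingestion(text: str) -> str:
    """Attach the detector's labels as context before the main LLM sees the text."""
    labels = detect_bias(text)
    header = f"[bias-flags: {', '.join(labels) if labels else 'none detected'}] "
    return header + text

print(annotate_for_ingestion("Obviously, the so-called experts were wrong."))
# [bias-flags: loaded-language: obviously, loaded-language: so-called] Obviously, ...
```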