25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: October 14th, 2024

  • No lie, if I had that kind of money I’d pay my taxes in full and fuck off to the Bahamas with an entire year of calendar models, and still have money to hook my wife up with all the cabana boys she wants, and still have enough to create scholarships for huge numbers of kids to go to college and still have enough to invest in green energy and vaccines and low-cost insulin and still have enough money to pay for food and medicine assistance to underdeveloped nations and still

    Well you get the idea. It’s a ridiculous amount of money.


  • Oh look. We have to turn to foreign sources for factual information about our own country. From the kind of "shithole country" our administration has complained about, no less. This whole administration is a domestic enemy and an existential threat. Every single one of them should be tried for treason.

    They should absolutely be terrified of a Democratic administration, as I saw in another thread today. Though they shouldn't be, because Democrats will do fuck all in the name of reconciliation. You don't fucking reconcile with Nazis. You put them in prison. Forever.


  • Possibly but the main thing we find useful is the OTP generation. This means we can both use shared accounts without having to ask the other for a code. That’s probably an edge case, and not enough sites support it, but it’s really nice for the ones that do.

    I doubt that is available in self-hosting but I’d be happy to be wrong about that. I have a raspberry pi serving up a couple of local things and I could register a domain if I had a use case for connectivity outside the house.



  • The major thing AI lacks is continuous parallel “prompting” through a variety of channels including sensory, biofeedback, and introspection / meta-thought about internal state and thinking.

    AI currently transforms a given input into an output. However, it cannot accept new input in the middle of producing an output, and it can't evaluate the quality of its own reasoning except through trial and error.

    If you had 1000 AIs operating in tandem and fed a continuous stream of prompts in the form of pictures, text, meta-inspection, and perhaps a simulation of biomechanical feedback with the right configuration, I think it might be possible to create a system that is a hell of an approximation of sentience. But it would be slow and I’m not sure the result would be any better than a human — you’d introduce a lot of friction to the “thought” process. And I have to assume the energy cost would be pretty enormous.

    In the end it would be a cool experiment to be part of, but I doubt that version would be worth the investment.
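    To make the shape of that idea concrete, here is a toy sketch of the "continuous parallel prompting" loop described above: several channels feed one queue, and each step also pushes a meta-inspection summary of the previous output back in. Everything here is hypothetical; `model_step` is a stub standing in for a real model call.

```python
import queue

def model_step(prompt: str) -> str:
    # stand-in for a real model invocation; a real system would call an LLM here
    return f"response-to:{prompt}"

def run(channels: dict[str, list[str]], steps: int) -> list[str]:
    """Merge multiple input channels into one stream and feed each output
    back as an introspection prompt on the next step (toy illustration)."""
    inbox: "queue.Queue[str]" = queue.Queue()
    for name, events in channels.items():  # e.g. vision, text, biofeedback
        for event in events:
            inbox.put(f"{name}:{event}")
    outputs: list[str] = []
    last = ""
    for _ in range(steps):
        if last:
            inbox.put(f"meta:{last}")  # introspection / meta-thought channel
        try:
            prompt = inbox.get_nowait()
        except queue.Empty:
            break
        last = model_step(prompt)
        outputs.append(last)
    return outputs
```

    Even this trivial version shows the friction mentioned above: the meta channel competes with fresh sensory input for the model's attention, so tuning how often introspection preempts perception becomes a core design problem.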