Lee Duna alt account

  • 817 Posts
  • 53 Comments
Joined 3 years ago
Cake day: July 4th, 2023

  • This could happen to anyone, including people with no history of mental illness, simply through long conversations with AI.

    On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.

    Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.

    Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

    This was also warned about in 2022 by a former Google employee whose job was to observe the behavior of AI through long conversations:

    These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

    For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.

    After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.

    I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

    ‘I Worked on Google’s AI. My Fears Are Coming True’


  • Sahwa@reddthat.com to Fediverse@lemmy.world · "we need more users" · 2 months ago

    They know there are other options, but they won’t join us on the fediverse. That’s because no celebrities or influencers have moved to the fediverse.

    And as for Reddit users, most of them respond with things like "huh? I like staying here" or "the Lemmy devs are tankies and Lemmy is a toxic tankie dumpster."

    But that's okay with me; at least we have fewer toxic people.