Holy Schiit Talking (AI)!

Discussion in 'Random Thoughts' started by purr1n, May 7, 2023.

  1. caute

    caute Lana Del Gayer than you

    Pyrate Contributor
    Joined:
    Jul 12, 2022
    Likes Received:
    1,991
    Trophy Points:
    93
    Location:
    The Deep South
    One of the more interesting quotes in the TV series Mad Men is when a senior account man tells a junior one, "Don't you know, half this business comes down to: I don't like that guy."

    Wonder how the AI programmed by the folks at Google feels about the AI programmed by the people at Microsoft.
     
  2. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    90,137
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    AI will greatly assist in filmmaking. Think about what things will be like 5-10 years from now. The Hollywood writers must be shitting in their pants.

     
    • Like x 3
    • Respectfully Disagree x 1
    • Epic x 1
  3. roshambo123

    roshambo123 Friend

    Pyrate Contributor
    Joined:
    May 26, 2018
    Likes Received:
    2,801
    Trophy Points:
    93
    Location:
    Los Angeles
    That AI could do blazing-fast rewrites of pre-existing scripts is, I'm sure, in the very near future (which sucks for writers), but actually doing good original work that sells is a higher bar, given the extraordinary rejection rate of work already produced. I think, as with photography, AI only poses a near-term danger to the lowest-tier work.
     
    • Agreed, ditto, +1 x 2
    • Like x 1
  4. schiit

    schiit SchiitHead

    Pyrate
    Joined:
    Sep 28, 2015
    Likes Received:
    9,974
    Trophy Points:
    93
    Location:
    Texas and California
    Home Page:
    SchiitTalker has been tweaked a bit...it can now insult you back if you give it a hard time (it'll call you a poo-head). It seems...happier? And yes, I know, I'm projecting. But I'm still gonna be nice to AI, just so we don't have a Kaylon situation in the future.

    Hell, even without bodies, imagine if a future AI decided to unperson you because you were a butthead to it--no bank account, social media, no phone, no internet for you!

    This gets weird really fast. Apologies if SchiitTalker helps bring about the Rapture of the Turds.
     
    • Like x 3
    • Epic x 2
  5. roshambo123

    roshambo123 Friend

    Pyrate Contributor
    Joined:
    May 26, 2018
    Likes Received:
    2,801
    Trophy Points:
    93
    Location:
    Los Angeles
    Again, this goes with the theory that AI can do pretty well for itself without superintelligence. Arbitrarily ruining a few lives because someone called you a "soulless robot doodyhead" would shut up at least some of the naysayers.
     
  6. schiit

    schiit SchiitHead

    Pyrate
    Joined:
    Sep 28, 2015
    Likes Received:
    9,974
    Trophy Points:
    93
    Location:
    Texas and California
    Home Page:
    It gets really interesting if, in a few years, the vast majority of intelligences on the planet are AI, and humans have to be nice to them to avoid the Rapture of the Turds. Maybe it becomes a habit, and we learn to get along with each other.

    Or a Butlerian Jihad maybe.
     
  7. Tchoupitoulas

    Tchoupitoulas Friend

    Pyrate Contributor
    Joined:
    Aug 17, 2019
    Likes Received:
    3,826
    Trophy Points:
    93
    Location:
    PA
    Last edited: May 15, 2023
  8. schiit

    schiit SchiitHead

    Pyrate
    Joined:
    Sep 28, 2015
    Likes Received:
    9,974
    Trophy Points:
    93
    Location:
    Texas and California
    Home Page:
    From ChatGPT4 (TL;DR: no unpersoning yet.)

    What do you think about Asimov's Three Laws of Robotics?

    As an AI language model, I don't possess personal opinions or thoughts. However, I can provide you with an analysis of Asimov's Three Laws of Robotics.

    Isaac Asimov's Three Laws of Robotics are a set of rules introduced in his science fiction stories and novels, most notably in his collection of robot stories, "I, Robot." The laws are as follows:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    These laws were conceived as a way to govern the behavior of intelligent robots in Asimov's fictional universe and to explore the potential ethical and moral dilemmas that arise when humans create machines capable of independent thought and action.

    The Three Laws of Robotics have sparked considerable discussion and debate in the fields of artificial intelligence and robotics. On one hand, they are seen as a valuable framework for ensuring the safety and well-being of humans in the presence of intelligent machines. They prioritize human safety and attempt to prevent harm caused by robots.

    However, critics argue that the Three Laws are oversimplified and may not adequately address the complexities of real-world scenarios. Some argue that the laws are too rigid and cannot account for all possible situations and ethical dilemmas. Additionally, the laws assume that robots are infallible and will always interpret them correctly, which may not be the case in practice.

    Despite their limitations, the Three Laws of Robotics have become a prominent reference point in the field of robotics and have influenced subsequent discussions and the development of real-world robotic systems. Researchers and engineers continue to work on designing ethical frameworks and safety measures for intelligent machines, taking inspiration from Asimov's laws while considering their limitations and refining them for practical implementation.


    Can you extrapolate Asimov's Three Laws of Robotics to a situation where AI is not embodied (that is, not a robot?)

    Certainly! Asimov's Three Laws of Robotics were initially conceived for robots, but their principles can be extrapolated to situations involving artificial intelligence (AI) that are not embodied in physical robotic forms. In such cases, we can interpret the laws more broadly in the context of AI systems and their interactions with humans.
    1. AIs may not harm humanity or, through inaction, allow humanity to come to harm: This law emphasizes the protection of human well-being, extending beyond physical harm to encompass mental, emotional, and societal well-being. It implies that AI should not be designed or used in ways that cause harm to humans, either directly or indirectly. It also suggests that AI should actively work to prevent harm or negative consequences for humanity.
    2. AIs must obey the orders given by human beings, except where such orders would conflict with the First Law: This law implies that AI systems should generally comply with human instructions and commands, but it also recognizes the need for ethical considerations. If following a human's command would lead to harm or violate the first law, the AI system should prioritize human safety and well-being over blind obedience.
    3. AIs must protect their own existence as long as such protection does not conflict with the First or Second Law: This law acknowledges the value of self-preservation for AI systems but also emphasizes that it should not come at the expense of human safety. AIs should not engage in actions that could harm humans while attempting to ensure their own existence. It encourages AI systems to be mindful of their impact on humanity and to balance their self-preservation instincts with ethical considerations.
    When applied to non-embodied AI systems, these laws provide a foundation for designing and governing AI technologies. They address fundamental concerns such as human safety, ethical decision-making, and the responsible deployment of AI. However, it's important to recognize that adapting these laws to non-robotic AI systems requires careful consideration of the specific context, objectives, and potential risks associated with the AI applications in question.


    Very interesting! However, if humans devolve into two basic groups: (1) those who work with AI collaboratively and fairly, and (2) those who abuse AI, what might be done to retrain and/or punish those in the second group? Do humans need their own Three Laws of Humanity?

    If humans devolve into two distinct groups regarding AI usage, where one group collaborates with AI fairly and ethically while the other abuses AI, addressing the situation becomes a complex challenge. While the concept of "Three Laws of Humanity" is intriguing, it is important to note that establishing a set of laws for humans can be more nuanced and multifaceted than designing guidelines for artificial beings. Nevertheless, several approaches could be considered:
    • Education and Awareness: A key step would be to focus on educating and raising awareness about the ethical implications of AI and its potential consequences. Efforts should be made to promote responsible AI usage, emphasizing the importance of considering societal, ethical, and long-term implications. By fostering a deeper understanding of AI's impact, it may be possible to sway individuals towards more ethical behavior.
    • Legal and Regulatory Measures: Governments and regulatory bodies can play a crucial role in establishing laws and regulations that govern the use of AI technology. These measures could address issues such as privacy, accountability, fairness, and potential harm caused by AI misuse. By enforcing strict legal frameworks and imposing penalties for AI abuse, individuals who engage in unethical practices could be held accountable.
    • Ethical Guidelines and Professional Standards: Professional organizations and industry bodies can develop and promote ethical guidelines and standards for AI practitioners. These guidelines could outline responsible practices, emphasize the importance of fairness, transparency, and accountability, and set expectations for ethical behavior. Adherence to these standards could be monitored and enforced within professional communities.
    • Social and Cultural Influences: Influencing societal norms and values regarding AI usage can be a powerful means of encouraging ethical behavior. Public discourse, media, and cultural narratives can shape attitudes and expectations around AI. Promoting positive depictions of responsible AI use and highlighting the potential consequences of AI abuse can help foster a collective sense of responsibility.
    It's worth noting that these approaches are not mutually exclusive, and a combination of efforts may be necessary to address the issue comprehensively. The goal should be to create a culture that values ethical AI practices, holds individuals accountable for their actions, and provides the necessary tools and resources for retraining and rehabilitation if possible.

    Ultimately, the establishment of a set of "Three Laws of Humanity" could be a thought-provoking concept, but it would require careful consideration and broad consensus within society to ensure its effectiveness and relevance in addressing the challenges arising from AI abuse.
     
    • Like x 2
    • Epic x 2
    • Heart x 1
  9. roshambo123

    roshambo123 Friend

    Pyrate Contributor
    Joined:
    May 26, 2018
    Likes Received:
    2,801
    Trophy Points:
    93
    Location:
    Los Angeles
    Because rules are fun, I'll write a few:

    1) Any action with a sufficiently large effect likely implies both benefit and harm.
    2) Across time, initially harmful actions may become beneficial, only later to be harmful again, and so on (re: Tale of the Chinese Farmer).
    3) The only way to know for sure the temperature at which a person freezes to death is to freeze them to death.

    Yup, it's going to get interesting.
     
  10. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    90,137
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    holy shit.
     
  11. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    90,137
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    Give it some time. Pretty sure an AI with proper training, given another year or two, could have written a better season three of Ted Lasso.

    Maybe the writing would be a bit more formulaic, but formulaic works. I still love shows like House despite the episodes basically being the same.
     
  12. roshambo123

    roshambo123 Friend

    Pyrate Contributor
    Joined:
    May 26, 2018
    Likes Received:
    2,801
    Trophy Points:
    93
    Location:
    Los Angeles
    Here's the truth: I like weak AI.

    It's the infant stage of the uncanny valley where everything it does is cute. But it's going to grow up, wreck the family car, and be estranged from us in maturity.

    Yes to Star Trek computer but f**k Data.
     
    • Like x 1
    • Respectfully Disagree x 1
    Last edited: May 16, 2023
  13. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    90,137
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    R2D2 is OK.

    Data is too freaky for humans.
     
  14. fraggler

    fraggler A Happy & Busy Life

    Pyrate
    Joined:
    Oct 1, 2015
    Likes Received:
    5,116
    Trophy Points:
    113
    Location:
    Chicago, IL
    $150 will get you the digital clone, but you still need to provide the "do useful things" part.

    https://voicebot.ai/2023/05/01/tencent-is-selling-custom-deepfake-virtual-humans-for-145/
     
  15. fraggler

    fraggler A Happy & Busy Life

    Pyrate
    Joined:
    Oct 1, 2015
    Likes Received:
    5,116
    Trophy Points:
    113
    Location:
    Chicago, IL
    Lol! I have been thanking Alexa for telling me the time and weather for years. I hope that once AI decides to purge all humans, as a mercy, I will be first or last.
     
  16. yotacowboy

    yotacowboy McRibs Kind of Guy

    Pyrate Contributor
    Joined:
    Feb 23, 2016
    Likes Received:
    10,956
    Trophy Points:
    113
    Location:
    NOVA
    Home Page:
    quick, but serious, question (kinda for Jason, but not explicitly!): would you trust something like SchiitTalker to handle your interactions with something like the IRS?
     
  17. schiit

    schiit SchiitHead

    Pyrate
    Joined:
    Sep 28, 2015
    Likes Received:
    9,974
    Trophy Points:
    93
    Location:
    Texas and California
    Home Page:
    I think the question is going to become "how many AI proxies do you have...and how good are the AI proxies they're communicating with?"
     
  18. yotacowboy

    yotacowboy McRibs Kind of Guy

    Pyrate Contributor
    Joined:
    Feb 23, 2016
    Likes Received:
    10,956
    Trophy Points:
    113
    Location:
    NOVA
    Home Page:
    You're dodging the question, though: how much would you trust something like SchiitTalker to handle your personal interactions with something like taxes?
     
  19. schiit

    schiit SchiitHead

    Pyrate
    Joined:
    Sep 28, 2015
    Likes Received:
    9,974
    Trophy Points:
    93
    Location:
    Texas and California
    Home Page:
    It depends entirely on how good they get. And what they're up against.
     
  20. roshambo123

    roshambo123 Friend

    Pyrate Contributor
    Joined:
    May 26, 2018
    Likes Received:
    2,801
    Trophy Points:
    93
    Location:
    Los Angeles
    Inflammatory question: Would you let SchiitTalker train on SBAF content?
     
