• Lazycog@sopuli.xyz
    5 days ago

    This was an interesting awareness-raising project.

    And the article says they didn’t let the chatbot generate its own responses (and therefore risk LLM hallucinations), but rather used an LLM in the background to categorize the user’s question and return a pre-written answer from that category.
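
    Something like this pattern, roughly (a minimal sketch of what that might look like, not the project’s actual code; the category names, answers, and the choice of the OpenAI API as the classifier backend are all my assumptions):

    ```python
    from openai import OpenAI

    # Hypothetical category -> human-vetted answer table; the real project's
    # categories and wording aren't shown in the article.
    CANNED_ANSWERS = {
        "pricing": "Here is our reviewed pricing information ...",
        "safety": "Here is our reviewed safety guidance ...",
        "other": "Sorry, I can only answer questions on a few topics.",
    }

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def answer(user_question: str) -> str:
        # The LLM only picks a category label; it never writes the reply itself.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Classify the user's question into exactly one of these "
                        f"categories: {', '.join(CANNED_ANSWERS)}. "
                        "Reply with the category name only."
                    ),
                },
                {"role": "user", "content": user_question},
            ],
        )
        category = resp.choices[0].message.content.strip().lower()
        # Fall back to a safe default on any unexpected label, so hallucinated
        # output can never reach the user as free text.
        return CANNED_ANSWERS.get(category, CANNED_ANSWERS["other"])
    ```

    That way the worst a hallucination can do is pick the wrong canned answer, never invent a new one.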

    • enkers@sh.itjust.works
      5 days ago

      I mean, couldn’t you just use any of a plethora of other uncensored LLMs from Hugging Face if you want those sorts of answers?