Can’t be bothered to read the whole news article? Get a quick AI summary in the post itself. The bot uses a specialised summariser (not just an LLM prompted to “summarise this”). Summaries are 60% identical to human-written summaries and >95% accurate in preserving the original meaning.

News summaries have moved to !news_summary@hilariouschaos.com. The bot has been updated to use a better web-scraping method and improved summarisation.

If you don’t like this, please just block the community; there’s no need to complain or downvote.
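The post does not say how the specialised summariser actually works. Purely as a hedged illustration, one family of summarisers that cannot invent new sentences is extractive summarisation: it scores the article’s own sentences by word frequency and returns the top few verbatim. The sketch below is not the bot’s code; every name in it is made up for illustration.

```python
# Minimal extractive summariser: scores sentences by word frequency and
# returns the highest-scoring ones verbatim, in their original order.
# Because every output sentence is copied from the source, this approach
# cannot fabricate claims the way a generative model can (though it can
# still mislead by dropping context).
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "for", "on", "with", "as", "was", "are", "be"}

def split_sentences(text: str) -> list[str]:
    # Naive splitter: break on ., !, or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def summarise(text: str, max_sentences: int = 2) -> str:
    sentences = split_sentences(text)
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        if not tokens:
            return 0.0
        # Average frequency of a sentence's words across the whole article.
        return sum(freq[t] for t in tokens) / len(tokens)

    ranked = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    return " ".join(s for s in sentences if s in ranked)
```

The trade-off the thread below debates applies here too: an extractive summary cannot hallucinate new facts, but cutting sentences can still misattribute or distort by omission.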

  • PhilipTheBucketA · 22 up / 1 down · 4 days ago

    If you don’t like this, please just block the community; there’s no need to complain or downvote.

    Best of luck with that!

    I’m actually not trying to poop on some cool new thing you’re setting up, but I think it is pretty clear at this point that building a system that uses an LLM to produce factual information for people is a recipe for your system getting well-deserved criticism.

    Also, pay your journalists. Anything that takes them out of the equation will at some point lead to X and YouTube being the only sources of news, feeding everybody whatever somebody feels like paying to produce and distribute for “free.”

  • missingno@fedia.io · 19 up / 1 down · 4 days ago

    This is genuinely harmful. LLMs will hallucinate, which means that using them as a substitute for reading the news will result in the spread of misinformation. And in an era where we see just how dangerous misinformation can be, I beg you to please not do this.

    “95% accurate” means 5% lies, which is 5% too many.

    • I’m using an LLM architecture that’s better suited to summarisation, meaning it won’t invent false facts like traditional GPTs do. The worst errors it has made are a couple of cases of misattribution of actions, which are easily spotted within the context of the whole summary.

      The AI has no more misinformation than a human journalist. It is not biased in its summaries, and it does not assert falsehoods out of malice. I have been accused of spreading misinformation for many articles, and yes, the bot is repeating misinformation, but that is the misinformation stated in the original human-authored article. It is not my bot’s job to pass judgement, but simply to make your ability to do so easier.

      • PhilipTheBucketA · 9 up · 4 days ago

        I’m using an LLM architecture that’s better suited to summarisation, meaning it won’t invent false facts like traditional GPTs do.

        What architecture is that? If you have an LLM that doesn’t hallucinate, there will surely have been papers written about the breakthrough.

        The AI has no more misinformation than a human journalist.

        And that, dear reader, was when the work of foolishness became something much more sinister.

        Humans, and trust in humans, are important. The internet divorced the human face and the accumulation of trust from the news, which has allowed engineered alternative facts to enter the mainstream consciousness, which might be the single biggest harmful development in a modern age which has no shortage of them. I am not trying to tell you that your summarizer project is automatically responsible for that. But be cautious about what future you’re constructing.

  • regrub@lemmy.world · 19 up / 1 down · edited · 4 days ago

    A 1-in-20 chance of introducing secondhand inaccuracies to the news, when it’s already hard enough for news to be accurate.

    Fuck HilariousChaos. I’m glad I blocked them and their low-effort communities a long time ago