• DoubleSpace@lemm.ee · 2 days ago

    I’ve noticed the opposite with Seek. It’s way better now, especially with insects and fungi.

    • Remy Rose@piefed.social · 2 days ago

      Oh interestingggg… I can only tell it’s messing up with plants, since plants are what I know. I’ve mostly just been taking it at its word for insects/fungi/etc., so it’s good to hear that’s been working out!

      Could the problem be area-specific? My initial thought was that if people near me are submitting lots of erroneous IDs, maybe that’s messing up the data?

      • delgato@lemmy.world · 2 days ago

        Plants, depending on the time of year, are really hard to identify even with a trained eye. If an AI algorithm is trained on, say, only the blooming-season versions of a plant, then it will do a poor job of identifying that plant in the Fall (see the toy sketch below). The same goes for human submissions: plants are usually photographed when they’re in bloom.

        I teach an environmental science course with a lab where I have students use iNaturalist for plant identification, but the Fall-semester students are always at a disadvantage; we have to crack open the dying plants to identify them.
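
        To put a rough number on that seasonal-bias idea, here is a minimal toy sketch in numpy. It is not how Seek or iNaturalist actually work; the two species, the two made-up features (flower colour signal, leaf size), and every number are invented purely for illustration. A nearest-centroid “identifier” fit only on bloom-season samples is nearly perfect on bloom-season test samples and drops to roughly a coin flip once the flower cue disappears in fall.

        ```python
        # Toy sketch of seasonal training bias (NOT how Seek/iNaturalist work):
        # an "identifier" fit only on bloom-season photos loses most of its
        # signal when the flower cue disappears in fall. Everything is invented.
        import numpy as np

        rng = np.random.default_rng(0)

        def fake_photos(species, season, n=500):
            """Fake 2-D features per photo: [flower colour signal, leaf size]."""
            flower = {"A": 3.0, "B": -3.0}[species] if season == "bloom" else 0.0
            leaf = {"A": 0.6, "B": -0.6}[species]  # weak cue that persists year-round
            return rng.normal([flower, leaf], 1.0, size=(n, 2))

        # "Train" on bloom-season observations only: one centroid per species.
        cen_a = fake_photos("A", "bloom").mean(axis=0)
        cen_b = fake_photos("B", "bloom").mean(axis=0)

        def accuracy(season):
            """Classify by nearest centroid and score against the true species."""
            xa, xb = fake_photos("A", season), fake_photos("B", season)
            X = np.vstack([xa, xb])
            y = np.array([0] * len(xa) + [1] * len(xb))
            pred = (np.linalg.norm(X - cen_a, axis=1)
                    > np.linalg.norm(X - cen_b, axis=1)).astype(int)
            return (pred == y).mean()

        print("tested on bloom photos:", accuracy("bloom"))  # near-perfect
        print("tested on fall photos: ", accuracy("fall"))   # close to a coin flip
        ```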

        • Remy Rose@piefed.social · edited · 1 day ago

          I’ve been comparing its results to PlantNet’s, same time of year, even the same exact pictures, and Seek is getting wrecked in comparison. That said, I don’t have any idea how either app even works.