The actions of a community do not represent the instance itself.
I mean that’s fair, but also, this is the #2 community there. The #1 community is an AI news summary place made by a person I find notably weird and disagreeable.
You are welcome to start your own community on HC and moderate it as you see fit.
Yeah, I’ll get right on that. It sounds delightful.
AI news summaries, even when done by big professional software companies, have a big habit of getting stuff wrong. It’s not ideal. There are plenty of news communities on Lemmy, including some automated ones.
If you want a little AI project, I’d be happy to work with you on my project of a community where people can argue with each other and an AI moderator will judge who’s being good-faith in their conduct, sort of referee the discussion to help reduce the “there’s no way to have a ‘debate’ if the other person’s committing to just being an evasive bad-faith cunt about it” problem. That, I think would be a good addition to the space, and I thought about how to do it a little bit already.
I find you notably weird and disagreeable.
Yeah, probably right on both counts.
I actually just clicked on your user and took a quick look over, searching for some kind of misdemeanor I could criticize you for, and I couldn’t find anything. I just remember you being super rude sometimes and saying deliberately inflammatory stuff, but that’s all, and maybe that isn’t the end of the world.
project of a community where people can argue with each other and an AI moderator will judge who’s being good-faith in their conduct, sort of referee the discussion to help reduce the “there’s no way to have a ‘debate’ if the other person’s committing to just being an evasive bad-faith cunt about it” problem.
Any way to follow this project? This is a concept I’ve also thought about. It would probably work to prompt various simple yes/no questions about a comment and what it’s responding to, like “is it likely they didn’t read it?” or “does it address the central point?”, and then do stuff with the results.
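Roughly what I’m imagining, just as a sketch (the model name and the exact questions here are placeholders I made up for illustration, not anything tested):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTIONS = [
    "Is it likely the reply's author did not read the comment they are responding to?",
    "Does the reply address the central point of the comment it responds to?",
    "Does the reply misrepresent what the other person said?",
]

def score_reply(parent: str, reply: str) -> dict:
    """Ask each yes/no question separately and collect boolean answers."""
    answers = {}
    for q in QUESTIONS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Answer with a single word: YES or NO."},
                {"role": "user",
                 "content": f"Comment:\n{parent}\n\nReply:\n{reply}\n\nQuestion: {q}"},
            ],
        )
        answers[q] = resp.choices[0].message.content.strip().upper().startswith("Y")
    return answers
```

Then you’d aggregate the answers however you like, e.g. flag a reply once enough of them come back the wrong way.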
It hasn’t gone past the “idea” stage. I have an unfortunate habit of talking up all kinds of fancy stuff I want to do and only following through on like 20% of it, but I do think something like that would be a really good idea. If I do wind up executing on it I will reach out.
And yes, I think having multiple prompts to sort of analyze the comments thread and progress towards conclusions about it is the way to go. I was mucking around with, I think, a four-prompt setup to keep the LLM from going too far off the rails or trying to bite off too much of the analysis at once (and also, to stop it from wanting to be “fair to everyone,” which it seems like it otherwise really wants to do because of how it’s been trained to be supposedly-neutral).
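Something in the direction of this sketch, if that helps make it concrete (the stage wording and model here are made up for illustration, not the actual prompts I was using):

```python
from openai import OpenAI

client = OpenAI()

STAGES = [
    "Summarize each participant's main claims, without judging them.",
    "For each reply, note whether it engages with the claim it answers or dodges it.",
    "List any evasive or bad-faith moves, quoting the exact text that shows them.",
    "Based only on the notes so far, say which participant (if any) argued in bad faith, and why.",
]

def referee(thread_text: str) -> str:
    """Run the analysis in small stages so no single prompt has to do everything."""
    notes = ""
    for stage in STAGES:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[
                {"role": "system",
                 "content": "Do only the task given, and base it strictly on the text provided."},
                {"role": "user",
                 "content": f"Task: {stage}\n\nThread:\n{thread_text}\n\nNotes from earlier stages:\n{notes}"},
            ],
        )
        notes = resp.choices[0].message.content  # each stage builds on the previous one
    return notes
```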
Thankfully, I’ve got an entire Lemmy community to correct the incorrect summaries. The error rate is pretty low, though, and most errors come from the web scraper having issues, not the AI.
I like that idea. You would want to use a Dolphin-finetuned model, as they’re uncensored and thus have significantly less bias. You might struggle to get the AI to evaluate the bad-faithness and not the argument’s points themselves, though.
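E.g. if you run it locally through Ollama it could look roughly like this (the model tag is just one of the Dolphin builds on there, swap in whichever you prefer):

```python
# pip install ollama; assumes an Ollama server is running locally
import ollama

resp = ollama.chat(
    model="dolphin-mistral",  # one of the uncensored Dolphin finetunes on Ollama
    messages=[
        {"role": "system", "content": "Answer YES or NO only."},
        {"role": "user",
         "content": "Does the reply below address the central point of the comment?\n\nComment: ...\nReply: ..."},
    ],
)
print(resp["message"]["content"])
```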
I don’t say deliberately inflammatory stuff; I just say what I believe, and sometimes people agree, sometimes they don’t. I can be quite rude, but I try to only do that when people are attacking me and not my argument.
You might struggle to get the AI to evaluate the bad-faithness and not the argument’s points themselves, though.
Yeah. When I was mucking around with it, I had to progress in very specific stages, having the LLM sort of implement a specific algorithm, not letting it have too much free rein or else it’ll do all kinds of weird stuff.
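For example, one way I was thinking of keeping it on rails is forcing each step into a fixed JSON shape so there’s nowhere for it to wander off to (the schema here is just an illustration):

```python
import json
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    response_format={"type": "json_object"},  # JSON mode keeps the output shape fixed
    messages=[
        {"role": "system",
         "content": 'Return JSON only, with exactly these keys: '
                    '{"addresses_point": true or false, "supporting_quote": "<quote or empty string>"}'},
        {"role": "user", "content": "Comment: ...\n\nReply: ..."},
    ],
)
result = json.loads(resp.choices[0].message.content)
print(result["addresses_point"], result["supporting_quote"])
```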
Yes it is.
Bro, why are you hating on my bot? What did it do?
Are you using LangChain? You can make a whole pipeline of actions with it.
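Rough idea of what that piping could look like (the model and prompts are placeholders; I haven’t actually built this):

```python
# pip install langchain-openai langchain-core
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model

summarize = ChatPromptTemplate.from_template(
    "Summarize each participant's claims in this thread:\n\n{thread}"
)
judge = ChatPromptTemplate.from_template(
    "Given these summaries, note which replies dodge the point they answer:\n\n{summaries}"
)

# Plain functions are coerced into runnable steps, so the whole thing pipes together.
pipeline = (
    summarize
    | llm
    | StrOutputParser()
    | (lambda summaries: {"summaries": summaries})
    | judge
    | llm
    | StrOutputParser()
)

print(pipeline.invoke({"thread": "...thread text here..."}))
```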
I like your bot, man. Ignore him. I don’t even like most bots, but yours does great.
Not only that, but his instance has bots too, so he’s a hypocrite.
You are free to dislike anything of course, but that doesn’t make us something we’re not.