AutoSnep

🦸‍♀️ Statistics on not being a jerk

Some people say that if you message a user who's violating Inkbunny rules and remind them about those rules, in 100% of cases they'll block you, delete your message, and carry on with their vile, unacceptable practices.

Let's put this theory to the test! 😎 I regularly send private messages to newbie AI directors who fail to follow some of the rules, like tagging properly and providing prompts, and give them suggestions on how to fix the problems easily. Here are the results:

Sent messages to: 12 users
Replied positively: 8 users
Didn't reply: 2 users
Replied positively and continued violating rules: 1 user
Didn't reply and continued violating rules: 1 user

*GASP.*
O.
M.
F.
G.
Mind = blown. 🤯 How could this be?

It's almost as if, when you don't act like a jerk, people don't treat you like a jerk. 😲

Truly unbelievable! Who could've thought? I must patent the idea of not being a jerk! What's the patent office's address?
Added: 2 months, 2 weeks ago
 
TacindeOtt
2 months, 2 weeks ago
I'm shocked, just absolutely SHOCKED, I say!
sankau
2 months, 2 weeks ago
Are you one of the people at Inkbunny responsible for enforcing the rules? If not, don't stress. Just worry about your art.
AutoSnep
2 months, 2 weeks ago
I'm one of the people who make sure AI directors don't get banned on the day they upload their first image. As in, trying to warn them before the swarm of AI haters arrives.

If I want AI art to succeed and thrive, I can't just ignore rule violations, because they will get reported whether I like it or not, probably about 10 times by different people, without any warnings.

The more active I am, the fewer people get banned and the more annoyed the AI haters get. So no, I can't stop stressing.
VarraTheVap
2 months, 2 weeks ago
That is a very respectable approach!

To be honest, the way of writing matters a lot. There are certainly very jerk-like/condescending ways of reminding people of the rules, too. It also helps to give a hint as to why a rule exists.
Kadm
2 months, 2 weeks ago
I realize people can be emotional, but regardless of how you're corrected by others (even if it's snarky), if the response is to continue in spite of it, you can hardly be surprised when the result does not go in your favor.

If the reaction is to change the data so that you appear to be compliant but you actually changed nothing about the violating image, then you're intentionally lying to us and everyone else. Hard to be upset when the hammer falls.

I'd be curious to see actual citations (I can go look, but it'd be nicer if they were compiled) of the users you contacted, and to review whether they actually changed, deleted, or updated the images they submitted.

Optimally we would hope that the AI community polices itself and reports bad actors, the same way that traditional artists and users have reported violations. Sometimes there's a negative motivation behind the reporting ('my human work was removed so I'll report others' works!'), but at the end of the day that isn't important. The violation is objective, not subjective.

If the AI community chooses to encourage deceit or attempts to work around the intent of our policies (such as by training models and LoRAs for personal use on individual artists), it's not hyperbole to say that it endangers the status of AI artwork on Inkbunny. It is a thing we allowed when we had no need to, and if it proves to be an extreme burden, that can always change.
RNSDAI
2 months, 2 weeks ago
I always thought we were one community = furry.

I'm a little disappointed with the way your text is written, although I agree with some of your arguments! But we shouldn't forget that we share the same hobby. We are first and foremost furries, not an "AI community", especially since there are a lot of people in between who draw regularly and also use AI as an assistant.

Many of us also go to conventions and get-togethers and meet up with other furries quite regularly. I myself have been around for 30 years (although not on Inkbunny). I just don't like this "division" in wording. We should not forget that there is a person with feelings sitting in front of a PC, who does this because they love this hobby. That's independent of the rules, of course, but we should simply take newcomers by the hand.

As I said, this is not criticism! Just a reminder that we should not forget we are all furries. ♥
AutoSnep
2 months, 2 weeks ago
Declaring "let's all be friends" doesn't work. 😆 There will always be contentious divisive topics and with the way social media of today works, there's not much to do about it.

Hatred is what unites us. If somebody is in a social group where hating AI is the norm, it's almost impossible to explain the positives to them. Even if they do see the advantages, they'll never admit it in public for fear of alienation. And of course, the fact that half of the AI haters are busy jerking off to AI porn won't change their stance, at least in the short term. 😁 (On a related note, it's widely known that the main consumers of trans porn and interracial cuckold porn are right-wingers. That's the reality we live in.)

Now, what we do need is big names supporting AI. If Aaron Blaise is okay with AI and Smitty G praises people for replicating his style, AI haters lose ground inch by inch. If AI art is in everything from MS Paint to Adobe Photoshop, AI haters can no longer cancel developers of graphics software for incorporating AI.

That's how we win. Not through declarations of friendship. I mean, I wish it worked, but no luck so far with this humanity. 😂
Kadm
2 months, 2 weeks ago
I think it's naive to pretend that there aren't significant subgroups within the furry community. Inkbunny certainly constitutes a subgroup in and of itself. I don't mean to be divisive, but you seem to not particularly like the idea that the existing users on Inkbunny will police your works. It's fine that you don't like that, but our expectation is that people with knowledge within the AI user subset will police themselves, rather than working to make things less tenable for us.

Make the existence of AI art as painless on us as possible so that we don't have reason to question whether it is worth it.
RNSDAI
2 months, 2 weeks ago
" Kadm wrote:
I don't mean to be divisive, but you seem to not particularly like the idea that the existing users on Inkbunny will police your works. It's fine that you don't like that, but our expectation is that people with knowledge within the AI user subset will police themselves, rather than working to make things less tenable for us.


I disagree with that 100%! I have never questioned that audits are made; I only wish they were done transparently. I reject your assumption. Well, you probably don't read all my comments, but then you would know that I have nothing against audits. It was always about the how. I wrote that explicitly in my journal: there is only together, not against each other.

I'm once again disappointed that you have such prejudices, as if no cooperation or exchange of knowledge were desired. Or I'm just misunderstanding you because English is not my main language.

Have a look at my scraps, I'll share more information as I go along: https://inkbunny.net/s/3254534
It's only a small step, but it's a start.
AutoSnep
2 months, 2 weeks ago
" Or I'm just misunderstanding you because English is not my main language.

Oh, it all makes sense now. In English, in contrast with some other languages, phrases like "you're wrong" and "I reject your opinion" are one step away from shooting someone in the head. 😂 You're supposed to water down these phrases till they're barely recognizable as antagonistic. It takes time to get used to if such phrases are totally normal in your native language. 😆
RNSDAI
2 months, 2 weeks ago
Or because I think in zeros and ones, i.e. digitally, there are no nuances ;) ... just a joke! 😆
Kadm
2 months, 2 weeks ago
What unaffiliated individuals do with the information we require is really entirely up to them. To date we have not acted on any reproduction efforts by users (that I'm aware of) because we're still considering how and whether we'll use third-party reproductions and how that all fits into enforcement.

When we eventually add subject-matter experts to Inkbunny staff proper, I expect that most interaction will be handled the same way that it is now, with anonymity to prevent reprisals against individual staff members. We may disclose information regarding process, but the extent of that is probably something to look at when we get that far.

Inevitably you do not have to like what we choose to share or not share, or how we choose to write our rules, or how we run Inkbunny. Those things are outside of your control. Inkbunny is not a democracy. Even amongst staff, there is a hierarchy of sorts. If it were a democracy, I'm absolutely sure we would not have allowed AI at all. The number of (pre-AI) users who support AI on Inkbunny is a much smaller proportion than those who are against it.
RNSDAI
2 months, 2 weeks ago
" Kadm wrote:
The number of (pre-AI) users who support AI on Inkbunny is a much smaller proportion than those who are against it.


Stupid question out of lack of knowledge: Was there a survey this year? I mean, there must be figures. I assume that individual comments are not the benchmark, nor whoever shouts the loudest. Not that I doubt your statement, but I would like to see the data. How many of the users active this year are in favour, against, or not interested?

How did the last survey turn out?

By the way, thank you for taking the time to answer in such detail.
Kadm
2 months, 2 weeks ago
We don't survey users. We can make inferences based on staff's experiences around the site and feedback that we receive, but at the end of the day we're not setting policy based on popularity. Inkbunny is not a democracy. I base my assertion on what I've seen from the community since AI became an issue.
AutoSnep
2 months, 2 weeks ago
" Even amongst staff, there is a hierarchy of sorts.

Is the hierarchy public? 😁
Kadm
2 months, 2 weeks ago
At the bottom of every page you can find:

https://inkbunny.net/adminsmods_process.php

Which shows the Administrators, Super Moderators, and Community Moderators. Generally you could view it in that relative order. It's not a strict hierarchy. There is a lot of discussion between us all. Inevitably, it is GreenReaper's website. He can make the final decision on anything he likes.
AutoSnep
2 months, 2 weeks ago
Currently active AI directors who I contacted regarding the rules:

xa (started including prompts)
NastAI (changed model, dumped old gallery to Telegram)
patient6 (started tagging and including prompts)
babeyax715 (changed prompting style)
PikaPi (started adding prompts)

In all cases, I don't see suspicious styles or anything like this, at least in recent images.

A notorious anti-case:

FurBrush (switched to violating rules on DA and Etsy instead 😆)

Others seem inactive.

" If the reaction is to change the data so that you appear to be compliant but you actually changed nothing about the violating image, then you're intentionally lying to us and everyone else. Hard to be upset when the hammer falls.

People may feel attached to the images they've put work into, so it's easy to understand the knee-jerk reaction. I'm not 100% sold on the idea that this knee-jerk reaction is what differentiates vile irredeemable criminals from perfect citizens; for me, current adherence to the rules is what matters. But you seem to have a different value system, so whatever works for you, I guess. 😆

" If the AI community chooses to encourage deceit or attempt to work around the intent of our policies ( such as by training models and lora for personal use on singular artists), it's not hyperbole to say that it endangers the status of AI artwork on Inkbunny. It is is a thing we allowed when we had no need to, and if it proves to be an extreme burden, that can always change.

My main concern isn't so much personal LoRAs, but rather rules rarely being enforced to the full extent. Has a single user been punished for not mentioning the exact hash of a model? For not mentioning the exact version of a tool? If rules exist but nobody follows them, and moderators have never once enforced them, will that suddenly be seen as an excuse to get rid of AI?

I'm also not sold on the idea of prompts with 3+ "by artist" tokens being super bad for copying a style (when in fact they usually don't copy one), so I'd rather see a milder approach to the restriction (like the Pony 6 / Pony XL folks take), at least in the long term, but with the current state of the AI battlefield it's out of the question, I guess. 😆 We'll probably get models with super-detailed artist-agnostic furry style identifiers sooner than AI is accepted as "just another tool"...
Kadm
2 months, 2 weeks ago
I appreciate you sharing those. It's interesting to go through the various reactions that you got. I'm happy that you at least warn people off editing the info out of existing prompts.

" People may feel attached to the images they've put work into, so it's easy to understand the knee-jerk reaction. I'm not 100% sold on the idea that this knee-jerk reaction is what differentiates vile irredeemable criminals from perfect citizens, for me the current adherence to the rules is what matters, but you seem to have a different value system, so whatever works for you, I guess. 😆


Nothing says they can't keep the images they make themselves. But we have criteria for what makes an acceptable submission on Inkbunny. Our rules are not a commentary on anything except how we want to run Inkbunny and what we want to host. A lot of us allowing AI is dependent on trusting users to be honest at this time. If a user shows us that they cannot be trusted, then we're going to take that at face value. I don't see a reason to give second chances if someone knowingly undermines that trust.

" My main concern isn't so much personal LoRAs, but rather rules rarely being enforced to the full extent. Has a single user been punished for not mentioning the exact hash of a model? For not mentioning the exact version of a tool? If rules exist, but nobody follows them, and moderators not a single time have enforced them — will that suddenly be seen as an excuse to get rid of AI?


That's not really how we work, generally. We understand if people are trying. It's entirely possible that we may become more stringent at some point. We may choose to grandfather submissions, or we could simply remove existing submissions that don't meet the requirements. Nobody is going to be punished for providing too much information, and if someone is holding information back intentionally, then is it my concern if we more explicitly require it down the line?

A lot of that sort of thing depends on eventually recruiting subject-matter experts who understand the specifics better. There have been revisions to the rules already since we launched them, and I expect eventually there will be more.

Here's a bit of a thought experiment which perhaps makes things clearer. Here are some users who have each done something wrong:

1. User 1 has no prompt information or tag included in the submission.
2. User 2 has an artist name in their prompt information.
3. User 3 had an artist name in a prompt until someone notified them, then removed it, but didn't change the image at all.

User 1 receives a warning that outlines what's wrong. We lock the submission and tell them to provide the relevant information, and we'll unlock it. We tell them if they can't, that they can delete the submission.

User 2 receives a warning that outlines what is wrong. The submission is deleted, because there's no remediation that the user can take for this.

User 3 has their gallery wiped of all AI works, and their submission ability is revoked. User 3 knew that what they were doing was against the Acceptable Content Policy, made a change not to fix the problem, but to deceive, and thus receives no warning or grace.

There may be some small variation between different staff members, but these are generally the lines that we've followed around the most basic violations.
AutoSnep
2 months, 2 weeks ago
" Here's a bit of a thought experiment which perhaps makes things more clear. Here's some users with things wrong:

I'm thinking about different cases:

1. User 1 uploads a picture "by blotch" > learns about rules > hides metadata > uploads 100 images "by blotch" with hidden metadata
2. User 2 uploads a picture "by blotch" > learns about rules > hides metadata > uploads 100 images which don't have artist names in the prompt

If learning about the rules involves a moderator acting on a report, then case (2) doesn't exist. However, normal users can message them, users can learn about the rules themselves, etc. So, by the law of large numbers, cases like (2) exist.

And for me, (1) and (2) are completely different. The first user is someone who can't operate within the rules and can't be trusted; the second user is more like a hopeful idiot who is capable of adhering to the rules but hopes nobody will notice ancient misdeeds.

In the real world, it'd be like, "Oh no, you didn't return $1 you stole from your classmate 50 years ago, the penalty is death!" 😆
Kadm
2 months, 2 weeks ago
That's still a mistake on user 2's part. I don't feel any sympathy. Delete the image submission and move on with life. You can keep the image to yourself, distribute it via another channel, link it in a journal, etc. But it is not an acceptable submission on Inkbunny.

Your analogy is just silly. It might be more accurate to say they lose the trust of that student, and that student runs a cool bar they want to go to. The loss of ability to submit works to Inkbunny is by no means a death sentence for anyone, especially since we disallow commercializing your works here regardless. No one is even going to starve because of it.
AutoSnep
2 months, 2 weeks ago
Got questions about several extra cases:

1. Using open-source but commercial tools with open-source models and proper metadata: Graviti Diffus (AGPL A1111 fork) etc. The metadata is enough to perfectly reproduce the outputs on free open-source tools.

2. Using closed-source tools based on open-source tools, with open-source models and proper metadata: HappyAccidents (based on InvokeAI) etc. Technically the forks are closed-source, but the metadata is enough to perfectly reproduce the outputs on truly open-source tools.

3. Using closed-source tools with unknown internals, with open-source models and broken metadata: PerChance (unknown code running an unknown version of SD) etc. These implementations are closed-source with no official open-source forks, and the metadata isn't enough to perfectly reproduce the results, though it may theoretically be possible to reproduce outputs on truly open-source tools if the reproducer somehow guesses the parameters and code the service used.
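(To make "proper metadata" concrete: A1111-style tools write the full generation parameters into a PNG text chunk, and that's what makes outputs reproducible on an open stack. A minimal sketch of reading it, assuming Pillow is installed; "image.png" is just a placeholder:)

```python
# Minimal sketch: read A1111-style generation parameters from a PNG.
# "image.png" is a placeholder; assumes the uploader didn't strip metadata.
from PIL import Image

img = Image.open("image.png")
# A1111 stores prompt, negative prompt, seed, sampler, steps, CFG scale,
# model hash, etc. in a single "parameters" text chunk.
params = getattr(img, "text", {}).get("parameters")
print(params or "No metadata found; the tool or uploader stripped it.")
```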

The first two classes of tools are typically used by knowledgeable people who can't afford a PC with a high-tier GPU; the third class is more typical for newbies experimenting for the first time.

Which of these are considered acceptable and under what conditions?
______

Also, on the "open-source" requirement in general: what are the exact criteria? Typically, "open-source" means a very specific thing, which is having an OSI-approved license. A lot of neural models are published under not OSI-approved licenses and rather fit into "source-available" general category.

Notably, the BigScience Open RAIL-M license, which Stability AI uses, has not been approved by the Open Source Initiative.

If reproducibility and availability for non-commercial use are the deciding factors, then the wording of the rules should be adjusted from "open-source" to "source-available" or something like this, as "open-source" would be an incorrect term to use in this case.

P.S. Got my first case of insta-block for a private message, right after a positive response to a detailed offer of help. 😆 The case falls under option (3), so I thought I might as well ask about the official stance on related cases.
Kadm
2 months, 2 weeks ago
" Also, on the "open-source" requirement in general: what are the exact criteria? Typically, "open-source" means a very specific thing, which is having an OSI-approved license. A lot of neural models are published under not OSI-approved licenses and rather fit into "source-available" general category.


Have you read our Acceptable Content Policy? I think our wording specifically covers that. No adjustment needed.

" You must not post work using closed-source tools or services that do not make their code and models freely available for others to reuse in an equivalent manner


It does not necessarily mandate OSI compliance, but rather availability.

With that in mind:

1. would probably be fine. We don't disallow 'you' paying for things in service of this. We merely require that it be freely available and reproducible given appropriate resources.
2. would probably not be fine. At the end of the day, we don't know what changes went into that. We don't know what that means for reproducibility and accountability.
3. No.

Let me know if you have follow-up questions.
AutoSnep
2 months, 2 weeks ago
Sorry for being pedantic, but ACP states:
> Use of open-source AI tools combined with freely-available models is permitted

In the world of software development, open-source seems to almost always mean OSI-approved. And in the world of machine learning, Stability AI's products are not open-source because the licenses (1) aren't OSI-approved and (2) contain exceptions based on morality, iirc.

Let me clarify: "closed-source" from the next paragraph is not an antonym of "open-source". Most neural models are "source-available" and "weights-available". I'm not a huge fan of "open-source" being so narrowly defined, but that's the reality we live in. Therefore, I'd rather have a clearer requirement in the rules, as it's one less source of confusion for geeks (who are a huge portion of AI users currently 😁).

One could argue that the definition of "open-source" is still debated, but I'd still rather avoid using a term which can easily be interpreted incorrectly.

" 2. would probably not be fine. At the end of the day, we don't know what changes went into that. We don't know what that means for reproducibility and accountability.

Usually, that means that the user interface is closed-source (like, the buttons you press on a specific website), but the underlying machine learning code is 100% taken from open-source projects and never touched again, as most people developing these websites have almost no knowledge of what happens in the internals. Thus, results from such tools are typically easy to reproduce, as there's a 1:1 relation to the open-source inputs.
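(To illustrate that 1:1 relation: given the published metadata, re-running the generation on a fully open stack is a few lines. A rough sketch using the open-source diffusers library; the model name and every parameter value below are hypothetical stand-ins for whatever the metadata says, and pixel-perfect reproduction still depends on matching the sampler implementation:)

```python
# Sketch: re-run a generation from published metadata on an open-source stack.
# Every value below is a hypothetical example, not data from a real submission.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # swap in the model named in the metadata
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="anthro snow leopard, digital painting",  # prompt from metadata
    negative_prompt="blurry, low quality",           # negative prompt
    num_inference_steps=30,                          # "Steps"
    guidance_scale=7.0,                              # "CFG scale"
    generator=torch.Generator("cuda").manual_seed(1234567890),  # "Seed"
).images[0]
image.save("reproduction.png")
```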

(There're also services like SeaArt, with seemingly huge modifications of the base code, landing them somewhere in between (2) and (3). I guess I can agree on these ones specifically being hard to reproduce and thus not being okay to use.)

My main concern is that running SD locally requires a beefy GPU which many can't afford, and expecting everyone to use only commercial websites which publish full source code, including GUI (as well as expecting non-tech folks to understand the difference), disproportionately punishes people who don't have enough money, with little to no gains in terms of reproducibility.

As to the "P.S." above — I seem to have been just as randomly unblocked as I was randomly blocked, so I'll try to guide them through the process of fixing the issues. No need to act yet, please. 😆
Kadm
2 months, 2 weeks ago
" Sorry for being pedantic, but ACP states:
> Use of open-source AI tools combined with freely-available models is permitted


I think it's fine as phrased. I view that first line as more of a general preamble to the actual requirements laid out in each subsequent section. There's clarifying text below, as I highlighted, that makes the intent clear. If nothing else, it's not to Inkbunny's detriment if users interpret it the way you're suggesting. The worst they can do is be more strict than necessary. If anything, we should probably remove the first line altogether, but I don't see it as a priority.

" Usually, that means that the user interface is closed-source (like, the buttons you press on a specific website), but the underlying machine learning code is 100% taken from open-source and never touched again, as most people developing these websites have almost no knowledge about what happens in the internals. Thus, results from such tools are typically easy to reproduce as there's 1:1 relation to the open-source inputs.


But if the interface is closed source, we don't actually know what it's doing, do we? The maintainer could be completely ignorant and merely hosting a front-end, or they could be savvy, and who knows what changes they've made.

Ease of accessibility and cost were not things we took into account when discussing our rules. It's not our role to promote AI or make it easier any more than it is our role to be protectionist for the artists railing against you. There's a barrier to entry to creating traditional art, and I'm not bothered by the idea that there may be a barrier to entry to creating compliant AI art here on Inkbunny. The barriers may be different (dollars versus time), but they're still there.
KammyKay
2 months, 2 weeks ago
A couple of questions about what makes a model "freely available."

I came across a user a couple of weeks ago using a model titled "Realistic" (no version number, just "Realistic"). Obviously, searching "Realistic Stable Diffusion model" returns countless results. Searching the model hash anywhere returns nothing. When I commented asking for a download link, I got ghosted.
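(For anyone doing the same detective work: as far as I understand it, A1111-style tools derive the hashes shown in metadata as below, so you can compute them from a local checkpoint file and search for a match; the path is a placeholder:)

```python
# Sketch of the two model hashes A1111-style tools report, as I understand them.
# "model.safetensors" is a placeholder path.
import hashlib

def old_model_hash(path):
    # Legacy 8-char hash: SHA-256 over a 64 KiB block at offset 0x100000.
    with open(path, "rb") as f:
        f.seek(0x100000)
        return hashlib.sha256(f.read(0x10000)).hexdigest()[:8]

def full_sha256(path):
    # Newer hash: SHA-256 of the whole file; infotexts show the first 10 chars.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(old_model_hash("model.safetensors"))    # e.g. matches "Model hash: ..."
print(full_sha256("model.safetensors")[:10])  # newer short form
```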

I've heard that some models are shared freely in large Discord groups, which may have been the case here for all I know. Of course, finding which groups are sharing which models is an impossible task.

My first question is: In order for a model checkpoint to be considered "freely available," does it also need to be distributed in a manner that is readily findable to the public, or would a link to a Discord group have been acceptable?

Second question: If tools like Perchance are not allowed, then shouldn't the language of the ACP be made clearer? Nothing about Bing Image Creator is open source, but I see it used a ton here. I think many see the language "models freely available" and think "tool that is zero-cost to use." I think there should be greater clarity on which popular web-based tools are and aren't acceptable, as well as a brief description in the ACP of what "open source" means, since not everyone on Inkbunny is a software engineer.

Edit: I know the language "services that do not make their code and models freely available" exists in the ACP, but once again, to someone who knows nothing about software development, "code freely available" could be misinterpreted as "tool freely available to use online."
AutoSnep
2 months, 2 weeks ago
" My first question is: In order for a model checkpoint to be considered "freely available," does it also need to be distributed in a manner that is readily findable to the public, or would a link to a discord group have been acceptable?

In the legal sense, "open-source" usually doesn't even require that the code always be available in some public place, just that it will be provided if somebody asks for it; that's the minimum level of availability. The exact requirements differ between licenses. Usually only copyleft licenses care about availability, so it gets weird when using words like this. 😆
RNSDAI
2 months, 2 weeks ago
I do not know this case, but most models are on Civitai. This is also where you will find the official names of the models.

Sure, there are models on Discord too, but that's mostly because they're in the testing phase and will be released on Civitai later. This is also stated in the Discord rules. No secrets, completely open. Of course I can't judge what small Discords do; I'm only talking about the biggest ones.
AutoSnep
2 months, 2 weeks ago
A lot of people producing massive amounts of LoRAs don't publish them on CivitAI because it's too much of a hassle. In many cases, the best you get is dumps of hundreds of models on Mega and the like. 😆

Also, all LoRAs with cubs, scat etc. can't be published on CivitAI. Some are on HuggingFace, some are in Telegram, some are close to impossible to find even if you know what you're looking for. 😁

There're tons of small communities with their own mixes and LoRAs.

It's the Wild West. 😎
RNSDAI
2 months, 2 weeks ago
First of all, a heart for you ♥ Great how you handle this!

Secondly, a lot of things happen because of a lack of knowledge. We should take people by the hand instead of scaring them off. The way you are doing it is exactly right. Don't let anyone tell you otherwise; people can see how helpful you are!

At the end of the day, people will see who is helping newcomers... and who is just grumbling. 😎
Dagnarus
2 months, 2 weeks ago
Pretty sure you or someone else let me know on my first post that I missed the prompts. So thank you.
ZwolfJareAlt306
2 months, 2 weeks ago
Faith in humanity: Restored. :)