" | Kadm wrote: |
I don't mean to be divisive, but you seem to not particularly like the idea that the existing users on Inkbunny will police your works. It's fine that you don't like that, but our expectation is that people with knowledge within the AI user subset will police themselves, rather than working to make things less tenable for us. |
" | Or I'm just misunderstanding you because English is not my main language. |
" | Kadm wrote: |
The amount of (pre-AI) users that support AI on Inkbunny is a much smaller proportion than those that are against it. |
" | Even amongst staff, there is a hierarchy of sorts. |
xa |
NastAI |
patient6 |
babeyax715 |
PikaPi |
FurBrush |
" | If the reaction is to change the data so that you appear to be compliant but you actually changed nothing about the violating image, then you're intentionally lying to us and everyone else. Hard to be upset when the hammer falls. |
" | If the AI community chooses to encourage deceit or attempt to work around the intent of our policies ( such as by training models and lora for personal use on singular artists), it's not hyperbole to say that it endangers the status of AI artwork on Inkbunny. It is is a thing we allowed when we had no need to, and if it proves to be an extreme burden, that can always change. |
" | People may feel attached to the images they've put work into, so it's easy to understand the knee-jerk reaction. I'm not 100% sold on the idea that this knee-jerk reaction is what differentiates vile irredeemable criminals from perfect citizens, for me the current adherence to the rules is what matters, but you seem to have a different value system, so whatever works for you, I guess. 😆 |
" | My main concern isn't so much personal LoRAs, but rather rules rarely being enforced to the full extent. Has a single user been punished for not mentioning the exact hash of a model? For not mentioning the exact version of a tool? If rules exist, but nobody follows them, and moderators not a single time have enforced them — will that suddenly be seen as an excuse to get rid of AI? |
" | Here's a bit of a thought experiment which perhaps makes things more clear. Here's some users with things wrong: |
" | Also, on the "open-source" requirement in general: what are the exact criteria? Typically, "open-source" means a very specific thing, which is having an OSI-approved license. A lot of neural models are published under not OSI-approved licenses and rather fit into "source-available" general category. |
" | You must not post work using closed-source tools or services that do not make their code and models freely available for others to reuse in an equivalent manner |
" | 2. would probably not be fine. At the end of the day, we don't know what changes went into that. We don't know what that means for reproducibility and accountability. |
" |
Sorry for being pedantic, but ACP states: > Use of open-source AI tools combined with freely-available models is permitted |
" | Usually, that means that the user interface is closed-source (like, the buttons you press on a specific website), but the underlying machine learning code is 100% taken from open-source and never touched again, as most people developing these websites have almost no knowledge about what happens in the internals. Thus, results from such tools are typically easy to reproduce as there's 1:1 relation to the open-source inputs. |
" | My first question is: In order for a model checkpoint to be considered "freely available," does it also need to be distributed in a manner that is readily findable to the public, or would a link to a discord group have been acceptable? |