Some very great concerns you raised, @zfigura. Let's address them, shall we?
Yes, probably. AI-generated text is difficult to read and often full of statements that are blatantly or subtly wrong. This is no exception.
You might find it difficult to read but I don't. It helps me articulate what is happening.
That's great, but if you're posting explanations here that more or less implies you expect us to read them. If you want your patches accepted you should gear explanations, or lack thereof, toward the reviewer.
Does it make mistakes? A lot, of course. I do too. Everyone does. That's how we grow and learn. Still better than staying in the dark.
Except AI doesn't learn from its mistakes. Also, quite frankly, it makes a lot more mistakes, with a lot more pretend confidence, than anyone else, apparently because it's not capable of recognizing when it doesn't know something.
But mainly I ask why you're using it when things would be an awful lot easier if you didn't.
Otherwise I wouldn't already have functional reporting of my system's topology under Wine.
And you still don't? This patch series is just a hack, in the AI's own words, and it's clearly not getting the data from anywhere real.
You don't need 600 words to explain things that are obvious from reading the patch. You don't need another 700 words to say what should just be "the kernel32:comm tests are failing in a way unrelated to this patch". Asking anyone to read all that garbage is wasting their time.
You might not need the words because I made the changes easy to read by keeping them where they are. Some people, on the other hand, might need the context to understand the changes and their implications for the codebase. Nobody asked you to read anything you don't find interesting, and nobody forced you to subscribe to the RSS feed.
Any comments you're posting here, you are implicitly asking us to read. The assumption is that it's useful information.
And I suspect that using AI to generate the patch contents, if you did, is not going to be better than writing it yourself.
Not sure about that, since I don't write or read C derivatives that often. I usually prefer to hang out as far from the metal as possible. The higher the level of abstraction, the better. Talking to silicon is boring. Necessary, but boring. I can appreciate different perspectives on this but it's unlikely we agree. I don't mind.
I don't see what this has to do with anything. The same functionality is being implemented no matter how you do it. I'm just saying you will have a better time if you write it yourself. I've seen many, many first-time contributors submit many, many first-time patches, and this AI-written garbage is one of the worst ones I've ever seen. I fully believe that you can do better without using it.
Certainly not better than consulting existing Wine developers for help, ...
If I had to bother someone every time I had an issue...
We always welcome questions and will attempt to provide advice and guidance, especially to those interested in writing their own patches.
I can understand and solve my problems myself, thank you. I learn a lot by making mistakes. That's what I do. Stupid stuff like this in an attempt to satiate my ever-increasing curiosity.
which would have prevented a lot of mistakes I can see,
I am still waiting for your review btw.
I don't think there's a point reviewing LLM-generated code, not when it has intractable licensing problems. Incremental improvement can't fix that. It needs to be written from scratch by a human (or by an LLM unencumbered by licensing problems, but I'm not convinced those can feasibly exist at this point) before it can be improved.
With that said, so that you don't make the same mistakes when you do rewrite it:
* don't use environment variables to modify Wine's behaviour;
* split up changes into small pieces, usually one function at a time;
* implement no more behaviour than is absolutely necessary (don't stub every NUMA function, just the ones the application you care about needs; see the sketch after this list);
* try to follow the surrounding code style, don't change it, unless a reviewer specifically requests it.
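As an illustration of the third point, a minimal single-function change might look roughly like this. This is a sketch only, not Wine's actual code, and the right behaviour would still be up for review:

```c
/* Sketch only, not Wine's actual implementation: the smallest possible
 * non-stub, reporting a single NUMA node instead of failing with
 * ERROR_CALL_NOT_IMPLEMENTED. */
BOOL WINAPI GetNumaHighestNodeNumber( PULONG highest_node )
{
    FIXME( "semi-stub: always reporting a single node.\n" );
    *highest_node = 0;   /* highest node number on a single-node system */
    return TRUE;
}
```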
I came here in good faith to raise awareness of this issue and was only met with mockery and unrelated queries. Talk about collaboration. I opened this submission out of courtesy because I care about this project, which I've been using since I was six, and I want it to support modern CPU/memory architectures.
I apologize that you've felt mocked; I don't know what caused you to feel that way, but it certainly wasn't my intention. I've only attempted to ask questions relevant to development and to this patch set.
More pressingly, according to my understanding, AI is trained on code with licenses incompatible with Wine's, and I don't believe any of its output can be used safely in our project.
That's actually a very common misconception, but I'm no lawyer myself. There is just no legal precedent. It's subject to interpretation, and anyone's guess is as worthless as the next one. It's up to the courts to decide eventually. My take is that if I use a tool to produce something, then unless that tool strictly forbids me or anyone else from freely redistributing the thing I made with it (in which case it's not a useful tool), there is nothing preventing me or anyone else from redistributing it under an open source licence. Again, that's as worthless as anyone else's guess, so I get why people might be careful.
Which part is a misconception exactly? It's well-established that AI is trained on code with an incompatible license, and you also say that there is no legal precedent and no way to know if its output is safe, so "I don't believe it's safe" is hardly a misconception?
It's been established that AI is capable of producing what's clearly a derivative work (or even an exact reproduction) of other code when asked. It's also clear that AI is not capable of reliably determining, well, anything, so I don't think it can be trusted to judge whether its own output is a derivative work.
What application needs this?
NUMA topology? I could think of a few examples :smile:
What are they? We don't accept new features as a rule unless something needs them.
I don't think you understand what the [Non-Uniform Memory Access](https://en.wikipedia.org/wiki/Non-uniform_memory_access) (NUMA) architecture is. It's not about supporting a new feature... it's about fixing the stubs to report correctly when MSVC or anything else asks for them.
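For illustration, this is roughly the kind of query a NUMA-aware program makes through the documented Win32 API (a generic sketch, not code from any particular application):

```c
#define _WIN32_WINNT 0x0601   /* GetNumaNodeProcessorMaskEx needs Windows 7+ */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONG highest_node;

    /* How many NUMA nodes does the system report? */
    if (!GetNumaHighestNodeNumber(&highest_node))
    {
        printf("NUMA query failed (error %lu)\n", GetLastError());
        return 1;
    }

    /* Which processors belong to each node? */
    for (ULONG node = 0; node <= highest_node; node++)
    {
        GROUP_AFFINITY affinity;
        if (GetNumaNodeProcessorMaskEx((USHORT)node, &affinity))
            printf("node %lu: group %u, mask %#llx\n",
                   node, (unsigned)affinity.Group,
                   (unsigned long long)affinity.Mask);
    }
    return 0;
}
```

If the stubs fail or report nothing, code like this sees a machine with no usable topology information.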
These functions were a stub, and are being turned into a non-stub. That's a new feature.
Are there applications that need this or aren't there? You seem to mention a bug report; can you please link to it?
Frankly, at this point, I'm confused at what your intention is. You say "it's about fixing the stubs to report correctly", but you are submitting a "dumb approximation" to "trick whatever app asks for it with a convincing enough reply". Neither approach is necessarily wrong—we do the latter all the time—but you seem to be simultaneously arguing for both?
@nsivov You can move it around to wherever you want afterwards.
That's not how we do things around here I'm afraid; we don't commit patches with known insufficiencies as a general rule.
And that's exactly why, still to this day, it has not been addressed after so many years. Nobody is asking Wine to support a fully compliant NUMA implementation... but the current one is as bad as (if not worse than) the one I proposed.
I don't see where you're getting that? Patches are committed to Wine all the time. We remove their known insufficiencies through review.
There's probably been no NUMA attention because of a lack of applications that care.