Hi all,
As we all know, the Wine project suffers from a shortage of
maintainers/reviewers and timely reviews. This has always been the
case; it's nothing new.
While it's all too easy to dismiss this and say we simply need more
maintainers, the reality is that it's not that simple, or we would
have done it already.
Since Generative AI and Large Language Models (LLMs) are all the rage
these days, I figured it would be a good opportunity to join the trend.
I propose an LLM trained to review Wine code, with full authority over
the entire review process, so we can focus on writing code (and having
it ripped out by the AI, for good reasons of course). The plan is to
integrate it completely and automatically with GitLab and the review
process, with the goal of it becoming the ultimate, and only,
maintainer for the project.
I've been training one for a while now on cloud services. It needs more
fine-tuning of course, and it has no access to GitLab so far, though it
can tap into the mailing lists.
Due to the training data, it exhibits a bias toward the review styles
of famous reviewers such as Linus Torvalds (of the Linux kernel), so
expect a lot of productive rants. I also gave it the capability to
close MRs outright if the code is simply unsalvageable, though
obviously only once it gets authorization to do so. In my tests,
98.657% of the code I sent it was classified as "garbage" and
"unsalvageable", proving its effectiveness. The code tested consisted
of random patches and commits that had already been upstreamed to the
Wine project, which explains a lot about why we still haven't reached
feature parity with Windows…
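
For the curious, the close-MR hook could look roughly like the sketch
below. This is a minimal illustration using the python-gitlab library;
the server URL, project path, token, and the handle_verdict helper with
its rant/unsalvageable/authorized parameters are all placeholders I
made up for this email, not the actual bot:

  import gitlab

  # Placeholders: swap in a real URL, token and project path.
  gl = gitlab.Gitlab("https://gitlab.example.org", private_token="REDACTED")
  project = gl.projects.get("wine/wine")

  def handle_verdict(mr_iid, rant, unsalvageable, authorized=False):
      """Post the LLM's review on an MR; close it if deemed unsalvageable."""
      mr = project.mergerequests.get(mr_iid)
      mr.notes.create({"body": rant})  # deliver the productive rant
      if unsalvageable and authorized:
          mr.state_event = "close"     # close "as a lost cause"
          mr.save()

The authorized flag is the safeguard mentioned above: without explicit
authorization, the bot only gets to rant, not to close anything.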
Gone will be the days of waiting weeks just to get a response to your
MR; now you'll get bashed almost immediately, and most likely have your
MR instantly closed "as a lost cause" if the all-knowing LLM decides it
stinks that much. I mean, computers don't make mistakes, so it must be
right.
For example, in MR !5432 the LLM instantly ranted about how the old
code was ever upstreamed in the first place, since it was clearly
incorrect and didn't do what it was supposed to, but it praised the MR
for "finally doing something about it."
I did tell it that the new code doesn't compile, but that's obviously a
compiler bug, or so it says. Next I plan to give it the ability to
automatically submit bug reports to compiler vendors, since obviously
their compilers aren't working right. Unfortunately, I'll need to find
a way to tone its language down a bit, because I'm certain the reports
would otherwise be classified as spam; the vendors just aren't ready
for the AI revolution yet.
If this sounds like a good way forward, we'd ideally want to fine-tune
it a lot more, on far more powerful hardware.
Thoughts?