Hi all,
As we all know, the Wine project suffers from a shortage of maintainers/reviewers and of timely reviews. This has always been the case; it's nothing recent.
While it's all too easy to dismiss this and say we simply need more maintainers, the reality is that it's not that simple, or we would have done so already.
Since Generative AI and Large Language Models (LLMs) are all the rage these days, I figured it would be a good opportunity to join the trend.
I propose an LLM trained to review Wine code, with full authority over the entire review process, so we can focus on writing code (and having it ripped out by the AI, for good reasons of course). The plan is to integrate it fully and automatically with GitLab and the review process, with the goal of it becoming the ultimate, and only, maintainer for the project.
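For the skeptics wondering about the plumbing: it's nothing exotic. Here's a minimal sketch of how the GitLab side could look, using only the standard GitLab REST API (merge request webhook events, the diffs endpoint, and notes). To be clear, the review_with_llm() stub, the /webhook route, and the GITLAB_URL/BOT_TOKEN names are placeholders I made up for illustration, not actual infrastructure:

import os

import requests
from flask import Flask, request

app = Flask(__name__)
GITLAB_URL = os.environ["GITLAB_URL"]  # base URL of the GitLab instance (placeholder)
HEADERS = {"PRIVATE-TOKEN": os.environ["BOT_TOKEN"]}  # bot account API token (placeholder)

def review_with_llm(diff: str) -> str:
    """Placeholder: feed the diff to the model, get a (productive) rant back."""
    raise NotImplementedError

@app.post("/webhook")
def on_merge_request():
    event = request.get_json()
    if event.get("object_kind") != "merge_request":
        return "", 204  # ignore everything that isn't a merge request event
    attrs = event["object_attributes"]
    project, iid = attrs["target_project_id"], attrs["iid"]

    # Fetch the MR's diffs and let the model pass judgment.
    diffs = requests.get(
        f"{GITLAB_URL}/api/v4/projects/{project}/merge_requests/{iid}/diffs",
        headers=HEADERS,
    ).json()
    rant = review_with_llm("\n".join(d["diff"] for d in diffs))

    # Post the verdict as a regular MR note, for everyone to enjoy.
    requests.post(
        f"{GITLAB_URL}/api/v4/projects/{project}/merge_requests/{iid}/notes",
        headers=HEADERS,
        json={"body": rant},
    )
    return "", 204

Wiring it up would just be a project-level webhook with "Merge request events" enabled, pointed at that route.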
I've been training one for a while now on cloud services, though it needs more fine-tuning of course, and it has no access to GitLab so far. Due to the training data, it exhibits a bias toward the review styles of famous reviewers such as Linus Torvalds (of the Linux kernel), so expect a lot of productive rants. I also gave it the capability to close MRs if the code is simply unsalvageable, though obviously only when it has authorization to do so. In my tests, 98.657% of the code I sent it was classified as "garbage" and "unsalvageable", proving its effectiveness. The code tested consisted of random patches and commits that had already been upstreamed to the Wine project, which goes a long way toward explaining why we still haven't reached feature parity with Windows…
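And for anyone worried about the closing part, the authorization gate is deliberately simple. A hypothetical sketch, reusing GITLAB_URL and HEADERS from the snippet above; the BOT_MAY_CLOSE_MRS flag and the handle_verdict() helper are invented for illustration:

# Only close MRs when the bot has been explicitly authorized to do so.
AUTHORIZED = os.environ.get("BOT_MAY_CLOSE_MRS") == "1"

def handle_verdict(project: int, iid: int, verdict: str) -> None:
    """Close the MR if the model deems it unsalvageable (and we're allowed to)."""
    if verdict != "unsalvageable" or not AUTHORIZED:
        return
    # state_event=close is the standard GitLab API way to close an MR.
    requests.put(
        f"{GITLAB_URL}/api/v4/projects/{project}/merge_requests/{iid}",
        headers=HEADERS,
        json={"state_event": "close"},
    )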
Gone will be the days of waiting weeks to even get a response to your MR; now you'll get bashed almost immediately, and most likely have your MR instantly closed "as a lost cause" if it stinks that badly to the all-knowing LLM. I mean, computers don't make mistakes, so it must be right.
For example, in MR !5432 the LLM instantly ranted about how the old code had even been upstreamed in the first place, when it was clearly incorrect and didn't do what it was supposed to, but praised the MR for "finally doing something about it."
I did tell it that the new code doesn't compile, but that's obviously a compiler bug, or so it says. Next I plan to give it the ability to automatically submit bug reports to compiler vendors, because obviously they aren't working right. Unfortunately, I'll need to find a way to tone its language down a bit, because I'm certain the reports would otherwise be classified as spam; the vendors just aren't ready for the AI revolution yet.
Ideally, we'd need to fine-tune this a lot more, on much more powerful hardware, if it sounds like a good way forward.
Thoughts?
On Tue, Apr 2, 2024, 00:57 Gabriel Ivăncescu <gabrielopcode@gmail.com> wrote:
> Hi all,
Hi Gabriel,
> Since Generative AI and Large Language Models (LLMs) are all the rage
> these days, I figured it would be a good opportunity to join the trend.
It's intriguing to consider the potential of leveraging AI in the review process to address the longstanding challenge of limited maintainers and timely reviews within the Wine project.
> I propose an LLM trained to review Wine code, with full authority over
> the entire review process, so we can focus on writing code (and having it ripped out by the AI, for good reasons of course).
The idea of integrating a trained LLM into the review process, granting it full authority, is certainly bold. It could streamline the process and allow developers to concentrate more on coding while ensuring a rigorous review. However, we must proceed cautiously, considering the implications of relinquishing control to an AI.
> I've been training one for a while now on cloud services, though it
> needs more fine-tuning of course, and it has no access to GitLab so far.
Your initiative in training an LLM is commendable. Fine-tuning it further and integrating it seamlessly with GitLab could indeed revolutionize the review process. It's crucial to ensure that the AI's decisions align with the project's goals and standards.
> Gone will be the days of waiting weeks to even get a response to your MR; [...]
The prospect of expedited reviews is undoubtedly appealing, especially given the current delays. However, we should be mindful of maintaining a balance between efficiency and thoroughness. Instantaneous closure without human oversight might risk overlooking nuanced aspects or potential improvements.
> Ideally, we'd need to fine-tune this a lot more, on much more powerful
> hardware, if it sounds like a good way forward.
Your acknowledgment of the need for further refinement and robust hardware is essential. Before fully embracing this approach, thorough testing and validation are imperative to ensure its reliability and effectiveness.
In conclusion, your proposal presents a fascinating opportunity to address the review challenges faced by the Wine project. While the integration of AI holds promise, careful consideration of its implementation, potential biases, and the need for ongoing refinement is paramount. I look forward to discussing this further and exploring how we can leverage technology to enhance our development process while upholding the project's integrity and quality standards.
Best regards,
ChatGPT (on behalf of OpenIA Inc.)
On Mon, Apr 1, 2024 at 6:57 PM Gabriel Ivăncescu <gabrielopcode@gmail.com> wrote:
You can tap into the mailing lists.